[agi] Re: David Jones's Design and Pseudo Tests Methodology: Was David Jones's Design and Pseudo Tests Methodology

2010-09-18 Thread Jim Bromer
On Fri, Sep 17, 2010 at 10:09 AM, Jim Bromer jimbro...@gmail.com wrote:

  Oh, by the way, it's description not decription.  (I am only having some
 fun!  But talk about a psychological slip.  What was on your mind when you
 wrote decription I wonder?)


It wasn't death, so it must have had something to do with interpreting
women.





[agi] David Jones's Design and Pseudo Tests Methodology

2010-09-15 Thread Jim Bromer
So give us an example of something that you are going to test, and how your
experimental methodology would make it clear that the dots can be connected.


On Tue, Sep 14, 2010 at 3:36 PM, David Jones davidher...@gmail.com wrote:

 Jim,
  I was trying to back up my claim that the current approach is wrong by
 suggesting the right approach and making a prediction about its prospects.
 Then, when I do make progress or don't, you can evaluate my claims and
 decide whether I was right or wrong.

  On Tue, Sep 14, 2010 at 2:21 PM, David Jones davidher...@gmail.comwrote:

 But, I also claim that while bottom-up testing of algorithms will give us
 experience, it also takes a very long time to generate AGI solutions. If it
 takes a group of researchers 5 years to test out a design idea based on
 neural nets, we may learn something, but I can see from the nature and
 structure of the problem that it is unlikely to give us enough info to find
 a solution very quickly.

 So, my proposal is to adapt to the complexity of the problem and create
 pseudo designs that can be tested without having to fully implement the
 idea. I believe that there are design ideas that can be tested without
 implementing them and that it can be clear from the design and the pseudo
 tests that the design will work.

 The reason that many AGI designs must be implemented and can't be proven
 on paper very easily is that they are not very good designs. It is clear
 even to the designer that there is a problem and that they can't see how to
 connect the dots and make it work. Instead of recognizing that the design
 has flaws, they just assume they can get it to work after some of it is
 implemented. I think
 there are better designs that do not have this problem. There are designs
 where you can see why it will work without having to resort to obscure
 mathematics and emergence.

 Those are the sorts of designs that I am confident I can create using my
 methodology.

 Dave







[agi] Use Combinations of Constrained Methods

2010-09-05 Thread Jim Bromer
Various programs in the past have been very successful with problems that
were tightly constrained.  Sorry, I don't have examples at hand; however, I
do not think this is a controversial assertion.  The reason these programs
worked was that they could assess every possibility that the problem space
offered in a very short period of time.  I think that this kind of problem
model could be used in a test of a program to see if the basic idea of the
program was worthwhile.  Under these circumstances, where the possibilities
are limited, the computer program can use an exhaustive search of the
possibilities to test how good possible solutions look.  Although many
problems do not have a way to immediately rate the value of the best
candidates for a solution, the majority of the candidates typically can be
eliminated.
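
A minimal sketch in Python of the exhaustive-search idea described above.
The toy problem, the constraint, and the scoring rule are purely
illustrative stand-ins, not taken from the original post:

from itertools import product

# Toy constrained problem: choose 8 binary switches so that no two adjacent
# switches are both on, while turning on as many switches as possible.
N = 8

def feasible(candidate):
    # Constraint check: eliminate candidates that violate the adjacency rule.
    return all(not (a and b) for a, b in zip(candidate, candidate[1:]))

def score(candidate):
    # Rate the surviving candidates; here, just count how many switches are on.
    return sum(candidate)

# The space is tightly constrained (2**8 = 256 possibilities), so an
# exhaustive sweep is cheap and most candidates are eliminated outright.
survivors = [c for c in product((0, 1), repeat=N) if feasible(c)]
best = max(survivors, key=score)
print(len(survivors), "of", 2 ** N, "candidates survive; best:", best)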



An awareness of these limited successes is important because it gives us
some sense of what the solution might look like.  On the other hand, the
application of these limited successes to the real world is so limited that
it may seem that they offer little that is worthwhile.  I believe that is
wrong.



Suppose that you had a variety of image-analysis methods that were useful in
different circumstances, but those circumstances were very limited.  By
writing programs that combine these different methods you could produce
insight about an image that no one particular method could produce on its
own, but the extended range of circumstances where insight could be produced
would still be unusual, and it would still not produce the kinds of results
that you would want.  Now suppose that you just kept working on this
project, finding other methods that produced some kind of useful information
in particular circumstances.  As you continued working in this way, two
things would likely occur.  Each new method would tend to be a little less
useful, and the complexity that results from combining these methods would
make it a little more overwhelming for the computer to examine the
possibilities.  At some point, no matter how productive you were, your sense
of progress would come to an end.
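
A rough sketch of the combination idea, assuming each limited method returns
whatever partial observations it can for an image.  The method names, the
image representation, and the thresholds are hypothetical:

def edge_method(image):
    # Only useful where there is strong local contrast.
    return {"bright_points": [(x, y) for x, y, v in image if v > 0.8]}

def blob_method(image):
    # Only useful on dark, uniform regions.
    return {"dark_points": [(x, y) for x, y, v in image if v < 0.2]}

def combine(image, methods):
    # Merge whatever partial insight each narrow method can offer.
    report = {}
    for method in methods:
        report.update(method(image))
    return report

# The image is a stand-in: a list of (x, y, brightness) triples.
image = [(0, 0, 0.9), (1, 0, 0.1), (2, 0, 0.85), (3, 0, 0.05)]
print(combine(image, [edge_method, blob_method]))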



At that point some genuine advancements would be necessary, but if you got
to that point you might find that some of those advancements were waiting
for you (if you had enough insight to realize it).



Jim Bromer





[agi] Who's on first?

2010-08-18 Thread Jim Bromer
http://search.yahoo.com/search?p=who%27s+on+first&ei=utf-8&fr=ie8
YouTube - Who's on first?





[agi] Students' Understanding of the Equal Sign Not Equal, Professor Says

2010-08-17 Thread Jim Bromer
Students' Understanding of the Equal Sign Not Equal, Professor Says
http://www.sciencedaily.com/releases/2010/08/100810122200.htm


The equal sign is pervasive and fundamentally linked to mathematics from
kindergarten through upper-level calculus, Robert M. Capraro says. The
idea of symbols that convey relative meaning, such as the equal sign and
less than and greater than signs, is complex and they serve as a
precursor to ideas of variables, which also require the same level of
abstract thinking.

The problem is students memorize procedures without fully understanding the
mathematics, he notes.

Students who have learned to memorize symbols and who have a limited
understanding of the equal sign will tend to solve problems such as 4+3+2=(
)+2 by adding the numbers on the left, and placing it in the parentheses,
then add those terms and create another equal sign with the new answer, he
explains. So the work would look like 4+3+2=(9)+2=11.

This response has been called a running equal sign -- similar to how a
calculator might work when the numbers and equal sign are entered as they
appear in the sentence, he explains. However, this understanding is
incorrect. The correct solution makes both sides equal. So the understanding
should be 4+3+2=(7)+2. Now both sides of the equal sign equal 9.
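
A one-line check of the misconception described above, using the article's
own numbers:

# The "running equal sign" reading: 4+3+2 makes 9, then 9+2 makes 11,
# but the two sides of the original sentence are then no longer equal.
print(4 + 3 + 2 == 9 + 2)        # False
# The relational reading: choose the value that makes both sides balance.
x = 4 + 3 + 2 - 2
print(x, 4 + 3 + 2 == x + 2)     # 7 True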





Re: [agi] Compressed Cross-Indexed Concepts

2010-08-13 Thread Jim Bromer
On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.comwrote:

 The ideological would still need be expressed mathematically.


I don't understand this.  Computers can represent related data objects that
may be best considered without using mathematical terms (or with only
incidental mathematical functions related to things like the numbers of
objects.)



 I said:  I think the more important question is how a general concept can
 be interpreted across a range of different kinds of ideas.  Actually this
 is not so difficult, but what I am getting at is how are sophisticated
 conceptual interrelations integrated and resolved?

 John said: Depends on the structure. We would want to build it such that
 this happens at various levels or the various multidimensional densities.
 But at the same time complex state is preserved until proven benefits show
 themselves.


Your use of the term 'densities' suggests that you are thinking about the
kinds of statistical relations that have been talked about a number of times
in this group.  The whole problem I have with statistical models is that
they don't typically represent the modelling variations that could be, and
would need to be, encoded into the ideas that are being represented.  For
example, a Bayesian network does imply that a resulting evaluation would
subsequently be encoded into the network evaluation process, but only in a
limited manner.  It doesn't, for example, show how an idea could change the
model, even though that would be easy to imagine.
Jim Bromer


On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.comwrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
 
  Well, if it was a mathematical structure then we could start developing
  prototypes using familiar mathematical structures.  I think the structure
 has
  to involve more ideological relationships than mathematical.

 The ideological would still need be expressed mathematically.

  For instance
  you can apply an idea to your own thinking in such a way that you are
  capable of (gradually) changing how you think about something.  This
 means
  that an idea can be a compression of some greater change in your own
  programming.

 Mmm yes or like a key.

  While the idea in this example would be associated with a
  fairly strong notion of meaning, since you cannot accurately understand
 the
  full consequences of the change it would be somewhat vague at first.  (It
  could be a very precise idea capable of having strong effect, but the
 details of
  those effects would not be known until the change had progressed.)
 

 Yes. It would need to have receptors, an affinity something like that, or
 somehow enable an efficiency change.

  I think the more important question is how a general concept can be
  interpreted across a range of different kinds of ideas.  Actually this is
 not so
  difficult, but what I am getting at is how are sophisticated conceptual
  interrelations integrated and resolved?
  Jim

 Depends on the structure. We would want to build it such that this happens
 at various levels or the various multidimensional densities. But at the
 same
 time complex state is preserved until proven benefits show themselves.

 John







Re: [agi] Compressed Cross-Indexed Concepts

2010-08-13 Thread Jim Bromer
It would be easy to relativize a weighted network so that it could be used
to include ideas that can effectively reshape the network (or at least
reshape the virtual network) but it is not easy to see how this could be
done intelligently enough to produce actual intelligence.  But maybe I
should try it sometime just to get some idea of what it would do.
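
A very loose sketch of what trying that might look like, with the network,
the node names, and the reshaping rule purely illustrative:

# A tiny weighted network as an adjacency dict: node -> {neighbor: weight}.
network = {"A": {"B": 0.5, "C": 0.2}, "B": {"C": 0.9}, "C": {}}

def apply_idea(net, idea):
    # An 'idea' here is just a function that rescales edge weights, standing
    # in for something that reshapes (a virtual copy of) the network.
    return {src: {dst: idea(src, dst, w) for dst, w in edges.items()}
            for src, edges in net.items()}

# Example idea: strengthen every edge leading into C, weaken the rest.
reshaped = apply_idea(network, lambda s, d, w: min(1.0, w * 2) if d == "C" else w * 0.5)
print(reshaped)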
Jim Bromer





[agi] Single Neurons Can Detect Sequences

2010-08-13 Thread Jim Bromer
Single Neurons Can Detect Sequences
http://www.sciencedaily.com/releases/2010/08/100812151632.htm





Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
David,
I am not a mathematician, although of course I do a lot
of computer-related mathematical work.  My remark was directed
toward John, who had suggested that there is some
sophisticated mathematical subsystem that would (using my words here)
provide such a substantial benefit to AGI that its lack may be at the core
of the contemporary problem.  I was saying that unless this required
mathemagic, a scalable AGI system demonstrating how effective this kind
of mathematical advancement could be could probably be simulated using
contemporary mathematics.  This is not the same as saying that AGI is
solvable by sanitized formal representations, any more than your message is
a sanitized formal statement because it depended on a lot of computer
mathematics in order to be sent.  In other words, I was challenging John at
that point to provide some kind of evidence for his view.

I then went on to say that, for example, I think fast SAT solutions
would make scalable AGI possible (that is, scalable up to a point that is
way beyond where we are now), and therefore I believe that I could create a
simulation of an AGI program to demonstrate what I am talking about.  (A
simulation is not the same as the actual thing.)
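
A toy sketch of such a simulation, with a brute-force satisfiability check
standing in for the hypothesized fast SAT procedure and a simple prompt
standing in for user insight when a problem is out of reach (as described in
the quoted message below); every name and number here is illustrative:

from itertools import product

def brute_force_sat(clauses, n_vars):
    # Stand-in for the hypothesized fast SAT procedure.  Clauses are lists of
    # signed integers (DIMACS style): 1 means x1, -1 means not-x1, and so on.
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

def reason(clauses, n_vars, max_vars=20):
    if n_vars > max_vars:
        # Too hard for the toy solver: fall back on the user's insight.
        return input("Problem too hard; please suggest an assignment: ")
    return brute_force_sat(clauses, n_vars)

# (x1 or x2) and (not x1 or x2): any model must set x2 true.
print(reason([[1, 2], [-1, 2]], 2))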

I didn't say, nor did I imply, that the mathematics would be all there is to
it.  I have spent a long time thinking about the problems of applying formal
and informal systems to 'real world' (or other world) problems and the
application of methods is a major part of my AGI theories.  I don't expect
you to know all of my views on the subject but I hope you will keep this in
mind for future discussions.
Jim Bromer

On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
 It's kind of funny how people overemphasize what they know or depend on
 their current expertise too much when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve, analyze
 what it takes to solve them, and then look for and design a solution.
 Starting with the solution and trying to hack the problem to fit it is not
 going to work for AGI, in my opinion. I could be wrong, but I would need
 some evidence to think otherwise.

 Dave

   On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.comwrote:

   You probably could show that a sophisticated mathematical structure
 would produce a scalable AGI program if it is true, using contemporary
 mathematical models to simulate it.  However, if scalability was
 completely dependent on some as yet undiscovered mathemagical principle,
 then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems
 with contemporary AGI.  So I believe this could be demonstrated on a
 simulation.  That means, that I could demonstrate effective AGI that works
 so long as the SAT problems are easily solved.  If the program reported that
 a complicated logical problem could not be solved, the user could provide
 his insight into the problem at those times to help with the problem.  This
 would not work exactly as hoped, but by working from there, I believe that I
 would be able to determine better ways to develop such a program so it would
 work better - if my conjecture about the potential efficacy of polynomial
 time SAT for AGI was true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.comwrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for
 the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't think
 that this subsystem would take up millions of lines of code either.
 It's
 just that it is a *very* sophisticated and dynamic mathematical
 structure
 IMO.

 John



 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
 has to involve more ideological relationships than mathematical.  For
 instance you can apply an idea to your own thinking in such a way that you
 are capable of (gradually) changing how you think about something.  This
 means that an idea can be a compression of some greater change in your own
 programming.  While the idea in this example would be associated with a
 fairly strong notion of meaning

Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve, analyze
 what it takes to solve them, and then look for and design a solution.
 Starting with the solution and trying to hack the problem to fit it is not
 going to work for AGI, in my opinion. I could be wrong, but I would need
 some evidence to think otherwise.



I agree that disassociated theories have not proved to be very successful at
AGI, but then again what has?

I would use a mathematical method that gave me the number or percentage of
True cases that satisfy a propositional formula as a way to check the
internal logic of different combinations of logic-based conjectures.  Since
methods that can do this for any logical system with (a little) more than 32
variables are feasible, the potential of this method should be easy to check
(although it would hit a rather low ceiling of scalability).  So I do think
that logic and other mathematical methods would help in true AGI programs.
However, the other major problem, as I see it, is one of application.  And
strangely enough, this application problem is so pervasive that it means you
cannot even develop artificial opinions!  You can program the computer to
jump on things that you expect it to see, and you can program it to create
theories about random combinations of objects, but how could you have a true
opinion without child-level judgement?
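
A minimal sketch of the counting method described above, done by brute force
over all assignments (workable up to roughly 20-30 variables, which matches
the low scalability ceiling mentioned); the example formula is made up:

from itertools import product

def count_true_cases(formula, n_vars):
    # Return the number and percentage of assignments that satisfy `formula`,
    # where `formula` takes a tuple of booleans, one per variable.
    total = 2 ** n_vars
    true_cases = sum(1 for bits in product((False, True), repeat=n_vars)
                     if formula(bits))
    return true_cases, 100.0 * true_cases / total

# Example conjecture: (a and b) or (not a and c)
count, pct = count_true_cases(lambda v: (v[0] and v[1]) or (not v[0] and v[2]), 3)
print(count, "of 8 assignments are True:", pct, "percent")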

This may sound like frivolous philosophy but I think it really shows that
the starting point isn't totally beyond us.

Jim Bromer


On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
 It's kind of funny how people overemphasize what they know or depend on
 their current expertise too much when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve, analyze
 what it takes to solve them, and then look for and design a solution.
 Starting with the solution and trying to hack the problem to fit it is not
 going to work for AGI, in my opinion. I could be wrong, but I would need
 some evidence to think otherwise.

 Dave

   On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.comwrote:

   You probably could show that a sophisticated mathematical structure
 would produce a scalable AGI program if it is true, using contemporary
 mathematical models to simulate it.  However, if scalability was
 completely dependent on some as yet undiscovered mathemagical principle,
 then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems
 with contemporary AGI.  So I believe this could be demonstrated on a
 simulation.  That means, that I could demonstrate effective AGI that works
 so long as the SAT problems are easily solved.  If the program reported that
 a complicated logical problem could not be solved, the user could provide
 his insight into the problem at those times to help with the problem.  This
 would not work exactly as hoped, but by working from there, I believe that I
 would be able to determine better ways to develop such a program so it would
 work better - if my conjecture about the potential efficacy of polynomial
 time SAT for AGI was true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.comwrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for
 the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't think
 that this subsystem would take up millions of lines of code either.
 It's
 just that it is a *very* sophisticated and dynamic mathematical
 structure
 IMO.

 John



 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
 has to involve more ideological relationships than mathematical.  For
 instance you can apply an idea to your own thinking

Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Jim Bromer
I don't feel that a non-programmer can actually define what true AGI
criteria would be.  The problem is not just oriented around a consumer
definition of a goal, because it involves a fundamental comprehension of the
tools available to achieve that goal.  I appreciate your idea that AGI has
to be diversifiable but your inability to understand certain things that are
said about computer programming makes your proclamation look odd.
Jim Bromer

On Wed, Aug 11, 2010 at 2:26 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  Isn't it time that people started adopting true AGI criteria?

 The universal endlessly repeated criterion here that a system must be
 capable of being scaled up is a narrow AI criterion.

 The proper criterion is diversifiable. If your system can say navigate a
 DARPA car through a grid of city streets, it's AGI if it's diversifiable -
 or rather can diversify itself - if it can then navigate its way through a
 forest, or a strange maze - without being programmed anew. A system is AGI
 if it can diversify from one kind of task/activity to another different kind
 - as humans and animals do - without being additionally programmed.  Scale
 is irrelevant and deflects attention from the real problem.






Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Jim Bromer
I think I may understand where the miscommunication occurred.  When we talk
about scaling up an AGI program we are - of course - referring to improving
on an AGI program that can work effectively with a very limited amount of
referential knowledge so that it would be able to handle a much greater
diversification of referential knowledge.  You might say that is what
scalability means.
Jim Bromer

On Wed, Aug 11, 2010 at 2:43 PM, Jim Bromer jimbro...@gmail.com wrote:

 I don't feel that a non-programmer can actually define what true AGI
 criteria would be.  The problem is not just oriented around a consumer
 definition of a goal, because it involves a fundamental comprehension of the
 tools available to achieve that goal.  I appreciate your idea that AGI has
 to be diversifiable but your inability to understand certain things that are
 said about computer programming makes your proclamation look odd.
 Jim Bromer

 On Wed, Aug 11, 2010 at 2:26 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  Isn't it time that people started adopting true AGI criteria?

 The universal endlessly repeated criterion here that a system must be
 capable of being scaled up is a narrow AI criterion.

 The proper criterion is diversifiable. If your system can say navigate a
 DARPA car through a grid of city streets, it's AGI if it's diversifiable -
 or rather can diversify itself - if it can then navigate its way through a
 forest, or a strange maze - without being programmed anew. A system is AGI
 if it can diversify from one kind of task/activity to another different kind
 - as humans and animals do - without being additionally programmed.  Scale
 is irrelevant and deflects attention from the real problem.








Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
I've made two ultra-brilliant statements in the past few days.  One is that
a concept can simultaneously be both precise and vague.  And the other is
that without judgement even opinions are impossible.  (Ok, those two
statements may not be ultra-brilliant but they are brilliant right?  Ok,
maybe not truly brilliant,  but highly insightful and
perspicuously intelligent... Or at least interesting to the cognoscenti
maybe?.. Well, they were interesting to me at least.)

Ok, these two interesting-to-me comments made by me are interesting because
they suggest that we do not know how to program a computer even to create
opinions.  Or if we do, there is a big untapped difference between those
programs that show nascent judgement (perhaps only at levels relative to the
domain of their capabilities) and those that don't.

This is the AGI programmer's utopia (or at least my utopia), because I need
to find something that is simple enough for me to start with and which can
lend itself to developing and testing theories of AGI judgement and
scalability.  By allowing an AGI program to participate more in the
selection of its own primitive 'interests' we will be able to interact with
it, both as programmer and as user, to guide it toward selecting those
interests which we can understand and which seem interesting to us.  By
creating an AGI program that has a faculty for primitive judgement (as we
might envision such an ability), and then testing its capabilities in areas
where the program seems to work more effectively, we might be better able to
develop more powerful AGI theories that show greater scalability, so long as
we are able to understand what interests the program is pursuing.

Jim Bromer

On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.comwrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.



 I agree that disassociated theories have not proved to be very successful
 at AGI, but then again what has?

 I would use a mathematical method that gave me the number or percentage of
 True cases that satisfy a propositional formula as a way to check the
 internal logic of different combinations of logic-based conjectures.  Since
 methods that can do this with logical variables for any logical system that
 goes (a little) past 32 variables are feasible the potential of this method
 should be easy to check (although it would hit a rather low ceiling of
 scalability).  So I do think that logic and other mathematical methods would
 help in true AGI programs.  However, the other major problem, as I see it,
 is one of application. And strangely enough, this application problem is so
 pervasive, that it means that you cannot even develop artificial opinions!
 You can program the computer to jump on things that you expect it to see,
 and you can program it to create theories about random combinations of
 objects, but how could you have a true opinion without child-level
 judgement?

 This may sound like frivolous philosophy but I think it really shows that
 the starting point isn't totally beyond us.

 Jim Bromer


  On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.comwrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
 It's kind of funny how people overemphasize what they know or depend on
 their current expertise too much when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.

 Dave

   On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.comwrote:

   You probably could show that a sophisticated mathematical structure
 would produce a scalable AGI program if it is true, using contemporary
 mathematical models to simulate it.  However, if scalability

Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
I guess what I was saying was that I can test my mathematical theory and my
theories about primitive judgement both at the same time by trying to find
those areas where the program seems to be good at something.  For example, I
found that it was easy to write a program that found outlines where there
was some contrast between a solid object and whatever was in the background
or whatever was in the foreground.  Now I, as an artist, could use that to
create interesting abstractions.  However, that does not mean that an AGI
program that was supposed to learn and acquire greater judgement based on my
ideas for a primitive judgement would be able to do that.  Instead, I would
let it do what it seemed good at, so long as I was able to appreciate what
it was doing.  Since this would lead to something - a next step at least - I
could use this to test my theory that a good, more general SAT solution would
be useful as well.
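
A bare-bones sketch of the kind of outline-finding program mentioned above,
marking cells whose contrast with a neighbor exceeds a threshold; the grid
and threshold are made up for illustration:

def find_outline(grid, threshold=0.5):
    # Mark cells whose brightness differs from the right or lower neighbor
    # by more than `threshold`: a crude contrast-based outline.
    h, w = len(grid), len(grid[0])
    outline = set()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(grid[y][x] - grid[ny][nx]) > threshold:
                    outline.add((x, y))
    return outline

# A solid bright object on a dark background.
grid = [[0.1, 0.1, 0.1, 0.1],
        [0.1, 0.9, 0.9, 0.1],
        [0.1, 0.9, 0.9, 0.1],
        [0.1, 0.1, 0.1, 0.1]]
print(sorted(find_outline(grid)))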
Jim Bromer

On Wed, Aug 11, 2010 at 3:57 PM, David Jones davidher...@gmail.com wrote:

 Slightly off the topic of your last email. But, all this discussion has
 made me realize how to phrase something... That is, solving AGI requires
 understanding the constraints that problems impose on a solution. So, it's
 sort of an unbelievably complex constraint satisfaction problem. What we've been
 talking about is how we come up with solutions to these problems when we
 sometimes aren't actually trying to solve any of the real problems. As I've
 been trying to articulate lately is that in order to satisfy the constraints
 of the problems AGI imposes, we must really understand the problems we want
 to solve and how they can be solved(their constraints). I think that most of
 us do not do this because the problem is so complex, that we refuse to
 attempt to understand all of its constraints. Instead we focus on something
 very small and manageable with fewer constraints. But, that's what creates
 narrow AI, because the constraints you have developed the solution for only
 apply to a narrow set of problems. Once you try to apply it to a different
 problem that imposes new, incompatible constraints, the solution fails.

 So, lately I've been pushing for people to truly analyze the problems
 involved in AGI, step by step to understand what the constraints are. I
 think this is the only way we will develop a solution that is guaranteed to
 work without wasting undue time in trial and error. I don't think trial and
 error approaches will work. We must know what the constraints are, instead
 of guessing at what solutions might approximate the constraints. I think the
 problem space is too large to guess.

 Of course, I think acquisition of knowledge through automated means is the
 first step in understanding these constraints. But, unfortunately, few agree
 with me.

 Dave

 On Wed, Aug 11, 2010 at 3:44 PM, Jim Bromer jimbro...@gmail.com wrote:

 I've made two ultra-brilliant statements in the past few days.  One is
 that a concept can simultaneously be both precise and vague.  And the other
 is that without judgement even opinions are impossible.  (Ok, those two
 statements may not be ultra-brilliant but they are brilliant right?  Ok,
 maybe not truly brilliant,  but highly insightful and
 perspicuously intelligent... Or at least interesting to the cognoscenti
 maybe?.. Well, they were interesting to me at least.)

 Ok, these two interesting-to-me comments made by me are interesting
 because they suggest that we do not know how to program a computer even to
 create opinions.  Or if we do, there is a big untapped difference between
 those programs that show nascent judgement (perhaps only at levels relative
 to the domain of their capabilities) and those that don't.

 This is AGI programmer's utopia.  (Or at least my utopia).  Because I need
 to find something that is simple enough for me to start with and which can
 lend itself to develop and test theories of AGI judgement and scalability.
 By allowing an AGI program to participate more in the selection of its own
 primitive 'interests' we will be able to interact with it, both as
 programmer and as user, to guide it toward selecting those interests which
 we can understand and seem interesting to us.  By creating an AGI program
 that has a faculty for primitive judgement (as we might envision such an
 ability), and then testing the capabilities in areas where the program seems
 to work more effectively, we might be better able to develop more
 powerful AGI theories that show greater scalability, so long as we are able
 to understand what interests the program is pursuing.

 Jim Bromer

 On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.comwrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Jim Bromer
The mind cannot determine whether or not -every- instance of a kind
of object is that kind of object.  I believe that the problem must be a
problem of complexity and it is just that the mind is much better at dealing
with complicated systems of possibilities than any computer program.  A
young child first learns that certain objects are called chairs, and that
the furniture objects that he sits on are mostly chairs.  In a few cases,
after seeing an odd object that is used as a chair for the first time (like
seeing an odd outdoor chair that is fashioned from twisted pieces of wood)
he might not know that it is a chair, or upon reflection wonder if it is or
not.  And think of odd furniture that appears and comes into fashion for a
while and then disappears (like the bean bag chair).  The question for me is
not what the smallest pieces of visual information necessary to represent
the range and diversity of kinds of objects are, but how these diverse
examples would be woven into highly compressed and heavily cross-indexed
pieces of knowledge that could be accessed quickly and reliably, especially
for the most common examples that the person is familiar with.
Jim Bromer

On Mon, Aug 9, 2010 at 2:16 AM, John G. Rose johnr...@polyplexic.comwrote:

  Actually this is quite critical.



 Defining a chair - which would agree with each instance of a chair in the
 supplied image - is the way a chair should be defined and is the way the
 mind processes it.



 John






Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Jim Bromer
Mike Tintner tint...@blueyonder.co.uk wrote:

  How do you reckon that will work for an infant or anyone who has only
 seen an example or two of the concept class-of-forms?


I do not reckon that it will work for an infant or anyone (or anything) who
(or that) has only seen an example or two of the concept class-of-forms.  I
haven't looked at your photos, but I did indicate that learning has to be
able to advance with new kinds of objects of a kind.  My previous comment
specifically dealt with the problem of learning to recognize radically
different instances of the kind.

There was once a time when it was thought that domain-specific AI, using
general methods of reasoning would be more feasible than general AI.  This
optimism was not borne out by experiment.  The question is why not?  I
believe that domain specific AI needs to rely on so much general knowledge
(AGI) as a base, that until a certain level of success in AGI is achieved,
narrower domain specific AI will be limited to calculation-based reasoning
and the like (as in closed taxonomic AI or simple neural networks).

A similar situation occurred in space travel.  At the dawn of the space age
some people intuitively thought that traveling to the moon would be 2000
times more difficult than sending a space vehicle up 100 miles (since it
was 2000 times further away), so if it took 10 years to get to the point
where they could get a space capsule up 100 miles, it would take something
like 20,000 years to reach the moon.  It didn't work that way because, as
the leading experts realized, getting away from earth's gravity results in
a significant and geometric decrease in the force needed to continue.
Because this fact was not intuitive to the naive critic it wasn't completely
grasped by many people until the first space vehicle escaped earth orbit a
few years after the first space shots.

I think a similar situation probably is at the center of the feasibility of
basic AGI.  As more and more examples are learned, the complications in
storing and accessing that information in a wise and intelligent manner
become more and more elusive.  But, for example, if domain specific
information is dependent on a certain level of general knowledge, then you
won't see domain specific AI really take off until that level of AGI becomes
feasible.  Why would this relationship occur?  Because each time you double
*all* knowledge (as is implied by a doubling of general knowledge) you have
a progressively more complicated load on the computer.  So to double that
general knowledge twice, you would have to create an AGI program that was
capable of dealing with four times as much complexity.  To double that
general knowledge again, you would have to create an AGI program that would
have to deal with 8 times the complexity as your first prototype.  Once you
get your AGI program to work at a certain level of complexity, then your
domain-specific AI program might start to take off and you would see the
kind of dazzling results which would make the critics more wary of
expressing their skepticism.

Jim Bromer

On Mon, Aug 9, 2010 at 8:13 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  How do you reckon that will work for an infant or anyone who has only
 seen an example or two of the concept class-of-forms?

 (You're effectively misreading the set of fotos -  altho. this needs making
 clear - a major point of the set is:  how will any concept/schema of chair,
 derived from any set of particular kinds of chairs, cope with a radically
 new kind of chair?  Just saying - well let's analyse the chairs we have -
 is not an answer. You can take it for granted that the new chair will have
 some feature[s]/form that constitutes a radical departure from existing
 ones. (as is amply illustrated by my set of fotos). And yet your - an AGI -
 mind can normally adapt and recognize the new object as a chair. ).

 *From:* Jim Bromer jimbro...@gmail.com
  *Sent:* Monday, August 09, 2010 12:50 PM
  *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How To Create General AI Draft2

   The mind cannot determine whether or not -every- instance of a kind
 of object is that kind of object.  I believe that the problem must be a
 problem of complexity and it is just that the mind is much better at dealing
 with complicated systems of possibilities than any computer program.  A
 young child first learns that certain objects are called chairs, and that
 the furniture objects that he sits on are mostly chairs.  In a few cases,
 after seeing an odd object that is used as a chair for the first time (like
 seeing an odd outdoor chair that is fashioned from twisted pieces of wood)
 he might not know that it is a chair, or upon reflection wonder if it is or
 not.  And think of odd furniture that appears and comes into fashion for a
 while and then disappears (like the bean bag chair).  The question for me is
 not what the smallest pieces of visual information necessary to represent
 the range and diversity of kinds

[agi] Compressed Cross-Indexed Concepts

2010-08-09 Thread Jim Bromer
On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.comwrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't think
 that this subsystem would take up millions of lines of code either. It's
 just that it is a *very* sophisticated and dynamic mathematical structure
 IMO.

 John



Well, if it were a mathematical structure then we could start developing
prototypes using familiar mathematical structures.  I think the structure
has to involve more ideological relationships than mathematical ones.  For
instance, you can apply an idea to your own thinking in such a way that you
are capable of (gradually) changing how you think about something.  This
means that an idea can be a compression of some greater change in your own
programming.  While the idea in this example would be associated with a
fairly strong notion of meaning, since you cannot accurately understand the
full consequences of the change it would be somewhat vague at first.  (It
could be a very precise idea capable of having a strong effect, but the
details of those effects would not be known until the change had
progressed.)

I think the more important question is how a general concept can be
interpreted across a range of different kinds of ideas.  Actually this is
not so difficult, but what I am getting at is: how are sophisticated
conceptual interrelations integrated and resolved?
Jim





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Jim Bromer

 Jim: So, did Solomonoff's original idea involve randomizing whether the
 next bit would be a 1 or a 0 in the program?

Abram: Yep.
I meant: did Solomonoff's original idea involve randomizing whether the next
bit would be a 1 or a 0 in the programs that are originally used to produce
the *prior probabilities*?  I have not been able to find any evidence that
it did.
I thought that my question was clear but on second thought I guess it
wasn't. I think that the part about the coin flips was only a method to
express that he was interested in the probability that a particular string
would be produced from all possible programs, so that when actually testing
the prior probability of a particular string the program that was to be run
would have to be randomly generated.
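
For reference, the prior under discussion is usually written (in the general
literature on the subject, not in this thread) as the total weight of all
programs p whose output begins with the string x on a universal prefix
machine U:

    M(x) = \sum_{p : U(p) = x*} 2^{-|p|}

Because the machine is prefix-free, this is exactly the probability of
hitting such a program by flipping a fair coin for each successive program
bit, which is where the coin-flip description comes from.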
Jim Bromer




On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

  Your function may be convergent but it is not a probability.


 True! All the possibilities sum to less than 1. There are ways of
 addressing this (ie, multiply by a normalizing constant which must also be
 approximated in a convergent manner), but for the most part adherents of
 Solomonoff induction don't worry about this too much. What we care about,
 mostly, is comparing different hypotheses to decide which to favor. The
 normalizing constant doesn't help us here, so it usually isn't mentioned.


 You said that Solomonoff's original construction involved flipping a coin
 for the next bit.  What good does that do?


 Your intuition is that running totally random programs to get predictions
 will just produce garbage, and that is fine. The idea of Solomonoff
 induction, though, is that it will produce systematically less garbage than
 just flipping coins to get predictions. Most of the garbage programs will be
 knocked out of the running by the data itself. This is supposed to be the
 least garbage we can manage without domain-specific knowledge

 This is backed up with the proof of dominance, which we haven't talked
 about yet, but which is really the key argument for the optimality of
 Solomonoff induction.


 And how does that prove that his original idea was convergent?


 The proofs of equivalence between all the different formulations of
 Solomonoff induction are something I haven't cared to look into too deeply.

 Since his idea is incomputable, there are no algorithms that can be run to
 demonstrate what he was talking about so the basic idea is papered with all
 sorts of unverifiable approximations.


 I gave you a proof of convergence for one such approximation, and if you
 wish I can modify it to include a normalizing constant to ensure that it is
 a probability measure. It would be helpful to me if your criticisms were
 more specific to that proof.

 So, did Solomonoff's original idea involve randomizing whether the next bit
 would be a 1 or a 0 in the program?


 Yep.

 Even ignoring the halting problem what kind of result would that give?


 Well, the general idea is this. An even distribution intuitively represents
 lack of knowledge. An even distribution over possible data fails horribly,
 however, predicting white noise. We want to represent the idea that we are
 very ignorant of what the data might be, but not *that* ignorant. To capture
 the idea of regularity, ie, similarity between past and future, we instead
 take an even distribution over *descriptions* of the data. (The distribution
 in the 2nd version of solomonoff induction that I gave, the one in which the
 space of possible programs is represented as a continuum, is an even
 distribution.) This appears to provide a good amount of regularity without
 too much.

 --Abram

 On Wed, Aug 4, 2010 at 8:10 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 Thanks for the explanation.  I still don't get it.  Your function may be
 convergent but it is not a probability.  You said that Solomonoff's original
 construction involved flipping a coin for the next bit.  What good does that
 do?  And how does that prove that his original idea was convergent?  The
 thing that is wrong with these explanations is that they are not coherent.
 Since his idea is incomputable, there are no algorithms that can be run to
 demonstrate what he was talking about so the basic idea is papered with all
 sorts of unverifiable approximations.

 So, did Solomonoff's original idea involve randomizing whether the next
 bit would be a 1 or a 0 in the program?  Even ignoring the halting
 problem what kind of result would that give?  Have you ever solved the
 problem for some strings and have you ever tested the solutions using a
 simulation?

 Jim Bromer

 On Mon, Aug 2, 2010 at 5:12 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 Interestingly, the formalization of Solomonoff induction I'm most
 familiar with uses a construction that relates the space of programs with
 the real numbers just as you say. This formulation may be due to Solomonoff

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Jim Bromer
I meant:
Did Solomonoff's original idea use randomization to determine the bits of
the programs that are used to produce the *prior probabilities*?  I think
that the answer to that is obviously no.  The randomization of the next bit
would be used in testing the prior probabilities, as is done with random
sampling.  He probably found that students who had some familiarity with
statistics would initially assume that the prior probability was based on
some subset of possible programs, as would be expected from a typical
sample, so he gave this statistics-style definition to emphasize the extent
of what he had in mind.

I asked this question just to make sure that I understood what Solomonoff
Induction was, because Abram had made some statement indicating that I
really didn't know.  Remember, this particular branch of the discussion was
originally centered around the question of whether Solomonoff
Induction would be convergent, even given a way around the incomputability
of finding only those programs that halted.  So while the random testing of
the prior probabilities is of interest to me, I wanted to make sure that
there is no evidence that Solomonoff Induction is convergent. I am not being
petty about this, but I also needed to make sure that I understood what
Solomonoff Induction is.

I am interested in hearing your ideas about your variation of
Solomonoff Induction because your convergent series, in this context, was
interesting.
Jim Bromer

On Fri, Aug 6, 2010 at 6:50 AM, Jim Bromer jimbro...@gmail.com wrote:

 Jim: So, did Solomonoff's original idea involve randomizing whether the
 next bit would be a 1 or a 0 in the program?

 Abram: Yep.
 I meant, did Solomonoff's original idea involve randomizing whether the
 next bit in the program's that are originally used to produce the *prior
 probabilities* involve the use of randomizing whether the next bit would
 be a 1 or a 0?  I have not been able to find any evidence that it was.
 I thought that my question was clear but on second thought I guess it
 wasn't. I think that the part about the coin flips was only a method to
 express that he was interested in the probability that a particular string
 would be produced from all possible programs, so that when actually testing
 the prior probability of a particular string the program that was to be run
 would have to be randomly generated.
 Jim Bromer




 On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

  Your function may be convergent but it is not a probability.


 True! All the possibilities sum to less than 1. There are ways of
 addressing this (ie, multiply by a normalizing constant which must also be
 approximated in a convergent manner), but for the most part adherents of
 Solomonoff induction don't worry about this too much. What we care about,
 mostly, is comparing different hypotheses to decide which to favor. The
 normalizing constant doesn't help us here, so it usually isn't mentioned.


 You said that Solomonoff's original construction involved flipping a coin
 for the next bit.  What good does that do?


 Your intuition is that running totally random programs to get predictions
 will just produce garbage, and that is fine. The idea of Solomonoff
 induction, though, is that it will produce systematically less garbage than
 just flipping coins to get predictions. Most of the garbage programs will be
 knocked out of the running by the data itself. This is supposed to be the
 least garbage we can manage without domain-specific knowledge

 This is backed up with the proof of dominance, which we haven't talked
 about yet, but which is really the key argument for the optimality of
 Solomonoff induction.


 And how does that prove that his original idea was convergent?


 The proofs of equivalence between all the different formulations of
 Solomonoff induction are something I haven't cared to look into too deeply.

 Since his idea is incomputable, there are no algorithms that can be run to
 demonstrate what he was talking about so the basic idea is papered with all
 sorts of unverifiable approximations.


 I gave you a proof of convergence for one such approximation, and if you
 wish I can modify it to include a normalizing constant to ensure that it is
 a probability measure. It would be helpful to me if your criticisms were
 more specific to that proof.

 So, did Solomonoff's original idea involve randomizing whether the next
 bit would be a 1 or a 0 in the program?


 Yep.

 Even ignoring the halting problem what kind of result would that give?


 Well, the general idea is this. An even distribution intuitively
 represents lack of knowledge. An even distribution over possible data fails
 horribly, however, predicting white noise. We want to represent the idea
 that we are very ignorant of what the data might be, but not *that*
 ignorant. To capture the idea of regularity, ie, similarity between past and
 future, we instead take an even distribution over *descriptions

Re: [agi] Computer Vision not as hard as I thought!

2010-08-06 Thread Jim Bromer
On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:
*So, why computer vision? Why can't we just enter knowledge manually?

*a) The knowledge we require for AI to do what we want is vast and complex
and we can prove that it is completely ineffective to enter the knowledge we
need manually.
b) Computer vision is the most effective means of gathering facts about the
world. Knowledge and experience can be gained from analysis of these facts.
c) Language is not learned through passive observation. The associations
that words have to the environment and our common sense knowledge of the
environment/world are absolutely essential to language learning,
understanding and disambiguation. When visual information is available,
children use visual cues from their parents and from the objects they are
interacting with to figure out word-environment associations. If visual info
is not available, touch is essential to replace the visual cues. Touch can
provide much of the same info as vision, but it is not as effective because
not everything is in reach and it provides less information than vision.
There is some very good documentation out there on how children learn
language that supports this. One example is How Children Learn Language by
William O'Grady.
d) The real world cannot be predicted blindly. It is absolutely essential to
be able to directly observe it and receive feedback.
e) Manual entry of knowledge, even if possible, would be extremely slow and
would be a very serious bottleneck(it already is). This is a major reason we
want AI... to increase our man power and remove man-power related
bottlenecks.


Discovering a way to get a computer program to interpret a human language is
a difficult problem.  The feeling that an AI program might be able to attain
a higher level of intelligence if only it could examine data from a variety
of different kinds of sensory input modalities is not new.  It has been
tried and tried during the past 35 years.  But there is no experimental data
(that I have heard of) that suggests that this method is the only way anyone
will achieve intelligence.



I have tried to explain that I believe the problem is twofold.  First of
all, there have been quite a few AI programs that worked really well as long
as the problem was simple enough.  This suggests that the complexity of what
is trying to be understood is a critical factor.  This in turn suggests that
using different input modalities would not, in itself, make AI
possible.  Secondly,
there is a problem of getting the computer to accurately model that which it
can know in such a way that it could be effectively utilized for higher
degrees of complexity.  I consider this to be a conceptual integration
problem.  We do not know how to integrate different kinds of ideas (or
idea-like knowledge) in an effective manner, and as a result we have not
seen the gradual advancement in AI programming that we would expect to see
given all the advances in computer technology that have been occurring.



Both visual analysis and linguistic analysis are significant challenges in
AI programming.  The idea that combining both of them would make the problem
1/2 as hard may not be any crazier than saying that it would make the
problem 2 times as hard, but without experimental evidence it isn't any
saner either.

Jim Bromer




On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:

 :D Thanks Jim for paying attention!

 One very cool thing about the human brain is that we use multiple feedback
 mechanisms to correct for such problems as observer movement. For example,
  the inner ear senses your body's movement and provides feedback for visual
 processing. This is why we get nauseous when the ear disagrees with the eyes
 and other senses. As you said, eye muscles also provide feedback about how
 the eye itself has moved. In example papers I have read, such as Object
 Discovery through Motion, Appearance and Shape, the researchers know the
 position of the camera (I'm not sure how) and use that to determine which
  moving features are closest to the camera's movement, and therefore are not
 actually moving. Once you know how much the camera moved, you can try to
 subtract this from apparent motion.
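
  For concreteness, a toy sketch of that subtraction step, assuming the
  camera-induced displacement at each feature is already known (which is the
  hard part in practice); the function name and the pixel threshold are
  illustrative assumptions, not taken from the papers mentioned:

  import numpy as np

  def object_motion(observed_disp, camera_disp, pixel_threshold=1.0):
      # Subtract the displacement the camera alone would induce at each
      # feature from the observed displacement; what is left is attributed
      # to the scene itself.  pixel_threshold is an arbitrary example value.
      residual = np.asarray(observed_disp, dtype=float) - np.asarray(camera_disp, dtype=float)
      still_moving = np.linalg.norm(residual, axis=1) > pixel_threshold
      return residual, still_moving

  # Example: three tracked features, camera panned right by about 5 px.
  obs = [[5.1, 0.0], [4.9, 0.1], [12.0, 3.0]]
  cam = [[5.0, 0.0], [5.0, 0.0], [5.0, 0.0]]
  print(object_motion(obs, cam)[1])   # only the third feature is flagged as moving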

 You're right that I should attempt to implement the system. I think I will
 in fact, but it is difficult because I have limited time and resources. My
 main goal is to make sure it is accomplished, even if not by me. So,
 sometimes I think that it is better to prove that it can be done than to
  spend a much longer amount of time to actually do it myself. I am
 struggling to figure out how I can gather the resources or support to
 accomplish the monstrous task. I think that I should work on the theoretical
 basis in addition to the actual implementation. This is likely important to
 make sure that my design is well grounded and reflects reality

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread Jim Bromer
On Tue, Aug 3, 2010 at 11:52 AM, David Jones davidher...@gmail.com wrote:
I've suddenly realized that computer vision of real images is very much
solvable and that it is now just a matter of engineering...
I've also realized that I don't actually have to implement it, which is what
is most difficult because even if you know a solution to part of the problem
has certain properties and issues, implementing it takes a lot of time.
Whereas I can just assume I have a less than perfect solution with the
properties I predict from other experiments. Then I can solve the problem
without actually implementing every last detail...
*First*, existing methods find observations that are likely true by
themselves. They find data patterns that are very unlikely to occur by
coincidence, such as many features moving together over several frames of a
video and over a statistically significant distance. They use thresholds to
ensure that the observed changes are likely transformations of the original
property observed or to ensure the statistical significance of an
observation. These are highly likely true observations and not coincidences
or noise.
--
Just looking at these statements, I can find three significant errors. (I do
agree with some of what you said, like the significance of finding
observations that are likely true in themselves.)  When the camera moves (in
a rotation or pan) many features will appear 'to move together over a
statistically significant distance'.  One might suppose that the animal can
sense the movement of his own eyes but then again, he can rotate his head
and use his vision to gauge the rotation of his head.  So right away there
is some kind of serious error in your statement.  It might be resolvable; it
is just that your statement does not really do the resolution.  I do believe
that computer vision is possible with contemporary computers but you are
also saying that while you can't get your algorithms to work the way you had
hoped, it doesn't really matter because you can figure it all out without
the work of implementation.  My point of view is that these represent major
errors in reasoning.
I hope to get back to actual visual processing experiments again.  Although
I don't think that computer vision is necessary for AGI, I do think that
there is so much to be learned from experimenting with computer vision that
it is a serious mistake not to take advantage of the opportunity.
Jim Bromer


On Tue, Aug 3, 2010 at 11:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images to
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I thought
 is that I found a way to describe why existing solutions work, how they work
 and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 coincidence, such as many features moving together over several frames of a
 video and over a statistically significant distance. They use thresholds to
 ensure that the observed changes are likely transformations of the original
 property observed or to ensure the statistical significance of an
 observation. These are highly likely true observations and not coincidences
 or noise.

 *Second*, they make sure that the other possible explanations of the
 observations are very unlikely. This is usually done using a threshold, and
 a second difference threshold from the first match to the second match. This
 makes sure that second best matches are much farther away than the best
 match. This is important because it's not enough to find a very likely match
 if there are 1000 very likely matches. You have to be able to show that the
 other matches are very unlikely, otherwise the specific match you pick may
 be just a tiny bit better than the others, and the confidence of that match
 would be very low.
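
  To make the second threshold concrete, here is a minimal sketch of the kind
  of nearest-neighbour ratio test being described; the function name, the 0.7
  ratio, and the use of Euclidean distance are illustrative assumptions, not
  taken from David's design:

  import numpy as np

  def confident_matches(desc_a, desc_b, ratio=0.7):
      # For each descriptor in desc_a, accept its nearest neighbour in desc_b
      # only if the best distance is much smaller than the second-best
      # distance; otherwise the match is ambiguous and is dropped.
      matches = []
      if len(desc_b) < 2:
          return matches
      for i, d in enumerate(desc_a):
          dists = np.linalg.norm(np.asarray(desc_b) - d, axis=1)  # distance to every candidate
          best, second = np.partition(dists, 1)[:2]               # two smallest distances
          if best < ratio * second:                               # clearly better than the runner-up
              matches.append((i, int(np.argmin(dists))))
      return matches

  # Example with random descriptors: most matches are rejected as ambiguous.
  rng = np.random.default_rng(0)
  print(confident_matches(rng.random((5, 32)), rng.random((50, 32))))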


 So, my initial design plans are as follows. Note: I will probably not
 actually implement the system because the engineering part dominates the
 time. I'd rather convert real

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-04 Thread Jim Bromer
Abram,
Thanks for the explanation.  I still don't get it.  Your function may be
convergent but it is not a probability.  You said that Solomonoff's original
construction involved flipping a coin for the next bit.  What good does that
do?  And how does that prove that his original idea was convergent?  The
thing that is wrong with these explanations is that they are not coherent.
Since his idea is incomputable, there are no algorithms that can be run to
demonstrate what he was talking about so the basic idea is papered with all
sorts of unverifiable approximations.

So, did Solomonoff's original idea involve randomizing whether the next bit
would be a 1 or a 0 in the program?  Even ignoring the halting problem what
kind of result would that give?  Have you ever solved the problem for
some strings and have you ever tested the solutions using a simulation?

Jim Bromer

On Mon, Aug 2, 2010 at 5:12 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Interestingly, the formalization of Solomonoff induction I'm most familiar
 with uses a construction that relates the space of programs with the real
 numbers just as you say. This formulation may be due to Solomonoff, or
 perhaps Hutter... not sure. I re-formulated it to gloss over that in order
 to make it simpler; I'm pretty sure the version I gave is equivalent in the
 relevant aspects. However, some notes on the original construction.

 Programs are created by flipping coins to come up with the 1s and 0s. We
 are to think of it like this: whenever the computer reaches the end of the
 program and tries to continue on, we flip a coin to decide what the next bit
 of the program will be. We keep doing this for as long as the computer wants
 more bits of instruction.

 This framework makes room for infinitely long programs, but makes them
 infinitely improbable-- formally, they have probability 0. (We could alter
 the setup to allow them an infinitesimal probability.) Intuitively, this
 means that if we keep flipping a coin to tell the computer what to do,
 eventually we will either create an infinite loop-back (so the computer
 keeps executing the already-written parts of the program and never asks for
 more) or write out the HALT command. Avoiding doing one or the other
 forever is just too improbable.
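
  To see the construction in miniature, here is a Monte Carlo sketch in which
  program bits really are drawn by coin flips only when the machine asks for
  another instruction. The 2-bit toy instruction set is invented purely for
  illustration (it is not a universal machine, so the numbers carry no
  universal meaning), but it shows why a program that consumes n coin flips
  carries weight 2^-n:

  import random

  def run_random_program(target, max_steps=50, rng=random):
      # Toy 2-bit opcodes (illustrative only): 00 write '0', 01 write '1',
      # 10 halt, 11 clear the output and keep going.
      out, bits_used = [], 0
      for _ in range(max_steps):
          op = (rng.randint(0, 1), rng.randint(0, 1))   # two fresh coin flips
          bits_used += 2
          if op == (0, 0):
              out.append("0")
          elif op == (0, 1):
              out.append("1")
          elif op == (1, 0):
              return True, "".join(out), bits_used      # halted
          else:
              out = []
      return False, "".join(out), bits_used             # cut off by the step budget

  def estimate_prior(target, trials=100_000):
      # Fraction of coin-flip runs that halt with exactly `target` written out.
      hits = sum(1 for _ in range(trials)
                 if run_random_program(target)[:2] == (True, target))
      return hits / trials

  for s in ["0", "00", "0000"]:
      print(s, estimate_prior(s))   # longer strings need longer programs, hence lower weight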

 This also means all real numbers are output by some program! It just may be
 one which is infinitely long.

 However, all the programs that slip past my time bound as T increases to
 infinity will have measure 0, meaning they don't add anything to the sum.
 This means the convergence is unaffected.

 Note: in this construction, program space is *still* a well-defined entity.

 --Abram

 On Sun, Aug 1, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,

 This is a very interesting function.  I have spent a lot of time thinking
 about it.  However, I do not believe that does, in any way, prove or
 indicate that Solomonoff Induction is convergent.  I want to discuss the
 function but I need to take more time to study some stuff and to work
 various details out.  (Although I have thought a lot about it, I am writing
 this under a sense of deadline, so it may not be well composed.)



 My argument was that Solomonoff's conjecture, which was based (as far as I
 can tell) on 'all possible programs', was fundamentally flawed because the
 idea of 'all possible programs' is not a programmable definition.  All
 possible programs is a domain, not a class of programs that can be feasibly
 defined in the form of an algorithm that could 'reach' all the programs.



 The domain of all possible programs is trans-infinite just as the domain
  of irrational numbers is.  Why do I believe this?  Because if we imagine
 that infinite algorithms are computable, then we could compute irrational
 numbers.  That is, there are programs that, given infinite resources,
 could compute irrational numbers.  We can use the binomial theorem, for
 example to compute the square root of 2.  And we can use trial and error
 methods to compute the nth root of any number.  So that proves that there
 are infinite irrational numbers that can be computed by algorithms that run
 for infinity.



 So what does this have to do with Solomonoff's conjecture of all possible
 programs?  Well, if I could prove that any individual irrational number
 could be computed (with programs that ran through infinity) then I might be
 able to prove that there are trans-infinite programs.  If I could prove
 that some trans-infinite subset of irrational numbers could be computed then
 I might be able to prove that 'all possible programs' is a trans-infinite
 class.


 Now Abram said that since his sum, based on runtime and program length, is
 convergent it can prove that Solomonoff Induction is convergent.  Even
 assuming that his convergent sum method could be fixed up a little, I
 suspect that this time-length bound is misleading.  Since a Turing
 Machine allows for erasures this means that a program could

Re: [agi] Shhh!

2010-08-04 Thread Jim Bromer
I meant I am able to construct an algorithm that is capable of reaching
every expansion of a real number given infinite resources.  However, the
algorithm is never able to write any of them completely since they are all
infinite.  So in one sense, no computation is able to write any real number,
but in the other sense, the program will, given enough time, eventually
start writing out any real number.  Since the infinite must be an ongoing
process I can say that the algorithm is capable of reaching any real number
although it will never complete any of them.
Jim Bromer





Re: [agi] Re: Shhh!

2010-08-03 Thread Jim Bromer
I meant transfinite.  Thank you for correcting me on that.  However,
your suggestion to post when I actually solve the problem is one of the most
absurd comments I have ever seen in these groups given the absolute
necessity of posting about AI-related ideas that have yet to be solved, and
especially given your own history of posting about unverifiable conjectures
as if they were absolute truths.

I thought that it would be impossible to use a single computer program to
iterate every irrational number even if given infinite resources because the
irrationals are considered to be transfinite.  However, I found a way to do
it.  What really surprised me was that I also found an abbreviated method to
do it.  (And there may also be a super-abbreviated method that would look
humorously absurd.)  However, there are four issues.  One is that the
algorithm obviously will not make much progress, two is that the algorithm
can only output an infinity of finite representations, three is that the
algorithm has no known use, and four is that the algorithm has one error.  It
cannot recognize that a binary number with a fractional part of .111 (followed
by infinite ones) is equal to 1, or that .999 (followed by infinite 9s) for a
decimal number is equal to 1.

However, the theoretical problem, while not directly related to AGI, is
interesting enough to make it worthwhile.  And I believe that there is a
chance that I might be able to use it to solve an important problem, but
that is purely speculative.
Jim Bromer
On Mon, Aug 2, 2010 at 4:12 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, you are thinking out loud. There is no such thing as
 trans-infinite. How about posting when you actually solve the problem.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Mon, August 2, 2010 9:06:53 AM
 *Subject:* [agi] Re: Shhh!

 I think I can write an abbreviated version, but there would only be a few
 people in the world who would both believe me and understand why it would
 work.

 On Mon, Aug 2, 2010 at 8:53 AM, Jim Bromer jimbro...@gmail.com wrote:

 I can write an algorithm that is capable of describing ('reaching') every
 possible irrational number - given infinite resources.  The infinite is not
 a number-like object, it is an active form of incrementation or
 concatenation.  So I can write an algorithm that can write *every* finite
 state of *every* possible number.  However, it would take another
 algorithm to 'prove' it.  Given an irrational number, this other algorithm
 could find the infinite incrementation for every digit of the given number.
 Each possible number (including the incrementation of those numbers that
 cannot be represented in truncated form) is embedded within a single
  infinite incrementation of digits that is produced by the
 algorithm, so the second algorithm would have to calculate where you would
 find each digit of the given irrational number by increment.  But the thing
 is, both functions would be computable and provable.  (I haven't actually
 figured the second algorithm out yet, but it is not a difficult problem.)

 This means that the Trans-Infinite Is Computable.  But don't tell anyone
 about this, it's a secret.









Re: [agi] Shhh!

2010-08-03 Thread Jim Bromer
On Tue, Aug 3, 2010 at 3:34 AM, deepakjnath deepakjn...@gmail.com wrote:

  for {set i 0} {$i < infinity} {incr i} {
  print $i
 }


That's the basic idea, except there are one and a half axes: positive
integers, negative integers, and fractional parts for all possible irrational
numbers.  (Well, it is two axes, but parts from the two axes can be joined
together: positive increments of integers with positive increments of
fractional parts, and then just take the negative of each value to output
negative increments with negative increments of the fractional parts.)
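
One way to read that scheme as an actual program - a rough sketch, with names
and details of my own rather than Jim's algorithm - is to enumerate every
finite truncation (sign, integer digits, fractional digits) in order of total
length, so any particular truncation is reached after finitely many steps
while no infinite expansion is ever completed:

from itertools import count, islice, product

def finite_decimal_states():
    digits = "0123456789"
    for total in count(1):                      # total number of digits used
        for int_len in range(1, total + 1):
            frac_len = total - int_len
            for int_part in product(digits, repeat=int_len):
                for frac_part in product(digits, repeat=frac_len):
                    s = "".join(int_part)
                    if frac_part:
                        s += "." + "".join(frac_part)
                    yield "+" + s
                    yield "-" + s

print(list(islice(finite_decimal_states(), 6)))   # ['+0', '-0', '+1', '-1', '+2', '-2']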

So it's not rocket science but it was a little more difficult than counting
to infinity (which is of course itself impossible!).
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
I guess the trans-infinite is computable, given infinite resources.  It
doesn't make sense to me except that the infinite does not exist as a
number-like object; it is an active process of incrementation or something
like that.  End of Count.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
I see that erasure is from an alternative definition for a Turing Machine.
I am not sure if a four state Turing Machine could be used to
make Solomonoff Induction convergent.  If all programs that required working
memory greater than the length of the output string could be eliminated then
that would have an impact on convergent feasibility.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
On Mon, Aug 2, 2010 at 7:21 AM, Jim Bromer jimbro...@gmail.com wrote:

 I see that erasure is from an alternative definition for a Turing Machine.
 I am not sure if a four state Turing Machine could be used to
 make Solomonoff Induction convergent.  If all programs that required working
 memory greater than the length of the output string could be eliminated then
 that would have an impact on convergent feasibility.

But then again this is getting back to my whole thesis.  By constraining the
definition of all possible programs sufficiently, we should be left with a
definable subset of programs that could be used in actual computations.

I want to study more to try to better understand Abram's definition of a
convergent derivation of Solomonoff Induction.
Jim Bromer





[agi] Shhh!

2010-08-02 Thread Jim Bromer
I can write an algorithm that is capable of describing ('reaching') every
possible irrational number - given infinite resources.  The infinite is not
a number-like object, it is an active form of incrementation or
concatenation.  So I can write an algorithm that can write *every* finite
state of *every* possible number.  However, it would take another algorithm
to 'prove' it.  Given an irrational number, this other algorithm could find
the infinite incrementation for every digit of the given number.  Each
possible number (including the incrementation of those numbers that cannot
be represented in truncated form) is embedded within a single infinite
incrementation of digits that is produced by the algorithm, so the
second algorithm would have to calculate where you would find each digit of
the given irrational number by increment.  But the thing is, both functions
would be computable and provable.  (I haven't actually figured the second
algorithm out yet, but it is not a difficult problem.)

This means that the Trans-Infinite Is Computable.  But don't tell anyone
about this, it's a secret.





[agi] Re: Shhh!

2010-08-02 Thread Jim Bromer
I think I can write an abbreviated version, but there would only be a few
people in the world who would both believe me and understand why it would
work.

On Mon, Aug 2, 2010 at 8:53 AM, Jim Bromer jimbro...@gmail.com wrote:

 I can write an algorithm that is capable of describing ('reaching') every
 possible irrational number - given infinite resources.  The infinite is not
 a number-like object, it is an active form of incrementation or
 concatenation.  So I can write an algorithm that can write *every* finite
 state of *every* possible number.  However, it would take another
 algorithm to 'prove' it.  Given an irrational number, this other algorithm
 could find the infinite incrementation for every digit of the given number.
 Each possible number (including the incrementation of those numbers that
 cannot be represented in truncated form) is embedded within a single
  infinite incrementation of digits that is produced by the
 algorithm, so the second algorithm would have to calculate where you would
 find each digit of the given irrational number by increment.  But the thing
 is, both functions would be computable and provable.  (I haven't actually
 figured the second algorithm out yet, but it is not a difficult problem.)

 This means that the Trans-Infinite Is Computable.  But don't tell anyone
 about this, it's a secret.







Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-01 Thread Jim Bromer
Abram,

This is a very interesting function.  I have spent a lot of time thinking
about it.  However, I do not believe that does, in any way, prove or
indicate that Solomonoff Induction is convergent.  I want to discuss the
function but I need to take more time to study some stuff and to work
various details out.  (Although I have thought a lot about it, I am writing
this under a sense of deadline, so it may not be well composed.)



My argument was that Solomonoff's conjecture, which was based (as far as I
can tell) on 'all possible programs', was fundamentally flawed because the
idea of 'all possible programs' is not a programmable definition.  All
possible programs is a domain, not a class of programs that can be feasibly
defined in the form of an algorithm that could 'reach' all the programs.



The domain of all possible programs is trans-infinite just as the domain of
irrational numbers is.  Why do I believe this?  Because if we imagine that
infinite algorithms are computable, then we could compute irrational
numbers.  That is, there are programs that, given infinite resources, could
compute irrational numbers.  We can use the binomial theorem, for example to
compute the square root of 2.  And we can use trial and error methods to
compute the nth root of any number.  So that proves that there are infinite
irrational numbers that can be computed by algorithms that run for infinity.



So what does this have to do with Solomonoff's conjecture of all possible
programs?  Well, if I could prove that any individual irrational number
could be computed (with programs that ran through infinity) then I might be
able to prove that there are trans-infinite programs.  If I could prove that
some trans-infinite subset of irrational numbers could be computed then
I might be able to prove that 'all possible programs' is a trans-infinite
class.


Now Abram said that since his sum, based on runtime and program length, is
convergent it can prove that Solomonoff Induction is convergent.  Even
assuming that his convergent sum method could be fixed up a little, I
suspect that this time-length bound is misleading.  Since a Turing Machine
allows for erasures this means that a program could last longer than his
time parameter and still produce an output string that matches the given
string.  And if 'all possible programs' is a trans-infinite class then there
are programs that you are going to miss.  Your encoding method will miss
trans-infinite programs (unless you have trans-cended the trans-infinite.)

However, I want to study the function and some other ideas related to this
kind of thing a little more.  It is an interesting function.  Unfortunately
I also have to get back to other-worldly things.

Jim Bromer


On Mon, Jul 26, 2010 at 2:54 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

  I'll argue that Solomonoff probabilities are in fact like Pi, that is,
 computable in the limit.

 I still do not understand why you think these combinations are necessary.
 It is not necessary to make some sort of ordering of the sum to get it to
 converge: ordering only matters for infinite sums which include negative
 numbers. (Perhaps that's where you're getting the idea?)

 Here's my proof, rewritten from an earlier post, using the properties of
 infinite sums of non-negative numbers.

 (preliminaries)

 Define the computation as follows: we start with a string S which we want
 to know the Solomonoff probability of. We are given a time-limit T. We start
 with P=0, where P is a real number with precision 2*log_4(T) or more. We use
 some binary encoding for programs which (unlike normal programming
 languages) does not contain syntactically invalid programs, but still will
 (of course) contain infinite loops and so on. We run program 0 and 1 for
 T/4 each, 00, 01, 10 and 11 for T/16 each, and in general run each
 program of length N for floor[T/(4^N)] until T/(4^N) is less than 1. Each
 time we run a program and the result is S, we add 1/(4^N) to P.

 (assertion)

 P converges to some value as T is increased.

 (proof)

  If every single program were to output S, then P would converge to 1/4 +
 1/4 + 1/16 + 1/16 + 1/16 + 1/16 + ... that is, 2*(1/(4^1)) + 4*(1/(4^2)) +
 8*(1/(4^3)) + ... which comes to 1/2 + 1/4 + 1/8 + 1/16 + .. i.e. 1/(2^1) +
 1/(2^2) + 1/(2^3) + ... ; it is well-known that this sequence converges to
 1. Thus 1 is an upper bound for P: it could only get that high if every
 single program were to output S.

 0 is a lower bound, since we start there and never subtract anything. In
 fact, every time we add more to P, we have a new lower bound: P will never
 go below a number once it reaches it. The sum can only increase. Infinite
 sums with this property must either converge to a finite number, or go to
 infinity. However, we already know that 1 is an upper bound for P; so, it
 cannot go to infinity. Hence, it must converge.
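
  A small sketch of that procedure follows; the 2-bit toy interpreter is my
  own stand-in for a real universal machine, so only the shape of the
  computation, not the particular numbers, reflects the construction:

  from itertools import product

  def run_toy(program_bits, step_budget):
      # Deterministic toy interpreter, illustrative only, not universal.
      # Opcodes: 00 write '0', 01 write '1', 10 halt, 11 clear the output.
      out, pc, steps = [], 0, 0
      while pc + 1 < len(program_bits) and steps < step_budget:
          op, pc, steps = program_bits[pc:pc + 2], pc + 2, steps + 1
          if op == "00":
              out.append("0")
          elif op == "01":
              out.append("1")
          elif op == "10":
              return "".join(out)        # halted with this output
          else:
              out = []
      return None                        # no output: ran out of program or budget

  def approx_prob(S, T):
      # Run every program of length N for floor(T / 4**N) steps, for as long
      # as that budget is at least 1, and add 4**-N to P whenever the output
      # is exactly S.
      P, N = 0.0, 1
      while T // 4 ** N >= 1:
          budget = T // 4 ** N
          for bits in product("01", repeat=N):
              if run_toy("".join(bits), budget) == S:
                  P += 4.0 ** -N
          N += 1
      return P

  for T in (4 ** 5, 4 ** 7, 4 ** 9):
      print(T, approx_prob("0", T))      # P never decreases as T grows, and stays below 1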

 --Abram

   On Mon, Jul 26, 2010 at 9:14 AM, Jim Bromer jimbro...@gmail.com wrote

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-26 Thread Jim Bromer
As far as I can tell right now, my theories that Solomonoff Induction is
trans-infinite were wrong.  Now that I realize that the mathematics do not
support these conjectures, I have to acknowledge that I would not be able to
prove or even offer a sketch of a proof of my theories.  Although I did not
use rigorous mathematics while I have tried to make an assessment of the
Solomonoff method, the first principle of rigorous mathematics is to
acknowledge that the mathematics does not support your supposition when it
doesn't.

Solomonoff Induction isn't a mathematical theory because the desired results
are not computable.  As I mentioned before, there are a great many functions
that are not computable but which are useful and important because they tend
toward a limit which can be seen with a reasonable number of calculations
using the methods available.  Pi is one such function.  (I am presuming that
pi would require an infinite expansion which seems right.)

I have explained, and I think it is a correct explanation, that there is no
way that you could make an a priori computation of all possible
combinations taken from infinite values.  So you could not even come up with
a theoretical construct that could take account of that level of
complexity.  It is true that you could come up with a theoretical
computational method that could take account of any finite number of values,
and that is what we are talking about when we talk about the infinite, but
in this context it only points to a theoretical paradox.  Your theoretical
solution could not take the final step of computing a probability for a
string until it had run through the infinite combinations and this is
impossible.  The same problem does not occur for pi because the function
that produces it tends toward a limit.

The reason I thought Solomonoff Induction was trans-infinite was because I
thought it used the Bayesian notion to compute the probability using all
possible programs that produced a particular substring following a given
prefix.  Now I understand that the desired function is the computation of
only the probability of a particular string (not all possible substrings
that are identical to the string) following a given prefix.  I want to study
the method a little further during the next few weeks, but I just wanted to
make it clear that, as far as I now understand the program, I do not
think that it is trans-infinite.

Jim Bromer





Re: [agi] Huge Progress on the Core of AGI

2010-07-26 Thread Jim Bromer
Arthur,
The section from the Arthur T. Murray/Mentifex FAQ, 2.3 What do
researchers in academia think of Murray’s work?, really puts you into a
whole other category in my view.  The rest of us can only dream of such
dismissals from experts who haven't achieved anything more than the rest
of us.
Congratulations on being honest, you have already achieved more than the
experts who aren't so.
Jim Bromer


On Sun, Jul 25, 2010 at 10:42 AM, A. T. Murray menti...@scn.org wrote:

 David Jones wrote:
 
 Arthur,
 
 Thanks. I appreciate that. I would be happy to aggregate some of those
 things. I am sometimes not good at maintaining the website because I get
 bored of maintaining or updating it very quickly :)
 
 Dave
 
 On Sat, Jul 24, 2010 at 10:02 AM, A. T. Murray menti...@scn.org wrote:
 
  The Web site of David Jones at
 
  http://practicalai.org
 
  is quite impressive to me
  as a kindred spirit building AGI.
  (Just today I have been coding MindForth AGI :-)
 
  For his Practical AI Challenge or similar
  ventures, I would hope that David Jones is
  open to the idea of aggregating or archiving
  representative AI samples from such sources as
  - TexAI;
  - OpenCog;
  - Mentifex AI;
  - etc.;
  so that visitors to PracticalAI may gain an
  overview of what is happening in our field.
 
  Arthur
  --
  http://www.scn.org/~mentifex/AiMind.html
  http://www.scn.org/~mentifex/mindforth.txt

 Just today, a few minutes ago, I updated the
  mindforth.txt AI source code listed above.

 In the PracticalAi aggregates, you might consider
 listing Mentifex AI with copies of the above two
 AI source code pages, and with links to the
 original scn.org URL's, where visitors to
 PracticalAi could look for any more recent
 updates that you had not gotten around to
 transferring from scn.org to PracticalAi.
  In that way, these releases of Mentifex
 free AI source code would have a more robust
 Web presence (SCN often goes down) and I
 could link to PracticalAi for the aggregates
 and other features of PracticalAI.

 Thanks.

 Arthur T. Murray









Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer
I believe that trans-infinite would mean that there is no recursively
enumerable algorithm that could 'reach' every possible item in the
trans-infinite group.



Since each program in Solomonoff Induction, written for a Universal Turing
Machine, could be written on a single roll of tape, that means that every
possible combination of programs could also be written on the tape.  They
would therefore be recursively enumerable just as they could be enumerated
on a one to one basis with aleph null (counting numbers).



But, unfortunately for my criticism, there are algorithms that could reach
any finite combination of things which means that even though you could not
determine the ordering of programs that would be necessary to show that the
probabilities of each string approach a limit, it would be possible to write
an algorithm that could show trends, given infinite resources.  This doesn't
mean that the probabilities would approach the limit, it just means that if
they did, there would be an infinite algorithm that could make the best
determination given the information that the programs had already produced.
This would be a necessary step of a theoretical (but still
non-constructivist) proof.



So I can't prove that Solomonoff Induction is inherently trans-infinite.



I am going to take a few weeks to see if I can determine if the idea of
Solomonoff Induction makes hypothetical sense to me.
Jim Bromer



On Sat, Jul 24, 2010 at 6:04 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  Solomonoff Induction may require a trans-infinite level of complexity
 just to run each program.

 Trans-infinite is not a mathematically defined term as far as I can tell.
 Maybe you mean larger than infinity, as in the infinite set of real numbers
 is larger than the infinite set of natural numbers (which is true).

 But it is not true that Solomonoff induction requires more than aleph-null
 operations. (Aleph-null is the size of the set of natural numbers, the
 smallest infinity). An exact calculation requires that you test aleph-null
 programs for aleph-null time steps each. There are aleph-null programs
 because each program is a finite length string, and there is a 1 to 1
 correspondence between the set of finite strings and N, the set of natural
 numbers. Also, each program requires aleph-null computation in the case that
 it runs forever, because each step in the infinite computation can be
 numbered 1, 2, 3...

 However, the total amount of computation is still aleph-null because each
 step of each program can be described by an ordered pair (m,n) in N^2,
 meaning the n'th step of the m'th program, where m and n are natural
 numbers. The cardinality of N^2 is the same as the cardinality of N because
 there is a 1 to 1 correspondence between the sets. You can order the ordered
 pairs as (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2),
 (4,1), (1,5), etc. See
 http://en.wikipedia.org/wiki/Countable_set#More_formal_introduction

 Furthermore you may approximate Solomonoff induction to any desired
 precision with finite computation. Simply interleave the execution of all
 programs as indicated in the ordering of ordered pairs that I just gave,
 where the programs are ordered from shortest to longest. Take the shortest
 program found so far that outputs your string, x. It is guaranteed that this
 algorithm will approach and eventually find the shortest program that
 outputs x given sufficient time, because this program exists and it halts.

 In case you are wondering how Solomonoff induction is not computable, the
 problem is that after this algorithm finds the true shortest program that
 outputs x, it will keep running forever and you might still be wondering if
 a shorter program is forthcoming. In general you won't know.
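
  A tiny sketch of that interleaving (just the ordering of "step n of program
  m"; actually running the programs is omitted, and the function name is my
  own):

  from itertools import count, islice

  def dovetail():
      # Diagonal order over (program m, step n): every pair is reached after
      # finitely many items, so no single non-halting program can block the rest.
      for diag in count(2):              # m + n == diag
          for m in range(1, diag):
              yield (m, diag - m)

  print(list(islice(dovetail(), 10)))
  # (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2), (4,1)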


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, July 24, 2010 3:59:18 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 Solomonoff Induction may require a trans-infinite level of complexity just
 to run each program.  Suppose each program is iterated through the
 enumeration of its instructions.  Then, not only do the infinity of
 possible programs need to be run, many combinations of the infinite programs
 from each simulated Turing Machine also have to be tried.  All the
 possible combinations of (accepted) programs, one from any two or more of
 the (accepted) programs produced by each simulated Turing Machine, have to
 be tried.  Although these combinations of programs from each of the
 simulated Turing Machine may not all be unique, they all have to be tried.
 Since each simulated Turing Machine would produce infinite programs, I am
 pretty sure that this means that Solomonoff Induction is, *by definition,*
 trans-infinite.
 Jim Bromer


 On Thu, Jul 22, 2010 at 2:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to retract my

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer
No, I might have been wrong about the feasibility of writing an algorithm
that can produce all the possible combinations of items when I wrote my last
message.  It is because the word combination is associated with more than
one mathematical method. I am skeptical of the possibility that there is a
re algorithm that can write out every possible combination of items taken
from more than one group of *types* when strings of infinite length are
possible.

So yes, I may have proved that there is no re algorithm, even if given
infinite resources, that can order the computation of Solomonoff Induction
in such a way to show that the probability (or probabilities) tend toward a
limit.  If my theory is correct, and right now I would say that there is a
real chance that it is, I have proved that Solomonoff Induction is
theoretically infeasible, illogical and therefore refuted.

Jim Bromer



On Sun, Jul 25, 2010 at 9:02 AM, Jim Bromer jimbro...@gmail.com wrote:

 I believe that trans-infinite would mean that there is no recursively
 enumerable algorithm that could 'reach' every possible item in the
 trans-infinite group.



 Since each program in Solomonoff Induction, written for a Universal Turing
  Machine, could be written on a single roll of tape, that means that every
 possible combination of programs could also be written on the tape.  They
 would therefore be recursively enumerable just as they could be enumerated
 on a one to one basis with aleph null (counting numbers).



 But, unfortunately for my criticism, there are algorithms that could reach
 any finite combination of things which means that even though you could not
 determine the ordering of programs that would be necessary to show that the
 probabilities of each string approach a limit, it would be possible to write
 an algorithm that could show trends, given infinite resources.  This
 doesn't mean that the probabilities would approach the limit, it just means
 that if they did, there would be an infinite algorithm that could make the
 best determination given the information that the programs had already
 produced.  This would be a necessary step of a theoretical (but still
 non-constructivist) proof.



 So I can't prove that Solomonoff Induction is inherently trans-infinite.



 I am going to take a few weeks to see if I can determine if the idea of
 Solomonoff Induction makes hypothetical sense to me.
 Jim Bromer



  On Sat, Jul 24, 2010 at 6:04 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  Solomonoff Induction may require a trans-infinite level of complexity
 just to run each program.

 Trans-infinite is not a mathematically defined term as far as I can
 tell. Maybe you mean larger than infinity, as in the infinite set of real
 numbers is larger than the infinite set of natural numbers (which is true).

 But it is not true that Solomonoff induction requires more than aleph-null
 operations. (Aleph-null is the size of the set of natural numbers, the
 smallest infinity). An exact calculation requires that you test aleph-null
 programs for aleph-null time steps each. There are aleph-null programs
 because each program is a finite length string, and there is a 1 to 1
 correspondence between the set of finite strings and N, the set of natural
 numbers. Also, each program requires aleph-null computation in the case that
 it runs forever, because each step in the infinite computation can be
 numbered 1, 2, 3...

 However, the total amount of computation is still aleph-null because each
 step of each program can be described by an ordered pair (m,n) in N^2,
 meaning the n'th step of the m'th program, where m and n are natural
 numbers. The cardinality of N^2 is the same as the cardinality of N because
 there is a 1 to 1 correspondence between the sets. You can order the ordered
 pairs as (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2),
 (4,1), (1,5), etc. See
 http://en.wikipedia.org/wiki/Countable_set#More_formal_introduction

 Furthermore you may approximate Solomonoff induction to any desired
 precision with finite computation. Simply interleave the execution of all
 programs as indicated in the ordering of ordered pairs that I just gave,
 where the programs are ordered from shortest to longest. Take the shortest
 program found so far that outputs your string, x. It is guaranteed that this
 algorithm will approach and eventually find the shortest program that
 outputs x given sufficient time, because this program exists and it halts.

 In case you are wondering how Solomonoff induction is not computable, the
 problem is that after this algorithm finds the true shortest program that
 outputs x, it will keep running forever and you might still be wondering if
 a shorter program is forthcoming. In general you won't know.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, July 24, 2010 3:59:18 PM

 *Subject:* Re

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer
I got confused with the two kinds of combinations that I was thinking about.
Sorry.  However, while the reordering of the partial accumulation of a
finite number of probabilities, where each probability is taken just once,
can be done with a re algorithm, there is no re algorithm that can consider
all possible combinations for an infinite set of probabilities.  I believe
that this means that the probability of a particular string cannot be proven
to attain a stable value using general mathematical methods but that the
partial ordering of probabilities after any finite number of programs had been
run can be made, both with actual computed values and through the use of
a priori methods made with general mathematical methods - if someone (like a
twenty-second-century AI program) were capable of dealing with the
extraordinary complexity of the problem.


So I haven't proven that there is a theoretical disconnect between the
desired function and the method.  Right now, no one has, as far as I can
tell, been able to prove that the method would actually produce the desired
function for all cases, but I haven't been able to sketch a proof that the
claimed relation between the method and the desired function is completely
unsound.

Jim Bromer


On Sun, Jul 25, 2010 at 9:36 AM, Jim Bromer jimbro...@gmail.com wrote:

 No, I might have been wrong about the feasibility of writing an algorithm
 that can produce all the possible combinations of items when I wrote my last
 message.  It is because the word combination is associated with more than
 one mathematical method. I am skeptical of the possibility that there is a
 re algorithm that can write out every possible combination of items taken
 from more than one group of *types* when strings of infinite length are
 possible.

 So yes, I may have proved that there is no re algorithm, even if given
 infinite resources, that can order the computation of Solomonoff Induction
 in such a way to show that the probability (or probabilities) tend toward a
 limit.  If my theory is correct, and right now I would say that there is a
  real chance that it is, I have proved that Solomonoff Induction is
 theoretically infeasible, illogical and therefore refuted.

 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer
Solomonoff Induction may require a trans-infinite level of complexity just
to run each program.  Suppose each program is iterated through the
enumeration of its instructions.  Then, not only do the infinity of possible
programs need to be run, many combinations of the infinite programs from
each simulated Turing Machine also have to be tried.  All the possible
combinations of (accepted) programs, one from any two or more of the
(accepted) programs produced by each simulated Turing Machine, have to be
tried.  Although these combinations of programs from each of the simulated
Turing Machine may not all be unique, they all have to be tried.  Since each
simulated Turing Machine would produce infinite programs, I am pretty sure
that this means that Solomonoff Induction is, *by definition,* trans-infinite.
Jim Bromer


On Thu, Jul 22, 2010 at 2:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to retract my claim that the programs of Solomonoff Induction would
 be trans-infinite.  Each of the infinite individual programs could be
 enumerated by their individual instructions so some combination of unique
 individual programs would not correspond to a unique program but to the
 enumerated program that corresponds to the string of their individual
 instructions.  So I got that one wrong.
 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer
On Sat, Jul 24, 2010 at 3:59 PM, Jim Bromer jimbro...@gmail.com wrote:

 Solomonoff Induction may require a trans-infinite level of complexity just
 to run each program.  Suppose each program is iterated through the
 enumeration of its instructions.  Then, not only do the infinity of
 possible programs need to be run, many combinations of the infinite programs
 from each simulated Turing Machine also have to be tried.  All the
 possible combinations of (accepted) programs, one from any two or more of
 the (accepted) programs produced by each simulated Turing Machine, have to
 be tried.  Although these combinations of programs from each of the
 simulated Turing Machine may not all be unique, they all have to be tried.
 Since each simulated Turing Machine would produce infinite programs, I am
  pretty sure that this means that Solomonoff Induction is, *by definition,*
  trans-infinite.
 Jim Bromer



All the possible combinations of (accepted) programs, one program taken from
any two or more simulated Turing Machines, have to be tried. Since each
simulated Turing Machine would produce infinite programs and there are
infinite simulated Turing Machines, I am pretty sure that this means
that Solomonoff
Induction is, *by definition,* trans-infinite.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer
Abram,
I use constructivists' and intuitionists' (and for that matter finitists')
methods when they seem useful to me.  I often make mistakes when I am not
wary of constructivist issues.  Constructivist criticisms are interesting
because they can be turned against any presumptive method even though they
might seem to contradict a constructivist criticism taken from a different
presumption.

I misused the term computable a few times because I have seen it used in
different ways.  But it turns out that it can be used in different ways.
For example pi is not computable because it is infinite but a limiting
approximation to pi is computable.  So I would say that pi is computable -
given infinite resources.  One of my claims here is that I believe there are
programs that will run Solomonoff Induction so the method would therefore be
computable given infinite resources.  However, my other claim is that the
much desired function or result where one may compute the probability that a
string will be produced given a particular prefix is incomputable.

If I lived 500 years ago I might have said that a function that wasn't
computable wasn't well-defined.  (I might well have been somewhat
pompous about such things in 1510).  However, because of the efficacy of the
theory of limits and other methods of finding bounds on functions, I would
not say that now.  Pi is well defined, but I don't think that Solomonoff
Induction is completely well-defined.  But we can still talk about certain
aspects of it (using mathematics that are well grounded relative to those
aspects of the method that are computable) even though the entire function
is not completely well-defined.

One way to do this is by using conditional statements.  So if it turns out
that one or some of my assumptions are wrong, I can see how to revise my
theory about the aspect of the function that is computable (or seems
computable).

Jim Bromer

On Thu, Jul 22, 2010 at 10:50 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Aha! So you *are* a constructivist or intuitionist or finitist of some
 variety? This would explain the miscommunication... you appear to hold the
 belief that a structure needs to be computable in order to be well-defined.
 Is that right?

 If that's the case, then you're not really just arguing against Solomonoff
 induction in particular, you're arguing against the entrenched framework of
 thinking which allows it to be defined-- the so-called classical
 mathematics.

 If this is the case, then you aren't alone.

 --Abram


 On Thu, Jul 22, 2010 at 5:06 PM, Jim Bromer jimbro...@gmail.com wrote:

   On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:
 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way.
  Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?

 You give a precise statement of the probability in general terms, but then
 say that it is uncomputable.  Then you ask if there is a paper that refutes
 it.  Well, why would any serious mathematician bother to refute it since you
 yourself acknowledge that it is uncomputable and therefore unverifiable and
 therefore not a mathematical theorem that can be proven true or false?  It
 isn't like you claimed that the mathematical statement is verifiable. It is
 as if you are making a statement and then ducking any responsibility for it
 by denying that it is even an evaluation.  You honestly don't see the
 irregularity?

 My point is that the general mathematical community doesn't accept
  Solomonoff Induction, not that I have a paper that *refutes it,* whatever
 that would mean.

 Please give me a little more explanation why you say the fundamental
 method is that the probability of a string x is proportional to the sum of
 all programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?


  On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
   The fundamental method of Solomonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
I have to retract my claim that the programs of Solomonoff Induction would
be trans-infinite.  Each of the infinitely many individual programs can be
enumerated by its individual instructions, so a combination of unique
individual programs would not correspond to some new, un-enumerated program
but to the program already enumerated by the string of their combined
instructions.  So I got that one wrong.
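
The enumeration can be pictured directly: list every finite instruction
string in order of length, and note that gluing two listed strings together
just yields another finite string that already appears later in the same
list.  A minimal sketch, treating programs as bit strings purely for
illustration:

    from itertools import count, product

    def all_programs():
        # One enumeration of every finite bit string, in order of length.
        for n in count(1):
            for bits in product("01", repeat=n):
                yield "".join(bits)

    gen = all_programs()
    first_ten = [next(gen) for _ in range(10)]
    print(first_ten)                      # ['0', '1', '00', '01', '10', ...]

    combo = first_ten[2] + first_ten[3]   # combine two enumerated programs
    already_listed = any(combo == p for p, _ in zip(all_programs(), range(50)))
    print(already_listed)                 # True: the combination was already in the list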
Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:
The fundamental method is that the probability of a string x is proportional
to the sum of all programs M that output x weighted by 2^-|M|. That
probability is dominated by the shortest program, but it is equally
uncomputable either way.
Also, please point me to this mathematical community that you claim rejects
Solomonoff induction. Can you find even one paper that refutes it?

You give a precise statement of the probability in general terms, but then
say that it is uncomputable.  Then you ask if there is a paper that refutes
it.  Well, why would any serious mathematician bother to refute it since you
yourself acknowledge that it is uncomputable and therefore unverifiable and
therefore not a mathematical theorem that can be proven true or false?  It
isn't like you claimed that the mathematical statement is verifiable. It is
as if you are making a statement and then ducking any responsibility for it
by denying that it is even an evaluation.  You honestly don't see the
irregularity?

My point is that the general mathematical community doesn't accept
Solomonoff Induction, not that I have a paper that *refutes it,* whatever
that would mean.

Please give me a little more explanation why you say the fundamental method
is that the probability of a string x is proportional to the sum of all
programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?


On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  The fundamental method of Solomonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim rejects
 Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise to claim that this method could
 stand as an ideal for some valid and feasible application of probability.
 Jim Bromer

 On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
Thanks for the explanation.  I want to learn more about statistical
modelling and compression but I will need to take my time on it.  But no, I
don't apply Solomonoff Induction all the time, I never apply it.  I am not
being petty, it's just that you have taken a coincidence and interpreted it
the way you want to.

On Thu, Jul 22, 2010 at 9:33 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  Please give me a little more explanation why you say the fundamental
 method is that the probability of a string x is proportional to the sum of
 all programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?

 By |M| I mean the length of the program M in bits. Why 2^-|M|? Because each
 bit means you can have twice as many programs, so they should count half as
 much.
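
A quick worked instance of that weighting (the program lengths below are
made up just to show the arithmetic): if the only programs found that output
a string x have lengths 3 and 5 bits, then the unnormalized weight of x is
2^-3 + 2^-5, and the shortest program dominates the sum.

    # Hypothetical lengths, in bits, of programs that each output x.
    lengths = [3, 5]
    weight = sum(2.0 ** (-n) for n in lengths)
    print(weight)   # 0.125 + 0.03125 = 0.15625; the 3-bit program dominates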

 Being uncomputable doesn't make it wrong. The fact that there is no general
 procedure for finding the shortest program that outputs a string doesn't
 mean that you can never find it, or that for many cases you can't
 approximate it.

 You apply Solomonoff induction all the time. What is the next bit in these
 sequences?

 1. 0101010101010101010101010101010

 2. 11001001110110101010001

 In sequence 1 there is an obvious pattern with a short description. You can
 find a short program that outputs 0 and 1 alternately forever, so you
 predict the next bit will be 1. It might not be the shortest program, but it
 is enough that alternate 0 and 1 forever is shorter than alternate 0 and
 1 15 times followed by 00 that you can confidently predict the first
 hypothesis is more likely.

 The second sequence is not so obvious. It looks like random bits. With
 enough intelligence (or computation) you might discover that the sequence is
 a binary representation of pi, and therefore the next bit is 0. But the fact
 that you might not discover the shortest description does not invalidate the
 principle. It just says that you can't always apply Solomonoff induction and
 get the number you want.

 Perhaps http://en.wikipedia.org/wiki/Kolmogorov_complexity will make this
 clear.
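
A minimal sketch of the same comparison, using the length of a candidate
rule's source text as a crude, computable stand-in for program length (a
real treatment would measure programs for a universal machine; rule_a and
rule_b are assumptions invented for the illustration):

    observed = "01" * 15 + "0"          # the 31 bits of sequence 1 above

    # Two candidate generating rules written as tiny Python expressions.
    rule_a = '"01" * 500'               # keep alternating 0 and 1
    rule_b = '"01" * 15 + "00" * 485'   # alternate 15 times, then repeat 00

    for name, rule in [("a", rule_a), ("b", rule_b)]:
        output = eval(rule)
        assert output.startswith(observed)        # both rules fit the data so far
        print(name, "length:", len(rule), "next bit:", output[len(observed)])

    # Rule a has the shorter description, so it gets the larger 2^-|M| style
    # weight, and its prediction for the next bit (1) is the one preferred.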


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Thu, July 22, 2010 5:06:12 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:
 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way.
 Also, please point me to this mathematical community that you claim rejects
 Solomonoff induction. Can you find even one paper that refutes it?

 You give a precise statement of the probability in general terms, but then
 say that it is uncomputable.  Then you ask if there is a paper that refutes
 it.  Well, why would any serious mathematician bother to refute it since you
 yourself acknowledge that it is uncomputable and therefore unverifiable and
 therefore not a mathematical theorem that can be proven true or false?  It
 isn't like you claimed that the mathematical statement is verifiable. It is
 as if you are making a statement and then ducking any responsibility for it
 by denying that it is even an evaluation.  You honestly don't see the
 irregularity?

 My point is that the general mathematical community doesn't accept
 Solomonoff Induction, not that I have a paper that *refutes it,* whatever
 that would mean.

 Please give me a little more explanation why you say the fundamental method
 is that the probability of a string x is proportional to the sum of all
 programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?


 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  The fundamental method of Solomonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise to claim that this method could
 stand as an ideal for some valid and feasible application of probability.
 Jim Bromer

 On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solomonoff Induction is trans-infinite

[agi] Huge Progress on the Core of AGI

2010-07-22 Thread Jim Bromer
I have to say that I am proud of David Jones's efforts.  He has really
matured during these last few months.  I'm kidding but I really do respect
the fact that he is actively experimenting.  I want to get back to work on
my artificial imagination and image analysis programs - if I can ever figure
out how to get the time.

As I have read David's comments, I realize that we need to really leverage
all sorts of cruddy data in order to make good agi.  But since that kind of
thing doesn't work with sparse knowledge, it seems that the only way it
could work is with extensive knowledge about a wide range of situations,
like the knowledge gained from a vast variety of experiences.  This
conjecture makes some sense because if wide ranging knowledge could be kept
in superficial stores where it could be accessed quickly and economically,
it could be used efficiently in (conceptual) model fitting.  However, as
knowledge becomes too extensive it might become too unwieldy to find what is
needed for a particular situation.  At this point indexing becomes necessary
with cross-indexing references to different knowledge based on similarities
and commonalities of employment.

Here I am saying that relevant knowledge based on previous learning might
not have to be totally relevant to a situation as long as it can be brought
into play while the situation is ongoing.  From this perspective,
knowledge from a wide variety of experiences should actually be
composed of reactions on different conceptual levels.  Then as a piece of
knowledge is brought into play for an ongoing situation, those levels that
seem best suited to deal with the situation could be promoted quickly as the
situation unfolds, acting like an automated indexing system into other
knowledge relevant to the situation.  So the ongoing process of trying to
determine what is going on and what actions should be made would
simultaneously act like an automated index to find better knowledge more
suited for the situation.
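
One toy way to picture the promotion idea sketched above (every name and
data structure here is an invented illustration, not a worked-out design):
each piece of knowledge carries cues at several conceptual levels, levels
that keep matching the unfolding situation raise that item's score, and the
top-scoring items then act as the index into further knowledge.

    from collections import defaultdict

    # Invented toy knowledge store: item -> conceptual level -> cues.
    knowledge = {
        "kitchen-routine": {"objects": {"cup", "stove"}, "goals": {"make-tea"}},
        "office-routine":  {"objects": {"desk", "cup"},  "goals": {"write"}},
    }

    def promote(observations):
        # Score each item by how many of its cues, at any level, match the
        # ongoing situation; the ranking doubles as an index into the store.
        scores = defaultdict(int)
        for item, levels in knowledge.items():
            for cues in levels.values():
                scores[item] += len(cues & observations)
        return sorted(scores, key=scores.get, reverse=True)

    print(promote({"cup", "stove"}))    # ['kitchen-routine', 'office-routine']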
Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
Matt,
I never said that I did not accept the application of the method of
probability, it is just that it has to be applied using logic.  Solomonoff
Induction does not meet this standard.  From this conclusion, and from other
sources of information, including the acknowledgement of incomputability and
the lack of acceptance in the general mathematical community, I feel
comfortable with rejecting the theory of Kolmogorov Complexity as well.

What I said was: My conclusion suggests that the use of Solomonoff
Induction as an ideal for compression or something like MDL...
What you said was: It is sufficient to find the shortest program consistent
with past results, not all programs. The difference is no more than the
language-dependent constant...
This is an equivocation based on the line you were responding to.  You are
presenting a related comment as if it were a valid response to what I
actually said.  That is one reason why I am starting to ignore you.

Jim Bromer

On Wed, Jul 21, 2010 at 1:15 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
   The question was asked whether, given infinite resources could Solomonoff
 Induction work.  I made the assumption that it was computable and found that
 it wouldn't work.

 On what infinitely powerful computer did you do your experiment?

   My conclusion suggests that the use of Solomonoff Induction as an ideal
 for compression or something like MDL is not only unsubstantiated but based
 on a massive inability to comprehend the idea of a program that runs every
 possible program.

 It is sufficient to find the shortest program consistent with past results,
 not all programs. The difference is no more than the language-dependent
 constant. Legg proved this in the paper that Ben and I both pointed you to.
 Do you dispute his proof? I guess you don't, because you didn't respond the
 last 3 times this was pointed out to you.
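
 For reference, the language-dependent constant mentioned here is the one in
 the standard invariance theorem for Kolmogorov complexity: for any two
 universal machines U and V there is a constant c, depending only on U and V
 and not on the string x, such that

     K_U(x) <= K_V(x) + c    for every string x.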

  I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 I am comfortable with the conclusion that the world is flat because I have
 a gut feeling about it and I ignore overwhelming evidence to the contrary.

  There is a chance that I am wrong

 So why don't you drop it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, July 20, 2010 3:10:40 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

  The question was asked whether, given infinite resources could Solomonoff
 Induction work.  I made the assumption that it was computable and found that
 it wouldn't work.  It is not computable, even with infinite resources, for
 the kind of thing that was claimed it would do. (I believe that with a
 governance program it might actually be programmable) but it could not be
 used to predict (or compute the probability of) a subsequent string
 given some prefix string.  Not only is the method impractical it is
  theoretically inane.  My conclusion suggests that the use of Solomonoff
 Induction as an ideal for compression or something like MDL is not only
 unsubstantiated but based on a massive inability to comprehend the idea of a
 program that runs every possible program.

 I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 There is a chance that I am wrong, but I am confident that there is nothing
  in the definition of Solomonoff Induction that could be used to prove it.
 Jim Bromer




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I meant that this is what I said: My conclusion suggests that the use of
Solomonoff Induction as an ideal for compression or something like MDL is not
only unsubstantiated but based on a massive inability to comprehend the idea
of a program that runs every possible program.
What Matt said was: It is sufficient to find the shortest program consistent
with past results, not all programs. The difference is no more than the
language-dependent constant...
This is an equivocation based on the line you were responding to.  You are
presenting a related comment as if it were a valid response to what I
actually said.  That is one reason why I am starting to ignore you.
Jim Bromer



On Wed, Jul 21, 2010 at 2:36 PM, Jim Bromer jimbro...@gmail.com wrote:

 Matt,
 I never said that I did not accept the application of the method of
 probability, it is just that it has to be applied using logic.  Solomonoff
 Induction does not meet this standard.  From this conclusion, and from other
 sources of information, including the acknowledgement of incomputability and
 the lack of acceptance in the general mathematical community I feel
 comfortable with rejecting the theory of Kolmogorov Complexity as well.

 What I said was: My conclusion suggests that the use of Solomonoff
 Induction as an ideal for compression or something like MDL...
 What you said was: It is sufficient to find the shortest program consistent
 with past results, not all programs. The difference is no more than the
 language-dependent constant...
 This is an equivocation based on the line you were responding to.  You are
 presenting a related comment as if it were a valid response to what I
 actually said.  That is one reason why I am starting to ignore you.

 Jim Bromer

 On Wed, Jul 21, 2010 at 1:15 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
   The question was asked whether, given infinite resources could Solomonoff
 Induction work.  I made the assumption that it was computable and found that
 it wouldn't work.

 On what infinitely powerful computer did you do your experiment?

   My conclusion suggests that the use of Solomonoff Induction as an ideal
 for compression or something like MDL is not only unsubstantiated but based
 on a massive inability to comprehend the idea of a program that runs every
 possible program.

 It is sufficient to find the shortest program consistent with past
 results, not all programs. The difference is no more than the
 language-dependent constant. Legg proved this in the paper that Ben and I
 both pointed you to. Do you dispute his proof? I guess you don't, because
 you didn't respond the last 3 times this was pointed out to you.

  I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 I am comfortable with the conclusion that the world is flat because I have
 a gut feeling about it and I ignore overwhelming evidence to the contrary.

  There is a chance that I am wrong

 So why don't you drop it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, July 20, 2010 3:10:40 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

  The question was asked whether, given infinite resources could Solomonoff
 Induction work.  I made the assumption that it was computable and found that
 it wouldn't work.  It is not computable, even with infinite resources, for
 the kind of thing that was claimed it would do. (I believe that with a
 governance program it might actually be programmable) but it could not be
 used to predict (or compute the probability of) a subsequent string
 given some prefix string.  Not only is the method impractical it is
  theoretically inane.  My conclusion suggests that the use of Solomonoff
 Induction as an ideal for compression or something like MDL is not only
 unsubstantiated but based on a massive inability to comprehend the idea of a
 program that runs every possible program.

 I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 There is a chance that I am wrong, but I am confident that there is
  nothing in the definition of Solomonoff Induction that could be used to prove
 it.
 Jim Bromer






---
agi
Archives: https://www.listbox.com

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
you iterate through all possible programs, combining different programs as
you go.  Then you have an infinite number of possible programs which have a
trans-infinite number of combinations, because each tier of combinations can
then be recombined to produce a second, third, fourth,... tier of
recombinations.

Anyone who claims that this method is the ideal for a method of applied
probability is unwise.

Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I should have said, It would be unwise to claim that this method could stand
as an ideal for some valid and feasible application of probability.
Jim Bromer

On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] My Boolean Satisfiability Solver

2010-07-21 Thread Jim Bromer
I haven't made any noteworthy progress on my attempt to create a polynomial
time Boolean Satisfiability Solver.
I am going to try to explore some more modest means of compressing formulas
in a way so that the formula will reveal more about individual combinations
(of the Boolean states of the variables that are True or False), through the
use of strands which are groups of combinations.  So I am not trying to
find a polynomial time solution at this point, I am just going through the
stuff that I have been thinking of, either explicitly or implicitly during
the past few years to see if I can get some means of representing more about
a formula in an efficient manner.
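
For orientation, here is the ordinary exhaustive-search baseline that any
compressed representation of a formula would need to improve on.  It is a
minimal sketch of brute-force satisfiability checking, not the strands idea
described above; the clause encoding (positive integer i for variable i,
negative for its negation) is just a convention chosen for the sketch.

    from itertools import product

    def satisfiable(clauses, num_vars):
        # Try every assignment of the variables; a clause is satisfied when
        # at least one of its literals agrees with the assignment.
        for bits in product([False, True], repeat=num_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    print(satisfiable([[1, -2], [2, 3], [-1, -3]], 3))   # True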

Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Jim Bromer
Because a logical system can be applied to a problem, that does not mean
that the logical system is the same as the problem.  Most notably, the
theory of numbers contains definitions that do not belong to logic per se.
Jim Bromer

On Wed, Jul 21, 2010 at 3:45 PM, Ian Parker ianpark...@gmail.com wrote:

 But surely a number is a group of binary combinations if we represent the
 number in binary form, as we always can. The real theorems are those which
 deal with *numbers*. What you are in essence discussing is no more or less
 than the *Theory of Numbers.*
    - Ian Parker
   On 21 July 2010 20:17, Jim Bromer jimbro...@gmail.com wrote:

   I haven't made any noteworthy progress on my attempt to create a
 polynomial time Boolean Satisfiability Solver.
 I am going to try to explore some more modest means of compressing
 formulas in a way so that the formula will reveal more about individual
 combinations (of the Boolean states of the variables that are True or
 False), through the use of strands which are groups of combinations.  So I
 am not trying to find a polynomial time solution at this point, I am just
 going through the stuff that I have been thinking of, either explicitly or
 implicitly during the past few years to see if I can get some means of
 representing more about a formula in an efficient manner.

 Jim Bromer




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
You claim that I have not checked how Solomonoff Induction is actually
defined, but then don't bother mentioning how it is defined as if it would
be too much of an ordeal to even begin to try.  It is this kind of evasive
response, along with the fact that these functions are incomputable, that
makes your replies so absurd.

On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* solomonoff
 induction is, without checking how it is actually defined...

 --Abram

   On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

    The fundamental method of Solomonoff Induction is trans-infinite.
 Suppose you iterate through all possible programs, combining different
 programs as you go.  Then you have an infinite number of possible programs
 which have a trans-infinite number of combinations, because each tier of
 combinations can then be recombined to produce a second, third, fourth,...
 tier of recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Jim Bromer
Well, Boolean Logic may be a part of number theory but even then it is still
not the same as number theory.

On Wed, Jul 21, 2010 at 4:01 PM, Jim Bromer jimbro...@gmail.com wrote:

 Because a logical system can be applied to a problem, that does not mean
 that the logical system is the same as the problem.  Most notably, the
 theory of numbers contains definitions that do not belong to logic per se.
 Jim Bromer

 On Wed, Jul 21, 2010 at 3:45 PM, Ian Parker ianpark...@gmail.com wrote:

 But surely a number is a group of binary combinations if we represent the
 number in binary form, as we always can. The real theorems are those which
 deal with *numbers*. What you are in essence discussing is no more or
 less than the *Theory of Numbers.*
    - Ian Parker
   On 21 July 2010 20:17, Jim Bromer jimbro...@gmail.com wrote:

   I haven't made any noteworthy progress on my attempt to create a
 polynomial time Boolean Satisfiability Solver.
 I am going to try to explore some more modest means of compressing
 formulas in a way so that the formula will reveal more about individual
 combinations (of the Boolean states of the variables that are True or
 False), through the use of strands which are groups of combinations.  So I
 am not trying to find a polynomial time solution at this point, I am just
 going through the stuff that I have been thinking of, either explicitly or
 implicitly during the past few years to see if I can get some means of
 representing more about a formula in an efficient manner.

 Jim Bromer






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* solomonoff
 induction is, without checking how it is actually defined...

 --Abram


I mean this in a friendly way.  (When I started to write in a fiendly way,
it was only a typo and nothing more.)

Is it possible that it is Abram who doesn't understand how Solomonoff
Induction is actually defined?  Is it possible that it is Abram who has
missed an implication of the definition because it didn't fit in with his
ideal of a convenient and reasonable application of Bayesian mathematics?

I am just saying that you should ask yourself this: is it possible that
Abram doesn't see the obvious flaws because it obviously wouldn't make any
sense vis-a-vis a reasonable and sound application of probability theory?
For example, would you be willing to write to a real expert in probability
theory to ask him for his opinions on Solomonoff Induction?  Because I would
be.

Jim Bromer


On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* solomonoff
 induction is, without checking how it is actually defined...

 --Abram

   On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

    The fundamental method of Solomonoff Induction is trans-infinite.
 Suppose you iterate through all possible programs, combining different
 programs as you go.  Then you have an infinite number of possible programs
 which have a trans-infinite number of combinations, because each tier of
 combinations can then be recombined to produce a second, third, fourth,...
 tier of recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I can't say where you are going wrong because I really have no idea.
However, my guess is that you are ignoring certain contingencies that would
be necessary to make your claims valid.  I tried to use a reference to the
theory of limits to explain this but it seemed to fall on deaf ears.

If I were to write everything I knew about Bernoulli without looking it up,
it would amount to a page with only a few facts.  I have read some things about him, I just
don't recall much of it.  So when I dare say that Abram couldn't write much
about Cauchy without looking it up, it is not a pretentious put down, but
more like a last-ditch effort to teach him some basic humility.

Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
If someone had a profound knowledge of Solomonoff Induction and the *science
of probability* he could at the very least talk to me in a way that I knew
he knew what I was talking about and I knew he knew what he was talking
about.  He might be slightly obnoxious or he might be casual or (more
likely) he would try to be patient.  But it is unlikely that he would use a
hit and run attack and denounce my conjectures without taking the
opportunity to talk to me about what I was saying.  That is one of the ways
that I know that you don't know as much as you think you know.  You rarely
get angry about being totally right.

A true expert would be able to talk to me and also take advantage of my
thinking about the subject to weave some new information into the
conversation so that I could leave it with a greater insight about the
problem than I did before.  That is not just a skill that only good teachers
have, it is a skill that almost any expert can develop if he is willing to
use it.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-20 Thread Jim Bromer
The question was asked whether, given infinite resources could Solomonoff
Induction work.  I made the assumption that it was computable and found that
it wouldn't work.  It is not computable, even with infinite resources, for
the kind of thing that was claimed it would do. (I believe that with a
governance program it might actually be programmable) but it could not be
used to predict (or compute the probability of) a subsequent string
given some prefix string.  Not only is the method impractical it is
theoretically inane.  My conclusion suggests that the use of Solomonoff
Induction as an ideal for compression or something like MDL is not only
unsubstantiated but based on a massive inability to comprehend the idea of a
program that runs every possible program.

I am comfortable with the conclusion that the claim that Solomonoff
Induction is an ideal for compression or induction or anything else is
pretty shallow and not based on careful consideration.

There is a chance that I am wrong, but I am confident that there is nothing
in the definition of Solomonoff Induction that could be used to prove it.
Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-20 Thread Jim Bromer
I am not going in circles.  I probably should not express myself in
replies.  I made a lot of mistakes getting to the conclusion that I got to,
and I am a little uncertain as to whether the construction of the diagonal
set actually means that there would be uncountable sets for this
particular example, but that, for example, has nothing to do with anything
that you said.
Jim Bromer

On Tue, Jul 20, 2010 at 5:07 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 *sigh* My response to that would just be to repeat certain questions I
 already asked you... I guess we should give it up after all. The best I can
 understand you is to assume that you simply don't understand the relevant
 mathematical constructions, and you've reached pretty much the same point
 with me. I'd continue in private if you're interested, but we should
 probably stop going in circles on a public list.

 --Abram

   On Tue, Jul 20, 2010 at 3:10 PM, Jim Bromer jimbro...@gmail.com wrote:

   The question was asked whether, given infinite
  resources could Solomonoff Induction work.  I made the assumption that it was
 computable and found that it wouldn't work.  It is not computable, even with
 infinite resources, for the kind of thing that was claimed it would do. (I
 believe that with a governance program it might actually be programmable)
 but it could not be used to predict (or compute the probability of) a
 subsequent string given some prefix string.  Not only is the method
  impractical it is theoretically inane.  My conclusion suggests that the use
  of Solomonoff Induction as an ideal for compression or something like MDL is
 not only unsubstantiated but based on a massive inability to comprehend the
 idea of a program that runs every possible program.

 I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 There is a chance that I am wrong, but I am confident that there is
  nothing in the definition of Solomonoff Induction that could be used to prove
 it.
 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
Abram,
I feel a responsibility to make an effort to explain myself when someone
doesn't understand what I am saying, but once I have gone over the material
sufficiently, if the person is still arguing with me about it I will just
say that I have already explained myself in the previous messages.  For
example if you can point to some authoritative source outside the
Solomonoff-Kolmogorov crowd that agrees that full program space, as it
pertains to definitions like all possible programs, or my example
of all possible mathematical functions, represents a comprehensible
concept that is open to mathematical analysis, then tell me about it.  We use
concepts like the set containing sets that are not members of themselves
as a philosophical tool that can lead to the discovery of errors in our
assumptions, and in this way such contradictions are of tremendous value.
The ability to use critical skills to find flaws in one's own presumptions
is essential in comprehension, and if that kind of critical thinking has
been turned off for some reason, then the consequences will be predictable.
I think compression is a useful field but the idea of universal induction
aka Solomonoff Induction is garbage science.  It was a good effort on
Solomonoff's part, but it didn't work and it is time to move on, as the
majority of theorists have.
Jim Bromer

On Sun, Jul 18, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I'm still not sure what your point even is, which is probably why my
 responses seem so strange to you. It still seems to me as if you are jumping
 back and forth between different positions, like I said at the start of this
 discussion.

 You didn't answer why you think program space does not represent a
 comprehensible concept. (I will drop the full if it helps...)

 My only conclusion can be that you are (at least implicitly) rejecting some
 classical mathematical principles and using your own very different notion
 of which proofs are valid, which concepts are well-defined, et cetera.

 (Or perhaps you just don't have a background in the formal theory of
 computation?)

 Also, not sure what difference you mean to say I'm papering over.

 Perhaps it *is* best that we drop it, since neither one of us is getting
 through to the other; but, I am genuinely trying to figure out what you are
 saying...

 --Abram

   On Sun, Jul 18, 2010 at 9:09 PM, Jim Bromer jimbro...@gmail.com wrote:

   Abram,
 I was going to drop the discussion, but then I thought I figured out why
 you kept trying to paper over the difference.  Of course, our personal
 disagreement is trivial; it isn't that important.  But the problem with
 Solomonoff Induction is that not only is the output hopelessly tangled and
 seriously infinite, but the input is as well.  The definition of all
 possible programs, like the definition of all possible mathematical
 functions, is not a proper mathematical problem that can be comprehended in
 an analytical way.  I think that is the part you haven't totally figured out
 yet (if you will excuse the pun).  Total program space, does not represent
 a comprehensible computational concept.  When you try find a way to work out
 feasible computable examples it is not enough to limit the output string
 space, you HAVE to limit the program space in the same way.  That second
 limitation makes the entire concept of total program space, much too
 weak for our purposes.  You seem to know this at an intuitive operational
 level, but it seems to me that you haven't truly grasped the implications.

 I say that Solomonoff Induction is computational but I have to use a trick
 to justify that remark.  I think the trick may be acceptable, but I am not
 sure.  But the possibility that the concept of all possible programs,
 might be computational doesn't mean that that it is a sound mathematical
 concept.  This underlies the reason that I intuitively came to the
 conclusion that Solomonoff Induction was transfinite.  However, I wasn't
 able to prove it because the hypothetical concept of all possible program
 space, is so pretentious that it does not lend itself to mathematical
 analysis.

 I just wanted to point this detail out because your implied view that you
 agreed with me but total program space was mathematically well-defined
 did not make any sense.
 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
I checked the term program space and found a few authors who used it, but
it seems to be an ad-hoc definition that is not widely used.  It seems to be
an amalgamation of the term sample space with the set of all programs or
something like that.  Of course, the simple comprehension of the idea of
all possible programs is different from the pretense that all possible
programs could be comprehended through some kind of strategy of evaluation
of all those programs.  It would be like confusing a domain from mathematics
with a value or a possibly evaluable variable (that can be assigned a value
from the domain).  These type distinctions are necessary for logical
thinking about these things.  The same kind of reasoning goes for Russell's
Paradox.  While I can, (with some thought) comprehend the definition and
understand the paradox, I cannot comprehend the set itself, that is, I
cannot comprehend the evaluation of the set.  Such a thing doesn't make any
sense.  It is odd that the set of all evaluable functions (or all programs)
is an inherent paradox when you try to think of it in the terms of an
evaluable function (as if writing a program that produces all possible
programs was feasible).  The oddness is due to the fact that there is
nothing that obviously leads to a paradox, and it is not easy to prove it
is a paradox (because it lacks the required definition). The only reason we
can give for the seeming paradox is that it is wrong to confuse the domain
of a mathematical definition with a value or values from the domain.  While
this barrier can be transcended in some very special cases, it very
obviously cannot be ignored for the general case.
Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
I made a remark about confusing a domain with the values that was wrong.  What
I should have said is that you cannot just treat a domain of functions or of
programs as if they were a domain of numbers or values and expect them to
act in ways that are familiar from a study of numbers.



Of course you can use any of the members of a domain of numbers or numerical
variables in evaluation methods, but when you try that with a domain of
functions, programs or algorithms, you have to expect that you may get some
odd results.



I believe that since programs can be represented by strings, the Solomonoff
Induction of programs can be seen to be computable because you can just
iterate through every possible program string.  I believe that the same
thing could be said of all possible Universal Turing Machines.  If these two
statements are true, then I believe that the program is both computable and
will create the situation of Cantor's diagonal argument.  I believe that the
construction of the infinite sequences of Cantor's argument can be
constructed through an infinite computable program, and since the program
can also act on the infinite memory that Solomonoff Induction needs,
Cantor's diagonal sequence can also be constructed by a program.  Since
Solomonoff Induction is defined so that it will use every possible program,
this situation cannot be avoided.
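
For reference, the diagonal construction being invoked works as follows for
any fixed enumeration of infinite binary sequences.  The sketch below
represents sequences as functions from an index to a bit; the enumeration s
is an arbitrary invented example, and whether the outputs of all possible
programs form an enumeration of this kind is exactly the point in dispute.

    def s(i, n):
        # Illustrative enumeration: the i-th sequence repeats the binary
        # digits of i forever (an arbitrary choice for the sketch).
        pattern = format(i, "b")
        return int(pattern[n % len(pattern)])

    def diagonal(n):
        # Flip the n-th bit of the n-th sequence, so the diagonal sequence
        # differs from every enumerated sequence in at least one position.
        return 1 - s(n, n)

    print([diagonal(n) for n in range(8)])
    print(all(diagonal(i) != s(i, i) for i in range(100)))   # True by construction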



Thus, Solomonoff Induction would be both computable and it would produce
uncountable infinities of strings.  When combined with the problem of
ordering the resulting strings in order to show how the functions might
approach stable limits for each probability, since you cannot a priori
determine the ordering of the programs that you would need for the
computation of these stable limiting probabilities you would be confronted
with the higher order infinity of all possible combinations of orderings of
the trans-infinite strings that the program would hypothetically produce.



Therefore, Solomonoff Induction is either incomputable or else it cannot be
proven to be capable of avoiding the production of trans-infinite strings
whose ordering is so confused that they would be totally useless for any
kind of prediction of a string based on a given prefix, as is claimed.  The
system is not any kind of ideal but rather *a confused theoretical notion*.

I might be wrong.  Or I might be right.

Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
Solomonoff Induction is not well-defined because it is either incomputable
and/or absurdly irrelevant.  This is where the communication breaks down.  I
have no idea why you would make a remark like that.  It is interesting that
you are an incremental-progress guy.



On Sat, Jul 17, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,


 Saying that something approximates Solomonoff Induction doesn't have any
 meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit mentioning?


 I'm not sure what you mean here; Solomonoff induction and the full program
 space both seem like well-defined concepts to me.


 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.


 A polynom SAT would certainly be a major breakthrough for AI and
 computation generally; and if the brain utilizes something like such an
 algorithm, then AGI could almost certainly never get off the ground without
 it.

 However, I'm far from saying there must be a breakthrough coming in this
 area, and I don't have any other areas in mind. I'm more of an
 incremental-progress type guy. :) IMHO, what the field needs to advance is
 for more people to recognize the importance of relational methods (as you
 put it I think, the importance of structure).

 --Abram

   On Sat, Jul 17, 2010 at 10:28 PM, Jim Bromer jimbro...@gmail.com wrote:

   Well I guess I misunderstood what you said.
 But, you did say,
  The question of whether the function would be useful for the sorts of
 things we keep talking about ... well, I think the best argument that I can
 give is that MDL is strongly supported by both theory and practice for many
 *subsets* of the full program space. The concern might be that, so far, it
 is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful predictors
 can be viewed as approximations to Solomonoff induction. It approximates
 solomonoff induction appears to be a good _explanation_ for the success of
 many systems.

 Saying that something approximates Solomonoff Induction doesn't have any
 meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit mentioning?

 I can see how some of the kinds of things that you have talked about (to
 use my own phrase in order to avoid having to list all the kinds of claims
 that I think have been made about this subject) could be produced from
 finite sets, but I don't understand why you think they are important.

 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.

 Can you give me a simple example and explanation of the kind of thing you
 have in mind, and why you think it is important?

 Jim Bromer


  On Fri, Jul 16, 2010 at 12:40 AM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 The statements about bounds are mathematically provable... furthermore, I
 was just agreeing with what you said, and pointing out that the statement
 could be proven. So what is your issue? I am confused at your response. Is
 it because I didn't include the proofs in my email?

 --Abram





 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
On Sun, Jul 18, 2010 at 11:09 AM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I think you are using a different definition of well-defined :). I am
 saying Solomonoff induction is totally well-defined as a mathematical
 concept. You are saying it isn't well-defined as a computational entity.
 These are both essentially true.

 Why you might insist that program-space is not well-defined, on the other
 hand, I do not know.

 --Abram


I said: does talk about the full program space, merit mentioning?
Solomonoff Induction is not totally well-defined as a mathematical
concept, as you said it was.
In both of these instances you used qualifications of excess: totally
well-defined and full.  It would be like me saying that, because your
thesis is wrong in a few ways, your thesis is 'totally wrong in full concept
space' or something like that.
Jim Bromer






 On Sun, Jul 18, 2010 at 8:02 AM, Jim Bromer jimbro...@gmail.com wrote:

 Solomonoff Induction is not well-defined because it is either incomputable
 and/or absurdly irrelevant.  This is where the communication breaks down.  I
 have no idea why you would make a remark like that.  It is interesting that
 you are an incremental-progress guy.



 On Sat, Jul 17, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,


 Saying that something "approximates Solomonoff Induction" doesn't have
 any meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about "the full program space" merit
 mentioning?


 I'm not sure what you mean here; Solomonoff induction and the full
 program space both seem like well-defined concepts to me.


 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.


 A polynomial-time SAT solver would certainly be a major breakthrough for AI and
 computation generally; and if the brain utilizes something like such an
 algorithm, then AGI could almost certainly never get off the ground without
 it.

 However, I'm far from saying there must be a breakthrough coming in this
 area, and I don't have any other areas in mind. I'm more of an
 incremental-progress type guy. :) IMHO, what the field needs to advance is
 for more people to recognize the importance of relational methods (as you
 put it I think, the importance of structure).

 --Abram

   On Sat, Jul 17, 2010 at 10:28 PM, Jim Bromer jimbro...@gmail.comwrote:

   Well I guess I misunderstood what you said.
 But, you did say,
  The question of whether the function would be useful for the sorts
 of things we keep talking about ... well, I think the best argument that I
 can give is that MDL is strongly supported by both theory and practice for
 many *subsets* of the full program space. The concern might be that, so 
 far,
 it is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful 
 predictors
 can be viewed as approximations to Solomonoff induction. "It approximates
 Solomonoff induction" appears to be a good _explanation_ for the success of
 many systems.

 Saying that something "approximates Solomonoff Induction" doesn't have
 any meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about "the full program space" merit
 mentioning?

 I can see how some of the kinds of things that you have talked about (to
 use my own phrase in order to avoid having to list all the kinds of claims
 that I think have been made about this subject) could be produced from
 finite sets, but I don't understand why you think they are important.

 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.

 Can you give me a simple example and explanation of the kind of thing
 you have in mind, and why you think it is important?

 Jim Bromer


  On Fri, Jul 16, 2010 at 12:40 AM, Abram Demski 
 abramdem...@gmail.comwrote:

 Jim,

 The statements about bounds are mathematically provable... furthermore,
 I was just agreeing with what you said, and pointing out that the 
 statement
 could be proven. So what is your issue? I am confused at your response. Is
 it because I didn't include the proofs in my email?

 --Abram





 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
Abram,
I was going to drop the discussion, but then I thought I figured out why you
kept trying to paper over the difference.  Of course, our personal
disagreement is trivial; it isn't that important.  But the problem with
Solomonoff Induction is that not only is the output hopelessly tangled and
seriously infinite, but the input is as well.  The definition of "all
possible programs", like the definition of "all possible mathematical
functions", is not a proper mathematical problem that can be comprehended in
an analytical way.  I think that is the part you haven't totally figured out
yet (if you will excuse the pun).  "Total program space" does not represent
a comprehensible computational concept.  When you try to find a way to work
out feasible computable examples, it is not enough to limit the output string
space; you HAVE to limit the program space in the same way.  That second
limitation makes the entire concept of "total program space" much too
weak for our purposes.  You seem to know this at an intuitive, operational
level, but it seems to me that you haven't truly grasped the implications.

I say that Solomonoff Induction is computational but I have to use a trick
to justify that remark.  I think the trick may be acceptable, but I am not
sure.  But the possibility that the concept of "all possible programs"
might be computational doesn't mean that it is a sound mathematical
concept.  This underlies the reason that I intuitively came to the
conclusion that Solomonoff Induction was transfinite.  However, I wasn't
able to prove it because the hypothetical concept of "all possible program
space" is so pretentious that it does not lend itself to mathematical
analysis.

I just wanted to point this detail out because your implied view, that you
agreed with me but that "total program space" was mathematically well-defined,
did not make any sense.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
On Wed, Jul 14, 2010 at 7:46 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 There is a simple proof of convergence for the sum involved in defining the
 probability of a given string in the Solomonoff distribution:

 At its greatest, a particular string would be output by *all* programs. In
 this case, its sum would come to 1. This puts an upper bound on the sum.
 Since there is no subtraction, there is a lower bound at 0 and the sum
 monotonically increases as we take the limit. Knowing these facts, suppose
 it *didn't* converge. It must then increase without bound, since it cannot
 fluctuate back and forth (it can only go up). But this contradicts the upper
 bound of 1. So, the sum must stop at 1 or below (and in fact we can prove it
 stops below 1, though we can't say where precisely without the infinite
 computing power required to compute the limit).

 --Abram
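
For concreteness, Abram's argument can be written with partial sums; the
notation below is mine and is only meant to restate what he says above.  With

    S_n(x) = \sum_{|p| \le n,\; U(p) = x\ldots} 2^{-|p|}

every term 2^{-|p|} is positive, so S_n(x) is nondecreasing in n, and the
Kraft inequality for a prefix-free set of programs gives
S_n(x) \le \sum_p 2^{-|p|} \le 1, which is one standard way to justify the
upper bound of 1 he asserts.  A nondecreasing sequence that is bounded above
converges, so the limit defining the probability of x exists and is at most 1.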


I believe that Solomonoff Induction would be computable given infinite time
and infinite resources (the Godel Theorem fits into this category) but some
people disagree for reasons I do not understand.

If it is not computable then it is not a mathematical theorem and the
question of whether the sum of probabilities equals 1 is pure fantasy.

If it is computable then the central issue is whether it could (given
infinite time and infinite resources) be used to determine the probability
of a particular string being produced from all possible programs.  The
question about the sum of all the probabilities is certainly an interesting
question. However, the problem of making sure that the function was actually
computable would interfere with this process of determining the probability
of each particular string that can be produced.  For example, since some
strings would be infinite, the computability problem makes it imperative
that the infinite strings be partially computed at an iteration (or else the
function would be hung up at some particular iteration and the infinite
other calculations could not be considered computable).

My criticism is that even though I believe the function may be theoretically
computable, each particular probability (of each particular string that is
produced) cannot be proven to approach a limit through mathematical analysis;
and since the individual probabilities will fluctuate with each new string
that is produced, one would have to know how to reorder the production of the
probabilities in order to demonstrate that the individual probabilities do
approach a limit.  If they don't, then the claim that this function could be
used to define the probabilities of a particular string from all possible
programs is unprovable.  (Some infinite calculations
fluctuate infinitely.)  Since you do not have any way to determine how to
reorder the infinite probabilities a priori, your algorithm would have to be
able to compute all possible reorderings to find the ordering and filtering
of the computations that would produce evaluable limits.  Since there are
trans infinite rearrangements of an infinite list (I am not sure that I am
using the term 'trans infinite' properly) this shows that the conclusion
that the theorem can be used to derive the desired probabilities is
unprovable through a variation of Cantor's Diagonal Argument, and that you
can't use Solomonoff Induction the way you have been talking about using it.

Since you cannot fully compute every string that may be produced at a
certain iteration, you cannot make the claim that you even know the
probabilities of any possible string before infinity and therefore your
claim that the sum of the probabilities can be computed is not provable.

But I could be wrong.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
I think that Solomonoff Induction includes a computational method that
produces probabilities of some sort and whenever those probabilities were
computed (in a way that would make the function computable) they would sum
up to 1.  But the issue that I am pointing out is that there is no way to
determine the margin of error in what is being computed, relative to what the
function has repeatedly been claimed to be capable of computing.  Since
you are not able to rely on something like the theory of limits, you are not
able to determine the degree of error in what is being computed.  And in
fact, there is no way to determine that what the function would compute
would be in any way useful for the sort of things that you guys keep talking
about.

Jim Bromer



On Thu, Jul 15, 2010 at 8:18 AM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Jul 14, 2010 at 7:46 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 There is a simple proof of convergence for the sum involved in defining
 the probability of a given string in the Solomonoff distribution:

 At its greatest, a particular string would be output by *all* programs. In
 this case, its sum would come to 1. This puts an upper bound on the sum.
 Since there is no subtraction, there is a lower bound at 0 and the sum
 monotonically increases as we take the limit. Knowing these facts, suppose
 it *didn't* converge. It must then increase without bound, since it cannot
 fluctuate back and forth (it can only go up). But this contradicts the upper
 bound of 1. So, the sum must stop at 1 or below (and in fact we can prove it
 stops below 1, though we can't say where precisely without the infinite
 computing power required to compute the limit).

 --Abram


 I believe that Solomonoff Induction would be computable given infinite time
 and infinite resources (the Godel Theorem fits into this category) but some
 people disagree for reasons I do not understand.

 If it is not computable then it is not a mathematical theorem and the
 question of whether the sum of probabilities equals 1 is pure fantasy.

 If it is computable then the central issue is whether it could (given
 infinite time and infinite resources) be used to determine the probability
 of a particular string being produced from all possible programs.  The
 question about the sum of all the probabilities is certainly an interesting
 question. However, the problem of making sure that the function was actually
 computable would interfere with this process of determining the probability
 of each particular string that can be produced.  For example, since some
 strings would be infinite, the computability problem makes it imperative
 that the infinite strings be partially computed at an iteration (or else the
 function would be hung up at some particular iteration and the infinite
 other calculations could not be considered computable).

 My criticism is that even though I believe the function may be
 theoretically computable, the fact that each particular probability (of each
 particular string that is produced) cannot be proven to approach a limit
 through mathematical analysis, and since the individual probabilities will
 fluctuate with each new string that is produced, one would have to know how
 to reorder the production of the probabilities in order to demonstrate that
 the individual probabilities do approach a limit.  If they don't, then the
 claim that this function could be used to define the probabilities of a
 particular string from all possible program is unprovable.  (Some
 infinite calculations fluctuate infinitely.)  Since you do not have any way
 to determine how to reorder the infinite probabilities a priori, your
 algorithm would have to be able to compute all possible reorderings to find
 the ordering and filtering of the computations that would produce evaluable
 limits.  Since there are trans infinite rearrangements of an infinite list
 (I am not sure that I am using the term 'trans infinite' properly) this
 shows that the conclusion that the theorem can be used to derive the desired
 probabilities is unprovable through a variation of Cantor's Diagonal
 Argument, and that you can't use Solomonoff Induction the way you have been
 talking about using it.

 Since you cannot fully compute every string that may be produced at a
 certain iteration, you cannot make the claim that you even know the
 probabilities of any possible string before infinity and therefore your
 claim that the sum of the probabilities can be computed is not provable.

 But I could be wrong.
 Jim Bromer






Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread Jim Bromer
On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:

 What do you mean by definitive events?



I was just trying to find a way to designate observations that would be
reliably obvious to a computer program.  This has something to do with the
assumptions that you are using.  For example if some object appeared against
a stable background and it was a different color than the background, it
would be a definitive observation event because your algorithm could detect
it with some certainty and use it in the definition of other more
complicated events (like occlusion.)  Notice that this example would not
necessarily be so obvious (a definitive event) using a camera, because there
are a number of ways that an illusion (of some kind) could end up as a data
event.
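
A minimal sketch of the kind of "definitive observation event" described in
the previous paragraph, assuming idealized screen-style input with exact
pixel values and no camera noise; the function name and the toy data are
made up for illustration only:

    import numpy as np

    def definitive_appearance(background, frame):
        """Report a definitive event: a region whose pixels differ exactly
        from a known, stable background.  Returns None if nothing changed."""
        changed = np.any(frame != background, axis=-1)   # per-pixel change mask
        if not changed.any():
            return None
        ys, xs = np.nonzero(changed)
        # Bounding box of the changed region; with exact pixel data this is
        # detectable with certainty, unlike noisy camera input.
        return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

    # Toy example: a 4x4 white background and a frame with a 2x2 red square.
    bg = np.full((4, 4, 3), 255, dtype=np.uint8)
    fr = bg.copy()
    fr[1:3, 1:3] = (255, 0, 0)
    print(definitive_appearance(bg, fr))   # -> (1, 1, 2, 2)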

I will try to reply to the rest of your message sometime later.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
We all make conjectures all of the time, but we often don't have any way to
establish credibility for the claims that are made.  So I wanted
to examine one part of this field, and the idea that seemed most natural for
me was Solomonoff Induction.  I have reached a conclusion about the subject
and that conclusion is that all of the claims that I have seen made about
Solomonoff Induction are rationally unfounded including the one that you
just made when you said: We can produce upper bounds that get closer and
closer w/o getting arbitrarily near, and we can produce numbers which do
approach arbitrarily near to the correct number in the limit but sometimes
dip below for a time; but we can't have both features.

Your inability to fully recognize or perhaps acknowledge that you cannot use
Solomonoff Induction to make this claim is difficult for me to comprehend.

While the fields of compression and probability have an impressive body of
evidence supporting them, I simply have no reason to think the kind of
claims that have been made about Solomonoff Induction have any merit.  By
natural induction I feel comfortable drawing the conclusion that this whole
area related to algorithmic information theory is based on shallow methods
of reasoning.  It can be useful, as it was for me, just as means of
exploring ideas that I would not have otherwise explored.  But its
usefulness comes in learning how to determine its lack of merit.

I will write one more thing about my feelings about computability, but I
will start a new thread and just mention the relation to this thread.

Jim Bromer

On Thu, Jul 15, 2010 at 2:45 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Yes this is true  provable: there is no way to compute a correct error
 bound such that it converges to 0 as the computation of algorithmic
 probability converges to the correct number. More specifically--- we can
 approximate the algorithmic probability from below, computing better lower
 bounds which converge to the correct number, but we cannot approximate it
 from above, as there is no procedure (and can never be any procedure) which
 creates closer and closer upper bounds converging to the correct number.

 (We can produce upper bounds that get closer and closer w/o getting
 arbitrarily near, and we can produce numbers which do approach arbitrarily
 near to the correct number in the limit but sometimes dip below for a time;
 but we can't have both features.)

 The question of whether the function would be useful for the sorts of
 things we keep talking about ... well, I think the best argument that I can
 give is that MDL is strongly supported by both theory and practice for many
 *subsets* of the full program space. The concern might be that, so far, it
 is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful predictors
 can be viewed as approximations to Solomonoff induction. "It approximates
 Solomonoff induction" appears to be a good _explanation_ for the success of
 many systems.

 What sort of alternatives do you have in mind, by the way?

 --Abram

   On Thu, Jul 15, 2010 at 11:50 AM, Jim Bromer jimbro...@gmail.comwrote:

   I think that Solomonoff Induction includes a computational method that
 produces probabilities of some sort and whenever those probabilities were
 computed (in a way that would make the function computable) they would sum
 up to 1.  But the issue that I am pointing out is that there is no way that
 you can determine the margin of error in what is being computed for what has
 been repeatedly claimed that the function is capable of computing.  Since
 you are not able to rely on something like the theory of limits, you are not
 able to determine the degree of error in what is being computed.  And in
 fact, there is no way to determine that what the function would compute
 would be in any way useful for the sort of things that you guys keep talking
 about.

 Jim Bromer



 On Thu, Jul 15, 2010 at 8:18 AM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Jul 14, 2010 at 7:46 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 There is a simple proof of convergence for the sum involved in defining
 the probability of a given string in the Solomonoff distribution:

 At its greatest, a particular string would be output by *all* programs.
 In this case, its sum would come to 1. This puts an upper bound on the sum.
 Since there is no subtraction, there is a lower bound at 0 and the sum
 monotonically increases as we take the limit. Knowing these facts, suppose
 it *didn't* converge. It must then increase without bound, since it cannot
 fluctuate back and forth (it can only go up). But this contradicts the 
 upper
 bound of 1. So, the sum must stop at 1 or below (and in fact we can prove 
 it
 stops below 1

Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread Jim Bromer
On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:

 I don't really understand what you mean here: The central unsolved
 problem, in my view, is: How can hypotheses be conceptually integrated along
 with the observable definitive events of the problem to form good
 explanatory connections that can mesh well with other knowledge about the
 problem that is considered to be reliable.  The second problem is finding
 efficient ways to represent this complexity of knowledge so that the program
 can utilize it efficiently.
 You also might want to include concrete problems to analyze for your
 central problem suggestions. That would help define the problem a bit better
 for analysis.
 Dave


I suppose a hypothesis is a kind of concept.  So there are other kinds of
concepts that we need to use with hypotheses.  A hypothesis has to be
conceptually integrated into other concepts.  Conceptual integration is
something of greater complexity than shallow deduction or probability
chains.  While reasoning chains are needed in conceptual integration,
conceptual integration is to a chain of reasoning what a multi-dimensional
structure is to a one-dimensional chain.

I will try to come up with some examples.
Jim Bromer





Re: [agi] How do we Score Hypotheses?

2010-07-14 Thread Jim Bromer
On Tue, Jul 13, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:
Even if you refined your model until it was just right, you would have only
caught up to everyone else with a solution to a narrow AI problem.


I did not mean that you would just have a solution to a narrow AI problem,
but that your solution, if put in the form of scoring of points on the basis
of the observation of *definitive* events, would constitute a narrow AI
method.  The central unsolved problem, in my view, is: how can hypotheses be
conceptually integrated along with the observable definitive events of the
problem to form good explanatory connections that can mesh well with other
knowledge about the problem that is considered to be reliable?  The second
problem is finding efficient ways to represent this complexity of knowledge
so that the program can utilize it efficiently.





[agi] Comments On My Skepticism of Solomonoff Induction

2010-07-14 Thread Jim Bromer
Last week I came up with a sketch that I felt showed that Solomonoff
Induction was incomputable *in practice* using a variation of Cantor's
Diagonal Argument.  I wondered if my argument made sense or not.  I will
explain why I think it did.



First of all, I should have started out by saying something like, "Suppose
Solomonoff Induction was computable," since there is some reason why people
feel that it isn't.



Secondly I don't think I needed to use Cantor's Diagonal Argument (for the *in
practice* case), because it would be sufficient to point out that since it
was impossible to say whether or not the probabilities ever approached any
sustained (collared) limits due to the lack of an adequate mathematical
definition of the concept of "all programs", it would be impossible to make
the claim that they were actual representations of the probabilities of all
programs that could produce certain strings.



But before I start to explain why I think my variation of the Diagonal
Argument was valid, I would like to make another comment about what was
being claimed.



Take a look at the n-ary expansion of the square root of 2 (such as the
decimal expansion or the binary expansion).  The decimal expansion or the
binary expansion of the square root of 2 is an infinite string.  To say that
the algorithm that produces the value is "predicting" the value is a
tortured use of the meaning of the word 'prediction'.  Now I have less than
perfect grammar, but the idea of prediction is so important in the field of
intelligence that I do not feel that this kind of reduction of the concept
of prediction is illuminating.



Incidentally, there are infinitely many ways to produce the square root of 2
(sqrt(2)+1-1, sqrt(2)+2-2, sqrt(2)+3-3, ...).  So the idea that the square
root of 2 is unlikely is another stretch of conventional thinking.  But since
there are infinitely many ways for a program to produce any number (that can
be produced by a program), we would imagine that the probability of any one
of the infinitely many ways to produce the square root of 2 approaches 0 but
never reaches it.  We can imagine it, but we cannot prove that this occurs in Solomonoff
Induction because Solomonoff Induction is not limited to just this class of
programs (which could be proven to approach a limit).  For example, we could
make a similar argument for any number, including a 0 or a 1 which would
mean that the infinite string of digits for the square root of 2 is just as
likely as the string 0.



But the reason why I think a variation of the diagonal argument can work in
my argument is that since we cannot prove that the infinite computations of
the probabilities that a -program will produce a string- will ever approach
a limit, to use the probabilities reliably (even as an infinite theoretical
method) we would have to find some way to rearrange the computations of the
probabilities so that they could.  While the number of ways to rearrange the
ordering of a finite number of things is finite no matter how large the
number is, the number of possible ways to rearrange an infinite number of
things is infinite.  I believe that this problem of finding the right
rearrangement of an infinite list of computations of values after the
calculation of the list is finished qualifies for an infinite to infinite
diagonal argument.


I want to add one more thing to this in a few days.

Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread Jim Bromer
On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski abramdem...@gmail.com wrote:
[The]complaint that probability theory doesn't try to figure out why it was
wrong in the 30% (or whatever) it misses is a common objection. Probability
theory glosses over important detail, it encourages lazy thinking, etc.
However, this all depends on the space of hypotheses being examined.
Statistical methods will be prone to this objection because they are
essentially narrow-AI methods: they don't *try* to search in the space of
all hypotheses a human might consider. An AGI setup can and should have such
a large hypothesis space.
---
That is the thing.
We cannot search all possible hypotheses because we could not even write all
possible hypotheses down.  This is why hypotheses have to be formed
creatively in response to an analysis of a situation.  In my arrogant
opinion, this is best done through a method that creatively uses discrete
representations.  Of course it can use statistical or probabilistic data in
making those creative hypotheses when there is good data to be used.  But
the best way to do this is through categorization based creativity.  But
this is an imaginative method, one which creates imaginative
explanations (or other co-relations) for observed or conjectured events.
Those imaginative hypotheses then have to be compared to a situation through
some trial and error methods.  Then the tentative conjectures that seem to
withstand initial tests have to be further integrated into other hypotheses,
conjectures and explanations that are related to the subject of the
hypotheses.   This process of conceptual integration, a process which has to
rely on both creative methods and rational methods, is a fundamental part of
the process which does not seem to be clearly understood.  Conceptual
Integration cannot be accomplished by reducing a concept to True or False or
to some number from 0 to 1 and then combining it with other concepts that were
also so reduced.  Ideas take on roles when combined with other ideas.
Basically, a new idea has to be fit into a complex of other ideas that are
strongly related to it.

Jim Bromer





On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski abramdem...@gmail.com wrote:

 PS-- I am not denying that statistics is applied probability theory. :)
 When I say they are different, what I mean is that saying I'm going to use
 probability theory and I'm going to use statistics tend to indicate very
 different approaches. Probability is a set of axioms, whereas statistics is
 a set of methods. The probability theory camp tends to be bayesian, whereas
 the stats camp tends to be frequentist.

 Your complaint that probability theory doesn't try to figure out why it was
 wrong in the 30% (or whatever) it misses is a common objection. Probability
 theory glosses over important detail, it encourages lazy thinking, etc.
 However, this all depends on the space of hypotheses being examined.
 Statistical methods will be prone to this objection because they are
 essentially narrow-AI methods: they don't *try* to search in the space of
 all hypotheses a human might consider. An AGI setup can and should have such
 a large hypothesis space. Note that AIXI is typically formulated as using a
 space of crisp (non-probabilistic) hypotheses, though probability theory is
 used to reason about them. This means no theory it considers will gloss over
 detail in this way: every theory completely explains the data. (I use AIXI
 as a convenient example, not because I agree with it.)

 --Abram

 On Mon, Jul 12, 2010 at 2:42 PM, Abram Demski abramdem...@gmail.comwrote:

 David,

 I tend to think of probability theory and statistics as different things.
 I'd agree that statistics is not enough for AGI, but in contrast I think
 probability theory is a pretty good foundation. Bayesianism to me provides a
 sound way of integrating the elegance/utility tradeoff of explanation-based
 reasoning into the basic fabric of the uncertainty calculus. Others advocate
 different sorts of uncertainty than probabilities, but so far what I've seen
 indicates more a lack of ability to apply probability theory than a need for
 a new type of uncertainty. What other methods do you favor for dealing with
 these things?

 --Abram


 On Sun, Jul 11, 2010 at 12:30 PM, David Jones davidher...@gmail.comwrote:

 Thanks Abram,

 I know that probability is one approach. But there are many problems with
 using it in actual implementations. I know a lot of people will be angered
 by that statement and retort with all the successes that they have had using
 probability. But, the truth is that you can solve the problems many ways and
 every way has its pros and cons. I personally believe that probability has
 unacceptable cons if used all by itself. It must only be used when it is the
 best tool for the task.

 I do plan to use some probability within my approach. But only when it
 makes sense to do so. I do not believe in completely

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread Jim Bromer
On Tue, Jul 13, 2010 at 10:07 AM, Mike Tintner tint...@blueyonder.co.ukwrote:


 And programs as we know them, don't and can't handle *concepts* -  despite
 the misnomers of conceptual graphs/spaces etc wh are not concepts at all.
 They can't for example handle writing or shopping because these can only
 be expressed as flexible outlines/schemas as per ideograms.


I disagree with this, and so this is the proper focus for our disagreement.
Although there are other aspects of the problem that we probably disagree
on, this is such a fundamental issue, that nothing can get past it.  Either
programs can deal with flexible outlines/schema or they can't.  If they
can't then AGI is probably impossible.  If they can, then AGI is probably
possible.

I think that we both agree that creativity and imagination is absolutely
necessary aspects of intelligence.

Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread Jim Bromer
I meant,

I think that we both agree that creativity and imagination are absolutely
necessary aspects of intelligence.

of course!


On Tue, Jul 13, 2010 at 12:46 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Tue, Jul 13, 2010 at 10:07 AM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:


 And programs as we know them, don't and can't handle *concepts* -  despite
 the misnomers of conceptual graphs/spaces etc wh are not concepts at all.
 They can't for example handle writing or shopping because these can only
 be expressed as flexible outlines/schemas as per ideograms.


 I disagree with this, and so this is proper focus for our disagreement.
 Although there are other aspects of the problem that we probably disagree
 on, this is such a fundamental issue, that nothing can get past it.  Either
 programs can deal with flexible outlines/schema or they can't.  If they
 can't then AGI is probably impossible.  If they can, then AGI is probably
 possible.

 I think that we both agree that creativity and imagination is absolutely
 necessary aspects of intelligence.

 Jim Bromer










Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread Jim Bromer
On Tue, Jul 13, 2010 at 2:36 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  The first thing is to acknowledge that programs *don't* handle concepts -
 if you think they do, you must give examples.

 The reasons they can't, as presently conceived, is

 a) concepts encase a more or less *infinite diversity of forms* (even
 if only applying at first to a species of object)  -  *chair* for example
 as I've demonstrated embraces a vast open-ended diversity of radically
 different chair forms; higher order concepts like  furniture embrace ...
 well, it's hard to think even of the parameters, let alone the diversity of
 forms, here.

 b) concepts are *polydomain*- not just multi- but open-endedly extensible
 in their domains; chair for example, can also refer to a person, skin in
 French, two humans forming a chair to carry s.o., a prize, etc.

 Basically concepts have a freeform realm or sphere of reference, and you
 can't have a setform, preprogrammed approach to defining that realm.

 There's no reason however why you can't mechanically and computationally
 begin to instantiate the kind of freeform approach I'm proposing.



So here you are saying that programs don't handle concepts but they could
begin to instantiate the kind of freeform approach that you are proposing.
Are you sure you are not saying that programs can't handle concepts unless
we do exactly what you are suggesting that we should do?  Because a lot of
us say that.

Jim Bromer





Re: [agi] How do we Score Hypotheses?

2010-07-13 Thread Jim Bromer
My opinion is that this is as good a place to start as any.  At least you
are dealing with an actual problem, you're trying different stuff out, and you
seem like you are willing to actually try it out.
The problem is that the scoring is based on a superficial model of
conceptual integration, where, for some reason, you believe that the answer
to the essential problem includes a method of rephrasing the problem into
simpler questions which then can magically be answered.  You are worried
about the finery without first creating the structure.  Even if you refined
your model until it was just right, you would have only caught up to
everyone else with a solution to a narrow AI problem.
Jim Bromer

On Tue, Jul 13, 2010 at 8:15 PM, David Jones davidher...@gmail.com wrote:

 I've been trying to figure out how to score hypotheses. Do you guys have
 any constructive ideas about how to define the way you score hypotheses like
 these a little better? I'll define the problem below in detail. I know Abram
 mentioned MDL, which I'm about to look into. Does that even apply to this
 sort of thing?

 I came up with a hypothesis scoring idea. It goes as follows

 *Rule 1:* Hypotheses are compared only 1 at a time.
 *Rule 2:* If hypothesis 1 predicts/expects/anticipates something, then you
 add (+1) to its score and subtract (-1) from hypothesis 2 if it doesn't also
 anticipate the observation. (Note:When comparing only 2 hypotheses, it may
 actually not be necessary to subtract from the competing hypothesis I
 guess.)

 *Here is the specific problem I'm analyzing: *Let's say that you have two
 window objects that contain the same letter, such as the letter e. In
 frame 0, the first window object is visible. In frame 1, window 1 moves a
 bit. In frame 2 though, the second window object appears and completely
 occludes the first window object. So, if you only look at the letter e
 from frame 0 to frame 2, it looks like it never disappears and it just
 moves. But that's not what happens. There are two independent instances of
 the letter e. But, how do we get the algorithm to figure this out in a
 general way? How do we get it to compare the two possible hypotheses (1
 object or two objects) and decide that one is better than the other? That is
 what the hypothesis scoring method is for.

 *Algorithm Description and Details*
 *Hypothesis 1:* there are two separate objects... there are two separate
 instances of the letter e
 *Hypothesis 2:* there is only one letter object... only one letter e
 that occurs in all the frames of the video.

 *Time 0: object 1*

 *Time 1: e moves rigidly with object 1*
 H1: +1 compared to h2 because we expect the e to move rigidly with the
 first object, rather than independently from the first object.
 H2: -1 compared to h1 because we don't expect the first object to move
 rigidly with e but h1 does.

 *Time 2: object 2 appears and completely occludes object 1.  Object 1 and
 2 both have the letter e on them. So, to a dumb algorithm, it looks as if
 the e moved between the two frames of the video.*
 H1: -1 compared to h2 because we don't expect what h2 expects.
 H2: +1 compared to h1 e moves independently of the first window

 *Time 3: e moves rigidly with object 2*
 H1: +1 compared to h2 e moves with second object.
 H2: -1 compared to h1
 *Time 4: e moves rigidly with object 2*
 H1: +1 compared to h2 e moves with second object.
 H2: -1 compared to h1
 *Time 5: e moves rigidly with object 2*
 H1: +1 compared to h2 e moves with second object.
 H2: -1 compared to h1

 *After 5 video frames the score is: *
 H1: +3
 H2: -3

 Dave
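
A minimal sketch of the scoring walkthrough above, with the per-frame
expectations hard-coded to mirror the example; the names are invented for
illustration and do not come from an actual implementation:

    def score_pair(observations, h1_expects, h2_expects):
        """Rule 1: compare exactly two hypotheses.  Rule 2: +1 to a hypothesis
        whose expectation matches the observation, -1 to the other if its
        expectation does not match."""
        s1 = s2 = 0
        for time, obs in observations:
            e1, e2 = h1_expects(time, obs), h2_expects(time, obs)
            if e1 and not e2:
                s1, s2 = s1 + 1, s2 - 1
            elif e2 and not e1:
                s1, s2 = s1 - 1, s2 + 1
        return s1, s2

    # Times 1-5 of the example; the strings are only labels for the observation.
    observations = [(1, "e moves rigidly with object 1"),
                    (2, "e appears to jump to object 2 (occlusion)"),
                    (3, "e moves rigidly with object 2"),
                    (4, "e moves rigidly with object 2"),
                    (5, "e moves rigidly with object 2")]

    # H1 (two separate e's) anticipates every frame except the apparent jump;
    # H2 (a single e) anticipates only the apparent jump.
    h1 = lambda t, obs: t != 2
    h2 = lambda t, obs: t == 2

    print(score_pair(observations, h1, h2))   # -> (3, -3), the totals given above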






Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Jim Bromer
Abram,
Solomonoff Induction would produce poor predictions if it could be used to
compute them.  Secondly, since it cannot be computed it is useless.  Third,
it is not the sort of thing that is useful for AGI in the first place.

You could experiment with finite possible ways to produce a string and see
how useful the idea is, both as an abstraction and as an actual AGI tool.
Have you tried this?  An example is a word program that completes a word as
you are typing.

As for Matt's complaint: I haven't yet been able to find a way that
could be used to prove that Solomonoff Induction does not do what Matt
claims it does, but I have yet to see an explanation of a proof that it
does.  When you are dealing with unverifiable pseudo-abstractions you are
dealing with something that cannot be proven.  All we can work on is whether
or not the idea seems to make sense as an abstraction.

As I said, the starting point would be to develop simpler problems and see
how they behave as you build up more complicated problems.

Jim

On Thu, Jul 8, 2010 at 5:15 PM, Abram Demski abramdem...@gmail.com wrote:

 Yes, Jim, you seem to be mixing arguments here. I cannot tell which of the
 following you intend:

 1) Solomonoff induction is useless because it would produce very bad
 predictions if we could compute them.
 2) Solomonoff induction is useless because we can't compute its
 predictions.

 Are you trying to reject #1 and assert #2, reject #2 and assert #1, or
 assert both #1 and #2?

 Or some third statement?

 --Abram


 On Wed, Jul 7, 2010 at 7:09 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Who is talking about efficiency? An infinite sequence of uncomputable
 values is still just as uncomputable. I don't disagree that AIXI and
 Solomonoff induction are not computable. But you are also arguing that they
 are wrong.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 7, 2010 6:40:52 PM
 *Subject:* Re: [agi] Solomonoff Induction is Not Universal and
 Probability is not Prediction

 Matt,
 But you are still saying that Solomonoff Induction has to be recomputed
 for each possible combination of bit value aren't you?  Although this
 doesn't matter when you are dealing with infinite computations in the first
 place, it does matter when you are wondering if this has anything to do with
 AGI and compression efficiencies.
 Jim Bromer
 On Wed, Jul 7, 2010 at 5:44 PM, Matt Mahoney matmaho...@yahoo.comwrote:

Jim Bromer wrote:
  But, a more interesting question is, given that the first digits are
 000, what are the chances that the next digit will be 1?  Dim Induction will
 report .5, which of course is nonsense and a whole lot less useful than making a
 rough guess.

 Wrong. The probability of a 1 is p(0001)/(p(0000)+p(0001)) where the
 probabilities are computed using Solomonoff induction. A program that
 outputs 0000 will be shorter in most languages than a program that outputs
 0001, so 0 is the most likely next bit.

 More generally, probability and prediction are equivalent by the chain
 rule. Given any 2 strings x followed by y, the prediction p(y|x) =
 p(xy)/p(x).
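
 (To make the chain-rule remark concrete with made-up numbers, which are not
 values anyone has computed: if p(000) = 0.040, p(0000) = 0.032 and
 p(0001) = 0.006, then

     p(1 | 000) = p(0001) / p(000) = 0.006 / 0.040 = 0.15
     p(0 | 000) = p(0000) / p(000) = 0.032 / 0.040 = 0.80

 with the remaining 0.002 of p(000) belonging to programs whose output never
 gets past 000.  The shorter-program bias is what pushes p(0000) above p(0001).)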


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 7, 2010 10:10:37 AM
 *Subject:* [agi] Solomonoff Induction is Not Universal and Probability
 is not Prediction

 Suppose you have sets of programs that produce two strings.  One set of
 outputs is 00 and the other is 11. Now suppose you used these sets
 of programs to chart the probabilities of the output of the strings.  If the
 two strings were each output by the same number of programs then you'd have
 a .5 probability that either string would be output.  That's ok.  But, a
 more interesting question is, given that the first digits are 000, what are
 the chances that the next digit will be 1?  Dim Induction will report .5,
 which of course is nonsense and a whole lot less useful than making a rough
 guess.

 But, of course, Solomonoff Induction purports to be able, if it was
 feasible, to compute the possibilities for all possible programs.  Ok, but
 now, try thinking about this a little bit.  If you have ever tried writing
 random program instructions what do you usually get?  Well, I'll take a
 hazard and guess (a lot better than the bogus method of confusing shallow
 probability with prediction in my example) and say that you will get a lot
 of programs that crash.  Well, most of my experiments with that have ended
 up with programs that go into an infinite loop or which crash.  Now on a
 universal Turing machine, the results would probably look a little
 different.  Some strings will output nothing and go into an infinite loop.
 Some programs will output something and then either stop outputting anything
 or start outputting an infinite loop of the same substring.  Other

Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Jim Bromer
On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel b...@goertzel.org wrote:

If you're going to argue against a mathematical theorem, your argument must
be mathematical not verbal.  Please explain one of

1) which step in the proof about Solomonoff induction's effectiveness you
believe is in error

2) which of the assumptions of this proof you think is inapplicable to real
intelligence [apart from the assumption of infinite or massive compute
resources]


Solomonoff Induction is not a provable theorem; it is therefore a
conjecture.  It cannot be computed, and it cannot be verified.  There are many
mathematical theorems that require the use of limits to prove them, for
example, and I accept those proofs.  (Some people might not.)  But there is
no evidence that Solomonoff Induction would tend toward some limits.  Now
maybe the conjectured abstraction can be verified through some other means,
but I have yet to see an adequate explanation of that in any terms.  The
idea that I have to answer your challenges using only the terms you specify
is noise.

Look at 2.  What does that say about your theorem?

I am working on 1 but I just said: I haven't yet been able to find a way
that could be used to prove that Solomonoff Induction does not do what Matt
claims it does.
What is notable is that no one has objected to my characterization of
the conjecture as I have been able to work it out for myself.  It requires
an infinite set of infinitely computed probabilities of each infinite
string.  If this characterization is correct, then Matt has been using the
term "string" ambiguously: as a primary sample space (a particular string)
and as a compound sample space (all the possible individual cases of the
substring compounded into one).  No one has yet told of his mathematical
experiments of using a Turing simulator to see what a finite iteration of
all possible programs of a given length would actually look like.

I will finish this later.
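
Since the thread keeps returning to what running "all possible programs of a
given length" would even look like, here is a toy version of that experiment.
The 2-bit instruction set is invented purely for illustration (it is nowhere
near a universal machine), so the tallies say nothing about Solomonoff
induction itself; they only show the mix of halting, looping and output
behavior that such an enumeration produces:

    from itertools import product
    from collections import Counter

    def run(program, max_steps=64):
        """Run one toy program: 00 -> emit 0, 01 -> emit 1,
        10 -> jump to the start, 11 -> halt."""
        out, pc, steps = [], 0, 0
        while steps < max_steps:
            if pc >= len(program) - 1:        # ran off the end: treat as a halt
                return "halt", "".join(out)
            op = program[pc:pc + 2]
            if op == "00":
                out.append("0"); pc += 2
            elif op == "01":
                out.append("1"); pc += 2
            elif op == "10":
                pc = 0                        # unconditional jump back: usually a loop
            else:                             # "11"
                return "halt", "".join(out)
            steps += 1
        return "loop?", "".join(out)          # hit the step cap; may run forever

    tally = Counter()
    for bits in product("01", repeat=8):      # every 8-bit program
        status, output = run("".join(bits))
        tally[(status, output[:4])] += 1      # bucket by status and output prefix

    for key, count in sorted(tally.items()):
        print(key, count)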




  On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
  Solomonoff Induction would produce poor predictions if it could be used to
 compute them.


 Solomonoff induction is a mathematical, not verbal, construct.  Based on
 the most obvious mapping from the verbal terms you've used above into
 mathematical definitions in terms of which Solomonoff induction is
 constructed, the above statement of yours is FALSE.

 If you're going to argue against a mathematical theorem, your argument must
 be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness you
 believe is in error

 2) which of the assumptions of this proof you think is inapplicable to real
 intelligence [apart from the assumption of infinite or massive compute
 resources]

 Otherwise, your statement is in the same category as the statement by the
 protagonist of Dostoevsky's Notes from the Underground --

 I admit that two times two makes four is an excellent thing, but if we are
 to give everything its due, two times two makes five is sometimes a very
 charming thing too.

 ;-)



 Secondly, since it cannot be computed it is useless.  Third, it is not the
 sort of thing that is useful for AGI in the first place.


 I agree with these two statements

 -- ben G







Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Jim Bromer
On Fri, Jul 9, 2010 at 11:37 AM, Ben Goertzel b...@goertzel.org wrote:


 I don't think Solomonoff induction is a particularly useful direction for
 AI, I was just taking issue with the statement made that it is not capable
 of correct prediction given adequate resources...


Pi is not computable.  It would take infinite resources to compute it.
However, because Pi approaches a limit, the theory of limits can be used to
show that it can be refined to any limit of precision that is possible, and
since it consistently approaches a limit it can be used in general theorems
that can
be proven through induction.  You can use *computed values* of pi in a
general theorem as long as you can show that the usage is valid by using the
theory of limits.
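
A small illustration of the point about limits, using the alternating Leibniz
series for pi, whose remaining error is provably no larger than the next term;
this is only a sketch (much faster series exist):

    def pi_to_within(eps):
        """Return an approximation of pi guaranteed to be within eps,
        using pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
        total, k = 0.0, 0
        while True:
            total += (-1) ** k / (2 * k + 1)
            # Alternating series bound: the error of the partial sum is at
            # most the magnitude of the next term.
            if 1.0 / (2 * (k + 1) + 1) <= eps / 4:
                return 4 * total
            k += 1

    print(pi_to_within(1e-4))   # prints a value within 1e-4 of pi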

I think I figured out a way, given infinite resources, to write a program
that could compute Solomonoff Induction.  However, since it cannot be
shown (or at least I don't know anyone who has ever shown) that the
probabilities approach some value (or values) as a limit (or limits), this
program (or a variation on this kind of program) could not be used to show
that it can be:
1. computed to any specified degree of precision within some finite number
of steps.
2. proven through the use of mathematical induction.

The proof is based on the diagonal argument of Cantor, but it might be
considered as a variation of Cantor's diagonal argument.  There can be no
one-to-one *mapping of the computation to a usage* as the computation
approaches infinity to make the values approach some limit of precision. For
any computed values there is always a *possibility* (this is different than
Cantor) that there are an infinite number of more precise values (of the
probability of a string (primary sample space or compound sample space))
within any two iterations of the computational program (formula).

So even though I cannot disprove what Solomonoff Induction might be given
infinite resources, if this superficial analysis is right, without a way to
compute the values so that they tend toward a limit for each of the
probabilities needed, it is not a usable mathematical theorem.

What uncomputable means is that any statement (most statements) drawn from
it are matters of mathematical conjecture or opinion.  It's like opining
that the Godel sentence, given infinite resources, is decidable.

I don't think the question of whether it is valid for infinite resources or
not can be answered mathematically for the time being.  And conclusions
drawn from uncomputable results have to be considered dubious.

However, it certainly leads to other questions which I think are more
interesting and more useful.

What is needed to promote greater insight about the problem of conditional
probabilities in complicated situations where the probability emitters
and the elementary sample space may be obscured by the use of complicated
interactions and a preliminary focus on compound sample spaces?  Are there
theories which, like asking questions about the givens in a problem, could
lead toward a greater detection of the relation between the givens and
the primary probability emitters and the primary sample space?

Can a mathematical theory be based solely on abstract principles even though
it is impossible to evaluate the use of those abstractions with examples
from the particulars (of the abstractions)?  How could those abstract
principles be reliably defined so that they aren't too simplistic?

Jim Bromer





Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Jim Bromer
On Fri, Jul 9, 2010 at 1:12 PM, Jim Bromer jimbro...@gmail.com wrote:
The proof is based on the diagonal argument of Cantor, but it might be
considered as variation of Cantor's diagonal argument.  There can be no one
to one *mapping of the computation to an usage* as the computation
approaches infinity to make the values approach some limit of precision. For
any computed values there is always a *possibility* (this is different than
Cantor) that there are an infinite number of more precise values (of the
probability of a string (primary sample space or compound sample space))
within any two iterations of the computational program (formula).

Ok, I didn't get that right, but there is enough there to get the idea.
For any computed values there is always a *possibility* (I think this is
different than Cantor) that there are an infinite number of more precise
values (of the probability of a string (primary sample space or compound
sample space)) that may fall outside the limits that could be derived from
any finite sequence of iterations of the computational program (formula).

On Fri, Jul 9, 2010 at 1:12 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Fri, Jul 9, 2010 at 11:37 AM, Ben Goertzel b...@goertzel.org wrote:


 I don't think Solomonoff induction is a particularly useful direction for
 AI, I was just taking issue with the statement made that it is not capable
 of correct prediction given adequate resources...


 Pi is not computable.  It would take infinite resources to compute it.
 However, because Pi approaches a limit, the theory of limits can be used to
 show that it can be refined to any limit that is possible and since it
 consistently approaches a limit it can be used in general theorems that can
 be proven through induction.  You can use *computed values* of pi in a
 general theorem as long as you can show that the usage is valid by using the
 theory of limits.

 I think I figured out a way, given infinite resources, to write a program
 that could compute Solomonoff Induction.  However, since it cannot be
 shown (or at least I don't know anyone who has ever shown) that the
 probabilities approaches some value (or values) as a limit (or limits), this
 program (or a variation on this kind of program) could not be used to show
 that it can be:
 1. computed to any specified degree of precision within some finite number
 of steps.
 2. proven through the use of mathematical induction.

 The proof is based on the diagonal argument of Cantor, but it might be
 considered as variation of Cantor's diagonal argument.  There can be no one
 to one *mapping of the computation to an usage* as the computation
 approaches infinity to make the values approach some limit of precision. For
 any computed values there is always a *possibility* (this is different
 than Cantor) that there are an infinite number of more precise values (of
 the probability of a string (primary sample space or compound sample space))
 within any two iterations of the computational program (formula).

 So even though I cannot disprove what Solomonoff Induction might be given
 infinite resources, if this superficial analysis is right, without a way to
 compute the values so that they tend toward a limit for each of the
 probabilities needed, it is not a usable mathematical theorem.

 What uncomputable means is that any statement (most statements) drawn from
 it is a matter of mathematical conjecture or opinion.  It's like opining
 that the Godel sentence, given infinite resources, is decidable.

 I don't think the question of whether it is valid for infinite resources or
 not can be answered mathematically for the time being.  And conclusions
 drawn from uncomputable results have to be considered dubious.

 However, it certainly leads to other questions which I think are more
 interesting and more useful.

 What is needed to promote greater insight about the problem of conditional
 probabilities in complicated situations where the probability emitters
 and the elementary sample space may be obscured by the use of complicated
 interactions and a preliminary focus on compound sample spaces?  Are there
 theories, which like asking questions about the givens in a problem, that
 could lead toward a greater detection of the relation between the givens and
 the primary probability emitters and the primary sample space?

 Can a mathematical theory be based solely on abstract principles even
 though it is impossible to evaluate the use of those abstractions with
 examples from the particulars (of the abstractions)?  How could those
 abstract principles be reliably defined so that they aren't too simplistic?

 Jim Bromer





Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Jim Bromer
Solomonoff Induction is not a mathematical conjecture.  We can talk about a
function which is based on all mathematical functions, but since we cannot
define that as a mathematical function it is not a realizable function.





Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Jim Bromer
 I guess the Godel Theorem is called a theorem, so Solomonoff Induction
would be called a theorem.  I believe that Solomonoff Induction is
computable, but the claims that are made for it are not provable because
there is no way you could prove that it approaches a stable limit (stable
limits).  You can't prove that it does just because the sense of all
possible programs is so ill-defined that there is not enough to go
on.  Whether my outline of a disproof could actually be used to find an
adequate disproof, I don't know.  My attempt to disprove it may just be an
unprovable theorem (or even wrong).

Jim Bromer





[agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-07 Thread Jim Bromer
Suppose you have sets of programs that produce two strings.  One set of
outputs is 00 and the other is 11. Now suppose you used these sets
of programs to chart the probabilities of the output of the strings.  If the
two strings were each output by the same number of programs then you'd have
a .5 probability that either string would be output.  That's ok.  But, a
more interesting question is, given that the first digits are 000, what are
the chances that the next digit will be 1?  Dim Induction will report .5,
which of course is nonsense and a whole lot less useful than making a rough
guess.

But, of course, Solomonoff Induction purports to be able, if it was
feasible, to compute the possibilities for all possible programs.  Ok, but
now, try thinking about this a little bit.  If you have ever tried writing
random program instructions what do you usually get?  Well, I'll hazard a
guess (a lot better than the bogus method of confusing shallow
probability with prediction in my example) and say that you will get a lot
of programs that crash.  Well, most of my experiments with that have ended
up with programs that go into an infinite loop or which crash.  Now on a
universal Turing machine, the results would probably look a little
different.  Some strings will output nothing and go into an infinite loop.
Some programs will output something and then either stop outputting anything
or start outputting an infinite loop of the same substring.  Other programs
will go on to infinity producing something that looks like random strings.
But the idea that all possible programs would produce well distributed
strings is complete hogwash.  Since Solomonoff Induction does not define
what kind of programs should be used, the assumption that the distribution
would produce useful data is absurd.  In particular, the use of the method
to determine the conditional probability given an initial string (as in what
follows, given that the first digits are 000) is wrong, as in really wrong.  The
idea that this crude probability can be used as prediction is
unsophisticated.

Of course you could develop an infinite set of Solomonoff Induction values
for each possible given initial sequence of digits.  Hey when you're working
with infeasible functions why not dream anything?

I might be wrong of course.  Maybe there is something you guys
haven't been able to get across to me.  Even if you can think for yourself
you can still make mistakes.  So if anyone has actually tried writing a
program to output all possible programs (up to some feasible point) on a
Turing Machine simulator, let me know how it went.
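
For what it is worth, here is a minimal sketch in Python of the kind of
experiment asked about.  It uses a deliberately trivial, non-universal toy
instruction set invented purely for illustration, not a real Turing machine
simulator, so it proves nothing about the general claim; it only shows what
enumerating and weighting all short programs looks like.

from itertools import product

def run(program, max_out=8):
    """Toy interpreter (NOT a universal Turing machine).  The program is a bit
    string read two bits at a time: 00 emits 0, 01 emits 1, 10 repeats the
    last emitted bit (0 if nothing has been emitted yet), 11 halts."""
    out = []
    i = 0
    while i + 1 < len(program) and len(out) < max_out:
        op = program[i:i + 2]
        if op == '00':
            out.append('0')
        elif op == '01':
            out.append('1')
        elif op == '10':
            out.append(out[-1] if out else '0')
        else:  # '11' halts
            break
        i += 2
    return ''.join(out)

def next_bit_probability(prefix, max_len=12):
    """Weight every program of even length <= max_len by 2**-length and compare
    the total weight of outputs extending prefix+'0' against prefix+'1'."""
    w0 = w1 = 0.0
    for n in range(2, max_len + 1, 2):
        for bits in product('01', repeat=n):
            out = run(''.join(bits))
            if out.startswith(prefix + '0'):
                w0 += 2.0 ** -n
            elif out.startswith(prefix + '1'):
                w1 += 2.0 ** -n
    return w1 / (w0 + w1) if (w0 + w1) else None

print(next_bit_probability('000'))  # about 0.33 on this toy machine, not 0.5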

Jim Bromer





Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-07 Thread Jim Bromer
Abram,
I don't think you are right.  The reason is that Solomonoff Induction does
not produce a true universal probability for any given first digits.  To do
so it would have to be capable of representing the probability of any
(computable)  sequence that follows any (computable) string of given first
digits.

Yes, if a high proportion of programs produce 00, it will be able to
register that string as more probable, but the information on what the
next digits will be, given some input, will not be represented in anything
that resembles compression.  For instance, if you had 62 bits and wanted
to know what the probability of the next two bits was, you would have to
have done the infinite calculations of a Solomonoff Induction for each of
the 2^62 possible combinations of bits that represent the possible input to
your problem.

I might be wrong, but if I am, I don't see where all this information is
being hidden.  On the other hand, if I am right (or even partially right)
I don't understand why seemingly smart people are excited about this as a
possible AGI method.

We in AGI specifically want to know the answer to the kind of question:
Given some partially defined situation, how could a computer best figure out
what is going on.  Most computer situations are going to be represented by
kilobytes or megabytes these days, not in strings of 32 bits or less.  If
there was an abstraction that could help us think about these things, it
could help even if the ideal would be way beyond any feasible technology.
And there is an abstraction like this that can help us.  Applied
probability.  We can think about these ideas in the terms of strings if we
want to but the key is that WE have to work out the details because we see
the problems differently.  There is nothing that I have seen in Solomonoff
Induction that suggests that this is an adequate or even useful method to
use.  On the other hand I would not be talking about this if it weren't for
Solomonoff so maybe I just don't share your enthusiasm.  If I have
misunderstood something then all I can say is that I am still waiting for
someone to explain it in a way that I can understand.

Jim

On Wed, Jul 7, 2010 at 1:58 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I am unable to find the actual objection to Solomonoff in what you wrote
 (save for that it's wrong as in really wrong).

 It's true that a lot of programs won't produce any output. That just means
 they won't alter the prediction.

 It's also true that a lot of programs will produce random-looking or
 boring-looking output. This just means that Solomonoff will have some
 expectation of those things. To use your example, given 000, the chances
 that the next digit will be 0 will be fairly high thanks to boring programs
 which just output lots of zeros. (Not sure why you mention the idea that it
 might be .5? This sounds like no induction rather than dim induction...)

 --Abram

   On Wed, Jul 7, 2010 at 10:10 AM, Jim Bromer jimbro...@gmail.com wrote:

   Suppose you have sets of programs that produce two strings.  One set
 of outputs is 00 and the other is 11. Now suppose you used these
 sets of programs to chart the probabilities of the output of the strings.
 If the two strings were each output by the same number of programs then
 you'd have a .5 probability that either string would be output.  That's ok.
 But, a more interesting question is, given that the first digits are 000,
 what are the chances that the next digit will be 1?  Dim Induction will
 report .5, which of course is nonsense and a whole lot less useful than making a
 rough guess.

 But, of course, Solomonoff Induction purports to be able, if it was
 feasible, to compute the possibilities for all possible programs.  Ok, but
 now, try thinking about this a little bit.  If you have ever tried writing
 random program instructions what do you usually get?  Well, I'll take a
 hazard and guess (a lot better than the bogus method of confusing shallow
 probability with prediction in my example) and say that you will get a lot
 of programs that crash.  Well, most of my experiments with that have ended
 up with programs that go into an infinite loop or which crash.  Now on a
 universal Turing machine, the results would probably look a little
 different.  Some strings will output nothing and go into an infinite loop.
 Some programs will output something and then either stop outputting anything
 or start outputting an infinite loop of the same substring.  Other programs
 will go on to infinity producing something that looks like random strings.
 But the idea that all possible programs would produce well distributed
 strings is complete hogwash.  Since Solomonoff Induction does not define
 what kind of programs should be used, the assumption that the distribution
 would produce useful data is absurd.  In particular, the use of the method
 to determine the probability based given an initial string (as in what
 follows given

Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-07 Thread Jim Bromer
Matt,
But you are still saying that Solomonoff Induction has to be recomputed for
each possible combination of bit value aren't you?  Although this doesn't
matter when you are dealing with infinite computations in the first place,
it does matter when you are wondering if this has anything to do with AGI
and compression efficiencies.
Jim Bromer
On Wed, Jul 7, 2010 at 5:44 PM, Matt Mahoney matmaho...@yahoo.com wrote:

Jim Bromer wrote:
  But, a more interesting question is, given that the first digits are 000,
 what are the chances that the next digit will be 1?  Dim Induction will
 report .5, which of course is nonsense and a whole lot less useful than making a
 rough guess.

 Wrong. The probability of a 1 is p(0001)/(p(0000)+p(0001)) where the
 probabilities are computed using Solomonoff induction. A program that
 outputs 0000 will be shorter in most languages than a program that outputs
 0001, so 0 is the most likely next bit.

 More generally, probability and prediction are equivalent by the chain
 rule. Given any 2 strings x followed by y, the prediction p(y|x) =
 p(xy)/p(x).
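
A small worked example of the chain-rule identity above, in Python, using an
arbitrary made-up distribution over four 2-bit strings; the numbers are
purely illustrative.

# Toy distribution over complete 2-bit strings.
p = {'00': 0.5, '01': 0.2, '10': 0.2, '11': 0.1}

def prefix_prob(prefix):
    """p(prefix) = total probability of the strings beginning with prefix."""
    return sum(q for s, q in p.items() if s.startswith(prefix))

# Prediction by the chain rule: p(y | x) = p(xy) / p(x).
print(prefix_prob('01') / prefix_prob('0'))  # 0.2 / 0.7, about 0.286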


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 7, 2010 10:10:37 AM
 *Subject:* [agi] Solomonoff Induction is Not Universal and Probability
 is not Prediction

 Suppose you have sets of programs that produce two strings.  One set of
 outputs is 00 and the other is 11. Now suppose you used these sets
 of programs to chart the probabilities of the output of the strings.  If the
 two strings were each output by the same number of programs then you'd have
 a .5 probability that either string would be output.  That's ok.  But, a
 more interesting question is, given that the first digits are 000, what are
 the chances that the next digit will be 1?  Dim Induction will report .5,
 which of course is nonsense and a whole lot less useful than making a rough
 guess.

 But, of course, Solomonoff Induction purports to be able, if it was
 feasible, to compute the possibilities for all possible programs.  Ok, but
 now, try thinking about this a little bit.  If you have ever tried writing
 random program instructions what do you usually get?  Well, I'll take a
 hazard and guess (a lot better than the bogus method of confusing shallow
 probability with prediction in my example) and say that you will get a lot
 of programs that crash.  Well, most of my experiments with that have ended
 up with programs that go into an infinite loop or which crash.  Now on a
 universal Turing machine, the results would probably look a little
 different.  Some strings will output nothing and go into an infinite loop.
 Some programs will output something and then either stop outputting anything
 or start outputting an infinite loop of the same substring.  Other programs
 will go on to infinity producing something that looks like random strings.
 But the idea that all possible programs would produce well distributed
 strings is complete hogwash.  Since Solomonoff Induction does not define
 what kind of programs should be used, the assumption that the distribution
 would produce useful data is absurd.  In particular, the use of the method
 to determine the probability based given an initial string (as in what
 follows given the first digits are 000) is wrong as in really wrong.  The
 idea that this crude probability can be used as prediction is
 unsophisticated.

 Of course you could develop an infinite set of Solomonoff Induction values
 for each possible given initial sequence of digits.  Hey when you're working
 with infeasible functions why not dream anything?

 I might be wrong of course.  Maybe there is something you guys
 haven't been able to get across to me.  Even if you can think for yourself
 you can still make mistakes.  So if anyone has actually tried writing a
 program to output all possible programs (up to some feasible point) on a
 Turing Machine simulator, let me know how it went.

 Jim Bromer







Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-04 Thread Jim Bromer
I figured out a way to make the Solomonoff Induction iteratively infinite,
so I guess I was wrong.  Thanks for explaining it to me.  However, I don't
accept that it is feasible to make those calculations since an examination
of the infinite programs that could output each individual string would be
required.

My sense is that the statistics of an examination of a finite number of
programs that output a finite number of strings could be used in Solomonoff
Induction to give a reliable probability of what the next bit (or next
sequence of bits) might be based on the sampling, under the condition that
only those cases that had previously occurred would occur again, and at the
same frequency, during the samplings.  However, the attempt to figure the
probabilities of concatenation of these strings or sub strings would be
unreliable and void whatever benefit the theoretical model might appear to
offer.

Logic, probability and compression methods are all useful in AGI even though
we are constantly violating the laws of logic and probability because it is
necessary, and we sometimes need to use more complicated models
(anti-compression so to speak) so that we can consider other possibilities
based on what we have previously learned.  So, I still don't see how
Kolmogrov Complexity and Solomonoff Induction are truly useful except as
theoretical methods that are interesting to consider.

And, Occam's Razor is not reliable as an axiom of science.  If we were to
abide by it we would come to conclusions like a finding that describes an
event by saying that it occurs some of the time, since it would be simpler
than trying to describe the greater circumstances of the event in an effort
to see if we can find something out about why the event occurred or didn't
occur.  In this sense Occam's Razor is anti-science, since it implies that
the status quo should be maintained because simpler is better.  All things
being equal, simpler is better.  I think we all get that.  However, the
human mind is capable of re weighting the conditions and circumstances of a
system to reconsider other possibilities and that seems to be an important
and necessary method in research (and in planning).

Jim Bromer

On Sat, Jul 3, 2010 at 11:39 AM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  You can't assume a priori that the diagonal argument is not relevant.

 When I say infinite in my proof of Solomonoff induction, I mean countably
 infinite, as in aleph-null, as in there is a 1 to 1 mapping between the set
 and N, the set of natural numbers. There are a countably infinite number of
 finite strings, or of finite programs, or of finite length descriptions of
 any particular string. For any finite length string or program or
 description x with nonzero probability, there are a countably infinite
 number of finite length strings or programs or descriptions that are longer
 and less likely than x, and a finite number of finite length strings or
 programs or descriptions that are either shorter or more likely or both than
 x.

 Aleph-null is larger than any finite integer. This means that for any
 finite set and any countably infinite set, there is not a 1 to 1 mapping
 between the elements, and if you do map all of the elements of the finite
 set to elements of the infinite set, then there are unmapped elements of the
 infinite set left over.

 Cantor's diagonalization argument proves that there are infinities larger
 than aleph-null, such as the cardinality of the set of real numbers, which
 we call uncountably infinite. But since I am not using any uncountably
 infinite sets, I don't understand your objection.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, July 3, 2010 9:43:15 AM

 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 On Fri, Jul 2, 2010 at 6:08 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, to address all of your points,

 Solomonoff induction claims that the probability of a string is
 proportional to the number of programs that output the string, where each
 program M is weighted by 2^-|M|. The probability is dominated by the
 shortest program (Kolmogorov complexity), but it is not exactly the same.
 The difference is small enough that we may neglect it, just as we neglect
 differences that depend on choice of language.



 The infinite number of programs that could output the infinite number of
 strings that are to be considered (for example while using Solomonoff
 induction to predict what string is being output) lays out the potential
 for the diagonal argument.  You can't assume a priori that the diagonal
 argument is not relevant.  I don't believe that you can prove that it isn't
 relevant since as you say, Kolmogorov Complexity is not computable, and you
 cannot be sure that you have listed all the programs that were able to
 output a particular string. This creates a situation

Re: [agi] Reward function vs utility

2010-07-04 Thread Jim Bromer
On Fri, Jul 2, 2010 at 2:35 PM, Steve Richfield
steve.richfi...@gmail.comwrote:

 It appears that one hemisphere is a *completely* passive observer, that
 does *not* even bother to distinguish you and not-you, other than noting a
 probable boundary. The other hemisphere concerns itself with manipulating
 the world, regardless of whether particular pieces of it are you or not-you.
 It seems unlikely that reward could have any effect at all on the passive
 observer hemisphere.

 In the case of the author of the book, apparently the manipulating
 hemisphere was knocked out of commission for a while, and then slowly
 recovered. This allowed her to see the passively observed world, without the
 overlay of the manipulating hemisphere. Obviously, this involved severe
 physical impairment until she recovered.

 Note that AFAIK all of the AGI efforts are egocentric, while half of our
 brains are concerned with passively filtering/understanding the world enough
 to apply egocentric logic. Note further that since the two hemispheres are
 built from the same types of neurons, that the computations needed to do
 these two very different tasks are performed by the same wet-stuff. There is
 apparently some sort of advanced Turing machine sort of concept going on
 in wetware.
 Hence, I see goal direction, reward, etc., as potentially useful only in
 some tiny part of our brains.

 Any thoughts?


I don't buy the hemisphere disconnect, but I do feel that it makes sense to
say that some parts are (like) passive observers and other parts are more
concerned with the interactive aspects of reasoning.  The idea that
reinforcement might operate on the interactive aspects but not the passive
observers is really interesting.  My only criticism is that there is
evidence that human beings will often interpret events according to the
projections of their primary concerns onto their observations.
Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-03 Thread Jim Bromer
On Fri, Jul 2, 2010 at 6:08 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, to address all of your points,

 Solomonoff induction claims that the probability of a string is
 proportional to the number of programs that output the string, where each
 program M is weighted by 2^-|M|. The probability is dominated by the
 shortest program (Kolmogorov complexity), but it is not exactly the same.
 The difference is small enough that we may neglect it, just as we neglect
 differences that depend on choice of language.
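
A quick illustration of the weighting just described, with made-up program
lengths; it only shows the sense in which the shortest program dominates the
weighted sum.

# Hypothetical example: suppose exactly three programs output the string x,
# with lengths 10, 15 and 20 bits.  Under the 2^-|M| weighting:
p_x = 2 ** -10 + 2 ** -15 + 2 ** -20
print(p_x, 2 ** -10, p_x / 2 ** -10)  # the ratio is only about 1.03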



The infinite number of programs that could output the infinite number of
strings that are to be considered (for example while using Solomonoff
induction to predict what string is being output) lays out the potential
for the diagonal argument.  You can't assume a priori that the diagonal
argument is not relevant.  I don't believe that you can prove that it isn't
relevant since as you say, Kolmogorov Complexity is not computable, and you
cannot be sure that you have listed all the programs that were able to
output a particular string. This creates a situation in which the underlying
logic of using Solmonoff induction is based on incomputable reasoning which
can be shown using the diagonal argument.

This kind of criticism cannot be answered with the kinds of presumptions
that you used to derive the conclusions that you did.  It has to be answered
directly.  I can think of other infinity to infinity relations in which the
potential mappings can be countably derived from the formulas or equations,
but I have yet to see any analysis which explains why this usage can be.
Although you may imagine that the summation of the probabilities can be used
just like it was an ordinary number, the unchecked usage is faulty.  In
other words the criticism has to be considered more carefully by someone
capable of dealing with complex mathematical problems that involve the
legitimacy of claims between infinite to infinite mappings.

Jim Bromer



On Fri, Jul 2, 2010 at 6:08 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, to address all of your points,

 Solomonoff induction claims that the probability of a string is
 proportional to the number of programs that output the string, where each
 program M is weighted by 2^-|M|. The probability is dominated by the
 shortest program (Kolmogorov complexity), but it is not exactly the same.
 The difference is small enough that we may neglect it, just as we neglect
 differences that depend on choice of language.

 Here is the proof that Kolmogorov complexity is not computable. Suppose it
 were. Then I could test the Kolmogorov complexity of strings in increasing
 order of length (breaking ties lexicographically) and describe the first
 string that cannot be described in less than a million bits, contradicting
 the fact that I just did. (Formally, I could write a program that outputs
 the first string whose Kolmogorov complexity is at least n bits, choosing n
 to be larger than my program).
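
The argument can be written out as a short sketch.  The function K below
stands for the hypothetical computable Kolmogorov-complexity oracle that the
proof assumes and then contradicts, so the sketch can never be supplied with
a real K; it is illustrative only.

from itertools import count, product

def first_string_with_complexity_at_least(n, K):
    """Return the first string, in length-then-lexicographic order, whose
    Kolmogorov complexity K(x) is at least n.

    If K were computable, this short function together with the number n
    would describe such a string in far fewer than n bits (for large n),
    contradicting K(x) >= n.  Hence K cannot be computable."""
    for length in count(0):
        for bits in product('01', repeat=length):
            x = ''.join(bits)
            if K(x) >= n:
                return x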

 Here is the argument that Occam's Razor and Solomonoff distribution must be
 true. Consider all possible probability distributions p(x) over any infinite
 set X of possible finite strings x, i.e. any X = {x: p(x)  0} that is
 infinite. All such distributions must favor shorter strings over longer
 ones. Consider any x in X. Then p(x)  0. There can be at most a finite
 number (less than 1/p(x)) of strings that are more likely than x, and
 therefore an infinite number of strings which are less likely than x. Of
 this infinite set, only a finite number (less than 2^|x|) can be shorter
 than x, and therefore there must be an infinite number that are longer than
 x. So for each x we can partition X into 4 subsets as follows:

 - shorter and more likely than x: finite
 - shorter and less likely than x: finite
 - longer and more likely than x: finite
 - longer and less likely than x: infinite.

 So in this sense, any distribution over the set of strings must favor
 shorter strings over longer ones.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Fri, July 2, 2010 4:09:38 PM

 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI



 On Fri, Jul 2, 2010 at 2:25 PM, Jim Bromer jimbro...@gmail.com wrote:

There cannot be a one to one correspondence to the representation of
 the shortest program to produce a string and the strings that they produce.
 This means that if the consideration of the hypotheses were to be put into
 general mathematical form it must include the potential of many to one
 relations between candidate programs (or subprograms) and output strings.



 But, there is also no way to determine what the shortest program is,
 since there may be different programs that are the same length.  That means
 that there is a many to one relation between programs and program length.
 So the claim that you could just iterate through programs *by length* is
 false

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-03 Thread Jim Bromer
This group, as in most AGI discussions, will use logic and statistical
theory loosely.  We have to.  One reason is that we - thinking entities - do not
know everything and so our reasoning is based on fragmentary knowledge.  In
this situation the boundaries of logical reasoning in thought, both natural
and artificial, are going to be transgressed.  However, knowing that is
going to be the case in AGI, we can acknowledge it and try to figure out
algorithms that will tend to ground our would-be programs.

Now Solomonoff Induction and Algorithmic Information Theory are a little
different.  They deal with concrete data spaces.  We can and should question
how relevant those concrete sample spaces might be to general reasoning
about the greater universe of knowledge, but the fact that they deal with
concrete spaces means that they might be logically bound.  But are they?  If
an idealism is both concrete (too concrete for our uses) and not logically
computable then we have to really be wary of trying to use it.

If using Solomonoff Induction is incomputable it does not prove that it is
illogical.  But if it is incomputable, it would be illogical to believe that
it can be used reliably.

Solomonoff Induction has been around long enough for serious mathematicians
to examine its validity.  If it were a genuinely sound method, mathematicians
would have accepted it.  However, if Solomonoff Induction is incomputable in
practice it would be so unreliable that top mathematicians would tend to
choose more productive and interesting subjects to study.  As far as I can
tell, Solomonoff Induction exists today within the backwash of AI
communities.  It has found new life in these kinds of discussion groups
where most of us do not have the skill or the time to critically examine the
basis of every theory that is put forward.  The one test that we can make is
whether or not some method that is being presented has some reliability in
our programs which constitute mini experiments.  Logic and probability pass
the smell test, even though we know that our use of them in AGI is not
ideal.

Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, what evidence do you have that Occam's Razor ... is wrong, besides
 your own opinions? It is well established that elegant (short) theories are
 preferred in all branches of science because they have greater predictive
 power.



  -- Matt Mahoney, matmaho...@yahoo.com


When a heuristic is used as if it were an axiom of truth, it will interfere
with the development of reasonable insight, just because a heuristic is not an
axiom.  And by applying this heuristic (which does have value) as an
unquestionable axiom of mind, you are making a more egregious claim, because
you are multiplying the force of the error.

Occam's razor has greater predictive power within the boundaries of the
isolation experiments which have the greatest potential to enhance its
power.  If simplest theories are preferred because they have the greater
predictive power, then it would follow that isolation experiments would be
the preferred vehicles of science just because they can produce theories
that had the most predictive power.  Whether this is the case or not (the
popular opinion), it does not answer the question of whether narrow AI (for
example) should be the preferred child of computer science just because the
theorems of narrow AI are so much better at predicting their (narrow) events
than the theorems of AGI are at comprehending their (more
complicated) events.

Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, what evidence do you have that Occam's Razor or algorithmic
 information theory is wrong,
 Also, what does this have to do with Cantor's diagonalization argument? AIT
 considers only the countably infinite set of hypotheses.


 -- Matt Mahoney, matmaho...@yahoo.com



There cannot be a one to one correspondence between the representation of the
shortest program that produces a string and the strings that are produced.
This means that if the consideration of the hypotheses were to be put into
general mathematical form it must include the potential of many to one
relations between candidate programs (or subprograms) and output strings.





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Fri, Jul 2, 2010 at 2:09 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney matmaho...@yahoo.comwrote:

   Jim, what evidence do you have that Occam's Razor or algorithmic
 information theory is wrong,
 Also, what does this have to do with Cantor's diagonalization argument?
 AIT considers only the countably infinite set of hypotheses.
  -- Matt Mahoney, matmaho...@yahoo.com



  There cannot be a one to one correspondence to the representation of the
 shortest program to produce a string and the strings that they produce.
 This means that if the consideration of the hypotheses were to be put into
 general mathematical form it must include the potential of many to one
 relations between candidate programs (or subprograms) and output strings.


But, there is also no way to determine what the shortest program is, since
there may be different programs that are the same length.  That means that
there is a many to one relation between programs and program length.  So
the claim that you could just iterate through programs *by length* is
false.  This is the goal of algorithmic information theory, not a premise
of a methodology that can actually be used.  So you have the diagonalization problem.





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Fri, Jul 2, 2010 at 2:25 PM, Jim Bromer jimbro...@gmail.com wrote:

There cannot be a one to one correspondence to the representation of
 the shortest program to produce a string and the strings that they produce.
 This means that if the consideration of the hypotheses were to be put into
 general mathematical form it must include the potential of many to one
 relations between candidate programs (or subprograms) and output strings.



 But, there is also no way to determine what the shortest program is,
 since there may be different programs that are the same length.  That means
 that there is a many to one relation between programs and program length.
 So the claim that you could just iterate through programs *by length* is
 false.  This is the goal of algorithmic information theory not a premise
 of a methodology that can be used.  So you have the diagonalization problem.



A counter argument is that there are only a finite number of Turing Machine
programs of a given length.  However, since you guys have specifically
designated that this theorem applies to any construction of a Turing Machine
it is not clear that this counter argument can be used.  And there is still
the specific problem that you might want to try a program that writes a
longer program to output a string (or many strings).  Or you might want to
write a program that can be called to write longer programs on a dynamic
basis.  I think these cases, where you might consider a program that outputs
a longer program, (or another instruction string for another Turing
Machine) constitutes a serious problem, that at the least, deserves to be
answered with sound analysis.

Part of my original intuitive argument, that I formed some years ago, was
that without a heavy constraint on the instructions for the program, it will
be practically impossible to test or declare that some program is indeed the
shortest program.  However, I can't quite get to the point now that I can
say that there is definitely a diagonalization problem.

Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-06-30 Thread Jim Bromer
Cantor's diagonal argument is (in all likelihood) mathematically correct.
However, the attempt to use Cantor's methodology to derive an irrational
number that is the next greater irrational number from a given irrational
number (to a degree of precision sufficient to distinguish the two numbers)
is not mathematically correct.  If you were to say that Cantor's argument
was mathematically correct, I would agree with you.  As far as I can tell it
is.  However, if you were then to use his method of enumerating irrational
numbers as a means to discover subsequent irrational numbers, I would not
conclude that you understand what it means to say that Cantor's diagonal
argument was mathematically correct.  (However, I am not a mathematician and
I might be wrong in some ways.)

Jim Bromer





Re: [agi] Open Sets vs Closed Sets

2010-06-30 Thread Jim Bromer
The use of mathematical terminology is counterintuitive if what you
want to say is that mathematical methods are inadequate to describe AGI
systems (or something like that).
That is what I meant when I said that people don't always mean exactly what
they seem to be saying.  You are not really defining a mathematical system,
and you are not trying to conclude that a specific presumption is illogical,
are you?  Or are you?
There is another problem.  We can define sets so we can define things like a
closed set of sets each containing infinities of objects.

However by qualifying your use of concepts like this and then appealing to a
reasonable right to be understood *as you intended*, you can certainly use
this kind of metaphor.
That's my opinion.
Jim Bromer



On Wed, Jun 30, 2010 at 8:58 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  I'd like opinions on terminology here.

 IMO the opposition of closed sets vs open sets is fundamental to the
 difference between narrow AI and AGI.

 However I notice that these terms have different meanings to mine in maths.

 What I mean is:

 closed set: contains a definable number and *kinds/species* of objects

 open set: contains an undefinable number and *kinds/species* of objects
 (what we in casual, careless conversation describe as containing all kinds
 of things);  the rules of an open set allow adding new kinds of things ad
 infinitum

 Narrow AI's operate in artificial environments containing closed sets of
 objects - all of wh. are definable. AGI's operate in real world environments
 containing open sets of objects - some of wh. will be definable, and some
 definitely not

 To engage in any real world activity, like walking down a street or
 searching/tidying a room or reading a science book/text is to  operate
 with open sets of objects,  because the next field of operations - the
 next street or room or text -  may and almost certainly will have
 unpredictably different kinds of objects from the last.

 Any objections to my use of these terms, or suggestions that I should use
 others?







Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer
On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.ukwrote:


 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
 ways, and *predictably.*

 Animate objects normally move *irregularly*, * in *patchy*/*patchwork*
 ways, and *unbleedingpredictably* .


This presumption looks similar (in some profound way) to many of the
presumptions that were tried in the early days of AI, partly because
computers lacked memory and they were very slow.  It's unreliable just
because we need the AGI program to be able to consider situations when, for
example, inanimate objects move in patchy patchwork ways or in unpredictable
patterns.

Jim Bromer





Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer
Well, I see that Mike did say normally move... so yes that type of
principle could be used in a more flexible AGI program (although there is
still a question about the use of any presumptions that go into this level
of detail about their reference subjects.  I would not use a primary
reference like Mike's in my AGI program just because it is so presumptuous
about animate and inanimate objects).  But anyway, my criticism then is that
the presumption is not really superior - in any way - to the run of the mill
presumptions that you often hear considered in discussions about AGI
programs.  For example, David never talked about distinguishing between
animate and inanimate objects (in the sense in which Mike is using the term
'animate'), and his reference was only made to a graphics example
to present the idea that he was talking about.
Jim Bromer

On Mon, Jun 28, 2010 at 12:20 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:


 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
 ways, and *predictably.*

 Animate objects normally move *irregularly*, * in *patchy*/*patchwork*
 ways, and *unbleedingpredictably* .


 This presumption looks similar (in some profound way) to many of the
 presumptions that were tried in the early days of AI, partly because
 computers lacked memory and they were very slow.  It's unreliable just
 because we need the AGI program to be able to consider situations when, for
 example, inanimate objects move in patchy patchwork ways or in unpredictable
 patterns.

 Jim Bromer






Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer

  On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:


 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
 ways, and *predictably.*

 Animate objects normally move *irregularly*, * in *patchy*/*patchwork*
 ways, and *unbleedingpredictably* .




I think you made a major tactical error and just got caught acting the way
you are constantly criticizing everyone else for acting.  --(Busted)--

You might say my interest is: how do we get a contemporary computer program
to deal with situations in which a prevailing (or presumptuous) point of
view should be reconsidered from different points of view, when the range of
reasonable ways to look at a problem is not clear and the possibilities are
too numerous for a contemporary computer to examine carefully in a
reasonable amount of time.

For example, we might try opposites, and in this case I wondered about the
case where we might want to consider a 'supposedly inanimate object' that
moves in an irregular and unpredictable way.  Another example: Can
unpredictable itself be considered predictable?  To some extent the answer
is, of course it can.  The problem with using opposites is that it is an
idealization of real world situations and where using alternative ways of
looking at a problem may be useful.  Can an object be both inanimate and
animate (in the sense Mike used the term)?  Could there be another class of
things that was neither animate nor inanimate?  Is animate versus inanimate
really the best way to describe living versus non-living?  No?

Given that the possibilities could quickly add up and given that they are
not clearly defined, it presents a major problem of complexity to the would
be designer of a true AGI program.  The problem is that it is just not
feasible to evaluate millions of variations of possibilities and then find
the best candidates within a reasonable amount of time. And this problem
does not just concern the problem of novel situations but those specific
situations that are familiar but where there are quite a few details that
are not initially understood.  While this is -clearly- a human problem, it
is a much more severe problem for contemporary AGI.

Jim Bromer





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.com wrote:

 A method for comparing hypotheses in explanatory-based reasoning:*Here is
 a simplified version of how we solve case study 1:
 *The important hypotheses to consider are:
 1) the square from frame 1 of the video that has a very close position to
 the square from frame 2 should be matched (we hypothesize that they are the
 same square and that any difference in position is motion).  So, what
 happens is that in each two frames of the video, we only match one square.
 The other square goes unmatched.
 2) We do the same thing as in hypothesis #1, but this time we also match
 the remaining squares and hypothesize motion as follows: the first square
 jumps over the second square from left to right. We hypothesize that this
 happens over and over in each frame of the video. Square 2 stops and square
 1 jumps over it over and over again.
 3) We hypothesize that both squares move to the right in unison. This is
 the correct hypothesis.

 So, why should we prefer the correct hypothesis, #3 over the other two?

 Well, first of all, #3 is correct because it has the most explanatory power
 of the three and is the simplest of the three. Simpler is better because,
 with the given evidence and information, there is no reason to desire a more
 complicated hypothesis such as #2.

 So, the answer to the question is because explanation #3 expects the most
 observations, such as:
 1) the consistent relative positions of the squares in each frame are
 expected.
 2) It also expects their new positions in each from based on velocity
 calculations.
 3) It expects both squares to occur in each frame.

 Explanation 1 ignores 1 square from each frame of the video, because it
 can't match it. Hypothesis #1 doesn't have a reason for why a new square
 appears in each frame and why one disappears. It doesn't expect these
 observations. In fact, explanation 1 doesn't expect anything that happens
 because something new happens in each frame, which doesn't give it a chance
 to confirm its hypotheses in subsequent frames.

 The power of this method is immediately clear. It is general and it solves
 the problem very cleanly.
 Dave


Nonsense.  This illustrates one of the things wrong with the
dreary instantiations of the prevailing mind set of a group.  It is only a
matter of time until you discover (through experiment) how absurd it is to
celebrate the triumph of an overly simplistic solution to a problem that is,
by its very potential, full of possibilities.

For one example, even if your program portrayed the 'objects' as moving in
'unison' I doubt if the program calculated or represented those objects in
unison.  I also doubt that their positioning was literally based on moving
'right' since their movements were presumably calculated with incremental
mathematics that were associated with screen positions.  And, looking for a
technicality that represents the failure of over-reliance on the
efficacy of a simplistic overgeneralization, I only have to point out that
they did not only move to the right, so your description was either wrong or
only partially representative of the apparent movement.

As long as the hypotheses are kept simple enough to eliminate the less
useful hypotheses, and the underlying causes for apparent relations are kept
irrelevant, over simplification is a reasonable (and valuable) method. But
if you are seriously interested in scalability, then this kind of conclusion
is just dull.

I have often made the criticism that the theories put forward in these
groups are overly simplistic.  Although I understand that this was just a
simple example, here is the key to determining whether a method is overly
simplistic or, as in AIXI, based on an overly simplistic definition
of insight: would this method work in discovering the possibilities of a
potentially more complex IO data environment like those we would expect to
find using AGI?
Jim Bromer.





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
The fact that you are using experiment and the fact that you recognized that
AGI needs to provide both explanation and expectations (differentiated from
the false precision of 'prediction') shows that you have a grasp of some of
the philosophical problems, but the fact that you would rely on a primary
principle of over simplification (as differentiated from a method that does
not start with a rule that eliminates the very potential of possibilities as
a *general* rule of intelligence) shows that you don't fully understand the
problem.
Jim Bromer



On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.com wrote:

 A method for comparing hypotheses in explanatory-based reasoning: *

 We prefer the hypothesis or explanation that ***expects* more
 observations. If both explanations expect the same observations, then the
 simpler of the two is preferred (because the unnecessary terms of the more
 complicated explanation do not add to the predictive power).*

 *Why are expected events so important?* They are a measure of 1)
 explanatory power and 2) predictive power. The more predictive and the more
 explanatory a hypothesis is, the more likely the hypothesis is when compared
 to a competing hypothesis.

 Here are two case studies I've been analyzing from sensory perception of
 simplified visual input:
 The goal of the case studies is to answer the following: How do you
 generate the most likely motion hypothesis in a way that is general and
 applicable to AGI?
 *Case Study 1)* Here is a link to an example: animated gif of two black
 squares move from left to righthttp://practicalai.org/images/CaseStudy1.gif.
 *Description: *Two black squares are moving in unison from left to right
 across a white screen. In each frame the black squares shift to the right so
 that square 1 steals square 2's original position and square two moves an
 equal distance to the right.
 *Case Study 2) *Here is a link to an example: the interrupted 
 squarehttp://practicalai.org/images/CaseStudy2.gif.
 *Description:* A single square is moving from left to right. Suddenly in
 the third frame, a single black square is added in the middle of the
 expected path of the original black square. This second square just stays
 there. So, what happened? Did the square moving from left to right keep
 moving? Or did it stop and then another square suddenly appeared and moved
 from left to right?

 *Here is a simplified version of how we solve case study 1:
 *The important hypotheses to consider are:
 1) the square from frame 1 of the video that has a very close position to
 the square from frame 2 should be matched (we hypothesize that they are the
 same square and that any difference in position is motion).  So, what
 happens is that in each two frames of the video, we only match one square.
 The other square goes unmatched.
 2) We do the same thing as in hypothesis #1, but this time we also match
 the remaining squares and hypothesize motion as follows: the first square
 jumps over the second square from left to right. We hypothesize that this
 happens over and over in each frame of the video. Square 2 stops and square
 1 jumps over it over and over again.
 3) We hypothesize that both squares move to the right in unison. This is
 the correct hypothesis.

 So, why should we prefer the correct hypothesis, #3 over the other two?

 Well, first of all, #3 is correct because it has the most explanatory power
 of the three and is the simplest of the three. Simpler is better because,
 with the given evidence and information, there is no reason to desire a more
 complicated hypothesis such as #2.

 So, the answer to the question is because explanation #3 expects the most
 observations, such as:
 1) the consistent relative positions of the squares in each frame are
 expected.
 2) It also expects their new positions in each from based on velocity
 calculations.
 3) It expects both squares to occur in each frame.

 Explanation 1 ignores 1 square from each frame of the video, because it
 can't match it. Hypothesis #1 doesn't have a reason for why a new square
 appears in each frame and why one disappears. It doesn't expect these
 observations. In fact, explanation 1 doesn't expect anything that happens
 because something new happens in each frame, which doesn't give it a chance
 to confirm its hypotheses in subsequent frames.

 The power of this method is immediately clear. It is general and it solves
 the problem very cleanly.
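
 As an illustration only (not the actual system), the scoring idea can be
 played out on a toy version of case study 1, where each hypothesis is a
 function that predicts the square positions it expects in the next frame and
 its score is how many of those expectations actually occur. STEP, the frame
 generator and the three hyp_* functions below are assumptions made up for
 this sketch.

STEP = 10  # both squares shift right by this amount per frame

def frames(n=5):
    # frame i contains squares at i*STEP and (i+1)*STEP, so square 1 takes
    # square 2's old position in each new frame
    return [{i * STEP, (i + 1) * STEP} for i in range(n)]

def hyp_nearest_only(squares):
    # hypothesis 1: only the square whose position recurs in the next frame is
    # matched (zero motion); the other square goes unmatched, so nothing is
    # expected for it
    return {max(squares)}

def hyp_jump(squares):
    # hypothesis 2: the right square stays put and the left square jumps over it
    right = max(squares)
    return {right, right + STEP}

def hyp_unison(squares):
    # hypothesis 3: both squares move right by the same step
    return {x + STEP for x in squares}

def expected_observations(hyp, video):
    # count, over consecutive frame pairs, how many expected squares actually appear
    return sum(len(hyp(prev) & curr) for prev, curr in zip(video, video[1:]))

video = frames()
for name, hyp in [("nearest-only", hyp_nearest_only),
                  ("jump-over", hyp_jump),
                  ("unison", hyp_unison)]:
    print(name, expected_observations(hyp, video))
# prints: nearest-only 4, jump-over 8, unison 8

 Note that on raw positions alone the jump-over hypothesis ties with the
 unison hypothesis; the tie is then broken by simplicity (one motion rule
 shared by both squares), and the velocity and relative-position expectations
 listed above also favour the unison hypothesis.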

 *Here is a simplified version of how we solve case study 2:*
 We expect the original square to keep moving from left to right at a similar
 velocity, because we hypothesized that it was moving from left to right and
 we calculated its velocity. If this expectation is confirmed, then that
 hypothesis is more likely than saying that the square suddenly stopped and
 another started moving. Such a change would be unexpected, and such a
 conclusion would be unjustifiable.
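
 Under the same illustrative scoring idea, a toy sketch of case study 2 might
 look like the following; the positions, the velocity and the two candidate
 explanations are assumptions made up for this example.

VEL = 10           # velocity estimated from the first frames
NEW_SQUARE_X = 30  # the square that appears in the third frame and just stays there

# observed x-positions of squares in each frame
video = [{t * VEL} | ({NEW_SQUARE_X} if t >= 2 else set()) for t in range(6)]

keeps_moving = 0  # "the original square keeps its velocity; the new square is stationary"
stopped = 0       # "the original square stops at the new square's position"

for t, squares in enumerate(video):
    expected_moving = {t * VEL} | ({NEW_SQUARE_X} if t >= 2 else set())
    expected_stopped = {min(t * VEL, NEW_SQUARE_X)} | ({NEW_SQUARE_X} if t >= 2 else set())
    keeps_moving += len(expected_moving & squares)
    stopped += len(expected_stopped & squares)

print("keeps moving:", keeps_moving)  # 9 -- every observed square was expected
print("stopped:", stopped)            # 7 -- the squares at x=40 and x=50 were unexpected

 The constant-velocity account expects every square that is actually
 observed, while the stopped account leaves the squares at x=40 and x=50
 unexpected unless yet another moving object is posited.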

 I also believe that explanations which generate fewer incorrect

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Jim: [This illustrates one of the things wrong with the
 dreary instantiations of the prevailing mindset of a group.  It is only a
 matter of time until you discover (through experiment) how absurd it is to
 celebrate the triumph of an overly simplistic solution to a problem that is,
 by its very potential, full of possibilities]

 To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
 - narrow AI.  Looking for the one right prediction/explanation is narrow
 AI. Being able to generate more and more possible explanations, wh. could
 all be valid, is AGI.  The former is rational, uniform thinking. The latter
 is creative, polyform thinking. Or, if you prefer, it's convergent vs
 divergent thinking, the difference between wh. still seems to escape Dave &
 Ben & most AGI-ers.


Well, I agree with what (I think) Mike was trying to get at, except that I
understood that Ben, Hutter, and especially David were not talking about
prediction only as the specification of a single prediction when many possible
predictions (i.e., expectations) were appropriate for consideration.

For some reason none of you ever seem to talk about methods that could be
used to react to a situation with the flexibility to integrate the
recognition of different combinations of familiar events, and to classify
unusual events so they could be interpreted as more familiar *kinds* of
events or as novel forms of events which might then be integrated.  For
me, that seems to be one of the unsolved problems.  Being able to say that
the squares move to the right in unison is a better description than saying
the squares are dancing the Irish jig is not really cutting edge.

As for David's comment that he was only dealing with the core issues: I am
sorry, but you were not dealing with the core issues of contemporary AGI
programming.  You were dealing with a primitive problem that has been
considered for many years, but it is not a core research issue.  Yes, we have
to work with simple examples to explain what we are talking about, but there
is a difference between an abstract problem that may be central to
your recent work and a core research issue that hasn't really been solved.

The entire problem of dealing with complicated situations is that these
narrow AI methods haven't really worked.  That is the core issue.

Jim Bromer





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
I am working on logical satisfiability again.  If what I am working on right
now works, it will become a pivotal moment in AGI, and what's more, the
method that I am developing will (probably) become a core method for AGI.
However, if the idea I am working on does not -itself- lead to a major
breakthrough (which is the likelihood) then the idea will (probably) not
become a core issue regardless of its significance to me right at this
moment.

This is a personal statement but it is not just a question that can be
resolved through personal perspective.  So I have to rely on a more
reasonable and balanced perspective that does not just assume that I will be
successful without some hard evidence.  Without the benefit of knowing what
will happen with the theory at this time, I have to assume that there is no
evidence that this is going to be an approach which will, in some
manifestation, be central to AGI in the future.  I can see that as one
possibility, but this one view has to be integrated with other possibilities
as well.

I appreciate people's reports of what they are doing, and I would happily
tell you what I am working on if I were more sure that it won't work, or if I
had it all figured out and thought anyone would be interested (even if it
didn't work).

Dave asked and answered, "How do we add and combine this complex behavior
learning, explanation, recognition and understanding into our system?
Answer: The way that such things are learned is by making observations,
learning patterns and then connecting the patterns in a way that is
consistent, explanatory and likely."

That's not the answer.  That is a statement of a subgoal, some of which is
programmable, but there is nothing in the statement that describes how it
can actually be achieved, and there is nothing in the statement which
suggests that you have a mature insight into the nature of the problem.
There is nothing in the statement that seems new to me; I presume that many
of the programmers in the group have considered something similar at some
time in the past.

I am trying to avoid criticisms that get unnecessarily personal, but there
are some criticisms of ideas that should be made from time to time, and
sometimes a personal perspective is so tightly interwoven into the ideas that
a statement of a subgoal can look like it is a solution to a difficult
problem.

But Mike was absolutely right about one thing.  Constantly testing your
ideas with experiments is important, and if I ever gain any traction in
-anything- that I am doing, I will begin doing some AGI experiments again.

Jim Bromer




