Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-15 Thread Robert Picone
On Wed, Jul 14, 2010 at 10:35 PM, Michael Swan ms...@voyagergaming.com wrote:


 
 
  I'd argue that mathematical operations are unnecessary,
   we don't even have integer support inbuilt.
 I'd disagree. "<" is a mathematical operation, and in combination can
 become an enormous number of concepts.

 Sure, I think the brain is more sensibly understood in a
 programmatical sense than a mathematical one.

 I say programmatical because it probably has 100 billion or so
 conditional statements, a difficult thing to represent mathematically.
 Even so, each conditional is going to have maths constructs in it.


Sorry, I meant unnecessary to demonstrate that particular point.  There's no
need to say you have no innate ability to know what 3456/6 is when you are
unlikely to have an innate concept of the number 3456, or of any other
arbitrary number greater than a few hundred, to begin with.  You can get by
with a few lookup tables that give you a vague idea of what 3456 of
something would be, but if I were to show you a sheet of paper with
3000-4000 dots on it, you would be unlikely to be able to tell me whether
the count was greater or less than 3456.

I don't see how an AGI could do without an evaluator of some sort; sorry for
the confusion.  Though, you do bring to mind the point that while "<" can be
an extremely useful tool for composing other concepts, our internal
comparisons do seem to tend more towards the analog than towards the binary,
and while you can compose those analog outputs with "<" and "-" easily
enough, you probably want such concepts supported as close to natively as is
possible.  Remember, there are Turing-complete one-dimensional systems of
cellular automata, but that doesn't make it feasible to port the Linux
kernel to them.





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-15 Thread Mike Tintner
And yet you dream dreams which are broad-ranging in subject matter, unlike
all programs, which are extremely narrow-ranging.


--
From: Michael Swan ms...@voyagergaming.com
Sent: Thursday, July 15, 2010 5:16 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] What is the smallest set of operations that can 
potentially  define everything and how do you combine them ?




I watched a brain experiment last night that proved that connections
between major parts of the brain stop when you are asleep.

They put electricity at different brain points, and it went everywhere
when the person was awake, and dissipated when they were asleep.


On Thu, 2010-07-15 at 02:13 +0100, Mike Tintner wrote:

A demonstration of global connectedness is - associate with an "O"

I get:
number, sun, dish, disk, ball, letter, mouth, two fingers, oh, circle,
wheel, wire coil, outline, station on metro, hole, Kenneth Noland painting,
ring, coin, roundabout

connecting among other things - language, numbers, geometry, food, cartoons,
paintings, speech, sports, science, technology, art, transport,
transportation system, money.

Note though the other crucial weakness of the brain which impairs global
connections - fatigue. To maintain any piece of information in consciousness
for long is a strain (unless it's sexual?).

But the above demonstrates IMO why the brain is and has to be an image
processor.










Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
On Wed, Jul 14, 2010 at 7:46 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 There is a simple proof of convergence for the sum involved in defining the
 probability of a given string in the Solomonoff distribution:

 At its greatest, a particular string would be output by *all* programs. In
 this case, its sum would come to 1. This puts an upper bound on the sum.
 Since there is no subtraction, there is a lower bound at 0 and the sum
 monotonically increases as we take the limit. Knowing these facts, suppose
 it *didn't* converge. It must then increase without bound, since it cannot
 fluctuate back and forth (it can only go up). But this contradicts the upper
 bound of 1. So, the sum must stop at 1 or below (and in fact we can prove it
 stops below 1, though we can't say where precisely without the infinite
 computing power required to compute the limit).

 --Abram
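
(For reference, the sum in question can be written out as follows - a
sketch in standard notation, assuming, as the usual construction does, a
prefix-free universal machine U:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|},
    \qquad \sum_{p \in \mathrm{dom}(U)} 2^{-|p|} \le 1 \quad \text{(Kraft inequality)},

so each M(x) is a monotonically increasing sum of nonnegative terms bounded
above by 1, which is exactly the convergence argument given above.)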


I believe that Solomonoff Induction would be computable given infinite time
and infinite resources (Gödel's theorem fits into this category), but some
people disagree for reasons I do not understand.

If it is not computable, then it is not a mathematical theorem, and the
question of whether the sum of probabilities equals 1 is pure fantasy.

If it is computable, then the central issue is whether it could (given
infinite time and infinite resources) be used to determine the probability
of a particular string being produced from all possible programs.  The
question about the sum of all the probabilities is certainly an interesting
one.  However, the problem of making sure that the function was actually
computable would interfere with this process of determining the probability
of each particular string that can be produced.  For example, since some
strings would be infinite, the computability problem makes it imperative
that the infinite strings be partially computed at each iteration (or else
the function would hang at some particular iteration and the infinitely
many other calculations could not be considered computable).

My criticism is this: even though I believe the function may be
theoretically computable, each particular probability (of each particular
string that is produced) cannot be proven to approach a limit through
mathematical analysis, and since the individual probabilities will fluctuate
with each new string that is produced, one would have to know how to reorder
the production of the probabilities in order to demonstrate that the
individual probabilities do approach a limit.  If they don't, then the claim
that this function could be used to define the probability of a particular
string from all possible programs is unprovable.  (Some infinite
calculations fluctuate infinitely.)  Since you do not have any way to
determine a priori how to reorder the infinite probabilities, your algorithm
would have to be able to compute all possible reorderings to find the
ordering and filtering of the computations that would produce evaluable
limits.  Since there are transfinite rearrangements of an infinite list (I
am not sure that I am using the term 'transfinite' properly), this shows
that the conclusion that the theorem can be used to derive the desired
probabilities is unprovable, through a variation of Cantor's Diagonal
Argument, and that you can't use Solomonoff Induction the way you have been
talking about using it.
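
(A textbook instance of the order-sensitivity invoked here, stated in
LaTeX - this is a general fact about infinite series, not anything specific
to Solomonoff induction:

    \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \;=\; \ln 2,

yet by the Riemann rearrangement theorem the same terms can be reordered to
converge to any desired real value, or to diverge.  For series of
nonnegative terms, however, every ordering gives the same limit.)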

Since you cannot fully compute every string that may be produced at a
certain iteration, you cannot claim that you even know the probabilities of
any possible string before infinity, and therefore your claim that the sum
of the probabilities can be computed is not provable.

But I could be wrong.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Matt Mahoney
Jim Bromer wrote:
 Since you cannot fully compute every string that may be produced at a
certain iteration, you cannot make the claim that you even know the
probabilities of any possible string before infinity and therefore your
claim that the sum of the probabilities can be computed is not provable.
 
 But I could be wrong.

Could be. Theorem 1.7.2 in http://www.vetta.org/documents/disSol.pdf proves
that finding just the shortest program that outputs x gives you a
probability for x close to the result you would get if you found all of the
(infinitely many) programs that output x. Either number could be used for
Solomonoff induction, because the difference is bounded by a constant that
depends only on the choice of language.
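
(The bound in question is the standard coding theorem; a sketch of its
statement in LaTeX:

    2^{-K(x)} \le M(x) \le c \cdot 2^{-K(x)},
    \qquad\text{equivalently}\qquad -\log_2 M(x) = K(x) + O(1),

where K(x) is the length of the shortest program printing x, M(x) is the
algorithmic probability summed over all programs, and the constant c depends
only on the reference machine - the "choice of language" above.)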
 -- Matt Mahoney, matmaho...@yahoo.com






Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread David Jones
It is no wonder that I'm having a hard time finding documentation on
hypothesis scoring. Few can agree on how to do it, and there is much debate
about it.

I noticed, though, that a big reason for the problems is that explanatory
reasoning is being applied to many diverse problems. I think, as I mentioned
before, that people should not try to come up with a single universal rule
set for applying explanatory reasoning to every possible problem. So maybe
that's where the holdup is.

I've been testing my ideas out on complex examples. But now I'm going to go
back to simplified model testing (although not as simple as black squares :)
) and work my way up again.

Dave

On Wed, Jul 14, 2010 at 12:59 PM, David Jones davidher...@gmail.com wrote:

 Actually, I just realized that there is a way to include inductive
 knowledge and experience in this algorithm. Inductive knowledge and
 experience about a specific object or object type can be exploited to know
 which hypotheses were successful in the past, and therefore which
 hypothesis is most likely. By choosing the most likely hypothesis first, we
 skip a lot of messy hypothesis comparison processing and analysis. If we
 choose the right hypothesis first, all we really have to do is verify that
 this hypothesis reveals in the data what we expect to be there. If we
 confirm what we expect, that is reason enough not to look for other
 hypotheses, because the data is explained by what we originally believed to
 be likely. We only look for additional hypotheses when we find something
 unexplained. And even then, we don't look at the whole problem. We only
 look at what we need to in order to explain the unexplained data. In fact,
 we could even ignore the unexplained data if we believe, from experience,
 that it isn't pertinent.
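
(A minimal sketch of the loop just described, in Python. Every name and
number below is hypothetical, invented for this illustration rather than
taken from the system being described.)

def interpret(observations, candidates, success_rate):
    """observations: a set of observed facts.
    candidates: list of (hypothesis, predicted_facts) pairs.
    success_rate: dict mapping hypothesis -> past success rate in [0, 1]."""
    observations = set(observations)
    explained, accepted = set(), []
    # Try the most likely hypothesis first, as ranked by past experience.
    for name, predicted in sorted(candidates,
                                  key=lambda c: success_rate.get(c[0], 0.0),
                                  reverse=True):
        if predicted <= observations:   # expectations confirmed in the data
            accepted.append(name)
            explained |= predicted
        if explained == observations:   # nothing left unexplained; stop
            break
    return accepted, observations - explained

# "window_moved" is tried first and explains everything, so the alternative
# hypothesis never has to be weighed at all.
beliefs, residue = interpret(
    {"edge_shift", "title_shift"},
    [("window_moved", {"edge_shift", "title_shift"}),
     ("new_window", {"edge_shift"})],
    {"window_moved": 0.9, "new_window": 0.2})
print(beliefs, residue)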

 I discovered this because I'm analyzing how a series of hypotheses is
 navigated when analyzing images. It seems to me that it is done very
 similarly to the way we do it. We sort of confirm what we expect and try to
 explain what we don't expect. We try out hypotheses in a sort of
 trial-and-error manner and see how each hypothesis affects what we find in
 the image. If we confirm things because of the hypothesis, we are likely to
 keep it. We keep going, navigating the tree of hypotheses, conflicts and
 unexpected observations until we find a good hypothesis. Something like
 that. I'm attempting to construct an algorithm for doing this as I analyze
 specific problems.

 Dave


 On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:

 What do you mean by definitive events?

 I guess the first problem I see with my approach is that the movement of
 the window is also a hypothesis. I need to analyze it in more detail and see
 how the tree of hypotheses affects the hypotheses regarding the es on the
 windows.

 What I believe is that these problems can be broken down into types of
 hypotheses, types of events and types of relationships. Then those types
 can be reasoned about in a general way. If that is possible, then you have
 a method for reasoning about any object that is covered by the types of
 hypotheses, events and relationships that you have defined.

 How to reason about specific objects should not be preprogrammed. But I
 think the solution to this part of AGI is to find general ways to reason
 about a small set of concepts that can be combined to describe specific
 objects and situations.

 There are other parts to AGI that I am not considering yet. I believe the
 problem has to be broken down into separate pieces and understood before
 putting it back together into a complete system. I have not covered
 inductive learning for example, which would be an important part of AGI. I
 have also not yet incorporated learned experience into the algorithm, which
 is also important.

 The general AI problem is way too complicated to consider all at once. I
 simply can't solve hypothesis generation, comparison and disambiguation
 while at the same time solving induction and experience-based reasoning. It
 becomes unwieldy. So I'm starting where I can, and I'll work my way up to
 the full complexity of the problem.

 I don't really understand what you mean here: "The central unsolved
 problem, in my view, is: How can hypotheses be conceptually integrated
 along with the observable definitive events of the problem to form good
 explanatory connections that can mesh well with other knowledge about the
 problem that is considered to be reliable.  The second problem is finding
 efficient ways to represent this complexity of knowledge so that the
 program can utilize it efficiently."

 You also might want to include concrete problems to analyze for your
 central problem suggestions. That would help define the problem a bit better
 for analysis.

 Dave



Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread Matt Mahoney
Hypotheses are scored using Bayes' law. Let D be your observed data and H be
your hypothesis. Then p(H|D) = p(D|H)p(H)/p(D). Since p(D) is constant, you
can remove it and rank hypotheses by p(D|H)p(H).

p(H) can be estimated using the minimum description length principle or
Solomonoff induction. Ideally, p(H) = 2^-|H| where |H| is the length (in
bits) of the description of the hypothesis. The value is language dependent,
so this method is not perfect.
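
A toy sketch of that scoring rule in Python (the hypotheses, likelihoods and
bit counts below are invented for illustration; they are not from the
message above):

import math

def log_score(likelihood, description_bits):
    """log2 of p(D|H) * p(H), with p(H) = 2**-description_bits (MDL)."""
    return math.log2(likelihood) - description_bits

hypotheses = [
    # (name, p(D | H), |H| in bits)
    ("object moved one pixel", 0.9, 20),
    ("sensor noise",           0.2, 30),
    ("object teleported",      0.9, 400),
]

for name, lik, bits in sorted(hypotheses,
                              key=lambda h: log_score(h[1], h[2]),
                              reverse=True):
    print(f"{name}: log2[p(D|H)p(H)] = {log_score(lik, bits):.2f}")

Working in log space avoids underflow; since log2 is monotonic, the ranking
is the same as ranking by p(D|H)p(H) itself.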

 -- Matt Mahoney, matmaho...@yahoo.com






Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread David Jones
:) You say that as if Bayesian explanatory reasoning is the only way.

There is much debate over Bayesian versus non-Bayesian explanatory
reasoning. There are pros and cons to Bayesian methods. Likewise, there is a
problem with non-Bayesian methods: few have figured out how to do them
effectively. I'm still going to pursue a non-Bayesian approach, because I
believe there is likely more merit to it and that the shortcomings can be
overcome.

Dave

On Thu, Jul 15, 2010 at 10:54 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Hypotheses are scored using Bayes law. Let D be your observed data and H be
 your hypothesis. Then p(H|D) = p(D|H)p(H)/p(D). Since p(D) is constant, you
 can remove it and rank hypotheses by p(D|H)p(H).

 p(H) can be estimated using the minimum description length principle or
 Solomonoff induction. Ideally, p(H) = 2^-|H| where |H| is the length (in
 bits) of the description of the hypothesis. The value is language dependent,
 so this method is not perfect.


 -- Matt Mahoney, matmaho...@yahoo.com



RE: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-15 Thread John G. Rose
Make sure you study that up YKY :)

 

John

 

From: YKY (Yan King Yin, 甄景贤) [mailto:generic.intellige...@gmail.com] 
Sent: Thursday, July 15, 2010 8:59 AM
To: agi
Subject: [agi] OFF-TOPIC: University of Hong Kong Library

 

 

Today, I went to the HKU main library: 

 

 

=)

KY



 





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
I think that Solomonoff Induction includes a computational method that
produces probabilities of some sort, and whenever those probabilities were
computed (in a way that would make the function computable) they would sum
to 1.  But the issue that I am pointing out is that there is no way to
determine the margin of error in what is being computed, for what it has
been repeatedly claimed the function is capable of computing.  Since you are
not able to rely on something like the theory of limits, you are not able to
determine the degree of error in what is being computed.  And in fact, there
is no way to determine that what the function would compute would be in any
way useful for the sort of things that you guys keep talking about.

Jim Bromer





Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread Jim Bromer
On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:

 What do you mean by definitive events?



I was just trying to find a way to designate observations that would be
reliably obvious to a computer program.  This has something to do with the
assumptions that you are using.  For example, if some object appeared
against a stable background and it was a different color than the
background, it would be a definitive observation event, because your
algorithm could detect it with some certainty and use it in the definition
of other more complicated events (like occlusion).  Notice that this example
would not necessarily be so obvious (a definitive event) using a camera,
because there are a number of ways that an illusion (of some kind) could end
up as a data event.

I will try to reply to the rest of your message sometime later.
Jim Bromer





Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread David Jones
Jim,

even that isn't an obvious event. You don't know what is background and what
is not. You don't even know if there is an object or not. You don't know if
anything moved or not. You can make some observations using predefined
methods and then see if you find matches... then hypothesize about the
matches...

 It all has to be learned and figured out through reasoning.

That's why I asked what you meant by definitive events. Nothing is really
definitive. It is all hypothesized in a non-monotonic manner.

Dave



Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread Mike Tintner
Sounds like a good explanation of why a body is essential for vision - not
just for point of view and orientation [up/left/right/down/towards/away] but
for comparison and yardstick. You do know when your body or parts thereof
are moving, and it's not merely touch but the comparison of other objects,
still and moving, with your own moving hands and body that is important.

The more you go into it, the crazier the prospect of vision without eyes in
a body becomes.




Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread David Jones
On screenshots, the point of view is fixed: absolute positions and relative
positions can both be measured in absolute (screen x and y) coordinates.

You don't need a robot to learn how AGI works and to figure out how to
solve some problems. It would be a terrible mistake to spend years, or even
weeks for that matter, on robotics before getting started.



Re: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-15 Thread Ian Parker
OK, off topic, but not as far as you might think. YKY has posted in Creating
Artificial Intelligence about a collaborative project. It is quite important
to know *exactly* where he is. You see, Taiwan uses the classical character
set; the People's Republic uses a simplified character set.

Hong Kong was handed back to China in, I think, 1997. It is still outside
the Great Firewall and (I presume) uses classical characters, although I
don't really know. If we are to discuss transliteration schemes,
translation, and writing Chinese (PRC or Taiwan) on Western keyboards, it is
important for us to know.

I have just bashed up a Java program to write Arabic. You input Roman
Buckwalter and it has an internal conversion table. The same thing could in
principle be done for a load of character sets. In Chinese you would have to
input two Western keys simultaneously. That can be done.
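
(For a flavor of what such a conversion table looks like, here is a fragment
in Python rather than Ian's Java. The letter mappings shown are part of the
standard Buckwalter scheme; the function and example word are this
illustration's own.)

# A fragment of the Buckwalter transliteration table: Roman input on the
# left, Arabic letters on the right. Partial - the full scheme covers the
# whole Arabic alphabet plus diacritics.
BUCKWALTER = {
    "A": "\u0627",  # alef
    "b": "\u0628",  # beh
    "t": "\u062A",  # teh
    "s": "\u0633",  # seen
    "k": "\u0643",  # kaf
    "l": "\u0644",  # lam
    "m": "\u0645",  # meem
    "n": "\u0646",  # noon
}

def to_arabic(buckwalter_text):
    """Convert Roman Buckwalter input to Arabic, one character at a time."""
    return "".join(BUCKWALTER.get(ch, ch) for ch in buckwalter_text)

print(to_arabic("ktAb"))  # prints the Arabic for "kitab" (book)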

I know HK is outside the Firewall because that is where Google has its proxy
server. Is YKY there, do you know?


  - Ian Parker


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Abram Demski
Jim,

Yes, this is true and provable: there is no way to compute a correct error
bound such that it converges to 0 as the computation of algorithmic
probability converges to the correct number. More specifically: we can
approximate the algorithmic probability from below, computing better lower
bounds which converge to the correct number, but we cannot approximate it
from above, as there is no procedure (and can never be any procedure) which
creates closer and closer upper bounds converging to the correct number.

(We can produce upper bounds that get closer and closer without getting
arbitrarily near, and we can produce numbers which do approach arbitrarily
near to the correct number in the limit but sometimes dip below for a time;
but we can't have both features.)
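
(In symbols - a sketch of the standard way the lower approximation is set
up, not notation from the message itself:

    M_t(x) = \sum_{p \,:\, |p| \le t,\; U(p) = x* \text{ within } t \text{ steps}} 2^{-|p|},
    \qquad M_1(x) \le M_2(x) \le \cdots \nearrow M(x),

so M is approximable from below - it is "lower semicomputable" - while no
analogous computable sequence of upper bounds exists, which is what blocks
computable error estimates.)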

The question of whether the function would be useful for the sorts of
things we keep talking about ... well, I think the best argument I can
give is that MDL is strongly supported by both theory and practice for many
*subsets* of the full program space. The concern might be that, so far, it
is only supported by *theory* for the full program space - and since
approximations have very bad error-bound properties, it may never be
supported in practice. My reply to this would be that it still appears
useful to approximate Solomonoff induction, since most successful
predictors can be viewed as approximations to Solomonoff induction. "It
approximates Solomonoff induction" appears to be a good _explanation_ for
the success of many systems.

What sort of alternatives do you have in mind, by the way?

--Abram


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
We all make conjectures all of the time, but we often don't have any way to
establish credibility for the claims that are made.  So I wanted to examine
one part of this field, and the idea that seemed most natural to me was
Solomonoff Induction.  I have reached a conclusion about the subject, and
that conclusion is that all of the claims I have seen made about Solomonoff
Induction are rationally unfounded, including the one that you just made
when you said: "We can produce upper bounds that get closer and closer
without getting arbitrarily near, and we can produce numbers which do
approach arbitrarily near to the correct number in the limit but sometimes
dip below for a time; but we can't have both features."

Your inability to fully recognize, or perhaps acknowledge, that you cannot
use Solomonoff Induction to make this claim is difficult for me to
comprehend.

While the fields of compression and probability have an impressive body of
evidence supporting them, I simply have no reason to think the kind of
claims that have been made about Solomonoff Induction have any merit.  By
natural induction I feel comfortable drawing the conclusion that this whole
area related to algorithmic information theory is based on shallow methods
of reasoning.  It can be useful, as it was for me, just as a means of
exploring ideas that I would not have otherwise explored.  But its
usefulness comes in learning how to determine its lack of merit.

I will write one more thing about my feelings about computability, but I
will start a new thread and just mention the relation to this thread.

Jim Bromer


RE: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-15 Thread John G. Rose
 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]
 
 Ok Off topic, but not as far as you might think. YKY has posted in Creating
 Artificial Intelligence on a collaborative project. It is quite important
 to know exactly where he is. You see Taiwan uses the classical character
 set, The People's Republic uses a simplified character set.
 

The classical character set is much more artistic but more difficult to
learn, thus the simplified set is becoming popular.  Like a social tendency
toward K-complexity-minimalistic language languor: less energy expended,
since fewer bits are required for the symbols.
 

 Hong Kong was handed back to China in I think 1997. It is still outside the
 Great Firewall and (I presume) uses classical characters, although I don't
 really know. If we are to discuss transliteration schemes, translation and
 writing Chinese (PRC or Taiwan) on Western keyboards, it is important for us
 to know.
 
 I have just bashed up a Java program to write Arabic. You input Roman
 Buckwalter and it has an internal conversion table. The same thing could in
 principle be done for a load of character sets. In Chinese you would have to
 input two Western keys simultaneously. That can be done.
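 A minimal sketch of the table-driven converter Ian describes (the entries
 below are a small subset of the standard Buckwalter scheme; the class and
 method names are invented for the example):

import java.util.HashMap;
import java.util.Map;

public class BuckwalterSketch {
    // A few entries from the Buckwalter transliteration table; a real
    // converter would cover the whole Arabic block.
    private static final Map<Character, Character> TABLE = new HashMap<>();
    static {
        TABLE.put('A', '\u0627');  // alef
        TABLE.put('b', '\u0628');  // beh
        TABLE.put('t', '\u062A');  // teh
        TABLE.put('s', '\u0633');  // seen
        TABLE.put('l', '\u0644');  // lam
        TABLE.put('m', '\u0645');  // meem
    }

    // Convert Roman Buckwalter input one key at a time; characters with
    // no table entry pass through unchanged.
    static String toArabic(String buckwalter) {
        StringBuilder out = new StringBuilder();
        for (char c : buckwalter.toCharArray()) {
            out.append(TABLE.getOrDefault(c, c));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // seen-lam-alef-meem:
        System.out.println(toArabic("slAm"));
    }
}

 For Chinese the same skeleton would need a table keyed on multi-character
 strings rather than single keys, which is where the two-keys-at-once input
 Ian mentions would come in.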
 

I always wondered - do language translators map from one language to another
directly, or do they map to a universal language first? And if there is a
universal language, what is it? Or rather, what are they?

 I know HK is outside the Firewall because that is where Google has its proxy
 server. Is YKY there, do you know?
 

Uhm yes. He's been followed by the government censors into the HK library. 
They're thinking about sending him to re-education camp for being caught 
red-handed reading AI4U.

John







Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread Jim Bromer
On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:

 I don't really understand what you mean here: The central unsolved
 problem, in my view, is: How can hypotheses be conceptually integrated along
 with the observable definitive events of the problem to form good
 explanatory connections that can mesh well with other knowledge about the
 problem that is considered to be reliable.  The second problem is finding
 efficient ways to represent this complexity of knowledge so that the program
 can utilize it efficiently.
 You also might want to include concrete problems to analyze for your
 central problem suggestions. That would help define the problem a bit better
 for analysis.
 Dave


I suppose a hypothesis is a kind of concept.  So there are other kinds of
concepts that we need to use with hypotheses.  A hypothesis has to be
conceptually integrated into other concepts.  Conceptual integration is
something of greater complexity than shallow deduction or probability
chains.  While reasoning chains are needed in conceptual integration,
conceptual integration is to a chain of reasoning what a multi-dimensional
structure is to a one-dimensional chain.
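One toy way to picture that distinction in code (all names invented for
the illustration; this is not a claim about any particular system):

import java.util.List;
import java.util.Map;
import java.util.Set;

public class IntegrationSketch {
    // A chain of reasoning: each step has exactly one successor.
    static final List<String> CHAIN =
        List.of("A implies B", "B implies C", "C implies D");

    // Conceptual integration: a hypothesis node linked into many
    // cross-cutting concepts at once -- a graph, not a line.
    static final Map<String, Set<String>> GRAPH = Map.of(
        "hypothesis", Set.of("observations", "prior knowledge", "analogies"),
        "observations", Set.of("hypothesis", "measurement context"),
        "prior knowledge", Set.of("hypothesis", "analogies"));

    public static void main(String[] args) {
        System.out.println("chain: one path of length " + CHAIN.size());
        System.out.println("hypothesis links: " + GRAPH.get("hypothesis"));
    }
}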

I will try to come up with some examples.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Abram Demski
Jim,

The statements about bounds are mathematically provable... furthermore, I
was just agreeing with what you said, and pointing out that the statement
could be proven. So what is your issue? I am confused by your response. Is
it because I didn't include the proofs in my email?

--Abram

On Thu, Jul 15, 2010 at 7:30 PM, Jim Bromer jimbro...@gmail.com wrote:

 We all make conjectures all of the time, but we often don't have
 any way to establish credibility for the claims that are made.  So I wanted
 to examine one part of this field, and the idea that seemed most natural for
 me was Solomonoff Induction.  I have reached a conclusion about the subject
 and that conclusion is that all of the claims that I have seen made about
 Solomonoff Induction are rationally unfounded including the one that you
 just made when you said: "We can produce upper bounds that get closer and
 closer w/o getting arbitrarily near, and we can produce numbers which do
 approach arbitrarily near to the correct number in the limit but sometimes
 dip below for a time; but we can't have both features."

 Your inability to fully recognize or perhaps acknowledge that you cannot
 use Solomonoff Induction to make this claim is difficult for me to
 comprehend.

 While the fields of compression and probability have an impressive body of
 evidence supporting them, I simply have no reason to think the kind of
 claims that have been made about Solomonoff Induction have any merit.  By
 natural induction I feel comfortable drawing the conclusion that this whole
 area related to algorithmic information theory is based on shallow methods
 of reasoning.  It can be useful, as it was for me, just as a means of
 exploring ideas that I would not have otherwise explored.  But its
 usefulness comes in learning how to determine its lack of merit.

 I will write one more thing about my feelings about computability, but I
 will start a new thread and just mention the relation to this thread.

 Jim Bromer

 On Thu, Jul 15, 2010 at 2:45 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Yes, this is true and provable: there is no way to compute a correct error
 bound such that it converges to 0 as the computation of algorithmic
 probability converges to the correct number. More specifically--- we can
 approximate the algorithmic probability from below, computing better lower
 bounds which converge to the correct number, but we cannot approximate it
 from above, as there is no procedure (and can never be any procedure) which
 creates closer and closer upper bounds converging to the correct number.

 (We can produce upper bounds that get closer and closer w/o getting
 arbitrarily near, and we can produce numbers which do approach arbitrarily
 near to the correct number in the limit but sometimes dip below for a time;
 but we can't have both features.)

 The question of whether the function would be useful for the sorts of
 things we keep talking about ... well, I think the best argument that I can
 give is that MDL is strongly supported by both theory and practice for many
 *subsets* of the full program space. The concern might be that, so far, it
 is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful predictors
 can be viewed as approximations to Solomonoff induction. "It approximates
 Solomonoff induction" appears to be a good _explanation_ for the success of
 many systems.

 What sort of alternatives do you have in mind, by the way?

 --Abram

   On Thu, Jul 15, 2010 at 11:50 AM, Jim Bromer jimbro...@gmail.com wrote:

   I think that Solomonoff Induction includes a computational method that
 produces probabilities of some sort and whenever those probabilities were
 computed (in a way that would make the function computable) they would sum
 up to 1.  But the issue that I am pointing out is that there is no way to
 determine the margin of error in what is being computed, for the things that
 the function has repeatedly been claimed to be capable of computing.  Since
 you are not able to rely on something like the theory of limits, you are not
 able to determine the degree of error in what is being computed.  And in
 fact, there is no way to determine that what the function would compute
 would be in any way useful for the sort of things that you guys keep talking
 about.

 Jim Bromer



 On Thu, Jul 15, 2010 at 8:18 AM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Jul 14, 2010 at 7:46 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 There is a simple proof of convergence for the sum involved in defining
 the probability of a given string in the Solomonoff distribution:

 At its greatest, a particular string would be output by *all* programs.
 In this case, its sum would come to 1. This puts an upper bound on the sum.
 Since