Hypotheses are scored using Bayes' law. Let D be your observed data and H be 
your hypothesis. Then p(H|D) = p(D|H)p(H)/p(D). Since p(D) is constant across 
hypotheses, you can drop it and rank hypotheses by p(D|H)p(H).
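A minimal sketch of that ranking rule in code. The coin-flip data, the candidate 
hypotheses, and their prior values below are made-up for illustration; only the 
scoring rule p(D|H)p(H) comes from the text above. Log probabilities are used to 
avoid underflow on longer data.

```python
import math

# Toy data: coin flips, 1 = heads, 0 = tails (invented for this example).
data = [1, 1, 0, 1, 1, 1, 0, 1]

# Candidate hypotheses about the coin: (P(heads | H), prior p(H)).
# Both numbers are made-up illustration values.
hypotheses = {
    "fair":    (0.5, 0.50),
    "biased":  (0.8, 0.45),
    "heavily": (0.9, 0.05),
}

def log_score(p_heads, prior):
    # log p(D|H) + log p(H); p(D) is dropped since it is the same for every H.
    log_lik = sum(math.log(p_heads if flip else 1.0 - p_heads) for flip in data)
    return log_lik + math.log(prior)

# Rank hypotheses by unnormalized posterior, best first.
ranked = sorted(hypotheses, key=lambda h: log_score(*hypotheses[h]), reverse=True)
print(ranked)
```

With 6 heads in 8 flips, the "biased" hypothesis wins: its likelihood advantage 
outweighs its slightly smaller prior.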

p(H) can be estimated using the minimum description length principle or 
Solomonoff induction. Ideally, p(H) = 2^-|H|, where |H| is the length (in 
bits) of the description of the hypothesis. The value is language-dependent, 
so this method is not perfect.
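A crude sketch of that prior, assuming we take |H| to be the bit length of the 
hypothesis written in one particular (arbitrary) description language, plain 
ASCII here. The example hypothesis strings are invented; choosing a different 
description language would change the lengths and hence the priors, which is 
exactly the imperfection noted above.

```python
def mdl_prior(hypothesis):
    # p(H) = 2^(-|H|), with |H| crudely measured as 8 bits per ASCII character.
    # The description language (plain ASCII here) is an arbitrary choice.
    bits = 8 * len(hypothesis.encode("ascii"))
    return 2.0 ** -bits

# Shorter descriptions get exponentially higher prior probability.
for h in ["x = 1", "x = 2*n + 1", "x = n**3 - 4*n + 2"]:
    print(h, mdl_prior(h))
```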

 -- Matt Mahoney, [email protected]




________________________________
From: David Jones <[email protected]>
To: agi <[email protected]>
Sent: Thu, July 15, 2010 10:22:44 AM
Subject: Re: [agi] How do we Score Hypotheses?

It is no wonder that I'm having a hard time finding documentation on 
hypothesis scoring. Few agree on how to do it, and there is much debate about 
it. 

I noticed, though, that a big reason for the problems is that explanatory 
reasoning is being applied to many diverse problems. As I mentioned before, I 
think people should not try to come up with a single universal rule set for 
applying explanatory reasoning to every possible problem. So maybe that's 
where the hold-up is. 


I've been testing my ideas out on complex examples. But now I'm going to go 
back to simplified model testing (although not as simple as black squares :) ) 
and work my way up again. 


Dave


On Wed, Jul 14, 2010 at 12:59 PM, David Jones <[email protected]> wrote:

Actually, I just realized that there is a way to include inductive knowledge 
and experience in this algorithm. Inductive knowledge and experience about a 
specific object or object type can be exploited to know which hypotheses were 
successful in the past, and therefore which hypothesis is most likely. By 
choosing the most likely hypothesis first, we skip a lot of messy hypothesis 
comparison processing and analysis. If we choose the right hypothesis first, 
all we really have to do is verify that this hypothesis reveals in the data 
what we expect to be there. If we confirm what we expect, that is reason 
enough not to look for other hypotheses, because the data is explained by 
what we originally believed to be likely. We only look for additional 
hypotheses when we find something unexplained. And even then, we don't look 
at the whole problem; we only look at what we have to in order to explain the 
unexplained data. In fact, we could even ignore the unexplained data if we 
believe, from experience, that it isn't pertinent. 
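A minimal sketch of this expectation-first strategy. The hypothesis names, the 
past-success counts, and the expected-feature sets below are all hypothetical 
stand-ins; the point is only the control flow: try the historically most 
successful hypothesis first, confirm its expectations against the data, and 
search further only for whatever remains unexplained.

```python
from collections import Counter

# Past successes per hypothesis (made-up counts) give the trial order.
past_successes = Counter({"translation": 7, "occlusion": 2, "new_object": 1})

# What each hypothesis leads us to expect in the data (illustrative only).
expectations = {
    "translation": {"same_shape", "shifted_position"},
    "occlusion":   {"partial_shape", "same_position"},
    "new_object":  {"unmatched_shape"},
}

def explain(observed):
    explained = {}
    for h, _ in past_successes.most_common():   # most likely hypothesis first
        for feature in expectations[h] & observed:
            explained.setdefault(feature, h)    # confirm expected features
        if len(explained) == len(observed):
            break                               # nothing left unexplained: stop
    return explained

result = explain({"same_shape", "shifted_position", "unmatched_shape"})
print(result)
```

Here "translation" is tried first and accounts for two of the three observed 
features; only the leftover "unmatched_shape" forces consideration of the 
weaker "new_object" hypothesis, so most of the hypothesis comparison is 
skipped.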

>
>I discovered this because I'm analyzing how a series of hypotheses is 
>navigated when analyzing images. It seems to me that it is done very 
>similarly to the way we do it. We sort of confirm what we expect and try to 
>explain what we don't expect. We try out hypotheses in a sort of 
>trial-and-error manner and see how each hypothesis affects what we find in 
>the image. If we confirm things because of the hypothesis, we are likely to 
>keep it. We keep going, navigating the tree of hypotheses, conflicts, and 
>unexpected observations until we find a good hypothesis. Something like 
>that. I'm attempting to construct an algorithm for doing this as I analyze 
>specific problems. 
>
>
>Dave
>
>
>
>On Wed, Jul 14, 2010 at 10:22 AM, David Jones <[email protected]> wrote:
>
>>What do you mean by definitive events? 
>>
>>I guess the first problem I see with my approach is that the movement of 
>>the window is also a hypothesis. I need to analyze it in more detail and 
>>see how the tree of hypotheses affects the hypotheses regarding the "e"s on 
>>the windows. 
>>
>>
>>What I believe is that these problems can be broken down into types of 
>>hypotheses, types of events, and types of relationships. Then those types 
>>can be reasoned about in a general way. If possible, you then have a method 
>>for reasoning about any object that is covered by the types of hypotheses, 
>>events, and relationships that you have defined.
>>
>>How to reason about specific objects should not be preprogrammed. But I 
>>think the solution to this part of AGI is to find general ways to reason 
>>about a small set of concepts that can be combined to describe specific 
>>objects and situations. 
>>
>>
>>There are other parts to AGI that I am not considering yet. I believe the 
>>problem has to be broken down into separate pieces and understood before 
>>putting it back together into a complete system. I have not covered 
>>inductive learning, for example, which would be an important part of AGI. I 
>>have also not yet incorporated learned experience into the algorithm, which 
>>is also important. 
>>
>>
>>The general AI problem is way too complicated to consider all at once. I 
>>simply can't solve hypothesis generation, comparison, and disambiguation 
>>while at the same time solving induction and experience-based reasoning. It 
>>becomes unwieldy. So I'm starting where I can, and I'll work my way up to 
>>the full complexity of the problem. 
>>
>>
>>I don't really understand what you mean here: "The central unsolved 
>>problem, in my view, is: How can hypotheses be conceptually integrated 
>>along with the observable definitive events of the problem to form good 
>>explanatory connections that can mesh well with other knowledge about the 
>>problem that is considered to be reliable. The second problem is finding 
>>efficient ways to represent this complexity of knowledge so that the 
>>program can utilize it efficiently."
>>
>>You also might want to include concrete problems to analyze for your central 
>>problem suggestions. That would help define the problem a bit better for 
>>analysis. 
>>
>>
>>Dave
>>
>>
>>
>>On Wed, Jul 14, 2010 at 8:30 AM, Jim Bromer <[email protected]> wrote:
>>
>>
>>>
>>>
>>>On Tue, Jul 13, 2010 at 9:05 PM, Jim Bromer <[email protected]> wrote:
>>>Even if you refined your model until it was just right, you would have only 
>>>caught up to everyone else with a solution to a narrow AI problem.
>>> 
>>> 
>>>I did not mean that you would just have a solution to a narrow AI problem, 
>>>but that your solution, if put in the form of scoring points on the basis 
>>>of observed definitive events, would constitute a narrow AI method.  
>>>The central unsolved problem, in my view, is: How can hypotheses be 
>>>conceptually integrated along with the observable definitive events of the 
>>>problem to form good explanatory connections that can mesh well with other 
>>>knowledge about the problem that is considered to be reliable. The second 
>>>problem is finding efficient ways to represent this complexity of 
>>>knowledge so that the program can utilize it efficiently.
>>> 
>>
>



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
