Thanks Abram, I'll read up on it when I get a chance.

On Tue, Jul 13, 2010 at 12:03 PM, Abram Demski <[email protected]> wrote:

> David,
>
> Yes, this makes sense to me.
>
> To go back to your original query, I still think you will find a rich
> community relevant to your work if you look into the MDL literature (which
> additionally does not rely on probability theory, though as I said I'd say
> it's equivalent).
>
> Perhaps this book might be helpful:
>
> http://www.amazon.com/Description-Principle-Adaptive-Computation-Learning/dp/0262072815/ref=sr_1_1?ie=UTF8&s=books&qid=1279036776&sr=8-1
>
> It includes a (short-ish?) section comparing the pros and cons of MDL and
> Bayesianism and examining some of the mathematical links between them,
> with the aim of showing that MDL is the broader principle. I disagree there,
> of course. :)
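The equivalence between MDL and Bayesian reasoning that this thread keeps circling can be illustrated numerically: under a prefix code, a two-part code length L(H) + L(D|H) in bits equals the negative log of the unnormalized posterior, -log2 P(H) - log2 P(D|H), so minimizing total code length picks out the same hypothesis as Bayesian MAP selection. A minimal sketch (the hypotheses and probabilities below are invented purely for illustration):

```python
import math

# Toy hypothesis space: each hypothesis has a prior P(H) and a
# likelihood P(D|H) for some fixed observed data D.
# These numbers are made up to illustrate the identity only.
hypotheses = {
    "H1": {"prior": 0.5,  "likelihood": 0.10},
    "H2": {"prior": 0.25, "likelihood": 0.40},
    "H3": {"prior": 0.25, "likelihood": 0.05},
}

def code_length_bits(h):
    """Two-part MDL code length: L(H) + L(D|H), in bits."""
    return -math.log2(h["prior"]) - math.log2(h["likelihood"])

def neg_log_posterior_bits(h):
    """-log2 of the unnormalized posterior P(H) * P(D|H)."""
    return -math.log2(h["prior"] * h["likelihood"])

# The two scores agree term by term (up to floating-point error) ...
for name, h in hypotheses.items():
    assert abs(code_length_bits(h) - neg_log_posterior_bits(h)) < 1e-9

# ... so the shortest-code hypothesis is exactly the MAP hypothesis.
best_mdl = min(hypotheses, key=lambda n: code_length_bits(hypotheses[n]))
best_map = max(hypotheses,
               key=lambda n: hypotheses[n]["prior"] * hypotheses[n]["likelihood"])
print(best_mdl, best_map)  # both "H2"
```

This identity only covers the idealized two-part-code case; the book linked above treats the subtler refinements (normalized maximum likelihood, etc.) where the two frameworks come apart.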
>
> --Abram
>
> On Tue, Jul 13, 2010 at 10:01 AM, David Jones <[email protected]> wrote:
>
>> Abram,
>>
>> Thanks for the clarification. I don't have a single way to deal with
>> uncertainty. I try not to decide on a method ahead of time, because what I
>> really want to do is analyze the problems and find a solution. But at the
>> same time, I have looked at the probabilistic approaches, and they don't seem
>> sufficient to solve the problem as they stand. So my inclination is
>> to use methods that don't gloss over important details. For me, the most
>> important way of dealing with uncertainty is through explanatory-type
>> reasoning. But explanatory reasoning has not been well defined yet, so the
>> implementation is not yet clear. That's where I am now.
>>
>> I've begun to approach problems as follows. I try to break the problem
>> down and answer the following questions:
>> 1) How do we come up with or construct possible hypotheses?
>> 2) How do we compare hypotheses to determine which is better?
>> 3) How do we lower the uncertainty of hypotheses?
>> 4) How do we determine the likelihood or strength of a single hypothesis
>> all by itself? Is it sufficient on its own?
>>
>> With those questions in mind, the solution seems to be to break possible
>> hypotheses down into pieces that are generally applicable. For example, in
>> image analysis, a particular type of hypothesis might be related to 1)
>> motion or 2) attachment relationships or 3) change or movement behavior of
>> an object, etc.
>>
>> By breaking the possible hypotheses into very general pieces, you can
>> apply them to just about any problem. With that as a tool, you can then
>> develop general methods for resolving uncertainty of such hypotheses using
>> explanatory scoring, consistency, and even statistical analysis.
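The decompose-and-score idea above could be sketched roughly as follows. Everything here is a hypothetical illustration, not David's actual method: the piece kinds, the consistency flag, and the scoring rule (fraction of observations explained, zeroed out by any inconsistency) are all assumptions made to show the shape of the approach.

```python
from dataclasses import dataclass

@dataclass
class Piece:
    """One general, reusable piece of a hypothesis,
    e.g. a 'motion', 'attachment', or 'change' claim.
    (Kinds and fields are invented for illustration.)"""
    kind: str
    explains: set        # observation IDs this piece accounts for
    consistent: bool = True

@dataclass
class Hypothesis:
    pieces: list

    def score(self, observations: set) -> float:
        """Toy explanatory score: fraction of observations explained,
        forced to zero if any piece is internally inconsistent."""
        if not self.pieces or not all(p.consistent for p in self.pieces):
            return 0.0
        explained = set().union(*(p.explains for p in self.pieces))
        return len(explained & observations) / len(observations)

# Made-up image-analysis observations.
observations = {"obj_moved", "obj_touching_hand", "obj_shape_same"}

h1 = Hypothesis([Piece("motion", {"obj_moved"}),
                 Piece("attachment", {"obj_touching_hand"})])
h2 = Hypothesis([Piece("motion", {"obj_moved"}),
                 Piece("change", {"obj_shape_same"}, consistent=False)])

# Comparing hypotheses (question 2 above) reduces to comparing scores.
best = max([h1, h2], key=lambda h: h.score(observations))
print(best is h1)  # True: h2 is ruled out by its inconsistent piece
```

The design point this sketch tries to capture is the one in the paragraph above: because the pieces are general, the same scoring machinery can be reused across problems, and statistical analysis could be layered on by replacing the toy score with a probabilistic one.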
>>
>> Does that make sense to you?
>>
>> Dave
>>
>>
>> On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski <[email protected]> wrote:
>>
>>> PS-- I am not denying that statistics is applied probability theory. :)
>>> When I say they are different, what I mean is that saying "I'm going to use
>>> probability theory" and "I'm going to use statistics" tend to indicate very
>>> different approaches. Probability is a set of axioms, whereas statistics is
>>> a set of methods. The probability theory camp tends to be Bayesian, whereas
>>> the stats camp tends to be frequentist.
>>>
>>> Your complaint that probability theory doesn't try to figure out why it
>>> was wrong in the 30% (or whatever) of cases it misses is a common objection:
>>> probability theory glosses over important detail, encourages lazy
>>> thinking, and so on. However, this all depends on the space of hypotheses being
>>> examined. Statistical methods will be prone to this objection because they
>>> are essentially narrow-AI methods: they don't *try* to search in the space
>>> of all hypotheses a human might consider. An AGI setup can and should have
>>> such a large hypothesis space. Note that AIXI is typically formulated as
>>> using a space of crisp (non-probabilistic) hypotheses, though probability
>>> theory is used to reason about them. This means no theory it considers will
>>> gloss over detail in this way: every theory completely explains the data. (I
>>> use AIXI as a convenient example, not because I agree with it.)
>>>
>>> --Abram
>>>
>>
>>
>
>
>
> --
> Abram Demski
> http://lo-tho.blogspot.com/
> http://groups.google.com/group/one-logic
>



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
