YKY,

Thanks for the reply. It seems important to me for a system to be able
to do more than just the quick-and-dirty reasoning. When given more
time, a reasoning method should reconsider its independence
assumptions, employ more sophisticated models, et cetera.

By the way, when I say "Markov model" I mean Markov chain as opposed
to Markov network -- I should have been clearer. In that context,
"1st-order" means "conditioned on 1 past item". So when I say
1st-order model, I mean something like: a model that records
conditional probabilities conditioned on only 1 thing. (So I might
know the probability of winning the election given the fact of being
male, and the probability given the fact of being over age 30, but to
calculate the probability given *both*, I'd have to assume that the
effects of each were independent rather than asking my model what the
combined influence was.) These models allow facts to be combined
fairly quickly, but are wrong in cases where there are combined
effects (such as "adding sugar makes it nice, adding salt makes it
nice, but adding both makes it awful"). A 2nd-order model is
conditioned on only 2 items, and so on.
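
To make the independence assumption concrete, here is a rough sketch
in Python (the function name and the election numbers are invented
purely for illustration):

def combine_independent(p_prior, p_given_a, p_given_b):
    # Combine two 1st-order conditionals P(w|a) and P(w|b) into an
    # estimate of P(w|a,b), assuming the two pieces of evidence act on
    # w independently (naive-Bayes style: P(a,b|w) = P(a|w) * P(b|w)).
    weight_true = p_given_a * p_given_b / p_prior
    weight_false = (1 - p_given_a) * (1 - p_given_b) / (1 - p_prior)
    return weight_true / (weight_true + weight_false)

# Made-up numbers: P(win) = 0.5, P(win|male) = 0.6, P(win|over 30) = 0.7
print(combine_independent(0.5, 0.6, 0.7))  # ~0.78

Notice that no choice of the two conditionals lets this rule reproduce
the sugar/salt case: if sugar and salt each raise the probability of
"nice" on their own, the combined estimate is raised even further; it
can never come out "awful".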

Anyway, my vision is something like this: we first learn very simple
(perhaps 1st- or 2nd-order) models, and then we learn "corrections" to
those simple models. Corrections are models that concentrate only on
the things that the simple models get wrong. The system could learn a
series of better and better models, each consisting of corrections to
the previous. Thus the system reasons progressively: first by the
low-order conditional model, then by invoking successive corrections
that revise its conclusions.
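
A very rough sketch of the control structure I have in mind (the class
and its interface are hypothetical, nothing implemented):

class CorrectedModel:
    """A cheap base model plus a stack of corrections; each correction
    stores revised answers only for cases its predecessors get wrong."""

    def __init__(self, base):
        self.base = base          # e.g. a 1st- or 2nd-order conditional model
        self.corrections = []     # learned later, each refining the last

    def add_correction(self, correction):
        # correction: dict mapping (target, evidence set) -> revised probability
        self.corrections.append(correction)

    def query(self, target, evidence, effort=0):
        # Fast, possibly wrong answer from the simple model...
        p = self.base(target, evidence)
        # ...then spend whatever effort is available invoking corrections.
        for correction in self.corrections[:effort]:
            p = correction.get((target, frozenset(evidence)), p)
        return p

The point is only the control structure: the effort available decides
how many correction stages get consulted, so the answer improves
gracefully with time.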

So, what I would really like is a formal account of how this should be
done: exactly what kind of uncertainty results from using the simple
models, how is it best represented, and how is it best corrected?
Conditional independence assumptions seem like the most relevant type
of inaccuracy; collapsing probabilities down to boolean truth values
(or collapsing higher-order probabilities down to lower-order
probabilities), and employing max-entropy assumptions, are runners-up.
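
As a toy illustration of how two of these overlap (Python again,
purely for illustration): given only the individual marginals of two
binary variables, the maximum-entropy joint is just the independent
product, so in that simple setting the independence assumption and the
max-entropy assumption give the same (possibly wrong) answer.

from itertools import product

def maxent_joint(p_a, p_b):
    # Given only P(A=1) = p_a and P(B=1) = p_b, the max-entropy joint
    # over (A, B) is the independent product of the marginals.
    return {(a, b): (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
            for a, b in product((0, 1), repeat=2)}

print(maxent_joint(0.5, 0.5))
# {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# A true joint with the same marginals could still be strongly
# correlated; that discrepancy is one example of the higher-order
# uncertainty I'd like to see represented and corrected.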

--Abram

On Wed, Sep 17, 2008 at 3:00 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
> On Thu, Sep 18, 2008 at 1:46 AM, Abram Demski <[EMAIL PROTECTED]> wrote:
>
> Speaking of my BPZ-logic...
>
>> 2. Good at quick-and-dirty reasoning when needed
>
> Right now I'm focusing on quick-and-dirty *only*.  I wish to make the
> logic's speed approach that of Prolog (which is a fast inference
> algorithm for binary logic).
>
>> --a. Makes unwarranted independence assumptions
>
> Yes, I think independence should always be assumed "unless otherwise
> stated" -- which means there exists a Bayesian network link between X
> and Y.
>
>> --b. Collapses probability distributions down to the most probable
>> item when necessary for fast reasoning
>
> Do you mean collapsing to binary values?  Yes, that is done in BPZ-logic.
>
>> --c. Uses the maximum entropy distribution when it doesn't have time
>> to calculate the true distribution
>
> Not done yet.  I'm not familiar with max-ent.  Will study that later.
>
>> --d. Learns simple conditional models (like 1st-order markov models)
>> for use later when full models are too complicated to quickly use
>
> I focus on learning 1st-order Bayesian networks.  I think we should
> start with learning 1st-order Bayesian / Markov.  I will explore
> mixing Markov and Bayesian when I have time...
>
>> 3. Capable of "repairing" initial conclusions based on the bad models
>> through further reasoning
>
>> --a. Should have a good way of representing the special sort of
>> uncertainty that results from the methods above
>
> Yes, this can be done via meta-reasoning, which I'm currently working on.
>
>> --b. Should have a "repair" algorithm based on that higher-order uncertainty
>
> Once it is represented at the meta-level, you may do that.  But
> higher-order uncertain reasoning is not high on my priority list...
>
> YKY

