David,

I tend to think of probability theory and statistics as different things.
I'd agree that statistics is not enough for AGI, but in contrast I think
probability theory is a pretty good foundation. Bayesianism to me provides a
sound way of integrating the elegance/utility tradeoff of explanation-based
reasoning into the basic fabric of the uncertainty calculus. Others advocate
different sorts of uncertainty than probabilities, but so far what I've seen
indicates more a lack of ability to apply probability theory than a need for
a new type of uncertainty. What other methods do you favor for dealing with
these things?

--Abram

On Sun, Jul 11, 2010 at 12:30 PM, David Jones <[email protected]> wrote:

> Thanks Abram,
>
> I know that probability is one approach, but there are many problems with
> using it in actual implementations. I know a lot of people will be angered
> by that statement and will retort with all the successes they have had
> using probability. The truth, though, is that these problems can be solved
> many ways, and every way has its pros and cons. I personally believe that
> probability has unacceptable cons if used all by itself; it should only be
> used when it is the best tool for the task.
>
> I do plan to use some probability within my approach. But only when it
> makes sense to do so. I do not believe in completely statistical solutions
> or completely Bayesian machine learning alone.
>
> A good example of when I might use it: suppose a particular hypothesis
> predicts something with 70% accuracy. That may be better than any other
> hypothesis we can come up with so far, so we may use it. But the remaining
> 30%, the unexplained errors, should still be explained wherever the
> available resources and algorithms allow. This is where my method differs
> from statistical methods: I want to build algorithms that resolve that 30%
> and explain it. For many problems, there are rules and knowledge that will
> solve them effectively. Probability should only be used when you cannot
> find a more accurate solution.
>
> Basically, we should use probability when we don't know the factors
> involved, can't find any rules to explain the phenomena, or don't have the
> time and resources to figure them out. In those cases we must simply guess
> at the most probable event, without any rules for deciding which event is
> more applicable under the current circumstances.
>
> So, in summary, probability definitely has its place. I just think that
> explanatory reasoning and other more accurate methods should be preferred
> whenever possible.
>
> Regarding learning the knowledge being the bigger problem, I completely
> agree. That is why I think it is so important to develop machine learning
> that can learn by direct observation of the environment. Without that, it is
> practically impossible to gather the knowledge required for AGI-type
> applications. We can learn this knowledge by analyzing the world
> automatically and generally through video.
>
> My step-by-step approach for learning and then applying the knowledge for
> AGI is as follows:
> 1) Understand and learn about the environment (through computer vision for
> now, and other sensory perceptions in the future)
> 2) Learn about your own actions and how they affect the environment
> 3) Learn about language and how it is associated with or related to the
> environment
> 4) Learn goals from language (such as through dedicated inputs)
> 5) Pursue goals
> 6) Other miscellaneous capabilities as needed
>
> Dave
>
> On Sat, Jul 10, 2010 at 8:40 PM, Abram Demski <[email protected]>wrote:
>
>> David,
>>
>> Sorry for the slow response.
>>
>> I agree completely about expectations vs predictions, though I wouldn't
>> use that terminology to make the distinction (since the two terms are
>> near-synonyms in English, and I'm not aware of any technical definitions
>> that are common in the literature). This is why I think probability theory
>> is necessary: to formalize this idea of expectations.
>>
>> I also agree that it's good to utilize previous knowledge. However, I
>> think existing AI research has tackled this over and over; learning that
>> knowledge is the bigger problem.
>>
>> --Abram
>>
>



-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
