You seem to be reaching for something important here, but it isn't at all clear 
what you mean.

I would say that any creative activity (including pure problem-solving) begins
from a *conceptual paradigm* - a very rough outline - of the form of that
activity and the form of its end-product or end-procedure. This is distinct
from rational activities, where a formula (and algorithm) defines the form of
the product (and activity) with complete precision.

You have a conceptual paradigm of "writing a post" or "shopping for groceries"
or "having a conversation". You couldn't possibly have a formula or algorithm
completely defining every step you might want to take - every word and
sentence, every food, every topic.

And programs as we know them don't and can't handle *concepts* - despite
misnomers like "conceptual graphs" and "conceptual spaces", which are not
concepts at all. They can't, for example, handle "writing" or "shopping",
because these can only be expressed as flexible outlines/schemas, as with
ideograms.

What do you mean?





From: Jim Bromer 
Sent: Tuesday, July 13, 2010 2:50 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski <abramdem...@gmail.com> wrote:
[The] complaint that probability theory doesn't try to figure out why it was 
wrong in the 30% (or whatever) it misses is a common objection. Probability 
theory glosses over important detail, it encourages lazy thinking, etc. 
However, this all depends on the space of hypotheses being examined. 
Statistical methods will be prone to this objection because they are 
essentially narrow-AI methods: they don't *try* to search in the space of all 
hypotheses a human might consider. An AGI setup can and should have such a 
large hypothesis space.
-------------------------------
That is the thing.
We cannot search all possible hypotheses because we could not even write all
possible hypotheses down. This is why hypotheses have to be formed creatively
in response to an analysis of a situation. In my arrogant opinion, this is
best done through a method that creatively uses discrete representations. Of
course it can use statistical or probabilistic data in making those creative
hypotheses when there is good data to be used. But the best way to do this is
through categorization-based creativity. This is an imaginative method, one
which creates imaginative explanations (or other correlations) for observed
or conjectured events. Those imaginative hypotheses then have to be compared
to a situation through some trial-and-error methods. Then the tentative
conjectures that seem to withstand initial tests have to be further integrated
into other hypotheses, conjectures and explanations that are related to the
subject of the hypotheses. This process of conceptual integration, which has
to rely on both creative methods and rational methods, is a fundamental part
of the overall process, and one which does not seem to be clearly understood.
Conceptual integration cannot be accomplished by reducing a concept to True or
False, or to some number from 0 to 1, and then combining it with other
concepts that were reduced in the same way. Ideas take on roles when combined
with other ideas. Basically, a new idea has to be fit into a complex of other
ideas that are strongly related to it.
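
A minimal sketch of that loop - generate conjectures by recombining discrete
category representations, keep the ones that withstand trial and error, then
integrate the survivors into the complex of related ideas. Everything here
(the categories, the features, the tests) is invented for illustration, and
the creative step in particular is only stubbed:

import itertools
import random

# Toy discrete representations: a situation is a set of observed features;
# a "category" names a set of features; a hypothesis is a combination of
# categories conjectured to explain the situation.
CATEGORIES = {
    "container": {"holds", "rigid"},
    "tool": {"graspable", "rigid"},
    "food": {"edible", "organic"},
}

def generate(situation):
    """Creative step (stubbed): recombine discrete category
    representations that share a feature with the analyzed situation."""
    relevant = [c for c, feats in CATEGORIES.items() if feats & situation]
    return [frozenset(combo)
            for r in (1, 2)
            for combo in itertools.combinations(relevant, r)]

def survives(hypothesis, situation, trials=20):
    """Trial and error: test the conjecture against sampled observations."""
    explained = set().union(*(CATEGORIES[c] for c in hypothesis))
    hits = sum(random.choice(sorted(situation)) in explained
               for _ in range(trials))
    return hits > trials / 2

def integrate(survivor, knowledge):
    """Conceptual integration: link the new idea into the complex of
    strongly related ideas, rather than reducing it to a number in [0,1]."""
    knowledge[survivor] = {old for old in knowledge if old & survivor}

knowledge = {}
situation = {"rigid", "holds", "graspable"}
for h in generate(situation):
    if survives(h, situation):
        integrate(h, knowledge)
print(knowledge)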

Jim Bromer




 
On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski <abramdem...@gmail.com> wrote:

  PS-- I am not denying that statistics is applied probability theory. :) When 
I say they are different, what I mean is that saying "I'm going to use 
probability theory" and "I'm going to use statistics" tend to indicate very 
different approaches. Probability is a set of axioms, whereas statistics is a 
set of methods. The probability theory camp tends to be Bayesian, whereas the 
stats camp tends to be frequentist.

  Your complaint that probability theory doesn't try to figure out why it was 
wrong in the 30% (or whatever) it misses is a common objection. Probability 
theory glosses over important detail, it encourages lazy thinking, etc. 
However, this all depends on the space of hypotheses being examined. 
Statistical methods will be prone to this objection because they are 
essentially narrow-AI methods: they don't *try* to search in the space of all 
hypotheses a human might consider. An AGI setup can and should have such a 
large hypothesis space. Note that AIXI is typically formulated as using a space 
of crisp (non-probabilistic) hypotheses, though probability theory is used to 
reason about them. This means no theory it considers will gloss over detail in 
this way: every theory completely explains the data. (I use AIXI as a 
convenient example, not because I agree with it.)
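
  For concreteness, a toy illustration of that point - not AIXI itself, and
the hypotheses and complexities are invented. Each crisp hypothesis either
reproduces every observed bit or is eliminated; probability appears only as
a simplicity-weighted prior over the survivors:

HYPOTHESES = {
    # name: (complexity in "bits", rule giving the next bit from history)
    "all-zeros": (1, lambda h: 0),
    "all-ones": (1, lambda h: 1),
    "alternate": (2, lambda h: 1 - h[-1] if h else 0),
    "repeat-last": (2, lambda h: h[-1] if h else 0),
}

def consistent(rule, data):
    """A crisp hypothesis completely explains the data, or it is out."""
    return all(rule(data[:i]) == data[i] for i in range(len(data)))

def p_next_one(data):
    """Mix the survivors under a 2^-complexity (simplicity) prior."""
    weights = {name: 2.0 ** -k for name, (k, rule) in HYPOTHESES.items()
               if consistent(rule, data)}
    total = sum(weights.values())
    return sum(w for name, w in weights.items()
               if HYPOTHESES[name][1](data) == 1) / total

print(p_next_one([0]))  # 0.25: of the survivors, only "alternate"
                        # (prior weight 1/4) predicts a 1 next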

  --Abram


  On Mon, Jul 12, 2010 at 2:42 PM, Abram Demski <abramdem...@gmail.com> wrote:

    David,

    I tend to think of probability theory and statistics as different things. 
I'd agree that statistics is not enough for AGI, but in contrast I think 
probability theory is a pretty good foundation. Bayesianism to me provides a 
sound way of integrating the elegance/utility tradeoff of explanation-based 
reasoning into the basic fabric of the uncertainty calculus. Others advocate 
different sorts of uncertainty than probabilities, but so far what I've seen 
indicates more a lack of ability to apply probability theory than a need for a 
new type of uncertainty. What other methods do you favor for dealing with these 
things?
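
    One way to make that tradeoff concrete (a toy gloss with invented
numbers, purely illustrative): the log posterior splits into a simplicity
term from the prior plus a fit term from the likelihood, so elegance and
predictive utility are traded off on a single scale.

import math

# log P(h | data) = log P(h) + log P(data | h) + const.
# A simpler theory (higher prior) can beat a better-fitting one.
def log_posterior(complexity_bits, fit_loglik):
    return -complexity_bits * math.log(2) + fit_loglik

elegant = log_posterior(complexity_bits=5, fit_loglik=-12.0)
elaborate = log_posterior(complexity_bits=40, fit_loglik=-4.0)
print(elegant > elaborate)  # True: the simpler theory wins despite worse fit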

    --Abram 



    On Sun, Jul 11, 2010 at 12:30 PM, David Jones <davidher...@gmail.com> wrote:

      Thanks Abram,

      I know that probability is one approach. But there are many problems with 
using it in actual implementations. I know a lot of people will be angered by 
that statement and retort with all the successes that they have had using 
probability. But, the truth is that you can solve the problems many ways and 
every way has its pros and cons. I personally believe that probability has 
unacceptable cons if used all by itself. It must only be used when it is the 
best tool for the task.

      I do plan to use some probability within my approach, but only when it 
makes sense to do so. I do not believe in purely statistical solutions or in 
purely Bayesian machine learning on its own. 

      A good example of when I might use it: if a particular hypothesis 
predicts something with 70% accuracy, it may be better than any other 
hypothesis we can come up with so far, so we may use that hypothesis. But the 
30% of cases it leaves unexplained should be explained, if at all possible, 
with the resources and algorithms available. This is where my method differs 
from statistical methods: I want to build algorithms that resolve the 30% and 
explain it. For many problems, there are rules and knowledge that will solve 
them effectively. Probability should only be used when you cannot find a more 
accurate solution.

      Basically, we should use probability when we don't know the factors 
involved, can't find any rules to explain the phenomena, or don't have the 
time and resources to figure it out. In that case you must simply guess at the 
most probable event, without any rules for deciding which event is more 
applicable under the current circumstances.
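
      A minimal sketch of that ordering, with invented names and data: rules
first, a base-rate guess only as a fallback, and an attempt to turn the
residual errors into new rules rather than leaving them as noise:

# Hypothetical sketch (names invented, not an actual implementation).
def predict(case, rules, base_rates):
    for condition, outcome in rules:
        if condition(case):          # an accurate rule covers this case
            return outcome
    return max(base_rates, key=base_rates.get)   # probabilistic fallback

def explain_errors(errors, induce_rule):
    """Where the 70%-accurate hypothesis misses, look for a rule that
    accounts for the residual 30% instead of writing it off as noise."""
    new_rules = []
    for case, actual in errors:
        rule = induce_rule(case, actual)   # may fail and return None
        if rule is not None:
            new_rules.append(rule)
    return new_rules

rules = [(lambda c: c["wet"], "rain")]
base_rates = {"rain": 0.3, "dry": 0.7}
print(predict({"wet": True}, rules, base_rates))   # "rain" (rule fires)
print(predict({"wet": False}, rules, base_rates))  # "dry" (fallback guess)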

      So, in summary, probability definitely has its place. I just think that 
explanatory reasoning and other more accurate methods should be preferred 
whenever possible.

      Regarding learning the knowledge being the bigger problem, I completely 
agree. That is why I think it is so important to develop machine learning that 
can learn by direct observation of the environment. Without that, it is 
practically impossible to gather the knowledge required for AGI-type 
applications. We can learn this knowledge automatically and generally by 
analyzing the world through video. 

      My step-by-step approach for learning and then applying the knowledge 
for AGI is as follows:
      1) Understand and learn about the environment (through Computer Vision 
for now, and other sensory perceptions in the future)
      2) Learn about your own actions and how they affect the environment
      3) Learn about language and how it is associated with or related to the 
environment
      4) Learn goals from language (such as through dedicated inputs)
      5) Goal pursuit
      6) Other miscellaneous capabilities as needed

      Dave


      On Sat, Jul 10, 2010 at 8:40 PM, Abram Demski <abramdem...@gmail.com> 
wrote:

        David,


        Sorry for the slow response.

        I agree completely about expectations vs predictions, though I wouldn't 
use that terminology to make the distinction (since the two terms are 
near-synonyms in English, and I'm not aware of any technical definitions that 
are common in the literature). This is why I think probability theory is 
necessary: to formalize this idea of expectations.
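
        One way to make that concrete (a toy gloss with invented models, not
anyone's actual proposal): an expectation is a distribution over the next
observation, updated by Bayes' rule, rather than a single point prediction:

def bayes_update(prior, likelihood, observed):
    """posterior(h) is proportional to prior(h) * P(observed | h)."""
    posterior = {h: p * likelihood[h][observed] for h, p in prior.items()}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

prior = {"static": 0.5, "moving": 0.5}        # two crude models of a scene
likelihood = {
    "static": {"same_place": 0.9, "new_place": 0.1},
    "moving": {"same_place": 0.2, "new_place": 0.8},
}
belief = bayes_update(prior, likelihood, "new_place")
# the expectation is a distribution over what comes next, not one guess:
expectation = {o: sum(belief[h] * likelihood[h][o] for h in belief)
               for o in ("same_place", "new_place")}
print(expectation)  # roughly {'same_place': 0.28, 'new_place': 0.72}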

        I also agree that it's good to utilize previous knowledge. However, I 
think existing AI research has tackled this over and over; learning that 
knowledge is the bigger problem.

        --Abram






    -- 
    Abram Demski
    http://lo-tho.blogspot.com/
    http://groups.google.com/group/one-logic




