On Thu, Jan 8, 2009 at 10:41 AM, Ed Porter <ewpor...@msn.com> wrote:
> ====Ed Porter====
>
> This is certainly not true of a Novamente-type system, at least as I
> conceive of it being built on the type of massively parallel, highly
> interconnected hardware that will be available to AI within 3-7 years.  Such
> a system would be hierarchical in both the compositional and
> generalizational dimensions, and the computation would be taking place by
> importance-weighted probabilistic spreading activation, constraint
> relaxation, and k-winner-take-all competition across multiple layers of
> these hierarchies, so the decision making would not "funnel all reasoning
> through a single narrowly focused process" any more than human thought
> processes do.
>
>
> If a decision is to be made, it makes computational sense to have some
> selection process that focuses attention on a selected one of multiple
> possible candidate actions or thoughts.  If that is the type of
> "funneling" that you object to, you are largely objecting to decision
> making itself.

I have been busy and have just started reading the remarks on this
thread. I want to reply to Ed's comment since his remarks seemed to be
focused on what I said.  (And I was able to understand what he was
talking about!)

Parallel methods do not in and of themselves constitute what I call
structural reasoning.
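
To be concrete about what I mean by "parallel methods," here is a toy
sketch in Python (my own illustration, not anything from an actual
Novamente design) of the importance-weighted spreading activation and
k-winner-take-all competition Ed describes:

import heapq

def spread_activation(graph, activation, importance, k, decay=0.5):
    # graph: node -> list of (neighbor, link_weight) pairs
    # activation: node -> current activation level
    # importance: node -> importance weight (defaults to 1.0)
    new_activation = {}
    for node, act in activation.items():
        for neighbor, weight in graph.get(node, []):
            # importance-weighted activation sent across each link
            sent = act * weight * importance.get(neighbor, 1.0) * decay
            new_activation[neighbor] = new_activation.get(neighbor, 0.0) + sent
    # k-winner-take-all: only the k most active nodes survive the step
    return dict(heapq.nlargest(k, new_activation.items(), key=lambda kv: kv[1]))

# toy usage: a four-node graph, keep the 2 strongest
graph = {"a": [("b", 0.9), ("c", 0.4)], "b": [("d", 0.8)]}
act = {"a": 1.0, "b": 0.5}
print(spread_activation(graph, act, {"d": 2.0}, k=2))  # {'b': 0.45, 'd': 0.4}

However many layers you stack this across, the selection at each step
is still a single scalar competition; nothing in it represents why a
node won.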

I object to the funneling and flat methods of reasoning themselves.

Although I do not have any new alternatives to add to logic, fuzzy
logic, probability, genetic algorithms, and the various network
decision processes, my objection is directed at the narrow focus on
the fundamentals of those decision-making processes, and at the
creative (but somewhat dubious) steps taken to force the data to
conform to the inadequacies of (what I called) flat decision
processes.

For instance, when it is discovered that probabilistic reasoning isn't
quite good enough for advanced NLP, many hopefuls will rediscover the
creative 'solution' of using orthogonal multidimensional 'measures' of
semantic distance.  Instead of following their intuition and coming up
with ways to make the reasoning seem more natural, they first turn
toward a more fanciful method by which they try to force the corpus of
natural language to conform to their previous decision to use a
simple metric.
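
(By a "simple metric" I mean something like the following toy
cosine-distance sketch in Python, with made-up co-occurrence counts,
purely for illustration:)

import math

def cosine_distance(u, v):
    # u, v: dicts mapping context words to co-occurrence counts
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 1.0  # treat empty vectors as maximally distant
    return 1.0 - dot / (norm_u * norm_v)

# hypothetical counts, for illustration only
dog = {"bark": 4, "pet": 3, "fur": 2}
cat = {"meow": 5, "pet": 3, "fur": 2}
print(cosine_distance(dog, cat))  # about 0.61: partial overlap

Whatever number comes out, the whole relationship between the two
concepts has been collapsed into a single distance, which is exactly
the flattening I am objecting to.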

My recommendation would be to first think about how natural reasoning
might be better structured to solve those problems before you start
distorting the data.

For example, reasons are often used in natural reasoning. A reason
can be good or bad.  A reason can provide causal information about the
reasoning, but even a good reason may only shed light on information
incidental to it. The value of a reason can be relative to both the
reasoning and the nature of the supplied reason itself.  My point here
is that the relation of a reason to the reasoning is significant
(especially when they work), although it can be very complicated.  But
even though the use of a reason is not simple, notice how natural and
familiar it seems.  Example: "I do this because I want to!"  That is
not a good reason to explain why I am doing something, unless you are
(for instance) curious about the emotional issues behind my actions.
Another example: "I advocate this theory because it seems natural!"
That is a much better reason for the advocacy.  It tells you something
about what is motivating me to make the advocacy, but it also tells
you something about the theory being advocated.
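
To sketch what treating reasons as first-class objects might look like
(my own toy illustration in Python; the fields and weights are made
up), a conclusion could carry its reasons so that later reasoning can
inspect and weigh them:

from dataclasses import dataclass, field

@dataclass
class Reason:
    text: str       # the stated reason
    kind: str       # e.g. "causal", "motivational", "incidental"
    quality: float  # how good a reason it is, 0..1

@dataclass
class Conclusion:
    claim: str
    reasons: list = field(default_factory=list)

    def support(self):
        # a reason's value depends on both its own quality and its
        # kind relative to the reasoning at hand (weights are made up)
        weights = {"causal": 1.0, "motivational": 0.3, "incidental": 0.1}
        return sum(r.quality * weights.get(r.kind, 0.5) for r in self.reasons)

c = Conclusion("advocate this theory")
c.reasons.append(Reason("it seems natural", "causal", 0.8))
c.reasons.append(Reason("I want to", "motivational", 0.4))
print(c.support())  # 0.8*1.0 + 0.4*0.3 = 0.92

The point is not the particular numbers but that the reasons remain in
the structure, available to further reasoning, instead of being boiled
away into a single score.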

There are other kinds of structures to reasoning that can be
considered as well.  This was only one.

I realized during the past few days that most reasoning in a
contemporary AGI program would be ongoing, so yes, the reasoning
would be more structured than I originally thought.  (I wouldn't have
written my original message at all except that I was a little more off
than usual that night for some reason.)  However, even though ongoing
reasoning does add some complexity to the process of reasoning, the
fact that structural reasoning itself is not being discussed means
that it is being downplayed and even ignored.  So you have the curious
situation where the less natural metric of semantic distance is being
enthusiastically offered while a more complete examination of the
potential of using natural reasons in reasoning is almost totally
ignored.

So while I believe that modifications and extensions of logic,
categorical systems, probability, and network decision processes will
be used to eventually create more powerful AGI programs, I don't think
the contemporary efforts to produce such advanced AGI will be
successful without the conscious consideration and use of structural
reasoning.

Jim Bromer

