Well, I am still sceptical of the theory for one basic reason.  The
problem, as I see it, is that the complexity of the general knowledge
that is (probably) required for a basic AGI program is too great for any
of these methods to work with anything other than superficial or, at
best, simple examples.  So if the day came when one method worked, then a
great many methods would probably work, because it would mean that someone
had figured out how to contain combinatorial complexity or had
developed hardware sufficient to attain minimal AI.

Weighted reasoning was at first believed to be a solution to combinatorial
complexity.  Why didn't it work?  There were a few problems.  One was that
not everything can be expressed in terms of a range of values.
Second, by the very mechanism that lets it simplify complex relationships,
a single weighting ends up representing different kinds of things, and
those different things get blended together and produce subpar results.
Finally, these systems have no way to integrate different kinds of
relations wisely.
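To make that second problem concrete, here is a minimal toy sketch of my
own (not anyone's actual system; the relation labels and numbers are made
up) showing how a single scalar weight blends qualitatively different
relations into one number:

    # Toy illustration: a single scalar weight "mushes" together different
    # kinds of relations between two concepts.
    evidence = [
        ("causes",        0.9),   # strong causal link
        ("co-occurs",     0.9),   # frequent co-occurrence, possibly spurious
        ("is-similar-to", 0.2),   # weak taxonomic similarity
    ]

    # Classic weighted reasoning: collapse everything into one confidence.
    combined = sum(w for _, w in evidence) / len(evidence)
    print(f"combined weight: {combined:.2f}")   # 0.67 -- but 0.67 of *what*?

    # Once the relation kinds are averaged away, a downstream rule cannot
    # tell "A reliably causes B" apart from "A merely shows up near B",
    # which is exactly the blending problem described above.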

Of course, since weighted reasoning is usually used with correlations, one
might imagine that correlation points could be developed to represent when
these complicated relations can be fused, when they should be kept
divided, and how they can be structured and integrated.  This would
require some trial-and-error methods to learn how to apply these
techniques to real-world modelling, but few people actually talk about
stuff like this; a rough sketch of the idea follows below.  And there is
no reason to think that this sort of method can produce intelligence
without first transcending the combinatorial complexity problem.
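As a rough sketch of what such "correlation points" might look like, one
could keep relations typed and maintain a small table, indexed by pairs of
relation kinds, that records whether fusing them has historically helped
or hurt, adjusted by trial and error from feedback.  The names and the
update rule below are my own assumptions, just to show the shape of the
idea:

    from collections import defaultdict

    # A "correlation point": for each pair of relation kinds, track whether
    # fusing them into one judgement has tended to help (positive score)
    # or hurt (negative score).  Purely hypothetical bookkeeping.
    fusion_score = defaultdict(float)

    def should_fuse(kind_a, kind_b):
        """Fuse only if past trial and error says it has paid off."""
        return fusion_score[frozenset((kind_a, kind_b))] > 0.0

    def record_outcome(kind_a, kind_b, helped, step=0.1):
        """Trial-and-error update after seeing whether fusing worked."""
        key = frozenset((kind_a, kind_b))
        fusion_score[key] += step if helped else -step

    # Example: fusing causal evidence with co-occurrence evidence keeps
    # misleading the system, so the point learns to keep them divided.
    for _ in range(5):
        record_outcome("causes", "co-occurs", helped=False)
    record_outcome("causes", "is-similar-to", helped=True)

    print(should_fuse("causes", "co-occurs"))       # False -- keep divided
    print(should_fuse("causes", "is-similar-to"))   # True  -- safe to fuse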

I have thought about taking actions that can be used to minimize the
complications of numerous unknowns, but since this strategy has to be
based on some method of avoiding the worst outcomes, the strategy cannot
be based on a simplistic way to minimize "entropy".  Taking an action when
facing multiple unknowns has to be derived from a biased method that helps
the entity avoid the worst outcomes.  This in turn implies that biasing
strategies could also be used in the hope of increasing the chances of
better outcomes, based on the projection of insights about the kinds of
situations the intelligent device thought it might be in.
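One way to read that bias concretely: instead of minimizing "entropy" or
simply maximizing the average projected outcome, score each candidate
action by a blend of its expected outcome and its worst projected outcome,
tilted toward avoiding the worst case.  The sketch below is only my own
illustration of that reading, with made-up actions and outcome estimates:

    # Hypothetical candidate actions and the outcomes the system projects
    # for them under several unknown situations it might be in.
    projected_outcomes = {
        "act_cautiously":  [2, 3, 2, 3],    # modest but never terrible
        "act_boldly":      [9, 8, -10, 7],  # great on average, awful once
    }

    def biased_score(outcomes, risk_bias=0.7):
        """Bias toward avoiding the worst outcome, not chasing the mean."""
        expected = sum(outcomes) / len(outcomes)
        worst = min(outcomes)
        return (1 - risk_bias) * expected + risk_bias * worst

    best = max(projected_outcomes,
               key=lambda a: biased_score(projected_outcomes[a]))
    print(best)   # "act_cautiously": the bias steers away from the worst case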

Jim Bromer


