I said:
I have thought about taking actions that can be used to minimize the
complications of numerous unknowns, but since this strategy has to be based
on some method of avoiding the worst outcomes, that means that the strategy
cannot be based on a simplistic way to minimize "entropy".
--------------

A probability method is a form of narrow AI. It is based on the presumption
that the probability of whatever is being measured will be both obvious and
simple.  Neither of these conditions is an actual reality for AGI
developers.  So simple binary examples, where the state of the result can
be easily observed, are fine for an introductory explanation, but they are
not adequate as a solution to the real problem.
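
For concreteness, the kind of simple binary example I mean is something
like the following sketch (purely illustrative, not anyone's actual
system): estimating a Bernoulli probability from outcomes that are
directly observable as 0 or 1.

```python
# Illustrative only: a Bernoulli estimate works when the outcome is a
# directly observable binary state -- exactly the "introductory" case
# that does not scale to the real AGI problem.

def estimate_probability(outcomes):
    """Estimate P(success) from a list of 0/1 observations,
    with Laplace smoothing so empty or one-sided data stays sane."""
    successes = sum(outcomes)
    return (successes + 1) / (len(outcomes) + 2)

# A coin observed to land heads 7 times in 10 flips:
p = estimate_probability([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
```

The method only works because the "state of the result" is trivially
observable; nothing in it generalizes to states that are neither obvious
nor simple.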

AGI and AI programs do not attain the level of child-like intelligence
because many fundamental stages of learning (or of situational analysis)
are too complicated for a contemporary computer program. So while we can
imagine these different methods attaining human-like intelligence, it is
not feasible at this time.  Another variant, like free-action agency, is
unlikely to break through that barrier.

When combined with a simplistic theory, like the Markov process, Sergio's
theory that the physical stratum is the place where the minimization of
information entropy takes place is similar to those theories which posit
that all reasoning takes place through the syntax of expressions.  These
kinds of theories have proven inadequate to model even the necessary
types of subject matter.  It is clearly the semantics (or meaning) of
expressions which is of interest to the AGI programmer or AGI dreamer.  So
while we can say that the mind must be developing insight within the
'physics' of the brain, we must still recognize that it must be doing so
on the basis of the meaning of the objects that the information refers
to.  The attempt to declare that all processes of mind must be reduced to
the terms of purely physical processes is inadequate because it is
presumptuous (people do not know exactly how the brain works) and
blatantly false (a computer program operates on the basis of its
electronic components, but at the same time it is programmable, so it can
be used to do things that we want it to do - within limits.  The computer
is not just operating on the basis of the physics of its components but
also because people have decided to use it in certain ways.  This duality
of causation may be understood as the result of other kinds of physical
reactions, but the theory of everything does not actually refer to
everything; it is only a frail conjecture about a conjectured prototype.
Without understanding everything we are left with dualities and
multiplicities).
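
To put the Markov point in concrete terms: a word-level Markov chain will
reproduce plausible local word order while "understanding" nothing at all.
A minimal sketch (the toy corpus and names are mine, for illustration
only):

```python
import random

# Sketch: a word-level Markov chain.  It captures local syntax (which
# word tends to follow which) but has no access to what any word means.

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length, rng):
    """Walk the chain, sampling a successor at each step."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return out

corpus = "the mind models the world and the world shapes the mind".split()
chain = build_chain(corpus)
text = generate(chain, "the", 8, random.Random(0))
```

Every transition in the output is syntactically licensed by the corpus,
yet the process never touches the semantics of "mind" or "world" - which
is exactly why this class of theory cannot be the whole story.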

It seems to me that the problem of multiplicities is inherently
complicated, and these complications will doom many over-simplifications
of non-essential reductionism.
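
The distinction I am drawing - minimizing an information measure versus
biasing against the worst outcomes - can be made concrete with a toy
choice rule.  The action names and payoff numbers below are purely
illustrative, not a real model:

```python
import math

# Each action leads to (probability, payoff) outcomes.
actions = {
    "safe":   [(0.5, 2.0), (0.5, 3.0)],     # modest, never terrible
    "gamble": [(0.9, 10.0), (0.1, -100.0)], # low entropy, one awful outcome
}

def entropy(dist):
    """Shannon entropy (bits) of an outcome distribution."""
    return -sum(p * math.log2(p) for p, _ in dist if p > 0)

def min_entropy_action(acts):
    """Pick the action whose outcome distribution has least entropy."""
    return min(acts, key=lambda a: entropy(acts[a]))

def maximin_action(acts):
    """Bias toward avoiding the worst outcome: maximize the minimum payoff."""
    return max(acts, key=lambda a: min(v for _, v in acts[a]))
```

Here simplistic entropy minimization selects the gamble (its outcome is
more predictable), while the worst-outcome bias selects the safe action -
two strategies that a naive "minimize entropy" rule cannot distinguish.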

Jim Bromer

On Wed, Aug 15, 2012 at 9:55 PM, Jim Bromer <[email protected]> wrote:

> Well, I am still sceptical of the theory for one basic reason.  The
> problem, as I see it, is that the complexity of the level of general
> knowledge that is (probably) required for a basic AGI program is too great
> for any of these methods to work with anything other than superficial or at
> best simple examples.  So if the day came when one method worked then a
> great many methods would probably work because it would mean that someone
> had figured out how to contain combinatorial complexity or had
> developed hardware sufficient to attain minimal AI.
>
> Weighted reasoning was at first believed to be a solution to combinatorial
> complexity.  Why didn't it work?  There were a few problems.  One was that
> not everything can be expressed in terms of a range of values.
> Secondly, a weighting, by the very means through which it is able to
> simplify complex relationships, ends up representing different kinds of
> things, and these different things get mushed together and produce subpar
> results.  Finally, these systems have no way to integrate different kinds
> of relations wisely.
>
> Of course, since weighted reasoning is usually used with correlations, one
> might imagine that correlation points could be developed to represent when
> these complicated relations can be fused and when they should be divided
> and how they can be structured and integrated.  This would require some
> trial and error methods to learn how to apply these techniques to real
> world modelling, but few people actually talk about stuff like this.  And
> there is no reason to think that this sort of method can produce
> intelligence without first transcending the combinatorial complexity
> problem.
>
> I have thought about taking actions that can be used to minimize the
> complications of numerous unknowns, but since this strategy has to be based
> on some method of avoiding the worst outcomes, that means that the strategy
> cannot be based on a simplistic way to minimize "entropy".  Taking an
> action when facing multiple unknowns has to be derived from a biased method
> that helps the entity avoid the worst outcomes.  This in turn implies that
> biasing strategies could also be used in the hope of increasing the chances
> of better outcomes, based on the projection of insights about the kinds of
> situations that the intelligent device thought it might be in.
>
> Jim Bromer
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393