Jim,

 

Since you are using your brain to do all that reasoning, I must conclude
that something in your brain allows you to do that. My attention,
therefore, immediately switches away from the reasoning itself, and
towards "that" which is in your brain and allows you to reason that way.


 

This statement is in no way a judgement about your ability to reason, or
the correctness of your conclusions. It only says that the brain comes
before reasoning, so I want to know how the brain came to be before I
start using it for reasoning. If I or someone else succeeded in
explaining how the brain came to be, then, and only then, would I agree
to consider reasoning. My next step would be to reason about AGI, and I
would conclude that, knowing how the function of the brain came to be,
AGI might also work if we managed to simulate the same function on
computers.

 

Now, with all due respect to other beliefs, my belief is that the brain
came to be by evolution, through survival of the fittest. I can tone this
down even further: I want to assume that the brain came to be because
brained creatures survived better than brainless creatures, and to
examine the consequences of that assumption. I do not need to deny any
other assumption.


 

Of course, one could argue the reverse: brained creatures survived better
because they could reason. True, but they needed a brain before they
could reason, and they had to test that brain against countless
uncertainties encountered in nature, over a long period of time, and
still survive. So the brain came first, and reasoning for survival came
as a consequence.

 

I'll stop here, because I just did two things that I am afraid you may
not like: I refused to even consider your reasoning, and I proposed a new
approach to AGI that turns 60 years of research on its head. I want to
hear your reaction, and most of all, I want to know whether you are
willing to allow me to proceed and examine the consequences of my
assumption. The first consequence is that I would be forced to explain
reasoning and intelligence from purely natural causes.

 

As your two presentations were limited to describing the difficulties and
failures of the traditional approach to AGI, which consists of reasoning
and writing computer programs, I think you should allow me to proceed.

 

Sergio

 

From: Jim Bromer [mailto:[email protected]] 
Sent: Wednesday, August 15, 2012 8:56 PM
To: AGI
Subject: Re: [agi] Uncertainty, causality, entropy, self-organization, and
Schroedinger's cat.

 

Well, I am still sceptical of the theory for one basic reason. The
problem, as I see it, is that the general knowledge (probably) required
for a basic AGI program is too complex for any of these methods to work
with anything other than superficial or, at best, simple examples. So if
the day came when one method worked, then a great many methods would
probably work, because it would mean that someone had figured out how to
contain combinatorial complexity or had developed hardware sufficient to
attain minimal AI.
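
Here is a back-of-the-envelope sketch in Python of what I mean by
combinatorial complexity; the knowledge-base size and chain length are
made-up numbers, purely for illustration:

n = 10_000   # assumed number of concepts in a modest knowledge base
k = 4        # length of an inference chain

pairwise = n * (n - 1) // 2   # candidate binary relations
chains = n ** k               # candidate inference chains of length k

print(f"{pairwise:,} candidate relations")      # tens of millions
print(f"{chains:,} candidate {k}-step chains")  # on the order of 10^16

Even a modest vocabulary produces a search space that no superficial
method can exhaust.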

 

Weighted reasoning was at first believed to be a solution to
combinatorial complexity. Why didn't it work? There were a few problems.
One was that not everything can be expressed in terms of a range of
values. Secondly, by the very method through which it is able to simplify
complex relationships, a weighting represents different kinds of things,
and these different things get mushed together and produce subpar
results. Finally, these systems have no way to integrate different kinds
of relations wisely.
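
For instance, here is a toy Python sketch (the relations and weights are
made up) of how a single scalar weight mushes different kinds of things
together:

# two relations of very different kinds, reduced to the same scalar
facts = {
    ("smoke", "fire"): 0.7,  # a causal relation
    ("smoke", "bars"): 0.7,  # mere statistical co-occurrence
}

def chain(w1, w2):
    # a typical weighted-reasoning rule: multiply weights along a chain
    return w1 * w2

# both chains get identical scores, so the system cannot treat the
# causal link any differently from the coincidental one
print(chain(facts[("smoke", "fire")], 0.9))
print(chain(facts[("smoke", "bars")], 0.9))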

 

Of course, since weighted reasoning is usually used with correlations,
one might imagine that correlation points could be developed to represent
when these complicated relations can be fused, when they should be
divided, and how they can be structured and integrated. This would
require some trial-and-error methods to learn how to apply these
techniques to real-world modelling, but few people actually talk about
stuff like this. And there is no reason to think that this sort of method
can produce intelligence without first transcending the combinatorial
complexity problem.
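
One way to picture it, as a rough Python sketch (the typed weights and
the fusion rule are my own invention, just to make the idea concrete):

from dataclasses import dataclass

@dataclass
class Weight:
    kind: str     # e.g. "causal" or "co-occurrence"
    value: float

def fuse(a, b):
    # fuse only relations of the same kind; otherwise keep them divided,
    # so different kinds of evidence are not mushed together
    if a.kind == b.kind:
        return Weight(a.kind, a.value * b.value)
    return (a, b)  # kept structured but separate

print(fuse(Weight("causal", 0.7), Weight("causal", 0.9)))
print(fuse(Weight("causal", 0.7), Weight("co-occurrence", 0.9)))

Deciding where those kind boundaries lie for real-world modelling is
exactly the trial-and-error part.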

 

I have thought about taking actions that can be used to minimize the
complications of numerous unknowns, but since this strategy has to be
based on some method of avoiding the worst outcomes, the strategy cannot
be based on a simplistic way to minimize "entropy". Taking an action when
facing multiple unknowns has to be derived from a biased method that
helps the entity avoid the worst outcomes. This in turn implies that
biasing strategies could also be used in the hope of increasing the
chances of better outcomes, based on the projection of insights about the
kinds of situations the intelligent device thought it might be in.
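
A toy Python sketch of such a biased choice (the actions, utilities, and
bias rule are made up for illustration):

# each action maps to possible outcome utilities under several unknowns
outcomes = {
    "act_a": [5, 4, 3],     # modest but safe
    "act_b": [10, 6, -50],  # high upside, catastrophic downside
}

def biased_choice(outcomes, risk_bias=0.8):
    # score = risk_bias * worst case + (1 - risk_bias) * best case;
    # a high risk_bias biases the agent toward avoiding the worst outcome
    def score(utilities):
        return risk_bias * min(utilities) + (1 - risk_bias) * max(utilities)
    return max(outcomes, key=lambda a: score(outcomes[a]))

print(biased_choice(outcomes))  # picks "act_a", avoiding act_b's downside

The same scoring function could be re-biased toward upside when the
device projects that it is in a safer kind of situation.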

 

Jim Bromer 



 



