Well, I probably do not understand exactly what you meant in your previous 
statements. But I do not believe that a method of study that examines only 
computational structures, no matter how objective, is going to succeed in 
producing higher general intelligence without also comparing them to, and 
modeling them on, human and animal intelligence.  That seems obvious to me. 

From my point of view, the problem with pretentiously reaching for objectivity 
before a prototype of the objective has even been found is all too clear 
from the kinds of discussions we have in these groups.  Rather than discussing 
what I think is the more central problem of conceptual complexity, the 
discussions in these groups are always orbiting around arguments about ANNs vs. 
GOFAI, Bayesian reasoning vs. GAs, free will vs. determinism, whether 
or not sensors and robotics are necessary to produce higher understanding, 
the definition of consciousness, or whether or not the other guys get it.  
These arguments were once interesting to me, but the discussion that 
I feel is key to the contemporary advancement of AGI is a discussion that I 
have yet to participate in.

There is no question in my mind that intelligence produces some kind of 
compression and that the more effective forms of generalization are, for the 
most part, forms of compression.  But to declare compression (just to take that 
one example) as the undisputed measure of intelligence is just nonsense.
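(As an aside from this editor, not something anyone in the thread wrote: the "generalization as compression" idea can be made concrete with the normalized compression distance, which uses an off-the-shelf compressor such as Python's zlib as a crude stand-in for an ideal one. Things that share structure compress better together than apart. A toy sketch:)

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes; zlib stands in for an ideal compressor."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 = similar, near 1 = unrelated."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a  = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"the quick brown fox jumps over the lazy cat " * 20
c_ = bytes(range(256)) * 4  # structurally unrelated byte pattern

print(ncd(a, b_))  # related strings compress well together: small distance
print(ncd(a, c_))  # unrelated data: larger distance
```

Even this crude measure captures some shared structure, which is roughly the intuition behind compression-based accounts of generalization; whether it amounts to a measure of intelligence is exactly what is disputed above.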

Intelligence is more than compression; it is more than Bayesian reasoning; it 
is more than logic; it is more than sensor-effector interactions; it is more 
than networks; it is more than genetic algorithms.

It is perfectly reasonable for an individual to seize on some objective method 
that looks useful to him in his work.  But to declare that an eccentric 
individualistic vision of the problem is the only truly objective method that 
should be used in all AI research is a case of putting the cart before the 
horse.

Jim Bromer


----- Original Message ----
From: Tudor Boloni <[EMAIL PROTECTED]>
To: [email protected]
Sent: Saturday, May 31, 2008 11:54:19 AM
Subject: Re: [agi] Compression PLUS a fitness function "motoring" for 
hypothesized compressibility is intelligence?

Jim, these are good points, and they seem to be saying that even with the 
perfect metric for intelligence discovered (let's pretend), and a maximally 
intelligent program built (keep pretending), without a value system in place 
that selects among future possible actions, or internal tests/experiments to 
perform whose outcomes are JUDGED as more or less favorable, we don't have an 
AGI of human proportions.  Or are you saying that permutations and compression 
alone would result in a huge, optimally organized database that is not even 
intelligent?  What if any question asked of this program returned all possible 
answers (including the Japanese MU: rephrase the question, since it assumes 
untrue concepts), and the user, based on his own value system, ACTED according 
to the answers of his choice?  That to me seems even more useful than some 
bigoted program that really acts like one of us.  Maybe we should define what 
AGI goals we are actually working toward.

tudor


On Sat, May 31, 2008 at 2:36 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:

The attempt to create an objective measure or process for intelligence seems 
worthwhile, but the problem here is that in attempting to eliminate 
"actions and beliefs" from the modeling of intelligence one is in danger of 
repeating the serious error of over-simplification that was made, for example, 
when the behaviorists tried to eliminate "ideas and reasoning" from the study 
of psychology, or when the proponents of logic-based artificial 
intelligence tried to eliminate other methods of reasoning from the scientific 
repertoire on the grounds that logic was the only truly scientific form of 
reasoning available.

The use of a metaphor from the history of science is legitimate.  However, when 
the metaphor purports to support an overly broad conclusion, especially one 
narrowly focused on a system (the mathematical physics of celestial orbits) 
that has yet to show its efficacy in the field of general artificial 
intelligence, and when the exclusion of other methods of reasoning is presented 
as if it had emerged from some kind of triumph, you really have to think before 
you jump.

I often argue against things like the simplistic use of Bayesian reasoning.  
However, when I do make an argument like that, I am not arguing against the 
value of Bayesian reasoning, but against the narrow simplistic belief that 
Bayesian reasoning is itself sufficient to explain human level general 
intelligence.

Similarly, I am not against attempts to create objective measures and 
processes for intelligence, but I am definitely opposed to arguments 
that make the unsubstantiated claim that a narrow, simplistic objective method 
is going to be sufficient, when the evidence supporting that conclusion is 
seriously lacking and there are numerous good reasons for including other means 
of reasoning in the design of an AI program.
 
Jim Bromer


----- Original Message ----
From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
To: [email protected]
Sent: Friday, May 30, 2008 11:12:54 AM
Subject: Re: [agi] Compression PLUS a fitness function "motoring" for 
hypothesized compressibility is intelligence?

I don't really have any argument with this, except possibly quibbles about the 
nuances of the difference between "empirical" and "empiricism" -- and I don't 
really care about those!

On Friday 30 May 2008 05:04:58 am, Tudor Boloni wrote:
> The key point was lost; here is a clearer way of saying it.
> 
> Kepler's experience (his empirical work and experimentation with all his
> equipment) IS NOT what helped him DISCOVER properties of gravity (equal
> times for equal areas) (we can agree no one INVENTED it, though Newton
> generalized Kepler's insights). He had an INSIGHT separate from his possible
> SENSORY past or SENSORY future.  In the words of Einstein, in a speech given
> on the 300th anniversary of Kepler's death:
> 
> "One can never see where a planet really is at any given moment, but only in
> what direction it can be seen just then from the Earth, which is itself
> moving in an unknown manner around the Sun. The difficulties thus seemed
> practically unsurmountable [by empirical means].
> Kepler had to discover a way of bringing order into this chaos."  The
> breakthrough was Kepler's Universal Mathematical Physics, as he defined it,
> and NOT physical empirical cosmology (which he specifically REJECTS in his
> attack on Aristotle's SENSORY-based beliefs).
> 
> So what created this peak of human INSIGHT, if compression of experienced
> patterns was not enough?  He did "trade one theory for another," but we call
> that thinking, and he didn't use empiricism to do it; he hypothesized new
> patterns and compressed them until they could not be disproved
> empirically... (this is a major difference from how modern science is
> executed, where most researchers actually give way, way too much worth to
> new theories arising from their experimental results, instead of simply
> removing theories that are negated by the same experiments and leaving their
> belief spaces open)
> 
> By bringing an agent's "actions" and "beliefs" about future optimized
> experiences into the discussion of intelligence, I believe you are limiting
> the agent to human stupidity and going down the same weak path as nature.
> True intelligence would be infinitely more humble in what it would declare
> as knowledge; it would only really know what it doesn't know.  Intelligence
> gradients would be products of compression-algorithm efficiency and of the
> workspace resources available for the permutations of past concept patterns.
> 
> To paraphrase Nietzsche, pointing to a picture of yourself and exclaiming
> "ecce homo" says more about you than about man; the same goes for
> intelligence.  Human intelligence is limited by our mind-blindness, resulting
> from empiricism and reliance on the senses; AGIs don't need to be that dumb.
> 
> t
> 
> 
> 

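(Editor's aside, not part of the original exchange: the "equal times for equal areas" law quoted above can at least be checked numerically. A minimal sketch, assuming a point sun fixed at the origin and arbitrary units; a leapfrog integrator conserves the areal velocity |r x v|/2 essentially exactly, so the swept area per unit time stays constant along the orbit:)

```python
# Kepler's second law: the line from sun to planet sweeps equal areas
# in equal times, i.e. the areal velocity (x*vy - y*vx)/2 is constant.
GM = 1.0           # gravitational parameter (arbitrary units)
dt = 1e-3
x, y = 1.0, 0.0    # start at perihelion of an elliptical orbit
vx, vy = 0.0, 1.2  # speed above circular speed (1.0): eccentric orbit

def accel(x, y):
    """Inverse-square attraction toward the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

areal = []
ax, ay = accel(x, y)
for step in range(20000):
    # leapfrog (velocity Verlet) step: half-kick, drift, half-kick
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x  += dt * vx;       y  += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    if step % 2000 == 0:
        areal.append((x * vy - y * vx) / 2.0)  # sample the areal velocity

spread = max(areal) - min(areal)
print(areal[0], spread)  # spread stays at roundoff level: equal areas
```

The kicks are parallel to the radius vector and the drift leaves r x v unchanged, so the areal velocity is conserved by construction, which is why Kepler's empirically "unsurmountable" observation falls out of the dynamics.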

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com


