Ben,
 
I am coming to understand that it is no small matter, this notion that a computer program can achieve anything that should properly be called "intelligence".  There is more than a philosophical difference here between you and me. 
 
It is also a question of what one attempts to do, and of what one does not spend (other people's) energy on.
 
You see, for me complexity is defined in a way that must produce a halting condition for a computer program, simply because complexity (as Kugler defined it from his study of Rosen) is exactly the condition under which "a = a" cannot be determined.
 
By definition, then, a computer program cannot see and cannot compute anything that has any degree of complexity to it, despite what the academic computer science professors say.  Computer science has no grounding in observational science, except in the limited sense of never looking at the natural world (at all, ever).
 
It is turtles all the way down.  Period.  And there are only so many of these turtles before one gets to the final abstraction of being "on" or "off".  Stratified complexity will make this clear, and finding and holding to this clarity is what the Manhattan Project must be about.  This is not a small matter.  It is a matter of putting computer science in its place after several decades of complete domination over all things.
 
When one looks closely, one sees that the issue is not computer science but "abstraction".  Computer science works because abstractions produce the programming languages and the hard reification of electromagnetic waves into on and off states.  This is engineering to produce an engineering tool.  Each "on" is "exactly" the same as any other "on". 
 
But in natural reality we have something called similarity; we never have "exactness".  No two "things" are ever exactly the same.  Rosen did not state it this way exactly, or at least I have not found a way to quote him; but Peter Kugler's work on Rosen's literature led me to see that the category error (mistaking a formal system for a natural system) was the CAUSE of the confusion around AI.  I then traced this cause further into what I have come to call the religion of scientific reductionism, and into an IT industry whose production of snake oil is rivaled only by the "medicine men" who traveled from town to town in the early West selling tonics.
 
The very Nation is in some trouble over the failure of IT to produce any sort of real "information exchanges".  Some of this failure is due to the wrong-headedness of the AI camp and of the reductionists at NSF, NIST, and DARPA, who are focused on professional careers and not on the development of true science.
 
I do agree with you about so many things, and it is a sheer joy to know you.
 
When you say:
 
"
I see the Manhattan Project for KM as having five main aspects:
 
 
1* Actually building a huge integrative database out of existing structured databases
2* Creating tools for creating structured data out of unstructured data (e.g. text)
3* Creating tools for browsing the integrative database
4* Creating tools to encourage humans to collaboratively and individually insert new knowledge into the database
5* Creating computational tools to create new data out of old, and put it in the database
 
 
AI plays a role in 3 and 5.
 
But for starters, it may be that 1, 3 and 4 are our greatest concerns.  They're "easy" technically yet difficult to execute politically & socially...
 
In this picture, Paul's and my disagreement on the relation of intelligence & computation is really a small matter.  It has to do with the amount of power that can be achieved in 5, via computational intelligence alone without significant human participation.
 
"
I would argue that the most important aspect is the design of a cognitive-neuroscience-grounded science of Human Information Interaction (or, as it is coming to be called, HII).  It is clear that the man/machine interface has much to gain from improved data aggregation and convolution processes (such as those of Novamente, CCM/LSI, Primentia, and other new methods), as well as from the improved cognitive skills that humans might develop based on what the computer programs can actually do.
 
So I recognize that there is a great need for the Novamente engine and for work like mine on Latent Semantic Indexing and generalized LSI (the third sketch after the list below shows the basic LSI computation).  But I would change each of the five aspects to read:
 
 
1* Actually building a huge integrative database out of existing structured databases
--> Develop schema-independent means for communicating and storing both semi-structured and structured data (the first sketch after this list suggests what that could look like). 
 
2* Creating tools for creating structured data out of unstructured data (e.g. text)
--> This is the Implicit-to-Explicit Ontology conversion process that I have recently called "Differential Ontology", but this process must have human decisions within EACH phase of a loop: Implicit -> Explicit, then Explicit -> Implicit (see the second sketch after this list).  The reasons are many, but avoiding false sense-making is the most important.  Computer programs do not exist in the world and cannot achieve a pragmatic axis... and thus the machine ontology can become anything, including something that has no relationship to any part of the natural world.  Without a topic-map-type reification process (one that must involve humans), the ontology has no way of making the fine adjustments that are so clear to a natural intelligence.
 
3* Creating tools for browsing the integrative database
--> I would say that this issue goes away if the other issues are addressed properly.  There is a "by-pass" that makes the notion of an "integrative database" collapse to just "database".
 
4* Creating tools to encourage humans to collaboratively and individually insert new knowledge into the database
--> People already collaborate in many different ways; it is not the computer that is needed to enhance this natural activity, but rather it is the computer, under the current use patterns, that inhibits this collaboration.
 
5* Creating computational tools to create new data out of old, and put it in the database
--> Why store useless things?  One needs to create educational processes that provide humans with the ability to understand, better and more deeply, the nature of life....
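
Three small sketches, to make parts of the above concrete.  First, for aspect 1: one schema-independent way to hold both structured rows and semi-structured fragments is to reduce everything to subject-predicate-object triples, so that no single schema is imposed on the store.  This is only an illustration in Python, with invented names and toy data, not a design:

# Minimal sketch of schema-independent storage: every fact, whether it
# came from a structured database row or a semi-structured document, is
# reduced to a (subject, predicate, object) triple.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        # None acts as a wildcard, so no schema is assumed at query time.
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
store.add("patient:42", "name", "J. Smith")           # from a structured row
store.add("patient:42", "diagnosis", "ICD9:401.1")    # from a structured row
store.add("doc:17", "mentions", "patient:42")         # from semi-structured text
store.add("doc:17", "fragment", "blood pressure remains elevated")
print(store.query(subject="patient:42"))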
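
Second, for aspect 2: a toy sketch of the Differential Ontology loop with a human decision inside EACH phase.  The functions extract_candidates and render_implicitly are hypothetical stand-ins for real machinery (a Novamente or LSI pipeline, say); the point is only where the human sits:

# Toy sketch of the Implicit -> Explicit -> Implicit loop with a human
# decision inside each phase.  The extraction and rendering steps below
# are crude hypothetical stand-ins for real machinery.
def extract_candidates(text):
    # Implicit -> explicit stand-in: propose "concept" candidates from
    # raw text (here, simply the capitalized words).
    return sorted({w.strip(".,") for w in text.split() if w[:1].isupper()})

def human_approves(item, phase):
    answer = input(f"[{phase}] accept '{item}'? (y/n) ")
    return answer.strip().lower().startswith("y")

def render_implicitly(ontology):
    # Explicit -> implicit stand-in: re-express the explicit ontology as
    # prose a human can judge against the natural world.
    return "Current ontology claims these concepts exist: " + ", ".join(sorted(ontology))

def differential_ontology_loop(text, ontology):
    # Phase 1: implicit -> explicit, with a human decision per candidate.
    for candidate in extract_candidates(text):
        if human_approves(candidate, "implicit->explicit"):
            ontology.add(candidate)
    # Phase 2: explicit -> implicit, again checked by a human, so the
    # ontology cannot drift away from the natural world unnoticed.
    if not human_approves(render_implicitly(ontology), "explicit->implicit"):
        ontology.clear()  # reject the whole rendering; start over
    return ontology

print(differential_ontology_loop("The Patient saw Dr. Jones in Denver.", set()))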
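
Third, since I mention LSI above: for anyone unfamiliar with it, this is the standard textbook computation (not my generalized variant): build a term-document matrix, take a truncated singular value decomposition, and compare documents in the reduced space.  The use of numpy here is just a convenience of the sketch:

# Standard (textbook) Latent Semantic Indexing, sketched with numpy.
import numpy as np

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "latent semantic indexing finds hidden structure"]

# Build the term-document matrix A (terms x documents), raw counts.
terms = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(t) for d in docs] for t in terms], dtype=float)

# Truncated SVD: A ~= U_k S_k V_k^T.  Keep k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k, :]).T   # each row: one document in LSI space

def cosine(u, v):
    # Cosine similarity between two documents in the reduced space.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(doc_vectors[0], doc_vectors[1]))  # cat/dog docs: relatively similar
print(cosine(doc_vectors[0], doc_vectors[2]))  # unrelated doc: lower similarity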
 
The bottom line is that the planet has billions of individual human minds, and each of these has far greater capacity to renew information in creative ways than does the Internet (as something divorced from people).  One can act as if this were not so, but that acting does not change the reality a single bit.  (No pun intended... smile.)
 
-----Original Message-----
From: Ben Goertzel [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 06, 2002 6:13 PM
To: beadmaster
Cc: NaturesPattern
Subject: RE: localized and global ontologies

Hi all,
 
paul wrote: 
 
 
***
 Dr. Ben Goertzel's deep work on implicit ontology is important not in the development of something that cannot be (by nature) but in the development of something unexpected and new, and thus in great need of definition.  But let us not call this "intelligence", as the word already has a meaning that is violated by this notion of a computer intelligence.  Let us work on our language so that there is no unnecessary confusion.  (I say to my friend, Ben.)  I think that Don is leading the way here. 
*** 
 
 
My friend Shane Legg has coined the word "cybernance" to mean "the ability to achieve complex goals in complex environments."  This is basically what I mean by "general intelligence" ...
 
I do not agree that the word "intelligence" is violated by the notion of a computer intelligence.
 
I think that "intelligence" is a *behavioral* quantity, which may be realized by systems built of organic molecules, by digital computers, and probably by many other types of media that we 21st-century humans haven't yet conceived....
 
I see the Manhattan Project for KM as having five main aspects:
 
 
1* Actually building a huge integrative database out of existing structured databases
2* Creating tools for creating structured data out of unstructured data (e.g. text)
3* Creating tools for browsing the integrative database
4* Creating tools to encourage humans to collaboratively and individually insert new knowledge into the database
5* Creating computational tools to create new data out of old, and put it in the database
 
 
AI plays a role in 3 and 5.
 
But for starters, it may be that 1, 3 and 4 are our greatest concerns.  They're "easy" technically yet difficult to execute politically & socially...
 
In this picture, Paul's and my disagreement on the relation of intelligence & computation is really a small matter.  It has to do with the amount of power that can be achieved in 5, via computational intelligence alone without significant human participation.
 
 
-- Ben G
 
