I invite others (in the bcc) to engage in this discussion at:
 
http://groups.yahoo.com/group/NaturesPattern/
 
Laura is the host and one can read a bit about her way of looking at things at the home page given above.
 
Don Mitchell has given a precise and defensible description of the profound "social" problems caused by the confusion at the heart of current computer science - in particular, the confusion that scientific reductionism, as reflected in most computer science, creates when one tries to define a machine intelligence.
 
Don (his communication is cc'ed below) is responding to another member of this forum, Roger, who makes fun of the notion of what I and a few others have been calling "implicit ontology".  As examples of an "implicit ontology" one can point to Latent Semantic Indexing, Scatter/Gather feature and categorization technology, or attractor neural networks.  In these systems there is no "there" there; everything is distributed.
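
For the curious, here is a minimal sketch of the LSI idea - a toy term-document matrix and plain numpy, meant only to illustrate the distributed character of such systems, not any particular engine:

    import numpy as np

    # Term-document counts: rows are terms, columns are documents (toy data).
    A = np.array([[2., 0., 1.],   # "machine"
                  [1., 1., 0.],   # "intelligence"
                  [0., 2., 1.],   # "ontology"
                  [0., 1., 2.]])  # "topology"

    # Truncated SVD keeps k latent dimensions.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    docs = (np.diag(s[:k]) @ Vt[:k]).T   # each document as a point in latent space

    # Similarity is read off the latent coordinates; no single coordinate is a
    # named concept - the "meaning" is distributed across all k dimensions.
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine(docs[0], docs[1]))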
 
This notion of implicit machine ontology is a new concept; and yet one may claim that the philosophical notion of an ontology is better matched by structural holonomy and topological notions than by the artificial notion that there can be such a thing as a local ontology having the form of a crisp set of rules and logical atoms.  I will define these terms if anyone wishes; they are represented in my written work.  The area is referred to by Peircean scholars as topological algebra.  The deep work is also called quasi-axiomatic theory (Finn, Pospelov, Osipov).  These things are related also to the quantum neurodynamics of Bohm, Penrose, Hameroff, Pribram and others... { I hope that I am not thought, by anyone, to be glossing over differences that are found in these scholars' published works.  There are profound differences.  But on the issues of stratified complexity there is great commonality. }
 
Don has a natural talent for seeing and working on the differential conversions between machine implicit ontology, such as an LSI engine (or the generalized Latent Semantic Indexing methodology that he and I started to co-discover two years ago), and explicit ontologies such as those found in machine translation systems or, say, the Cyc ontology or Topic Maps (done correctly).
 
I will stand on the stage, anytime, and defend his presentation, as long as the debate is sensible, polite and scholarly - and does not involve interspersed text (which I take the privilege of never reading anymore) .. <grin>.
 
As we, the community of knowledge scientists, move toward the first part of the upcoming Manhattan Project to establish the Knowledge Sciences, we as a community must stand up against those who would re-fund and re-deliver the failed notion that computer programs have the same nature as a living system.  A computer program is an abstraction:
 
 
http://www.bcngroup.org/area3/pprueitt/kmbook/Chapter2.htm
 
 
and these abstractions are vital to the modern world.  But to sell human intelligence short is to divorce our society from that which is human.... and this action is exactly the opposite of what we now need to do.  It is a mistake on several levels.
 
One may observe that this mistake is driven by the various forms of fundamentalism, including religious and scientific reductionisms - and by economic reductionism.  No one is saying (at least not me) that great value has not come from religion and the current reductionist sciences.  But at core these social practices become "incorrect" exactly at the point that they become purely reductionist and self-centered.  Currently economic reductionism (pure capitalism) is held in check by democracy; but an artificial general intelligence, if constructed to be "superior" to human intelligence, will end this great experiment in democracy, one way or the other.  As Don points out, "it" will be superior only in an artificial way, perhaps reinforced into a type of Penrose "the emperor has no clothes" metaphor.
 
Dr. Ben Goertzel's deep work on implicit ontology is important not for the development of something that cannot be (by nature), but for the development of something unexpected and new, and thus in great need of definition.  But let us not call this "intelligence", as the word already has a meaning that is violated by this notion of a computer intelligence.  Let us work on our language so that there is no unnecessary confusion.  (I say this to my friend Ben.)  I think that Don is leading the way here.
 
I also agree that the community must develop the ability to engage and matriculate new PhDs in areas that Don is defining.
 
A new type of knowledge technology is being made available that depends in a natural way on human sensory and cognitive acuity, and overcomes this false notion that a computer program is anything other than a simple abstraction.
 
Bringing this knowledge technology forward through the cultural resistance (largely, but not entirely, caused by the AI mythology's failures and huge expense) is to be the purpose of this Manhattan-type project.
 
http://www.ontologystream.com/bcngroup/scienceMediation_files/frame.htm
 
-----Original Message-----
From: DonEMitchell [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 06, 2002 12:51 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: [NaturesPattern] Re: Now AI gives us localized ontologies.

Hi Roger,  (KMTech and NaturesPattern)

Great points about the definition and usage of the term "ontology".  I'm sure I've often been academically incorrect, as I tend to consider the term to refer to a reality - a reality of many realities.

Each reality is a reality as perceived by some agency.  Perception is based on pattern-matching from local subjective memory.

You connected well with my stratification of a community "reality", in which every member must share a global "protocol" of ontological appreciation.

You bring this out by pointing out the double meaning, in the strict sense, of the word "ontology" in the AI camp...
"We use common ontologies to describe ontological commitments for a set of agents so that they can communicate about a domain of discourse without necessarily operating on a globally shared theory."

So, in implementing a local ontology as a history of subjective experience of a larger reality, we must separate the intelligent appreciation of reality from the various points of view, or situations, within reality.

Now, humans must simplify and reduce to avoid tedium, and so continue.  This IS the definition of technology (per my theosophy on the Theory of Conservation of Metabolic Energy in Cognitive Processes).

Without repetition there can be no exchange of information, just as - by the definition of language-communication - there can be no exchange without correlations represented as predefined patterns (from iconic communication and sign language to computer code).

Human community interaction is complex, yet always adheres to human dynamicals.  Yadeeya.

To tie this together: a community of interacting agencies is necessary to appreciate the objectification of levels of organization that occur over time, since organization has an absurd dependence upon patterns repeating over time in accord with a ground (or community-accepted) "truth" (whether a mere pattern of bits or a business charter providing a social truth).

To this end, a disjoint-semiote (my new term for a localized-reactive-ontology**) must "live" at the same scale upon which it is bound.  That is, a local semiote has no perception of the passage of time until a recognizable pattern over time occurs.

So a local role-playing semiote is a self-aware history, and a respective subjective slice of the larger ontology.  The larger ontology may itself be given a semiotic instance (become a semiote) on its own scale of organization, oblivious to the details of the members until role-patterns emerge (not explained here).

So quickly one ends up chasing a tail, and a boot-strap starting-problem forever prevents true autonomy of agency from "happening" in any given group of semiotes.  The AI camp won't (or doesn't) admit that a starting problem will block autonomous awareness.  Any human tendency to "project" psychological significance upon observer-perceived sentience of the semiotic phenomenon of emergence (not explained here) of community dynamicals - which don't exist except over time - is purely a false human invention.  I claim a semiotic community is never self-aware of its "self" but only of its local ontological components.

The AI camp has enjoyed funding for years thanks to the snake-oil-lubricated gears of industry, playing with academics who can't touch bottom in a complicated industry that claims to be philosophical and contends that AI can produce a thinking machine. <harumph>

So there's my take on computer science, which I prefer to classify more properly as notational engineering - a discipline hung up on binary logic, working a scheme that ignores the formative present and is therefore blind to novelty beyond pre-patterned mechanization.

I feel that AI should be de-frocked, and that it should be known as a continued refinement of clever techniques of notational engineering that don't have a chance at producing the dream, but that have been luring the AI angels to date.  Due to the awareness-starting-problem, AI as it is now approached can only happen by accident, perhaps the way natural intelligence began.  And if accidental intelligence does occur on silicon, or on any anticipated quantum-mechanical computer, it won't be natural, but truly artificial.  Not only would it be accidental, but also useless, unless it emerges by escape into useful domains - and then it will have been started by domain influence, boot-strapped toward usefulness.

And computers are nearly universally implemented in binary technology: transistor flip-flops for memory (one binary bit each), and transistor flip-flops for bus-oriented memory recall via a topological access key - a fixed bus address.  All of so-called computer science is just different dance steps on a fixed bus topology with a fixed set of instructions.

All computer science is based on this linear address bus scenario.  That is handy since, to us humans, time seems to have an arrow, and binary can represent future and past.  Well, duh, binary works if we ignore the moment.
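
To make that scenario concrete, here is a toy model of a flat, linearly addressed memory (Python for concreteness; the class and method names are invented for illustration):

    # A fixed one-dimensional address space: every access is an integer
    # index into one flat array of fixed-width cells.
    class LinearBus:
        def __init__(self, size):
            self.cells = [0] * size

        def load(self, addr):
            return self.cells[addr]      # one address -> one cell

        def store(self, addr, value):
            self.cells[addr] = value

    bus = LinearBus(1024)
    bus.store(42, 7)
    assert bus.load(42) == 7   # every "dance step" reduces to loads and stores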

The above thinking also deconstructs the silicon transistor flip-flop to its essence: a voltage discriminator over analog equivalents of binary digits stored as electron density, measurable by nifty silicon tricks - pair-bound transistors that remember information about electrical densities in semiconductors.

Binary silicon technology is essentially all built from one device: an electrical notch holding a low or high electron signal.  If the bi-stable flip-flop is replaced with tri-stable, quadra-stable, etc. silicon, then multiple stable states can be remembered per cell, and the bit becomes a digit in a higher number base - base three, base four, or however many notches of electrical pressure a cell can remember.  This then opens up the linear address bus to multi-dimensional access, and usefulness is increased logarithmically per added notch.
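
The arithmetic behind the notch claim, as a quick check (illustrative numbers only, not a claim about any specific device): a cell with k stable states stores log2(k) bits, so capacity per cell grows only logarithmically in the number of levels.

    import math

    # Bits stored per cell as a function of the number of stable states ("notches").
    for k in (2, 3, 4, 8, 16):
        print(k, "states ->", round(math.log2(k), 2), "bits per cell")
    # 2 -> 1.0, 3 -> 1.58, 4 -> 2.0, 8 -> 3.0, 16 -> 4.0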

But even so, higher number-base computers will never overcome the starting problem of awareness, though perhaps a logarithmic acceleration of computational potential will emerge for every notch added to the memory states of a bit.

OK, perhaps quantum computers can cheat, and use the inherent random emergence at the quantum-event level to get around the awareness-starting problem.  But a quantum solution is still a determinate function enacted by collapsing fields.  The boot-strap of a starting problem still binds the quantum computer to our scale of organization through the need for a problem domain of specific topology.  A quantum computer at its best is yet a reactive device; it can't begin to approximate autonomous novelty, but merely reacts in unbelievably fast fashion, at sub-quantum wave-collapse speeds -- super-luminal.  So we would have the quantum computer as a very fast dumb rock.


Regards,
Don


**localized-reactive-ontology: the role-member perspective of a community ontology of historical self-awareness
At 05:44 AM 11/6/2002 -0500, you wrote:
Don,
 
The conference looks to me like the usual AI stuff
(reductionist, or at least single-level).  What I find disappointing
(by googling "AI ontology") is that the AI technicians, e.g. at
 
http://www.cs.bham.ac.uk/~mxv/report2/node5.html
http://www-ksl.stanford.edu/kst/what-is-an-ontology.html
 
are now redefining ontology as non-taxonomic.  That is, ontologies are not global;
they're localized.  For example, at the bottom of the second webpage,

"Notes

[1] Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization one needs to state axioms that do constrain the possible interpretations for the defined terms."
and above,

 
"We use common ontologies to describe ontological commitments for a set of agents so that they can communicate about a domain of discourse without necessarily operating on a globally shared theory. "
 
This is like wanting to go from a place in Washington, DC to a place in Rockville, MD,
with only a map of DC.
 
-Roger
