"OK.  But you did express that you thought the distinction (between paper math 
and computation) isn't meaningful (at least not in perpetuity).  Yet you admit 
that (in perpetuity) we should preserve the distinction at least for the sake 
of efficiency/performance.  You have to admit that can seem paradoxical."

I don't understand why you connect special-purpose devices with paper math vs. 
computation.  I claim the problem with paper math is that 1) it does not 
carry or enforce correctness checks, 2) it is not put in context -- things 
are pulled out of thin air as "the reader should know this", and 3) there isn't 
a formal mapping or harness to a universal computer.  So, going back to the 
"Ask Dad" approach to computing things, one could imagine a detailed model of 
the sorts of calculations Dad could do very well, but nonetheless leave the 
actual calculation to him.  I could share this model with other people and we 
could agree it was a "Certified Dad compliant" interface.
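To make the idea concrete, here is a minimal sketch of what a "Certified Dad compliant" interface might look like: a shared, machine-checkable model of what the oracle can compute, while the actual computation is still delegated to the oracle.  All names here (ArithmeticOracle, certify, Dad) are invented for illustration, not an existing API.

```python
# Sketch of a shared, certifiable model of an external calculator
# ("Dad").  The interface is formal and machine-checkable even though
# the computation itself is delegated.

from abc import ABC, abstractmethod

class ArithmeticOracle(ABC):
    """The shared contract: the calculations we agree Dad can do."""
    @abstractmethod
    def multiply(self, a: int, b: int) -> int: ...

def certify(oracle: ArithmeticOracle, trials) -> bool:
    """Spot-check the oracle against a reference implementation.
    Passing this harness is what 'Certified Dad compliant' would mean."""
    return all(oracle.multiply(a, b) == a * b for a, b in trials)

class Dad(ArithmeticOracle):
    # Stand-in for actually asking Dad; in reality this call
    # leaves the machine entirely.
    def multiply(self, a: int, b: int) -> int:
        return a * b

print(certify(Dad(), [(3, 4), (12, 12), (7, 0)]))  # True if compliant
```

The point of the sketch is that the model (the interface plus the certification harness) can be shared and agreed on independently of who or what performs the calculation.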
Regarding 2), ideally a paper's citations and bibliography would provide nodes 
on the semantic graph to start pulling, but this isn't required or consistently 
enforced by publishers, and it certainly isn't machine-readable.  The audience 
of today's technical literature is assumed to be other human domain experts, 
not, say, a Watson.
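For a sense of what a machine-readable citation could be, here is a hypothetical sketch of a bibliography entry rendered as a graph node with labeled edges.  The field names and schema are invented for illustration; no real publisher format is implied.

```python
# Hypothetical sketch: a citation as a semantic-graph node rather than
# free-form text.  Field names are invented, not a real schema.

citation = {
    "id": "doi:10.1000/example",   # stable node identifier
    "title": "An Example Paper",
    "claims_used": ["thm:main"],   # which results the citing paper pulls in
    "relation": "builds-on",       # edge label into the citing paper
}

def edges(citing_id: str, bibliography: list) -> list:
    """Turn a structured bibliography into graph edges a machine can follow."""
    return [(citing_id, c["relation"], c["id"]) for c in bibliography]

print(edges("doi:10.1000/new-result", [citation]))
```

With entries like this, a Watson-style reader could traverse the literature instead of parsing prose aimed at human experts.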

"Re: _technical_ papers being literate computer programs ... I agree.  But a 
recurring theme in this forum is the poor job journalists do communicating 
scientific efforts.  Analogously, we can predict that when/if all technical 
papers are literate programs, we'll have a similar problem.  This same 
conversation will continue to occur when the Nicks of the world ask the 
you-folks of the world what some program means.  So the distinction will 
persist as long as there are general intelligences (Nicks) attempting to parse 
domain-specific artifacts."

If all domain-specific artifacts were built up with machine-readable 
ontologies, then generally intelligent agents would have threads to pull to 
start putting the artifacts in context.  Perhaps some kinds of agents, like 
humans, would benefit from additional 'analogy modules' to assist with mapping 
large semantic graphs onto similar pre-existing ones.  That would be an 
accelerator for learning, not a question of having a sufficient semantic 
representation.
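"Threads to pull" can be sketched as a breadth-first expansion over an ontology stored as an adjacency list: starting from an unfamiliar artifact, an agent gathers surrounding context out to some depth.  The graph contents below are invented placeholders.

```python
# Minimal sketch of an agent pulling threads through a machine-readable
# ontology to put a new artifact in context.  Node names are invented.

from collections import deque

ontology = {
    "new-paper": ["group-theory", "lattice-QCD"],
    "group-theory": ["sets", "symmetry"],
    "lattice-QCD": ["quantum-field-theory"],
}

def pull_threads(start: str, graph: dict, depth: int = 2) -> set:
    """Breadth-first context gathering, out to a fixed depth."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return seen

print(sorted(pull_threads("new-paper", ontology)))
```

An 'analogy module' in this picture would be a second step that maps the gathered subgraph onto a structurally similar graph the agent already has -- an accelerator, as above, not a prerequisite for the representation itself.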

Marcus
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
