Thanks, Steve.  It made clear to me some of the terms of this argument that
I was having a lot of trouble grasping.  

 

I do want to make sure we are straight on one point.  In my example, opacity
refers to the INABILITY to make inferences on the basis of intentional
utterances.  I don't know that people use the term referential transparency,
but if they do, it refers to the ability to make inferences from "extensional"
utterances.  If we take as true Jones's belief that there are unicorns in
Central Park, then we can infer that there are horses with horns in their
foreheads in Central Park.  Truth is preserved in substitution.  The point
is that science works with these sorts of utterances, where truth is
preserved in the substitution of terms.  In fact, science often works
through the substitution of ever more precise terms to describe a
phenomenon.   We start out with "the elderly lady in Bodung Province in
China was killed by the new flu virus," and when we culture her virus and
discover that it was H4N3, or whatever, we can substitute with truth and
issue a press release saying that "the elderly lady in Bodung Province in
China was killed by the H4N3 virus."  This is referential transparency.  The
same substitution cannot be made in the sentence, "The Premier of China
knows that the elderly lady's death was caused by the H4N3 virus."  
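The same substitution test can be sketched in code (a toy of my own devising, not anyone's official definition; all the names are illustrative).  In an ordinary, extensional context, swapping co-referring terms preserves truth; inside a knowledge/belief context, modeled here as a set of literal sentences, it need not:

```python
# Two terms that refer to the same thing, like "the new flu virus" and "H4N3".
new_flu_virus = "H4N3"
h4n3_virus = "H4N3"

# Extensional context: substituting one co-referring term for the other
# preserves the truth value.
def caused_the_death(virus):
    return virus == "H4N3"

assert caused_the_death(new_flu_virus) == caused_the_death(h4n3_virus)

# Intentional context: the Premier's knowledge is modeled as a set of
# sentences -- words, not referents -- so the same substitution can
# flip the truth value.
premier_knows = {"the elderly lady was killed by the new flu virus"}

s1 = "the elderly lady was killed by the new flu virus"
s2 = "the elderly lady was killed by the H4N3 virus"
assert s1 in premier_knows and s2 not in premier_knows
```

The set-of-sentences model is crude, but it captures why substitution fails: the "knows that" context is sensitive to the words themselves, not to what they pick out.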

 

In short, referential opacity BAD; referential transparency GOOD.   Are we
on the same page here, or are the values flipped in compsci?  

 

Nick   

 

From: Friam [mailto:[email protected]] On Behalf Of Steve Smith
Sent: Wednesday, April 17, 2013 1:18 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Tautologies and other forms of circular reasoning.

 

Nick, Glen, Marcus, et al -

The stew is getting nicely rich here.   While I wanted to dismiss Owen's
original question regarding isomorphisms between computing
(language/concepts/models?) and philosophy as naive, I know it isn't
entirely so, and the somewhat parallel conversation that started with
circular reasoning has brought this out nicely (IMO).

This particular subthread on referential opacity is a nice example.  I would
now try to frame (not dismiss or belittle) Owen's question with this
particular exchange (Nick, Glen, Marcus) in mind.  In the world of natural
language speaking, we expect that all things are ultimately translatable and
that anyone from any language/culture can learn the language/culture of any
other with enough time and diligence.   Aptitude, age and other
circumstances make this harder or easier, but in principle, we tend to
believe that there is really only one natural human language and learning
the various examples (after the first) is doable.

On the other hand, we also have experiences that suggest just the opposite.
For example, I used to think that (American) English *must* be the
simplest/most-concise language, because when you read the instructions on a
consumer product in several languages, somehow the English always seems to
be the shortest or have the most white space.   I'm pretty sure now that in
these cases it is because the original instructions were in English and the
others were translations, which ended up wordier than the original
because that is the nature of (simple) translation.   A single word, chosen
for a specific reason in one language, often needs a phrase or at least a
modifier to make it more like the original.   While it is possible that
one of the languages being translated INTO has a more precise word that
could replace a phrase, or a noun with an adjective, it is unlikely that a
simple translation is going to hit upon it.  

So it is between computer science/engineering and philosophy.   The words,
while sometimes the same, have subtly but importantly different meanings.   In
this case, referential opacity.    I have coded on projects involving
evidence theory, where we in fact modeled evidentiary processes and
circumstances such as Nick's Jones, unicorns, squirrels, and Central Park, and
I can attest that this ideal of referential opacity is NOT the same as in
code.   Ideas with elaborate semantics (unicorns, horses, horns, magic,
locations) can be *referenced* by simple names ("Unicorns" ...) in a similar
manner to the natural-language discussion between Nick and Jones, and while I
believe the invention of OO programming was intended to align the act of
programming more closely with natural language, it only does so, *at best*,
for the lexicons and dictionaries we design a set of class libraries for and
build programs with.   It doesn't improve the alignment between the language
*of* philosophy (or psychology or physics) and the language of programming.
If anything, it distinguishes them.

If you look at a library, in particular a specialized library like a law
library, you notice it grows and grows and grows.   One might say the law
started with the Golden Rule... but somehow that wasn't enough, so somebody
had to go meet Jehovah on a mountain behind a burning bush to receive 10
commandments... which were nearly as self-evident as the Golden Rule, but
somehow spelled things out "just a little better".   The US Declaration of
Independence starts right in with "We hold these truths to be self-evident,"
and for the most part US citizens for a couple hundred years (and I presume
scholars from other nations and cultures) have read it over and over and
nodded when they read those words.   There may be a few alternate world views
where they shake their head and grimace at what we call "self-evident"... but
I think in general this large document is roughly as self-evident as 10
commandments or a golden rule (or pick your own culture's equivalent)... but
we are compelled to split these hairs, to elaborate (said the man whose
e-mails here are way too long) on most anything.

Computer science *adopted* or *inherited* its terms from mathematics and
logic, which share their own with philosophy, just as English inherited many
words from Latin and Greek and Gaelic, handed down through intermediate
languages... and sometimes the words are dead-on the same between languages,
and other times they are anything but.    Referential opacity in computer
programming means something very precise (if context dependent), as Glen so
eloquently described just now.  It has a vague resemblance to what Nick
means, due to its heritage, but to demand (or even wish for) the two to
become an isomorphism lames one or both domains.   
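To make the programming-side meaning concrete (a minimal sketch of my own; the function names are invented, and definitions do vary by context, as Glen noted): an expression is referentially transparent if it can be replaced by its value without changing the program's behavior, and it goes opaque when hidden mutable state sneaks in.

```python
# Referentially transparent: square(3) can always be replaced by 9,
# anywhere, without changing the program.
def square(x):
    return x * x

assert square(3) + square(3) == 2 * square(3)

# Referentially opaque: the result depends on hidden mutable state, so
# next_id() + next_id() is NOT interchangeable with 2 * next_id().
_counter = 0

def next_id():
    global _counter
    _counter += 1
    return _counter

a = next_id() + next_id()  # 1 + 2 == 3
_counter = 0               # reset the hidden state for a fair comparison
b = 2 * next_id()          # 2 * 1 == 2
assert a != b
```

The substitution test is the same one Nick applies to sentences; the difference is that here "truth preserved" means "program behavior preserved".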

One of our biggest limitations in this culture (American, European, Western),
in my opinion, is that we were mostly raised and trained up under the
metaphors of factories, cities, and a zero-sum (scarcity vs. abundance)
economy.   If we were to (try to) constrain philosophy, for example, to
fit within the (much more constrained and specialized) metaphors of computer
science, it would at *best* be as bad as raising and educating our children
on a factory model where, while they may all be churned out with some degree
of regularity and functionality, they are also taught (by their circumstance)
that they are interchangeable, replaceable parts.  Later in life they end up
having to sign up with a union (the Brotherhood of Gears and Levers) to keep
from being abused (and then simply replaced if they go out of tolerance),
which is *also* built on the same metaphors. 

So, no, Nick... a programmer is not questioning whether she can know what a
function (or object) means, or to what degree of confidence or accuracy you
can believe what it reports when you read out the value of a variable it has
returned to you (Object Jones, return the state of your "Most Interesting
Thing I Saw Today" variable).  A programmer is asking, "can I know
anything more than what Jones tells me?"   In a very loose sense, you can
draw a parallel...  Jones may know he did not see a unicorn, but it is in
his programming to always make up something fanciful (squirrels or horses or
babies in prams are unicorns in his strange or obtuse lexicon) when asked.
As with Glen's concurrency example, between the time you ask Jones what was
the most interesting thing he saw in Central Park and the time he answers
honestly ("I saw a Unicorn!"), he may in fact see something more interesting,
or recognize that there was no horn, and his *internal* state would shift from
"Unicorn" to "Horse".   In programming, it is as likely as not that Object
Jones would not have been designed to recognize the import of the question
and quickly volunteer, "wait... hold the phone... I just realized it was not
a Unicorn, it was a SQUIRREL!"  There are programming models that *do*
attempt this kind of thing, but that is because the standard models tend not
to do it obviously or easily.   User-interface and parallel, distributed
models (e.g. federated models) *do* have mechanisms for this... but, well...
a whole 'nother story.
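The staleness point can be sketched in a few lines (my own toy, not from any of the projects mentioned): Object Jones answers honestly at the moment you ask, but nothing obliges him to tell you when his internal state shifts afterward.

```python
import threading

class Jones:
    """Toy object whose 'most interesting sighting' can change behind your back."""

    def __init__(self):
        # The lock makes reads/writes safe under Glen's concurrency scenario,
        # but it does NOT keep an already-returned answer from going stale.
        self._lock = threading.Lock()
        self._sighting = "Unicorn"

    def most_interesting(self):
        with self._lock:
            return self._sighting

    def reconsider(self, new_sighting):
        with self._lock:
            self._sighting = new_sighting

jones = Jones()
answer = jones.most_interesting()  # "I saw a Unicorn!"
jones.reconsider("Squirrel")       # his internal state shifts after the ask
assert answer == "Unicorn"         # ...but the caller's copy is now stale
assert jones.most_interesting() == "Squirrel"
```

A design that *pushed* the correction to the caller (callbacks, observers, publish/subscribe) is exactly the extra machinery the standard model doesn't give you for free.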

So, in summary, I feel like I'm among Russian and Polynesian speakers (well,
maybe just French and Spanish, or Dutch and English) arguing over the meaning
of words that sound the same in each language.  At best the two might be
able to suss out the etymology and roots of the same words in a mother
tongue that they share or were informed by... but it would be silly to go
back and forth arguing that one is *more right* than the other.   I know Nick
is being genuinely curious, and I think Owen is being (stubbornly)
idealistic... but the translations here are going to be on the order of
translating between (or learning) foreign languages, ones that might not
have more than a passing relationship to each other (via mathematical logic).


- Steve

In my (leetle) world, referential opacity refers to ambiguities that arise
in intentional utterances ... utterances of the form, "Jones believes
(wants, thinks, hopes, etc.) that X is the case."  They are opaque in that
they tell us nothing about the truth of X.  So, for instance, "Jones
believes that there are unicorns in Central Park" tells us neither that
such a thing as a horse with a horn in its forehead exists (because Jones
may confuse unicorns with squirrels) nor that there are any "unicorns" in
Central Park, whatever Jones may conceive them to be (because Jones may be
misinformed).  

 

What does the computer community think "referential opacity" means?  Are
there statements in computer code that take the form, "from the point of
view of circuit A, switch S has value V"?  And do we then have to worry that
somewhere, later in the program, some other circuit, circuit B, will
encounter switch S and take it to have the value V?  

 

Nick 

 

-----Original Message-----
From: Friam [mailto:[email protected]] On Behalf Of glen
Sent: Wednesday, April 17, 2013 10:52 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Tautologies and other forms of circular reasoning.

 

Marcus G. Daniels wrote at 04/16/2013 07:55 PM:

> A more important issue is whether a model has referential
> transparency. Are all the possible ways an object can change or reveal
> state made evident, or are they hidden away in obscure ways due to
> implementation issues?
>
> [...] The issue is whether a modeler is prepared to put all of the
> degrees of freedom on the table and find and remove those that are not
> essential, or imagine that 1 piece on each of 100 tables is somehow
> different from the same 100 pieces on 1 table.

 

Yes, exactly.  The conversation Nick started regarding tautologies is
fundamentally about separating the essential from the non-essential, or in
the extreme case, identifying no-ops.  I (think I intellectually, if not
behaviorally) share your preference for functional computation because it
helps force me to be more rigorous in my intent.  I'm as lazy as they come,
though, and when given too many bells and whistles, my product tends to be
sloppy.  But I tend to also argue that, sometimes, depending on the
requirements set out by the task, the sloppiness is not bad but merely a
trivial side-effect.

 

But this might be where we're talking about different things, below...

 

> Maybe we aren't talking about the same thing.  I'm not sure what you
> mean by "size" above.  I think you might mean that "All eventualities
> must be covered by top-down analysis."   I think you might mean that not
> having to make types fit together means there are more ways to entertain
> the parts and pieces.

 

Sorry, I was being obtuse.  I meant it in the sense of set measures, or
perhaps counting the members of a state space.  In general, when we look
around us at the world, we tend to focus, to slice off a subset.  Then we go
about justifying that the focal subset is "smaller" than the ambience from
which we sliced it.  There seem to be two ways to do that: by measuring the
size of sets, vs. iteratively, i.e. showing how various subsets can be
composed (unioned, accumulated) to construct various sets.

 

It's not entirely clear to me where "type" fits (at least not the specific
sense of "type" we use in programming).  But it seems to be synonymous with
the predicate that defines the set.  "Type" seems like a state-oriented
conception, whereas "predicate" seems like a process-oriented conception.
We talk about things being "of a type", but we talk about "satisfying a
predicate".  I could easily be wrong in my intuition, there.

 

>   If so, I don't see it that way.   If there are
> paths a computation can take which will result in failure, it's better
> to know sooner than later about them.  If certain state configurations
> require logic, generics, or big union types, to do nothing but
> something benign -- until the appropriate treatment is identified --
> being confronted with those configurations as classes (at compile
> time) is better than hitting the edge cases one by one at runtime.

 

Well, to go back to my defense of my sloppiness.  Sometimes the sloppiness
is not bad, or is merely ignorable.  Sometimes it's crucial to re-use (or,
more appropriately, to [mis|ab]use).  This is the concept I was trying to get
at earlier when I misspoke and claimed that iteration is more open-ended than
recursion.  It's not, since they're duals.  But iteration, being
state-oriented rather than process-oriented, seems more amenable to
sloppiness.  When we finite-minded, hyper-focusing pattern recognizers
wander around in the ambience, trying to "do stuff", we face a kind of
action threshold, a hurdle we have to get over in order to get anything
done.
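The duality is easy to see in a trivial case (a sketch of mine, just to pin down the state-vs-process distinction): the same sum, written once as a self-referential process and once as accumulated state.

```python
def sum_rec(xs):
    # Process-oriented: no mutable state, just self-reference.
    return 0 if not xs else xs[0] + sum_rec(xs[1:])

def sum_iter(xs):
    # State-oriented: an accumulator mutated step by step.
    total = 0
    for x in xs:
        total += x
    return total

assert sum_rec([1, 2, 3, 4]) == sum_iter([1, 2, 3, 4]) == 10
```

Mechanically interchangeable; but the iterative version invites you to poke at `total` mid-loop, which is exactly the sloppy "grab onto it" affordance described below.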

 

When we try to be as rigorous as possible and put all our DoF on the table,
so to speak, that raises the threshold and makes action more difficult.
Granted, it also might make the eventual action more effective or powerful,
but it does make it more difficult.

 

Given the variety of types of people out there, we end up with a nice spread
of people: those who would prefer to "just do it" versus those who feel they
should think long and hard before they do anything.  My speculation is that
it's easier for the sloppy people to "grab onto" whatever they slice out of
the ambience if they use a state-oriented world view.  It seems very
difficult to be a purely Taoistic floating process, continuously, sloppily
transforming/filtering things from birth till death.

 

--

=><= glen e. p. ropella

This body of mine, man I don't wanna turn android

 

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com





