In my (leetle) world, referential opacity refers to ambiguities that arise in intensional utterances ... utterances of the form, "Jones believes (wants, thinks, hopes, etc.) that X is the case." They are opaque in that they tell us nothing about the truth of X. So, for instance, "Jones believes that there are unicorns in Central Park" tells us neither that such a thing as a horse with a horn on its forehead exists (because Jones may confuse unicorns with squirrels) nor that there are any "unicorns" in Central Park, whatever Jones may conceive them to be (because Jones may be misinformed).
What does the computer community think "referential opacity" means? Are there statements in computer code that take the form, "from the point of view of circuit A, switch S has value V"? And do we have to worry that somewhere later in the program some other circuit, circuit B, will encounter switch S and take it to have the value V?

Nick

-----Original Message-----
From: Friam [mailto:[email protected]] On Behalf Of glen
Sent: Wednesday, April 17, 2013 10:52 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Tautologies and other forms of circular reasoning.

Marcus G. Daniels wrote at 04/16/2013 07:55 PM:
> A more important issue is whether a model has referential
> transparency. Are all the possible ways an object can change or reveal
> state made evident, or are they hidden away in obscure ways due to
> implementation issues?
>
> [...] The issue is whether a modeler is prepared to put all of the
> degrees of freedom on the table and find and remove those that are not
> essential, or imagine that 1 piece on each of 100 tables is somehow
> different from the same 100 pieces on 1 table.

Yes, exactly. The conversation Nick started regarding tautologies is fundamentally about separating the essential from the non-essential, or in the extreme case, identifying no-ops. I (think I intellectually, if not behaviorally) share your preference for functional computation because it helps force me to be more rigorous in my intent. I'm as lazy as they come, though, and when given too many bells and whistles, my product tends to be sloppy. But I tend to also argue that, sometimes, depending on the requirements set out by the task, the sloppiness is not bad but merely a trivial side-effect. But this might be where we're talking about different things, below...

> Maybe we aren't talking about the same thing. I'm not sure what you
> mean by "size" above. I think you might mean that "All eventualities
> must be covered by top-down analysis."
> I think you might mean that not
> having to make types fit together means there are more ways to entertain
> the parts and pieces.

Sorry, I was being obtuse. I meant it in the sense of set measures, or perhaps counting the members of a state space. In general, when we look around us at the world, we tend to focus, to slice off a subset. Then we go about justifying that the focal subset is "smaller" than the ambience from which we sliced it. There seem to be two ways to do that: by measuring the size of sets, versus iteratively, i.e. by showing how various subsets can be composed (unioned, accumulated) to construct various sets. It's not entirely clear to me where "type" fits (at least not the specific sense of "type" we use in programming). But it seems to be synonymous with the predicate that defines the set. "Type" seems like a state-oriented conception, whereas "predicate" seems like a process-oriented conception. We talk about things being "of a type", but we talk about "satisfying a predicate". I could easily be wrong in my intuition, there.

> If so, I don't see it that way. If there are
> paths a computation can take which will result in failure, it's better
> to know sooner than later about them. If certain state configurations
> require logic, generics, or big union types, to do nothing but
> something benign -- until the appropriate treatment is identified --
> being confronted with those configurations as classes (at compile
> time) is better than hitting the edge cases one by one at runtime.

Well, to go back to my defense of my sloppiness: sometimes the sloppiness is not bad, or is merely ignorable. Sometimes it's crucial to re-use (or, more appropriately, [mis|ab]use). This is the concept I was trying to get at earlier when I misspoke and claimed that iteration is more open-ended than recursion. It's not, since they're duals. But iteration, being state-oriented rather than process-oriented, seems more amenable to sloppiness.
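[Editor's aside: Nick's circuit-and-switch question and Marcus's point about referential transparency can be made concrete in code. The sketch below is hypothetical and illustrative only; the Switch class and all names are invented. A referentially transparent expression can be replaced by its value anywhere without changing the program's behavior; an opaque one cannot, because it depends on hidden, mutable state.]

```python
# Referentially TRANSPARENT: the result depends only on the argument,
# so any call can be replaced by its value without changing behavior.
def read_transparent(value: bool) -> bool:
    return value


# Referentially OPAQUE: the switch hides mutable state, so two reads of
# "the same" switch need not agree -- circuit B may not see what
# circuit A saw.
class Switch:
    def __init__(self, value: bool) -> None:
        self._value = value          # hidden state

    def read(self) -> bool:
        return self._value

    def toggle(self) -> None:        # state can change between reads
        self._value = not self._value


s = Switch(True)
seen_by_a = s.read()   # "circuit A" reads True
s.toggle()             # something, somewhere, flips the switch
seen_by_b = s.read()   # "circuit B" now reads False
assert seen_by_a != seen_by_b
```

In the opaque version, the truth of "switch S has value V" depends on when and from where you ask, which is exactly the worry about circuit B above.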
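[Editor's aside: the iteration/recursion duality glen concedes above can also be shown concretely. Any primitive-recursive definition can be rewritten as a loop over explicit state, and vice versa; factorial is chosen arbitrarily here as a minimal illustration.]

```python
# Process-oriented: the computation is described by self-reference;
# the intermediate state lives implicitly on the call stack.
def fact_recursive(n: int) -> int:
    return 1 if n == 0 else n * fact_recursive(n - 1)


# State-oriented: the same computation with the state (acc, n) made
# explicit and mutated in place -- the form more amenable to poking at
# (and to sloppiness).
def fact_iterative(n: int) -> int:
    acc = 1
    while n > 0:
        acc, n = acc * n, n - 1
    return acc


assert all(fact_recursive(k) == fact_iterative(k) for k in range(10))
```

The two definitions compute the same function; they differ only in whether the bookkeeping is carried by the call stack or by named, mutable variables.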
When we finite-minded, hyper-focusing pattern recognizers wander around in the ambience, trying to "do stuff", we face a kind of action threshold, a hurdle we have to get over in order to get anything done. When we try to be as rigorous as possible and put all our DoF on the table, so to speak, that raises the threshold and makes action more difficult. Granted, it also might make the eventual action more effective or powerful, but it does make it more difficult. Given the variety of types of people out there, we end up with a nice spread: those who would prefer to "just do it" versus those who feel they should think long and hard before they do anything. My speculation is that it's easier for the sloppy people to "grab onto" whatever they slice out of the ambience if they use a state-oriented world view. It seems very difficult to be a purely Taoistic floating process, continuously, sloppily transforming/filtering things from birth till death.

--
=><= glen e. p. ropella
This body of mine, man I don't wanna turn android

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
