I think you are conflating objects with destructive state (mutable
assignment), which makes static analysis of programs difficult.  Objects are
simply a data abstraction technique.  They allow for implementation hiding,
as distinct from "abstract state".  It is entirely possible for an object to
wrap a functional specification that supports equational reasoning.
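To make that concrete, here is a minimal sketch (in Python, with hypothetical names) of an object that hides its representation yet delegates to pure functions, so equational reasoning still applies to its behavior:

```python
def _push(stack, x):
    """Pure function: returns a new tuple, never mutates."""
    return (x,) + stack

def _pop(stack):
    """Pure function: returns (top, rest)."""
    return stack[0], stack[1:]

class Stack:
    """Implementation hiding without destructive state: every method
    returns a fresh object built from the pure specification above."""
    def __init__(self, items=()):
        self._items = tuple(items)   # hidden representation, never mutated

    def push(self, x):
        return Stack(_push(self._items, x))

    def pop(self):
        top, rest = _pop(self._items)
        return top, Stack(rest)

s = Stack().push(1).push(2)
top, s2 = s.pop()
```

Since no method assigns over existing state, laws like pop(push(s, x)) == (x, s) hold for the object exactly as they do for the underlying functions.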

Objects (that is to say, object identities) are a bad way to allow *access
to* state.  For example, Clojure decouples access to an object's identity
from access to its state.  This is the difference between the "stop the
world" model and the "transactional memory" model.
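A rough sketch of that decoupling, in the spirit of Clojure's atom (this uses a lock around the transition rather than Clojure's lock-free compare-and-swap, so it is an illustration of the idea, not Clojure's implementation):

```python
import threading

class Identity:
    """An identity that points to a succession of immutable states.
    Readers observe a state without any coordination; writers apply a
    pure function to produce the next state."""
    def __init__(self, state):
        self._state = state
        self._lock = threading.Lock()

    def deref(self):
        # Observing a state requires no coordination with writers.
        return self._state

    def swap(self, f, *args):
        # Only the identity's *transition* is coordinated; the states
        # themselves are immutable values.
        with self._lock:
            self._state = f(self._state, *args)
            return self._state

counter = Identity(0)
counter.swap(lambda n, k: n + k, 5)
```

The point is that "the object" is the identity, while each state is just a value; nothing needs to stop the world to read a consistent value.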

Access to state and the design of an object system showed up in many early
papers on OOP, e.g. Wegner and Zdonik's ECOOP '89 paper on inheritance and
polymorphism, where they define "sane" rules for inheritance in an object
system.  Later, Bracha and Cook augmented this idea with the notion of
mix-in inheritance.  Later still, Nathanael Schärli and Andy Black further
augmented it with the notion of traits.  I would argue both mix-in
inheritance and traits have their roots in the design rationale for
inheritance laid out in Wegner and Zdonik's ECOOP '89 paper.
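For readers unfamiliar with the term, a mix-in in the Bracha/Cook sense is roughly a class fragment meant only for composition: it supplies behavior against a small interface the composing class promises to provide.  A minimal Python sketch (hypothetical names):

```python
class ComparableMixin:
    """Mix-in: adds ordering operations to any class that provides
    __eq__ and __lt__.  It is not meant to be instantiated on its own."""
    def __le__(self, other):
        return self < other or self == other

    def __gt__(self, other):
        return not self <= other

class Money(ComparableMixin):
    """Supplies the interface the mix-in requires."""
    def __init__(self, cents):
        self.cents = cents

    def __eq__(self, other):
        return self.cents == other.cents

    def __lt__(self, other):
        return self.cents < other.cents
```

Traits refine this further by making the composed fragments flat and conflict-checked, rather than ordered in a linearized hierarchy.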

On Fri, Mar 5, 2010 at 5:24 PM, Andrey Fedorov <[email protected]> wrote:

>
>
> On Fri, Mar 5, 2010 at 4:56 PM, John Zabroski <[email protected]> wrote:
>
>> On Fri, Mar 5, 2010 at 1:14 PM, Andrey Fedorov <[email protected]> wrote:
>>>
>>> If we *do* want to define "complexity", we could put a constraint on
>>> these CRT graphs, like "nodes have no state"? This is starting to smell like
>>> the classical argument against OOP.
>>>
>>> What "classical argument" are you referring to?
>>
>
> That objects are a bad way to maintain state. I'm still undecided on
> whether it's a good argument or not. If we add that constraint: that each node
> represents a function without side effects, I imagine the complexity of a
> system can be wholly defined from a graph representing it. Such a graph is
> more or less what Haskell code defines.
>
> In system A, nodes have no explicit state - in other words, state is not
>> given a name the outside world can refer to and inquire upon.
>>
>
> But what are the nodes? Maybe I'm still just not following what that
> graph-like thing represents - according to wikipedia, Current Reality
> Trees <http://en.wikipedia.org/wiki/Current_reality_tree_%28TOC%29> are
> graphs (*not* necessarily trees) which are meant to model observed
> phenomena, the type that occurs in the real world. This has nothing to do
> with writing algorithms - algorithms are structures which are rigorously
> defined and run on hardware you trust to adhere to those rigorous
> definitions upon execution.
>
>
>> From the perspective of side effects, system A can still have deadlocks
>> and/or race conditions (ordering side-effects that lead to unsafe
>> computational sequences).  We can adorn each node in A with an effect type,
>> possibly allowing each node to have a type defined by a separate type
>> system.
>>
>
>> In system B, nodes have explicit coordination of computational sequences,
>> but that explicit coordination does not guarantee safety.  As I understand
>> current thinking in category theory, such as work by Glynn Winskel on
>> general nets, the big idea is to unfold System B into an occurrence net,
>> thus giving its precise semantics, which we can then show to be either (a)
>> inconsistent (b) consistent (c) undecidable (d) unknown [no technique for
>> unfolding is known].
>>
>
> So are nodes computational sequences, and the arrows just represent the
> sequence in which they run? But if they have side-effects of the kind where
> any node can change the behavior of any other node, what's the point of what
> order they run in?
>
> _______________________________________________
> fonc mailing list
> [email protected]
> http://vpri.org/mailman/listinfo/fonc
>
>