> I believe each generation of languages should address a few more of the
> cross-cutting problems relative to their predecessors, else why the new
> language?
Well, there are schools of thought that every problem merits a domain-specific language to solve it :D. But setting my quick response aside, I appreciate somewhat more what I think you're trying to communicate. Your statement here:

> But they don't suggest rich compositional reasoning (i.e. the set of
> compositional properties may be trivial or negligible). Thus, trait
> composition, soft constraints, etc. tend to be 'shallow'. Still convenient
> and useful, though.

helped to anchor this richer composition concept in my mind. It's definitely something I will think about.

> Actors/messaging is much more about reasoning in isolation (understanding
> 'each part') than composition. Consider: You can't treat a group of two
> actors as a single actor. You can't treat a sequence of two messages as a
> single message. There are no standard composition operators for using two
> actors or messages together, e.g. to pipe output from one actor as input to
> another.
>
> It is very difficult, with actors, to reason about system-level properties
> (e.g. consistency, latency, partial failure). But it is not difficult to
> reason about actors individually.

You can definitely group two actors as a single actor. It's almost trivial to do so; it requires creating another actor, though :D.

Also, you yourself mention the object capability model in "Life With Objects". So, to counter the statement about the difficulty of reasoning about system-level properties: the object capability model definitely makes it simpler to reason about the system-level property of security (by walking the actor "graph"). Correctly implemented actor systems (the requirement being non-forgeable actor addresses) implement the object capability model trivially. So the level of difficulty of reasoning seems to depend on what it is we are reasoning about.

Additionally, we are getting better at understanding evented systems.
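To make the "group two actors as one" claim concrete, here is a minimal sketch in plain Python (threads and queues; the `Actor`, `composite`, and `pipe` names are my own illustration, not from any particular actor library). The group is itself just another actor, and a pipe combinator of the kind David says is missing falls out the same way:

```python
import queue
import threading

class Actor:
    """A minimal actor: a private mailbox plus a behavior run on its own
    thread. The Actor object itself serves as the (non-forgeable) address."""
    def __init__(self, behavior):
        self._mailbox = queue.Queue()
        self._behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)

    def _run(self):
        while True:
            self._behavior(self, self._mailbox.get())

def composite(*members):
    """Group several actors behind one address: the group is an actor that
    forwards every incoming message to each member."""
    return Actor(lambda _self, msg: [m.send(msg) for m in members])

def pipe(f, downstream):
    """A composition operator: apply f to each message and forward the
    result to the downstream actor."""
    return Actor(lambda _self, msg: downstream.send(f(msg)))
```

A sender holding only the composite's address cannot tell it from a "real" actor, which is also the object-capability point: reachability in the actor graph is exactly the authority a sender has.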
As the market pressure for "always available" services grows, the survivors are getting very good at reasoning about system availability, inconsistency, and unpredictability (Netflix, Etsy, AWS, Google, Facebook, etc.). Yes, this is at the layer of entire machines (although often virtual machines), but lessons from that layer can be readily applied to evented systems in layers closer to the concerns of programming individual applications/services.

I would agree with you on the general lack of tooling within the actor computing paradigm itself. As far as standard composition goes, I've seen the lambda calculus implemented in actors, and I've seen actors implemented in Kernel (which itself was implemented in actors written in C). These are, after all, Turing equivalent. Using both is probably the "right" approach. At some point we have to leave the functional world and deal with network communications and emergent swarm behavior; actors FTW.

Also, are you familiar with Storm ( https://github.com/nathanmarz/storm )? It is one example of composing things and providing guarantees at a slightly higher level of abstraction. Much like my previous metaphor comments, it's important that we get to pick and choose our metaphors for the task at hand.

From reading "Why Not Events", a lot of the accidental complexity that serves as examples seems to be poor implementation of event systems, because it's a hard thing to do. This statement, in itself, may be conceding the point. However, there is always the one bright shining example of an event system that definitely counters, for example, your point that *event systems lack generic resilience*. As Alan likes to point out, the Internet was built by people. So event systems can become powerful if we can get them right. In having a discussion about event systems, we can't start our statements with "Aside from the Internet...". There is something to event systems that makes them work incredibly well (the Internet).
Perhaps we don't understand enough yet, and that makes them difficult to understand right now. But if we want to describe interesting things, sometimes we need quantum dynamics, because other things will be wrong in profound ways deep down.

Going back to "Life With Objects": you mention stateless objects. That's a compelling proposition, and something I will continue to think about. A while back, I tried to wrap my mind around how to reason with stateless actors when trying to see if one could layer an actor system on top of the GPGPU paradigm.

I don't think I understand objects as dependently-typed functions, so I can't comment on the "Objects as Dependently-Typed Functions" article.

I remember reading the "Stone Soup Programming" post. It's interesting that, in my mind, I saw the actors interacting as if they were molecules/cells in a cell/organism. I now think that, in your mind, you had perhaps a more functional and compositional (as in "function") view of that model.

Lastly...

> I'm a bit wary about developing computer systems that 'hit back' when
> attacked, at least as a default policy.

The adaptation does not have to be so drastic :D. Think of a read-caching system. As long as it has the capacity, the more requests are executed, the more results are cached, and the better the system responds overall. That's an example of getting better in the face of variability and demand. This, followed by hot-spot detection, horizontal scaling in response, and load balancing across the new capacity, are all examples of a system improving when stressed. (Although, on second thought, in precise antifragile terms these examples might not qualify, as I think they may lack the convexity Nassim explains.)

Lastly lastly :D ...
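As an aside, the read-cache adaptation I described can be sketched in a few lines of plain Python (the `ReadCache` class and the `backend` callable are hypothetical names of my own, just to show the shape of the idea):

```python
class ReadCache:
    """Read-through cache: every miss pays the backend cost once, after
    which repeated demand for the same key is served from memory. Under
    repetitive load the hit rate - and so mean response cost - improves."""
    def __init__(self, backend):
        self._backend = backend   # the expensive lookup (db, service, ...)
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[key] = self._backend(key)
        return self._store[key]

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

The system only gets better with repeated stress up to its capacity; whether that clears Nassim's convexity bar for "antifragile" is exactly the doubt I raised above.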
The Robust Systems paper was a good read:

> We are taught that the "correctness" of software is paramount, and that
> correctness is to be achieved by establishing formal specification of
> components and systems of components and by providing proofs that the
> specifications of a combination of components are met by the specifications
> of the components and the pattern by which they are combined. I assert that
> this discipline enhances the brittleness of systems. In fact, to make truly
> robust systems we must discard such a tight discipline.

I often find myself deliberately disregarding "correctness", and thinking about how correctness can become an emergent property of the system. As in biological forms, the resulting system is likely to be incredibly complex and hard to understand, but at that point, perhaps we are in the "complexity" domain instead of the "accidental complexity" domain.

It is interesting that the paper reads almost as "danger ... danger ... danger... this is dangerous... danger.. risk.. danger". I love it. "Here be dragons" - that's usually where the interesting things are.

Also, this was very interesting (from the paper) and related to the topic at hand:

> Dynamically configured interfaces
>
> How can entities talk when they don't share a common language? A
> computational experiment by Simon Kirby has given us an inkling of how
> language may have evolved. In particular, Kirby [16] showed, in a very
> simplified situation, that if we have a community of agents that share a
> few semantic structures (perhaps by having common perceptual experiences)
> and that try to make and use rules to parse each other's utterances about
> experiences they have in common, then the community eventually converges so
> that the members share compatible rules. While Kirby's experiment is very
> primitive, it does give us an idea about how to make a general mechanism to
> get disparate modules to cooperate.
> Jacob Beal [5] extended and generalized the work of Kirby. He built and
> demonstrated a system that allowed computational agents to learn to
> communicate with each other through a sparse but uncontrolled communication
> medium. The medium has many redundant channels, but the agents do not have
> an ordering on the channels, or even an ability to name them. Nevertheless,
> employing a coding scheme reminiscent of Calvin Mooers's Zatocoding (an
> early kind of hash coding), where descriptors of the information to be
> retrieved are represented in the distribution of notches on the edge of a
> card, Mr. Beal exchanges the sparseness and redundancy of the medium for
> reliable and reconfigurable communications of arbitrary complexity. Beal's
> scheme allows multiple messages to be communicated at once, by
> superposition, because the probability of collision is small. Beal has
> shown us new insights into this problem, and the results may be widely
> applicable to engineering problems.

Another inspiration for me, in a similar fashion, was the Conscientious Software paper:
http://pleiad.dcc.uchile.cl/_media/bic2007/papers/conscientioussoftwarecc.pdf

On Sun, Apr 7, 2013 at 10:50 AM, David Barbour <[email protected]> wrote:

> On Sun, Apr 7, 2013 at 5:44 AM, Tristan Slominski <
> [email protected]> wrote:
>
>> I agree that largely, we can use more work on languages, but it seems
>> that making the programming language responsible for solving all of
>> programming problems is somewhat narrow.
>
> I believe each generation of languages should address a few more of the
> cross-cutting problems relative to their predecessors, else why the new
> language?
>
> But to address a problem is not necessarily to automate the solution, just
> to push solutions below the level of conscious thought, e.g.
> into a path of least resistance, or into simple disciplines that (after a
> little education) come as easily and habitually (no matter how unnaturally)
> as driving a car or looking both ways before crossing a street.
>
>> imagine that I write a really "crappy" piece of code that works, in a
>> corner of the program that nobody ever ends up looking in, nobody
>> understands it, and it just works. If nobody ever has to touch it, and no
>> bugs appear that have to be dealt with, then as far as the broader
>> organization is concerned, it doesn't matter how beautiful that code is,
>> or which level of Dante's Inferno it hails from
>
> Unfortunately, it is not uncommon that bugs are difficult to isolate, and
> may even exhibit in locations far removed from their source. In such cases,
> having code that nobody understands can be a significant burden - one you
> pay for with each new bug, even if each time you eventually determine that
> the cause is elsewhere.
>
> Such can motivate use of theorem provers: if the code is so simple or so
> complex that no human can readily grasp why it works, then perhaps such
> understanding should be automated, with humans on the periphery asking for
> proofs that various requirements and properties are achieved.
>
>> Of course, I can only defend the "deal with it if it breaks" strategy
>> only so far. Every component that is built shapes its "surface" area and
>> other components need to mold themselves to it. Thus, if one of them is
>> wrong, it gets non-linearly worse the more things are shaped to the wrong
>> component, and those shape to those, etc.
>
> Yes. Of course, even being right in different ways can cause much
> awkwardness - like a bridge built from both ends not quite meeting in the
> middle.
>
>> We then end up thinking about protocols, objects, actors, and so on.. and
>> I end up agreeing with you that composition becomes the most desirable
>> feature of a software system.
>> I think in terms of actors/messages first, so no argument there :D
>
> Actors/messaging is much more about reasoning in isolation (understanding
> 'each part') than composition. Consider: You can't treat a group of two
> actors as a single actor. You can't treat a sequence of two messages as a
> single message. There are no standard composition operators for using two
> actors or messages together, e.g. to pipe output from one actor as input to
> another.
>
> It is very difficult, with actors, to reason about system-level properties
> (e.g. consistency, latency, partial failure). But it is not difficult to
> reason about actors individually.
>
> I've a few articles on related issues:
>
> [1] http://awelonblue.wordpress.com/2012/07/01/why-not-events/
> [2] http://awelonblue.wordpress.com/2012/05/01/life-with-objects/
> [3] http://awelonblue.wordpress.com/2013/03/07/objects-as-dependently-typed-functions/
>
>> To me, the most striking thing about this being the absence of a strict
>> hierarchy at all, i.e., no strict hierarchical inheritance. The ability
>> to mix and match various attributes together as needed seems to most
>> closely resemble how we think. That's composition again, yes?
>
> Yes, of sorts.
>
> The ability to combine traits, flavors, soft constraints, etc. in a
> standard way constitutes a form of composition. But they don't suggest rich
> compositional reasoning (i.e. the set of compositional properties may be
> trivial or negligible). Thus, trait composition, soft constraints, etc.
> tend to be 'shallow'. Still convenient and useful, though.
>
> I mention some related points in the 'life with objects' article (linked
> above) and also in my stone soup programming article [4].
>
> [4] http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/
>
> (from a following message)
>
>> robustness is a limited goal, and antifragility seems a much more worthy
>> one.
>
> Some people interpret 'robustness' rather broadly, cf.
> the essay 'Building Robust Systems' from Gerald Jay Sussman [5]. In my
> university education, the word opposite fragility was 'survivability'
> (e.g. 'survivable networking' was a course).
>
> I tend to break various survivability properties into robustness
> (resisting damage, security), graceful degradation (breaking cleanly and
> predictably; succeeding within new capabilities), and resilience
> (recovering quickly; self stabilizing; self healing or easy fixes). Of
> course, these are all passive forms; I'm a bit wary about developing
> computer systems that 'hit back' when attacked, at least as a default
> policy.
>
> [5] http://groups.csail.mit.edu/mac/users/gjs/6.945/readings/robust-systems.pdf
>
> _______________________________________________
> fonc mailing list
> [email protected]
> http://vpri.org/mailman/listinfo/fonc
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
