On Sun, Apr 7, 2013 at 5:44 AM, Tristan Slominski <
tristan.slomin...@gmail.com> wrote:

> I agree that, largely, we can use more work on languages, but it seems
> that making the programming language responsible for solving all of
> programming's problems is somewhat narrow.
>

I believe each generation of languages should address a few more of the
cross-cutting problems relative to their predecessors, else why the new
language?

But to address a problem is not necessarily to automate the solution, just
to push solutions below the level of conscious thought, e.g. into a path of
least resistance, or into simple disciplines that (after a little
education) come as easily and habitually (no matter how unnaturally) as
driving a car or looking both ways before crossing a street.


> Imagine that I write a really "crappy" piece of code that works, in a
> corner of the program that nobody ever ends up looking in; nobody
> understands it, and it just works. If nobody ever has to touch it, and no
> bugs appear that have to be dealt with, then as far as the broader
> organization is concerned, it doesn't matter how beautiful that code is,
> or which level of Dante's Inferno it hails from.
>

Unfortunately, it is not uncommon that bugs are difficult to isolate, and
they may even manifest in locations far removed from their source. In such
cases, code that nobody understands becomes a significant burden - one you
pay with each new bug, even when you eventually determine that the cause
lies elsewhere.

Such situations can motivate the use of theorem provers: if the code is so
subtle or so complex that no human can readily grasp why it works, then
perhaps that understanding should be automated, with humans on the
periphery asking for proofs that various requirements and properties hold.


>
> Of course, I can defend the "deal with it if it breaks" strategy only so
> far. Every component that is built shapes its "surface" area, and other
> components need to mold themselves to it. Thus, if one of them is wrong,
> things get non-linearly worse as more components are shaped to the wrong
> one, and others to those, etc.
>

Yes. Of course, even being right in different ways can cause much
awkwardness - like a bridge built from both ends not quite meeting in the
middle.



> We then end up thinking about protocols, objects, actors, and so on...
> and I end up agreeing with you that composition becomes the most
> desirable feature of a software system. I think in terms of
> actors/messages first, so no argument there :D
>

Actors/messaging is much more about reasoning in isolation (understanding
'each part') than about composition. Consider: you can't treat a group of
two actors as a single actor. You can't treat a sequence of two messages as
a single message. There are no standard composition operators for using two
actors or messages together, e.g. to pipe output from one actor as input to
another.

It is very difficult, with actors, to reason about system-level properties
(e.g. consistency, latency, partial failure). But it is not difficult to
reason about actors individually.
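To make the contrast concrete, here is a minimal sketch (hypothetical
names, not from the thread): functions compose into another value of the
same kind, while piping one actor into another requires hand-written glue
that is itself just another actor, not a general composition operator.

```python
# Functions compose: the result is again a function of the same kind.
def compose(f, g):
    return lambda x: g(f(x))

double = lambda x: x * 2
inc = lambda x: x + 1
pipeline = compose(double, inc)   # still just a function
assert pipeline(3) == 7

# Actors, by contrast, have no standard 'compose': to route one actor's
# output into another, we must write a bespoke forwarding actor.
class Actor:
    def __init__(self, behavior):
        self.behavior = behavior
    def send(self, msg):
        self.behavior(self, msg)

def forwarder(downstream, f):
    # Hand-rolled glue: applies f, then forwards. This is an ad hoc
    # adapter, not an operator that yields "two actors as one actor".
    return Actor(lambda self_, msg: downstream.send(f(msg)))

results = []
sink = Actor(lambda self_, msg: results.append(msg))
stage = forwarder(sink, lambda x: x * 2)
stage.send(3)
assert results == [6]
```

The point of the sketch is only that the actor "pipeline" has no first-class
existence: there is no value denoting the composite, so there is nothing to
reason about compositionally.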

I've a few articles on related issues:

[1] http://awelonblue.wordpress.com/2012/07/01/why-not-events/
[2] http://awelonblue.wordpress.com/2012/05/01/life-with-objects/
[3] http://awelonblue.wordpress.com/2013/03/07/objects-as-dependently-typed-functions/



>
> To me, the most striking thing about this being the absence of a strict
> hierarchy at all, i.e., no strict hierarchical inheritance. The ability to
> mix and match various attributes together as needed seems to most closely
> resemble how we think. That's composition again, yes?
>

Yes, of sorts.

The ability to combine traits, flavors, soft constraints, etc. in a
standard way constitutes a form of composition. But they don't support rich
compositional reasoning (i.e. the set of compositional properties may be
trivial or negligible). Thus trait composition, soft constraints, etc. tend
to be 'shallow'. Still convenient and useful, though.
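As a small illustration (hypothetical classes, not from the thread),
Python-style mixins show this 'shallow' composition: traits combine by a
standard rule, with no strict hierarchy, but the only property we get from
the combination is roughly "both sets of methods are present".

```python
# Two independent traits, mixed and matched as needed.
class Serializable:
    def to_dict(self):
        # Shallow copy of the instance's attributes.
        return dict(self.__dict__)

class Comparable:
    def __eq__(self, other):
        return self.__dict__ == other.__dict__

# Standard composition via multiple inheritance: no strict hierarchy,
# just a mix of attributes - but nothing deep to reason about beyond
# the union of the traits' methods.
class Point(Serializable, Comparable):
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
assert p.to_dict() == {"x": 1, "y": 2}
assert p == Point(1, 2)
```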

I mention some related points in the 'life with objects' article (linked
above) and also in my stone soup programming article [4].

[4] http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/

(from a following message)

>
> robustness is a limited goal, and antifragility seems a much more worthy
> one.


Some people interpret 'robustness' rather broadly; cf. the essay 'Building
Robust Systems' by Gerald Jay Sussman [5]. In my university education, the
word set opposite fragility was 'survivability' (e.g. 'survivable
networking' was a course).

I tend to break survivability into several properties: robustness
(resisting damage; security), graceful degradation (breaking cleanly and
predictably; succeeding within reduced capabilities), and resilience
(recovering quickly; self-stabilizing; self-healing or easy fixes). Of
course, these are all passive forms; I'm a bit wary of developing computer
systems that 'hit back' when attacked, at least as a default policy.

[5] http://groups.csail.mit.edu/mac/users/gjs/6.945/readings/robust-systems.pdf
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
