On Wednesday, 12 February 2014 at 17:38:30 UTC, H. S. Teoh wrote:


I would say that while it's insightful to apply different paradigms to solve the same problem, one shouldn't make the mistake of shoehorning *everything* into the same approach. This is what Java does with OO, for example, to the detriment of every other paradigm, and frankly, after a while all those singleton classes with static methods just start to smell more and more like ways of working around the OO rather than with it.

I found myself using singleton classes more and more until I decided it was time to drop a strict OO approach.
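A minimal sketch of the pattern in question (toy code, the names are made up): a class that is never instantiated and exists only to host static methods is really just a namespace, and in D plain module-level functions do the same job with less ceremony.

import std.stdio : writeln;

// The "singleton class with static methods" pattern: a class used
// purely as a namespace, never instantiated.
final class AppSettings
{
    @disable this();                      // no instances, ever
    private static string[string] values; // shared mutable state

    static void set(string key, string val) { values[key] = val; }
    static string get(string key)
    {
        auto p = key in values;
        return p ? *p : "";
    }
}

// The same functionality as ordinary module-level code -- no class needed.
private string[string] settings;
void setSetting(string key, string val) { settings[key] = val; }
string getSetting(string key)
{
    auto p = key in settings;
    return p ? *p : "";
}

void main()
{
    AppSettings.set("mode", "fast");
    setSetting("mode", "fast");
    writeln(AppSettings.get("mode"), " ", getSetting("mode"));
}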

Having said that, though, the component approach is highly applicable, often in unexpected areas and unexpected ways, esp. when you couple it with D's range-based concept. There are certainly algorithms where it makes more sense to treat your data as a graph rather than a linear sequence of nodes, but it's also true that a good percentage of all code is just variations on linear processing, so pipelined component-style programming would definitely be applicable in many places.
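As a small, deliberately trivial sketch of what such a pipeline looks like in D (the numbers are made up, the ranges and algorithms are the usual std.algorithm ones):

import std.algorithm : filter, map, sum;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // Linear processing as a pipeline of components: each stage
    // consumes a range and lazily produces another range, until
    // sum() collapses the whole thing into a single value.
    auto total = iota(1, 101)            // the numbers 1 .. 100
        .filter!(n => n % 3 == 0)        // keep the multiples of 3
        .map!(n => n * n)                // square each of them
        .sum;                            // fold into one value
    writeln(total);                      // prints 112761
}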

And nothing says you can't intermix component-style code with OO, or something else.

That's what I've been doing for the last 1 1/2 years. I use classes where it makes _sense_, not as the ruling paradigm, then add structs (components), ranges and templates. The good thing about the freedom D offers is that it encourages you to think about the fundamental logic of your program and use tailor-made solutions for a given problem -- instead of a one-size-fits-all approach that is bound to lead you down a cul-de-sac. In a way D has given the power back to the programmer's brain.
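To make that mix concrete, a minimal made-up sketch (the names are mine, not from real code): a class where ownership and encapsulation pay off, a range view of its data, and a free template function that works on any range of strings.

import std.algorithm : filter;
import std.range : isInputRange, ElementType;
import std.stdio : writeln;

// OO where it makes sense: the class owns the data and its invariants.
class Library
{
    private string[] titles;

    void add(string title) { titles ~= title; }

    // Component-style escape hatch: hand out a range view of the
    // data instead of the storage itself.
    auto byTitle() { return titles[]; }
}

// A template that works on *any* range of strings, not just Library.
auto longTitles(R)(R r)
    if (isInputRange!R && is(ElementType!R : string))
{
    return r.filter!(t => t.length > 10);
}

void main()
{
    auto lib = new Library;
    lib.add("On Ranges");
    lib.add("Component Programming in D");

    // Class, range and template composed in one pipeline.
    writeln(lib.byTitle.longTitles);
}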

One key insight is that sometimes you want to separate the object itself from a range over that object -- for example, I work with polytopes (higher-dimensional analogues of polygons and polyhedra), and it's useful to have, say, a range over all vertices, or a range over all edges, but it's also useful to separate these ranges from the polytope itself, which can be stored in a more compact form, or in a form that's more amenable to fast queries, e.g., find all faces that contain vertex X without needing to iterate over every face in the polytope (which you'd end up doing if you use filter() on the range of all faces). The query function can return a range over faces, so that it can be piped into other range-based functions for further processing. Thus, you can have a mix of different paradigms complementing each other.
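A rough sketch of that separation (a toy version for illustration, not the actual polytope code): the stored form keeps a vertex-to-face index so the query never scans every face, yet the query still returns an ordinary lazy range that composes with everything else.

import std.algorithm : map;
import std.stdio : writeln;

struct Polytope
{
    int[][] faces;            // each face is a list of vertex indices
    size_t[][] facesOfVertex; // index: vertex -> faces containing it

    this(int[][] faces, size_t vertexCount)
    {
        this.faces = faces;
        facesOfVertex = new size_t[][](vertexCount);
        foreach (fi, face; faces)
            foreach (v; face)
                facesOfVertex[v] ~= fi;
    }

    // Returns a lazy range of the faces containing the given vertex;
    // no scan over all faces, and the result pipes into further
    // range-based processing.
    auto facesContaining(int vertex)
    {
        return facesOfVertex[vertex].map!(i => faces[i]);
    }
}

void main()
{
    // A tetrahedron: 4 vertices, 4 triangular faces.
    auto p = Polytope([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]], 4);
    writeln(p.facesContaining(0)); // the three faces that touch vertex 0
}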

The other underlying theme in my article, which is also one of the key points of the Jackson Structured Programming that I alluded to, is the identification and separation of mismatching structures in order to simplify the code and eliminate code smells caused by ad hoc methods of structure conflict resolution (boolean flags are a common symptom of this malady). This isn't limited to pipelined programs, but applies in general. One could analyze OOP in this way, for example. OO lore says that objects should be cohesive and loosely coupled -- we could say that cohesiveness means that the data stored in the object has corresponding structures, and loose coupling means that if an object's data has conflicting structures, it's time to consider splitting it into two different objects instead.
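A deliberately small, made-up illustration of that last point: one type tries to serve two mismatched structures at once, and the boolean flag is the telltale symptom; splitting it into two cohesive types makes the flag, and the branching it drags along, disappear.

// Before: one type juggling two mismatched structures, steered by a flag.
struct Chunk
{
    bool isRawBlock;     // the smell: which structure is this, really?
    string[] paragraphs; // only meaningful when !isRawBlock
    ubyte[] rawBytes;    // only meaningful when isRawBlock
}

// After: each structure gets its own cohesive type; code that needs
// both couples them only where they genuinely meet.
struct TextChunk { string[] paragraphs; }
struct RawChunk  { ubyte[] rawBytes; }

size_t sizeOf(TextChunk c)
{
    size_t total;
    foreach (p; c.paragraphs) total += p.length;
    return total;
}

size_t sizeOf(RawChunk c) { return c.rawBytes.length; }

void main()
{
    auto t = TextChunk(["one paragraph", "another"]);
    ubyte[] bytes = [0x01, 0x02, 0x03];
    auto r = RawChunk(bytes);
    assert(sizeOf(t) == 20 && sizeOf(r) == 3);
}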


T
