On Wed, Jul 20, 2011 at 7:57 AM, Paul Homer <[email protected]> wrote:
> If we flip that, and consider the data as the primary element, then we can
> look for ideas that essentially make the code trivial. Users enter data, the
> system stores data, and we want to analyze the data. The code can be seen as
> just the way the data propagates through the system.

You might like Duncan Cragg's work on FOREST (functional object
representational state transfer) and OTS (object type specification):

* http://forest-object-web.org/
* http://ots-object.net/

I do not believe a focus on data will make the code trivial. Your article
describes the use of transforms (which are valuable for system integration),
but seems to ignore the related issues of scale, security, and liveness
(independent maintenance, upgrade, and deprecation).

* http://awelonblue.wordpress.com/2011/06/15/data-model-independence/

However, I do agree that a focus on data-flow can help lift us to a much more
productive model of programming, and I do like the idea of 'assembling the
transformations required'. I've been working on the side on the notion of
matchmakers and linkers-as-constraint-solvers to a similar end, albeit tamed
for security by how we discover the transforms and code. I'll blog on this
eventually.

I do not believe there is any real need to distinguish between the data and
the transforms, especially in the context of maintenance - i.e. any specific
reference to data might be deprecated, so we should also be able to
declaratively specify which sources of data we want to use, with the
possibility of a primary source later becoming a 'view'.

> Still, I don't know if the ideas are feasible and although I didn't mention
> it, they will not work unless we can root them on truly dynamic data-stores.
> Much of the thinking that has been removed from the code ends up in the
> context in the form of extended types. It's a trade-off that I am not sure
> is possible due to scale and performance issues.

The idea is feasible, just not 'trivial'.
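To make the 'matchmaker' idea concrete, here is a minimal sketch (all names
are illustrative, not from any real system): a registry of typed transforms,
and a linker that assembles a chain from a source type to a requested target
type by searching the transform graph - a toy version of
linkers-as-constraint-solvers.

```python
# Hypothetical sketch: a "matchmaker" that assembles a chain of registered
# transforms to get from one data type to another. Names are illustrative.
from collections import deque

# Registry of transforms: (source type, target type) -> function.
TRANSFORMS = {
    ("csv", "records"): lambda text: [line.split(",") for line in text.splitlines()],
    ("records", "totals"): lambda rows: sum(int(r[1]) for r in rows),
}

def assemble(src, dst):
    """Breadth-first search over the transform graph; returns a composed
    function from src-typed data to dst-typed data, or None if no chain
    of registered transforms connects the two types."""
    queue = deque([(src, lambda x: x)])
    seen = {src}
    while queue:
        typ, fn = queue.popleft()
        if typ == dst:
            return fn
        for (a, b), step in TRANSFORMS.items():
            if a == typ and b not in seen:
                seen.add(b)
                queue.append((b, lambda x, f=fn, s=step: s(f(x))))
    return None

pipeline = assemble("csv", "totals")
print(pipeline("a,1\nb,2\nc,3"))  # 6
```

A real matchmaker would of course need to constrain *which* transforms may be
discovered and composed - that is where the security taming comes in.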
The context - the model of programming 'glue' - must effectively support the
concerns of performance at scale (incremental, live queries), bi-directional
influence (mutable views), concurrency, consistency, resilience, and
security. I describe some of these issues in the 'data model independence'
article I linked above. My reactive demand programming model handles these
concerns, and a few more.

> I felt that Uncle Bob had tunneled his vision way too early. It takes a
> long time for us to absorb new technologies and find maximal ways to use
> them. Computers are easily the most significant technology that we have
> physically created to date and we've only just begun to grasp what we've
> done.

We also have a long way to go with physical computers, i.e. with respect to
sensor clouds, pervasive and ambient computing, and augmented reality.
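For a flavor of what 'incremental, live queries' means in the small, here is
a toy sketch (class names hypothetical, and no relation to reactive demand
programming's actual semantics): a derived view over a mutable source that is
recomputed whenever the source changes, so consumers always see a current
result rather than a stale snapshot.

```python
# Minimal sketch of a "live query": a view that stays current as its
# source changes. Names are illustrative, not from any real system.
class Source:
    def __init__(self, data):
        self._data = list(data)
        self._observers = []

    def subscribe(self, fn):
        self._observers.append(fn)
        fn(self._data)  # push current state on subscription

    def update(self, data):
        self._data = list(data)
        for fn in self._observers:
            fn(self._data)

class LiveView:
    """A query over a Source whose result tracks source updates."""
    def __init__(self, source, query):
        self.result = None
        source.subscribe(lambda data: setattr(self, "result", query(data)))

prices = Source([3, 7, 2])
cheapest = LiveView(prices, min)
print(cheapest.result)   # 2
prices.update([5, 9, 4])
print(cheapest.result)   # 4
```

This toy recomputes the whole query on every update; the hard part at scale
is doing it incrementally, which is exactly the concern named above.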
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
