Your idea of "first specifying the model... then adding translations" can
be made simpler and more uniform, btw, if you treat acquiring initial data
(the model) as a "translation" between, say, a URL or query and the result.
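
As a minimal sketch of that uniformity (Python, with made-up names),
acquiring the model is just one more function from input to output:

    import urllib.request

    # Acquiring the initial data is itself a 'translation':
    # from a URL (or query) to a result.
    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    # Any later transformation has exactly the same shape...
    def decode(raw):
        return raw.decode('utf-8')

    # ...so 'specify the model, then add translations' is composition.
    def load(url):
        return decode(fetch(url))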

If you're interested in modeling computation as continuous synchronization
of bidirectional views between data models, you would probably be
interested in RDP (https://github.com/dmbarbour/Sirea/blob/master/README.md).

However, reuse of data models is necessarily more sophisticated than you
are imagining. There are many subtle and challenging issues in any
conversion between data models. I discuss a few such issues here:
http://awelonblue.wordpress.com/2011/06/15/data-model-independence/
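
As one small illustration of the kind of subtlety (my own Python sketch,
not taken from that article): conversions between models are often lossy
or ambiguous, so they fail to round-trip.

    # Two models of a person: structured vs. flat.
    def to_flat(person):
        return {'name': person['first'] + ' ' + person['last']}

    # The inverse is ambiguous: where does the first name end?
    def from_flat(flat):
        first, _, last = flat['name'].partition(' ')
        return {'first': first, 'last': last}

    p = {'first': 'Mary Jane', 'last': 'Watson'}
    print(from_flat(to_flat(p)))
    # {'first': 'Mary', 'last': 'Jane Watson'} -- not what we started with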

On Wed, Oct 3, 2012 at 11:34 AM, Paul Homer <paul_ho...@yahoo.ca> wrote:

> A bit long, but ...
>
> The way most people think about programming is that they are writing
> 'code'. As a lesser side-effect, that code is slinging around data. It
> grabs it from the user, throws it into memory and then if it is interesting
> data, it writes it to disk so that it can be looked at or edited later. The
> code is the primary thing they are creating, while the data is just a
> side-effect of using that code.
>
> Way back I got introduced to seeing it the other way around. Data is
> everything. It's what the user types in, which is moved into some
> data-structures in memory and then is eventually restructured for
> persistence to be stored for later usage. Data sometimes contains 'static
> linkages', that is, one datum points to another explicitly. Sometimes the
> linkages are dynamic. A piece of code has to be run to make the connection
> between the data. In this perspective, code is nothing more than dynamic
> linkages or transformations between data-structures/formats (one could
> see the average of a bunch of floats, for example, as a transformation
> into a simpler summary of the original data). The system is really just a
> massive flow of data, while the code is just what helps it get from place
> to place.
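>
> A minimal sketch of that perspective (Python; the names are invented
> for illustration):
>
>     # A transformation: list-of-floats -> float. Data in, data out.
>     def average(xs):
>         return sum(xs) / len(xs)
>
>     # A 'dynamic linkage': code that connects data that is not
>     # statically linked.
>     def in_group(items, key):
>         return [i['value'] for i in items if i['group'] == key]
>
>     readings = [{'group': 'a', 'value': 1.5},
>                 {'group': 'a', 'value': 2.5},
>                 {'group': 'b', 'value': 9.0}]
>     print(average(in_group(readings, 'a')))  # 2.0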
>
> In the second perspective, an inventory system allows the data to flow
> from the users to the persistence medium. Sometimes the users need the data
> to flow back to them again, possibly summarized, or just for re-editing.
> The core of the system holds very simple data, basically a series of
> physical items, each with many associated properties and probably a bunch
> of cross-relationships. The underlying types, properties and relationships
> form a model of the data. For our modern systems that model might be
> implemented as a relational schema, but it could also be something more
> exotic, like NoSQL.
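>
> Written down explicitly, such a model might be as small as this
> (a Python sketch with made-up fields):
>
>     from dataclasses import dataclass, field
>
>     @dataclass
>     class Item:
>         name: str
>         properties: dict = field(default_factory=dict)
>         related: list = field(default_factory=list)  # other item names
>
>     widget = Item('widget', {'count': 12, 'price': 3.50},
>                   related=['bolt'])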
>
> In this sort of system, if the model were stored explicitly in the
> persistence layer and were simple enough that the users could do data entry
> directly on a flat representation of it on the screen, then the whole
> system would be as simple as flinging the data back and forth between the
> disks and the screen. However as we all know, systems are never this
> trivial in the real world.
>
> Users need to navigate to specific data, and they often want the computer
> to fill in any 'global context information' for them as they move around.
> As well, they generally enter data in a simplified format, store the data
> in another, and then want a third way to view it. All of this amounts to a
> series of transformations happening to the data as it flows back and forth.
> Some transformations are simple, such as displaying a floating point number
> as a string truncated to some level of precision. Some are very complex,
> such as displaying a report that cross-checks the inventory to detect data
> data or real-life problems. But all of the things on the screen are either
> directly data, or algorithmic transformations of the existing data.
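>
> Both ends of that spectrum are still just functions over the data
> (a sketch; the names are illustrative):
>
>     # Simple: float -> string, truncated to some precision.
>     def display_price(x, precision=2):
>         return f'{x:.{precision}f}'
>
>     # Complex: the whole inventory -> a report of suspect entries.
>     def flag_negative_counts(inventory):
>         return [name for name, count in inventory.items() if count < 0]
>
>     print(display_price(3.14159))                         # '3.14'
>     print(flag_negative_counts({'bolt': 40, 'nut': -2}))  # ['nut']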
>
> As for programming, this type of system could be built by first specifying
> the model. Added to this would be a series of transformations, each
> basically a black box that specifies a set of inputs and a set of outputs.
> With the model and the transformations, someone could lay out a series of
> screens for the users (or power users could do it themselves). The
> underlying kernel of the system would then take requests for the screens
> and use that to work out the flow from or to the database. One could
> generalize this a bit further by ignoring any difference between the screen
> and the disks, and just thinking of them as a generalized 'context' of some
> type.
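>
> One toy way such a kernel could work out a flow (every name and format
> below is invented): search over the declared inputs and outputs of the
> registered transformations.
>
>     # Each transformation declares its input and output 'format'.
>     TRANSFORMS = [
>         ('db_row', 'record',
>          lambda row: dict(zip(('name', 'price'), row))),
>         ('record', 'display',
>          lambda r: '%s: %.2f' % (r['name'], r['price'])),
>     ]
>
>     def find_flow(have, want):
>         # Breadth-first search over formats; returns a function chain.
>         paths, frontier = {have: []}, [have]
>         while frontier:
>             fmt = frontier.pop(0)
>             if fmt == want:
>                 return paths[fmt]
>             for src, dst, fn in TRANSFORMS:
>                 if src == fmt and dst not in paths:
>                     paths[dst] = paths[fmt] + [fn]
>                     frontier.append(dst)
>         return None
>
>     value = ('widget', 3.5)
>     for fn in find_flow('db_row', 'display'):
>         value = fn(value)
>     print(value)  # widget: 3.50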
>
> What I like about this idea is that once someone creates a model, it can
> be re-used as is, elsewhere. Gradually industries will build up common
> models (with less being secret). And as they add billions of little
> transformations, these too can be shared. The kernel (if it is possible to
> actually write one :-) only needs to exist once. Then all that remains is
> for people to toss screens together as they need them (this part of
> programming is likely to never be static). As for performance, once a flow
> has been established, it would be possible to store and reuse any static
> data or transformation sequences, and that auto-optimization would only
> exist in the kernel so it could focus precisely on what provides the best
> results.
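>
> The caching itself could be as simple as memoizing resolved flows
> (sketch only; find_flow is the toy resolver from the earlier sketch):
>
>     _flows = {}
>
>     def cached_flow(have, want):
>         # Work out a flow once, then serve the stored chain.
>         if (have, want) not in _flows:
>             _flows[(have, want)] = find_flow(have, want)
>         return _flows[(have, want)]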
>
> In a grand sense, you can see everything on the screen -- even little
> rounded corners, images and gadgets -- as just data that has flowed there
> from the disk somewhere (or network :-). The transformations behind
> something like a windowing system can appear daunting, but we know that
> they all started life as data somewhere that moved and bounced through a
> huge number of different data-structures, until finally ending up as a set
> of bits toggled in a screen buffer.
>
> The on-going work to enhance the system would consist of modeling data
> and creating transformations. In comparison to modern software development,
> these would be very little pieces, and if they were shared, they would be
> intrinsically reusable (and recombinable).
>
> So I'd basically go backwards :-) No higher abstractions and bigger
> pieces, but rather a sea of very little ones. It would be fun to try :-)
>
>
> Paul.
>
>   ------------------------------
> *From:* Loup Vaillant <l...@loup-vaillant.fr>
> *To:* Paul Homer <paul_ho...@yahoo.ca>; Fundamentals of New Computing <
> fonc@vpri.org>
> *Sent:* Wednesday, October 3, 2012 11:10:41 AM
>
> *Subject:* Re: [fonc] How it is
>
> De : Paul Homer <paul_ho...@yahoo.ca>
>
> > If instead, programmers just built little pieces, and it was the
> > computer itself that was responsible for assembling it all together into
> > mega-systems, then we could reach scales that are unimaginable today.
> > […]
>
> Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
> I have no idea what assembling mechanisms could be used.  Could you
> sketch a trivial example?
>
> Loup.
>


-- 
bringing s-words to a pen fight