Sure, but isn't my wheel rounder :-)

Of course, from my quick peek Linda still seems somewhat code-oriented:

"Linda is a model of coordination and communication among several 
parallel processes operating upon objects stored in and retrieved from 
shared, virtual, associative memory." 

If I built the full-monty version of what I am thinking, the idea of 
'processes' would basically disappear. How the flow gets established, and how 
many computers are actually involved, are really implementation details. The 
data would flow from context to context with essentially one 'thread of 
control' per flow, but that 'thread' could span many machines (or even break 
off into some horrific number of sub-threads if the kernel could establish that 
it was dealing with independent sub-flows). Because it all starts with a user, I 
suspect that it should be a rather more tangible subset of a distributed system 
(for background work, one could just fake out a virtual user and then throw 
away the output).
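
To make that a bit more concrete, here is a very rough Python sketch of a 'flow' 
as one thread of control hopping from context to context. Everything in it 
(Context, run_flow, the hop list) is invented purely to illustrate the shape, not 
an actual design:

    from typing import Any, Callable

    class Context:
        """A place data can sit: a screen, a disk buffer, another machine."""
        def __init__(self, name: str):
            self.name = name
            self.latest: Any = None

    def run_flow(data: Any, hops: list[tuple[Context, Callable[[Any], Any]]]) -> Any:
        # One 'thread of control': push the data through each context in turn,
        # applying that context's transformation. Which machine a given context
        # actually lives on would be the kernel's problem, not the programmer's.
        # e.g. user input -> validation -> persistence, as three hops.
        for ctx, transform in hops:
            data = transform(data)
            ctx.latest = data
        return data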


Paul.





>________________________________
> From: Alan Moore <[email protected]>
>To: Fundamentals of New Computing <[email protected]> 
>Sent: Wednesday, October 3, 2012 4:01:36 PM
>Subject: Re: [fonc] How it is
> 
>
>Paul,
>
>This sounds a little like Linda and TupleSpaces... what was that you were 
>saying about re-inventing the wheel over and over?
>
>
>LOL...
>
>
>Alan Moore
>
>
>
>
>
>
>
>On Wed, Oct 3, 2012 at 11:34 AM, Paul Homer <[email protected]> wrote:
>
>A bit long, but ...
>>
>>
>>
>>The way most people think about programming is that they are writing 'code'. 
>>As a lesser side-effect, that code is slinging around data. It grabs it from 
the user, throws it into memory, and then, if it is interesting data, writes it 
to disk so that it can be looked at or edited later. The code is the primary 
thing they are creating, while the data is just a side-effect of using that code.
>>
>>
>>
>>Way back I got introduced to seeing it the other way around. Data is 
everything. It's what the user types in, which is moved into some 
data-structures in memory and then eventually restructured for persistence, to 
be stored for later usage. Data sometimes contains 'static linkages', that is, 
one datum points to another explicitly. Sometimes the linkages are dynamic: a 
piece of code has to be run to make the connection between the data. In this 
perspective, code is nothing more than dynamic linkages or transformations 
between data-structures/formats (one could see the average of a bunch of floats, 
for example, as a transformation into a simpler summary of the original data). 
The system is really just a massive flow of data, while the code is just what 
helps it get from place to place.
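>>
>>Just to make that concrete, a tiny Python sketch (the function name and shapes 
are purely illustrative): the 'code' here is nothing but a mapping from one data 
shape to a simpler one.

    def average(readings: list[float]) -> float:
        # Transform a collection of floats into a one-number summary of them.
        return sum(readings) / len(readings) if readings else 0.0

    summary = average([1.5, 2.0, 2.5])   # the 'flow': data in, simpler data out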
>>
>>
>>In the second perspective, an inventory system allows the data to flow from 
the users to the persistence medium. Sometimes the users need the data to flow 
back to them again, possibly summarized, or just for re-editing. The core of the 
system holds very simple data, basically a series of physical items, each with 
many associated properties and probably a bunch of cross-relationships. The 
underlying types, properties and relationships form a model of the data. For our 
modern systems that model might be implemented as a relational schema, but it 
could also be something more exotic, like NoSQL. 
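>>
>>As a rough sketch (the entity and field names are invented just for 
illustration), that model could itself be held as explicit data, and the same 
description could then be rendered as a relational schema or as a NoSQL layout:

    inventory_model = {
        "Item": {
            "properties": {"sku": "string", "name": "string", "quantity": "int"},
            "relationships": {"located_in": "Warehouse", "supplied_by": "Supplier"},
        },
        "Warehouse": {
            "properties": {"code": "string", "address": "string"},
            "relationships": {},
        },
        "Supplier": {
            "properties": {"name": "string", "contact": "string"},
            "relationships": {},
        },
    }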
>>
>>
>>
>>In this sort of system, if the model were stored explicitly in the 
persistence and it were simple enough that the users could do data entry 
directly on a flat representation of it on the screen, then the whole 
system would be as simple as flinging the data back and forth between 
the disks and the screen. However, as we all know, systems are never this 
trivial in the real world. 
>>
>>
>>
>>Users need to navigate to specific data, and they often want the computer to 
fill in any 'global context information' for them as they move around. 
As well, they generally enter data in one simplified format, store it in 
another, and then want a third way to view it. All of this amounts to a series 
of transformations happening to the data as it flows back and forth. Some 
transformations are simple, such as displaying a floating-point number as a 
string truncated to some level of precision. Some are very complex, such as 
displaying a report that cross-checks the inventory to flag problems in the 
data or in the real world. But all of the things on the screen are either 
directly data, or algorithmic transformations of the existing data.
>>
>>
>>
>>As for programming, this type of system could be built by first specifying 
>>the model. Added to this would be a series of transformations, each basically 
a black box that specifies a set of inputs and a set of outputs. With the model 
and the transformations, someone could lay out a series of screens for the users 
(or power users could do it themselves). The underlying kernel of the system 
would then take requests for the screens and use those to work out the flow 
from or to the database. One could generalize this a bit further by ignoring any 
difference between the screen and the disks, and just thinking of both as a 
generalized 'context' of some type. 
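>>
>>A toy Python sketch of what that kernel might do (the registry, the type names 
and the search are all hypothetical): each transformation declares what it 
consumes and produces, and the kernel chains them together to satisfy whatever a 
screen asks for.

    from typing import Any, Callable

    # (input_kind, output_kind, black-box function)
    transforms: list[tuple[str, str, Callable[[Any], Any]]] = []

    def register(inp: str, out: str, fn: Callable[[Any], Any]) -> None:
        transforms.append((inp, out, fn))

    def resolve(have: str, want: str, path=None):
        # Depth-first search for a chain of transformations from 'have' to 'want'.
        path = path or []
        if have == want:
            return path
        for inp, out, fn in transforms:
            if inp == have and (inp, out, fn) not in path:
                found = resolve(out, want, path + [(inp, out, fn)])
                if found is not None:
                    return found
        return None

    # e.g. persisted rows -> per-warehouse counts -> text for the screen
    register("item_rows", "counts_by_warehouse", lambda rows: rows)          # stand-in bodies
    register("counts_by_warehouse", "screen_text", lambda counts: str(counts))
    flow = resolve("item_rows", "screen_text")   # the kernel-derived flow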
>>
>>
>>What I like about this idea is that once someone creates a model, it can be 
re-used as is, elsewhere. Gradually industries will build up common models 
(with less being secret). And as they add billions of little 
transformations, these too can be shared. The kernel (if it is possible 
to actually write one :-) only needs to exist once. Then all that 
remains is for people to toss screens together as they need them (this 
part of programming is likely never to be static). As for performance, once a 
flow has been established, it would be possible to store and reuse any 
static data or transformation sequences, and that auto-optimization 
would exist only in the kernel, so it could focus precisely on what 
provides the best results.
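>>
>>And a sketch of that auto-optimization living only in the kernel (again purely 
illustrative, building on the toy resolve() above): once a flow has been worked 
out, cache it, so later requests for the same screen reuse the established 
sequence instead of re-deriving it.

    _flow_cache: dict = {}

    def cached_resolve(have: str, want: str):
        # Reuse a previously established flow instead of working it out again.
        key = (have, want)
        if key not in _flow_cache:
            _flow_cache[key] = resolve(have, want)
        return _flow_cache[key]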
>>
>>In a grand sense, you can see everything on the screen -- even little rounded 
>>corners, images and gadgets -- as just data that has flowed there from the 
>>disk somewhere (or network :-). The transformations behind something like a 
>>windowing system can appear daunting, but we know that they all started life 
>>as data somewhere that moved and bounced through a huge number of different 
>>data-structures, until finally ending up as a set of bits toggled in a screen 
>>buffer.
>>
>>The on-going work to enhance the system would consist of modeling data and 
>>creating transformations. In comparison to modern software development, these 
>>would be very little pieces, and if they were shared, they would be 
>>intrinsically reusable (and recombinable).
>>
>>So I'd basically go backwards :-) No higher abstractions and bigger pieces, 
>>but rather a sea of very little ones. It would be fun to try :-)
>>
>>
>>Paul.
>>
>>
>>
>>>________________________________
>>> From: Loup Vaillant <[email protected]>
>>>To: Paul Homer <[email protected]>; Fundamentals of New Computing 
>>><[email protected]> 
>>>Sent: Wednesday, October 3, 2012 11:10:41 AM
>>>
>>>Subject: Re: [fonc] How it is
>>> 
>>>
>>>De : Paul Homer <[email protected]>
>>>
>>>> If instead, programmers just built little pieces, and it was the
>>>> computer itself that was responsible for assembling it all together into
>>>> mega-systems, then we could reach scales that are unimaginable today.
>>>> […]
>>>
>>>Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
>>>I have no idea what assembling mechanisms could be used.  Could you
>>>sketch a trivial example?
>>>
>>>Loup.
>>>
>>>
>>>
>>>
>>
>>
>
>
>
>
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
