Don't get too handwavy about the performance of the algorithm before you've
implemented it!  The technique I'm using is definitely a search. The search
is performed by the linker, which includes a constraint solver with
exponential-time worst-case performance. This works out in practice because:

   - I can memoize or learn (via machine learning) working solutions or
   sub-solutions (a small sketch follows this list)
   - I can favor stability and incrementally update a solution in the face
   of change
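
To make the first point concrete, here is a minimal sketch (hypothetical
names throughout, not the actual linker) of backward-chaining over typed
transforms with memoized sub-solutions, so repeated sub-goals in the
worst-case-exponential search are only solved once:

    import qualified Data.Map as M
    import Control.Monad.State

    -- Hypothetical model: a transform converts one data type into another.
    type Ty = String
    data Transform = Transform { tIn :: Ty, tOut :: Ty, tName :: String }

    -- Memo table: goal type -> cached sub-solution (Nothing = known
    -- unsolvable, or currently in progress, which also cuts cycles).
    type Memo = M.Map Ty (Maybe [String])

    -- Backward-chain from the goal type toward the available source types,
    -- caching each sub-goal's result as we go.
    solve :: [Transform] -> [Ty] -> Ty -> State Memo (Maybe [String])
    solve xforms sources goal
      | goal `elem` sources = return (Just [])
      | otherwise = do
          memo <- get
          case M.lookup goal memo of
            Just cached -> return cached       -- reuse a memoized sub-solution
            Nothing -> do
              modify (M.insert goal Nothing)   -- mark in progress (cycle guard)
              result <- tryEach [t | t <- xforms, tOut t == goal]
              modify (M.insert goal result)    -- memoize for later queries
              return result
      where
        tryEach [] = return Nothing
        tryEach (t:ts) = do
          sub <- solve xforms sources (tIn t)
          case sub of
            Just path -> return (Just (path ++ [tName t]))
            Nothing   -> tryEach ts

Something like evalState (solve ts ["Bitmap"] "PNG") M.empty then yields one
chain of transform names. A real linker would score alternatives rather than
take the first hit, but the memoization shape is the same.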

One concern I continuously return to is the apparent conflict of stability
vs. determinism.

Suppose the available components and the context can both vary over time;
then, from minute to minute, the "best" configuration can change. How do you
propose to handle this? Do you select the best
configuration when the developer hits a button? Or when a user pushes a
button? Do you reactively adapt the configuration to the resources
available in the context? In the latter case, do you favor stability (which
resources are selected) or do you favor quality (the "best" result at a
given time)? How do you modulate between the two?

I've been exploring some of these issues with respect to stateless stability
(http://awelonblue.wordpress.com/2012/03/14/stability-without-state/) and
potential composition of state with stateless stability.

I agree that configuration should be represented as a resource, but often
the configuration problem is a configurations problem, i.e. plural, more
than one configuration. You'll need to modularize your contexts quite a
bit, which will return you to the issue of modeling access to contexts...
potentially as yet another dataflow.
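
Concretely (again, just a sketch with made-up names), "configurations,
plural" could be as simple as treating each context's configuration as its
own resource and resolving settings per context, with a shared fallback:

    import qualified Data.Map as M
    import Control.Applicative ((<|>))

    -- Hypothetical: each context carries its own key/value configuration.
    type Context = String
    type Config  = M.Map String String

    newtype Configs = Configs (M.Map Context Config)

    -- Resolve a setting for a particular context, falling back to a shared
    -- "default" context when the specific one doesn't say anything.
    lookupSetting :: Configs -> Context -> String -> Maybe String
    lookupSetting (Configs cs) ctx key =
          (M.lookup ctx cs >>= M.lookup key)
      <|> (M.lookup "default" cs >>= M.lookup key)

Access to the Configs value itself then becomes one more dataflow, which is
where the modularized-contexts question comes back in.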

Anyhow, your vision seems young. It isn't the same as mine, but I don't
want to discourage you. Start hammering it out and try a prototype
implementation.

Regards,

Dave

On Thu, Oct 4, 2012 at 11:22 AM, Paul Homer <[email protected]> wrote:

> That's a pretty good summary, but I'd avoid calling the 'glue' a search.
> If this were to work, it would be a deterministic algorithm that chose the
> best possible match given a set of input and output data-types (and avoided
> N^2 or higher processing).
>
> Given my user-screen based construction, it would be easy to do something
> like add a hook to display the full set of transformations used to go from
> the persistent context to the user one. Click on any data and get the full
> list. I see the contexts as more or less containing the raw data*, and the
> transformations as sitting outside of that in the kernel (although they
> could be primed from a context, like any other data). I would expect the
> users might acquire many duplicate transformations that were only partially
> overlapping, so perhaps from any display, they could fiddle with the
> precedences.
>
> * Systems often need performance boosts at the cost of some other
> trade-off. For this, I could see specialized contexts that are basically
> pre-calculated derived data or caches of other contexts. Basically any sort
> of memoization could be encapsulated into its own context, leaving the
> original data in a raw state.
>
> Configuration would again be data, grabbed from a context somewhere, as
> well as all of the presentation and window-dressing. Given that the user
> starts in a context (basically their home screen), they would always be
> rooted in some way. Another logical extension would be for the data to be
> 'things' that reference other data in other contexts (recursively), in the
> same way that the web-based technologies work.
>
> Three other points I think worth noting are:
>
> - All of the data issues (like ACID) are encapsulated within the 'context'.
> - All of the flow issues (like distributed and concurrency) are
> encapsulated within the kernel.
> - All of the formatting and domain issues are encapsulated within the
> transformations.
>
> That would make it fairly easy to know where to place or find any of the
> technical or domain issues.
>
>
> Paul.
>
>   ------------------------------
> *From:* David Barbour <[email protected]>
> *To:* Paul Homer <[email protected]>; Fundamentals of New Computing <
> [email protected]>
> *Sent:* Wednesday, October 3, 2012 7:10:53 PM
>
> *Subject:* Re: [fonc] How it is
>
> Distilling what you just said to its essence:
>
>    - humans develop miniature dataflows
>    - search algorithm automatically glues flows together
>    - search goal is a data type
>
> A potential issue is that humans - both engineers and end-users - will
> often want a fair amount of control over which translations and data
> sources are used, options for those translations, etc.. You need a good way
> to handle preferences, policy, configurations.
>
> I tend to favor soft constraints in those roles. I'm actually designing a
> module system around the idea, and an implementation in Haskell (for RDP)
> using the plugins system and dynamic types. (Related:
> http://awelonblue.wordpress.com/2011/09/29/modularity-without-a-name/ ,
> http://awelonblue.wordpress.com/2012/04/12/make-for-haskell-values-part-alpha/
> ).
>
> Regards,
>
> Dave
>
> On Wed, Oct 3, 2012 at 3:33 PM, Paul Homer <[email protected]> wrote:
>
> I'm in a slightly different head-space with this idea.
>
> A URL, for instance, is essentially an encoded set of instructions for
> navigating to somewhere and then if it is a GET, grabbing the associated
> data, let's say an image. If my theoretical user were to create a screen
> (or perhaps we could call it a visual context), they'd just drag-and-drop
> an image-type into the position they desired. They'd have to have some way
> of tying that to 'which image', but for simplicity let's just say that they
> already created something that allows them to search, and then list all of
> the images from a known database context, so that the 'which image' is
> cascaded down from their earlier work. Once they 'made the screen live' and
> searched and selected, the underlying code would essentially get a request
> for a data flow that specified the context (location), some 'type'
> information (an image) and a context-specific instance id (as passed in
> from the search and list). The kernel would then arrange for that data to
> be moved from wherever it is (local or remote, but let's go with remote)
> and converted (if its base format was something the user's screen couldn't
> handle, say a custom bitmap). So along the way there might be a translation
> from one image format to another, and perhaps a 'compress and decompress'
> if the source is remote.
>
> That whole flow wouldn't be constructed by a programmer, just the
> translations, say bitmap->png, bits->compressed and compressed->bits. The
> kernel would work backwards, knowing that it needed an image in png format,
> and knowing that there exists base data stored in another context as a
> bitmap, and knowing that for large data it is generally cheaper to
> compress/decompress if the network is involved. The kernel would
> essentially know the absolute minimum about the flow, and thus could
> algorithmically decide on the optimal amount of work.
>
> For most basic systems, for most data, once the user navigated into
> something it's just a matter of shifting the data. I've done an end-run
> around any of the processing issues by dumping them into the
> kernel. From your list, scatter-gather, queries and views, etc. are all
> left up to the translations. Incremental is just having the model in the
> context handle updates. ACID is a property of the context.
>
> I haven't given any real thought to issues like pulls or bi-directionality,
> but I think that the screen would just send a flow back to the original
> context in an observer style pattern associated with the raw pre-translated
> data. If any of that changed in the context, the screen would redo any
> 'dirty' flows, but that might not be a workable approach for millions of
> users watching the same data.
>
> The crux of this (crazy) idea is really that the full intelligence
> necessary for moving the data about and playing with it is highly
> fragmented. Programmers don't have to write massive intelligent sets of
> instructions; they just have to know how data goes from one format to
> another. They can do their thing in small bits and pieces and be as
> organized or inconsistent as they like. The system comes together from the
> intelligence embedded in the kernel, but the kernel isn't concerned with
> what are essentially domain or data issues. It's all just bits that are on
> their way from one place to another, and translations that are required
> along the way. Most of the code-specific issues like security melt away
> (you have access to a context or you don't) mostly because the linkage
> between the user and data is under control of just one single (distributed)
> program.
>
>
> Paul.
>
>   ------------------------------
> *From:* David Barbour <[email protected]>
>
> *To:* Paul Homer <[email protected]>; Fundamentals of New Computing <
> [email protected]>
> *Sent:* Wednesday, October 3, 2012 5:27:12 PM
>
> *Subject:* Re: [fonc] How it is
>
> Your idea of "first specifying the model... then adding translations" can
> be made simpler and more uniform, btw, if you treat acquiring initial data
> (the model) as a "translation" between, say, a URL or query and the result.
>
> If you're interested in modeling computation as continuous synchronization
> of bidirectional views between data models, you would probably be
> interested in RDP (
> https://github.com/dmbarbour/Sirea/blob/master/README.md).
>
> Though, reuse of data models is necessarily more sophisticated than you
> are imagining. There are many subtle and challenging issues in any
> conversion between data models.  I discuss a few such issues here: (
> http://awelonblue.wordpress.com/2011/06/15/data-model-independence/)
>
>
>
>
> On Wed, Oct 3, 2012 at 11:34 AM, Paul Homer <[email protected]> wrote:
>
> A bit long, but ...
>
> The way most people think about programming is that they are writing
> 'code'. As a lesser side-effect, that code is slinging around data. It
> grabs it from the user, throws it into memory and then if it is interesting
> data, it writes it to disk so that it can be looked at or edited later. The
> code is the primary thing they are creating, while the data is just a
> side-effect of using that code.
>
> Way back I got introduced to seeing it the other way around. Data is
> everything. It's what the user types in, which is moved into some
> data-structures in memory and then is eventually restructured for
> persistence to be stored for later usage. Data sometimes contains 'static
> linkages'; that is, one datum points to another explicitly. Sometimes the
> linkages are dynamic. A piece of code has to be run to make the connection
> between the data. In this perspective, code is nothing more than dynamic
> linkages or transformations between data-structures/formats (one could see
> the average of a bunch of floats, for example, as a transformation to a more
> simplified summary of the original data). The system is really just a
> massive flow of data, while the code is just what helps it get from place
> to place.
>
> In the second perspective, an inventory system allows the data to flow
> from the users to the persistence medium. Sometimes the users need the data
> to flow back to them again, possibly summarized, or just for re-editing.
> The core of the system holds very simple data, basically a series of
> physical items, each with many associated properties and probably a bunch
> of cross-relationships. The underlying types, properties and relationships
> form a model of the data. For our modern systems that model might be
> implemented as a relational schema, but it could also be more exotic like
> NoSQL.
>
> In this sort of system, if the model were stored explicitly in the
> persistence and it is simple enough that the users could do data entry
> directly on a flat representation of it on the screen, then the whole
> system would be as simple as flinging the data back and forth between the
> disks and the screen. However as we all know, systems are never this
> trivial in the real world.
>
> Users need to navigate to specific data, and they often want the computer
> to fill in any 'global context information' for them as they move around.
> As well, they generally enter data in a simplified format, store the data
> in another, and then want a third way to view it. All of this amounts to a
> series of transformations happening to the data as it flows back and forth.
> Some transformations are simple, such as displaying a floating point number
> as a string truncated to some level of precision. Some are very complex,
> such as displaying a report that cross-checks the inventory to determine
> data or real-life problems. But all of the things on the screen are either
> directly data, or algorithmic transformations of the existing data.
>
> As for programming, this type of system could be built by first specifying
> the model. To add to this would be a series of transformations, each
> basically a black box that specifies a set of inputs and a set of outputs.
> With the model and the transformations, someone could lay out a series of
> screens for the users (or power users could do it themselves). The
> underlying kernel of the system would then take requests for the screens
> and use that to work out the flow from or to the database. One could
> generalize this a bit further by ignoring any difference between the screen
> and the disks, and just thinking of them as a generalized 'context' of some
> type.
>
> What I like about this idea is that once someone creates a model, it can
> be re-used as is, elsewhere. Gradually industries will build up common
> models (with less being secret). And as they add billions of little
> transformations, these too can be shared. The kernel (if it is possible to
> actually write one :-) only needs to exist once. Then all that remains is
> for people to toss screens together as they need them (this part of
> programming is likely to never be static). As for performance, once a flow
> has been established, it would be possible to store and reuse any static
> data or transformation sequences, and that auto-optimization would only
> exist in the kernel so it could focus precisely on what provides the best
> results.
>
> In a grand sense, you can see everything on the screen -- even little
> rounded corners, images and gadgets -- as just data that has flowed there
> from the disk somewhere (or network :-). The transformations behind
> something like a windowing system can appear daunting, but we know that
> they all started life as data somewhere that moved and bounced through a
> huge number of different data-structures, until finally ending up as a set
> of bits toggled in a screen buffer.
>
> The on-going work to enhance the system would consist of modeling data
> and creating transformations. In comparison to modern software development,
> these would be very little pieces, and if they were shared they would be
> intrinsically reusable (and recombinable).
>
> So I'd basically go backwards :-) No higher abstractions and bigger
> pieces, but rather a sea of very little ones. It would be fun to try :-)
>
>
> Paul.
>
>   ------------------------------
> *From:* Loup Vaillant <[email protected]>
> *To:* Paul Homer <[email protected]>; Fundamentals of New Computing <
> [email protected]>
> *Sent:* Wednesday, October 3, 2012 11:10:41 AM
>
> *Subject:* Re: [fonc] How it is
>
> From: Paul Homer <[email protected]>
>
> > If instead, programmers just built little pieces, and it was the
> > computer itself that was responsible for assembling it all together into
> > mega-systems, then we could reach scales that are unimaginable today.
> > […]
>
> Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
> I have no idea what assembling mechanisms could be used.  Could you
> sketch a trivial example?
>
> Loup.
>


-- 
bringing s-words to a pen fight
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
