"Anyhow, your vision seems young. It isn't the same as mine, but I don't 
want to discourage you."

Indeed. It's right out on the extreme edge; as data-centric as I can imagine. I 
wasn't really putting it out there with the intent to build it, but rather as 
just an example of heading away from the crowds. We often implicitly go towards 
finding higher and higher abstractions that can be used as a toolset for 
programmers building larger systems. This is the way I build commercial 
products right now, and it is also the way our languages and tools have 
evolved. But this always runs into at least two problems: a) higher 
abstractions are harder to learn, and b) abstractions can often be leaky. The 
first problem is exemplified by APL: in the hands of a master, I've seen 
amazing systems built rapidly, but it isn't the easiest language to learn, and 
some people never seem to get it. The second problem was described quite well 
by Joel Spolsky, but I've never really been sure whether it is avoidable 
in some way. 


I don't really know if going this other way is workable, but sometimes it's 
just fun to explore the edges. These days I'm busy paying off the mortgage, 
writing, playing with math, traveling (not enough) and generally trying to keep 
my very old house (1904) from falling down, so it's unlikely that I'll get a 
chance to play around here in the near (<100 years) future.


Paul.




>________________________________
> From: David Barbour <dmbarb...@gmail.com>
>To: Paul Homer <paul_ho...@yahoo.ca>; Fundamentals of New Computing 
><fonc@vpri.org> 
>Sent: Thursday, October 4, 2012 4:12:34 PM
>Subject: Re: [fonc] How it is
> 
>
>Don't get too handwavy about performance of the algorithm before you've 
>implemented it!  The technique I'm using is definitely a search. The search is 
>performed by the linker, which includes a constraint solver with 
>exponential-time worst-case performance. This works out in practice because:
>       * I can memoize or learn (machine learning) working solutions or 
> sub-solutions
>       * I can favor stability and incrementally update a solution in the 
> face of change
>
>One concern I continuously return to is the apparent conflict of 
>stability vs. determinism. 
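As a minimal sketch of that memoized search (the component names, types, and registry shape here are my own illustration, not anything from the thread), a tiny linker-style solver that caches working sub-solutions:

```python
from functools import lru_cache

# Hypothetical component registry: name -> (input types, output type).
COMPONENTS = {
    "parse":  (("Text",), "Tree"),
    "typeck": (("Tree",), "TypedTree"),
    "emit":   (("TypedTree",), "Code"),
}

@lru_cache(maxsize=None)  # memoize working solutions and sub-solutions
def solve(goal, have):
    """Return a tuple of component names producing `goal` from the
    frozenset of types in `have`, or None if no plan exists.
    Worst case is still exponential, but the cache makes repeated
    (sub-)queries cheap. Assumes the component graph is acyclic."""
    if goal in have:
        return ()
    for name, (ins, out) in COMPONENTS.items():
        if out == goal:
            plan = ()
            for i in ins:
                sub = solve(i, have)
                if sub is None:
                    break
                plan += sub
            else:
                return plan + (name,)
    return None
```

For example, solve("Code", frozenset({"Text"})) yields the pipeline ("parse", "typeck", "emit"), and any later query that shares sub-goals reuses the cached sub-plans.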
>
>
>Suppose the available components and the context can both vary over time. 
>Therefore, over time (e.g. minute to minute), the "best" configuration can 
>change. How do you propose to handle this? Do you select the best 
>configuration when the developer hits a button? Or when a user pushes a 
>button? Do you reactively adapt the configuration to the resources available 
>in the context? In the latter case, do you favor stability (which resources 
>are selected) or do you favor quality (the "best" result at a given time)? How 
>do you modulate between the two?
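One way to picture that stability/quality dial (purely a sketch; the switching margin is my own invention, not David's design):

```python
# Sketch: keep the current configuration unless a challenger beats it
# by a switching margin. margin=0 always chases the "best" result
# (pure quality); a large margin sticks with what already works
# (stability). Modulating between the two is just tuning the margin.

def choose(current, candidates, quality, margin=0.2):
    best = max(candidates, key=quality)
    if current is None or current not in candidates:
        return best  # no usable incumbent: take the best available
    if quality(best) > quality(current) + margin:
        return best  # the improvement is worth the churn
    return current   # hysteresis: stay put

quality = {"local": 0.5, "lan": 0.6, "cloud": 0.9}.get
```

With margin=0.2, a configuration scoring 0.6 does not displace an incumbent at 0.5, but one scoring 0.9 does; the margin is what keeps the system from thrashing minute to minute.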
>
>
>I've been exploring some of these issues with respect to stateless stability 
>(http://awelonblue.wordpress.com/2012/03/14/stability-without-state/) and 
>potential composition of state with stateless stability. 
>
>
>I agree that configuration should be represented as a resource, but often the 
>configuration problem is a configurations problem, i.e. plural, more than one 
>configuration. You'll need to modularize your contexts quite a bit, which will 
>return you to the issue of modeling access to contexts... potentially as yet 
>another dataflow. 
>
>
>Anyhow, your vision seems young. It isn't the same as mine, but I don't want 
>to discourage you. Start hammering it out and try a prototype implementation.
>
>
>Regards,
>
>
>Dave
>
>
>On Thu, Oct 4, 2012 at 11:22 AM, Paul Homer <paul_ho...@yahoo.ca> wrote:
>
>>That's a pretty good summary, but I'd avoid calling the 'glue' a search. If 
>>this were to work, it would be a deterministic algorithm that chooses the 
>>best possible match given a set of input and output data-types (and avoids 
>>N^2 or higher processing). 
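That deterministic match could be as simple as indexing transformations by their type signature (a hypothetical sketch, not Paul's design):

```python
# Sketch: transformations keyed by (input type, output type), so the
# 'glue' is a dictionary lookup rather than a pairwise O(N^2) search.
# When duplicates overlap, the highest precedence wins, keeping the
# choice deterministic.

registry = {}

def register(src, dst, fn, precedence=0):
    cur = registry.get((src, dst))
    if cur is None or precedence > cur[0]:
        registry[(src, dst)] = (precedence, fn)

def transform(value, src, dst):
    _, fn = registry[(src, dst)]  # O(1), deterministic
    return fn(value)

register("Celsius", "Fahrenheit", lambda c: c * 9 / 5 + 32)
```

Registering a lower-precedence duplicate leaves the chosen transformation unchanged, which is where the user-adjustable precedences mentioned below would plug in.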
>>
>>
>>
>>Given my user-screen based construction, it would be easy to do something 
>>like add a hook to display the full set of transformations used to go from 
>>the persistent context to the user one. Click on any data and get the full 
>>list. I see the contexts as more or less containing the raw data*, and the 
>>transformations as sitting outside of that in the kernel (although they could 
>>be primed from a context, like any other data). I would expect that users 
>>might acquire many duplicate transformations that only partially overlap, so 
>>perhaps from any display, they could fiddle with the precedences. 
>>
>>
>>
>>* Systems often need performance boosts at the cost of some other trade-off. 
>>For this, I could see specialized contexts that are basically pre-calculated 
>>derived data or caches of other contexts. Basically any sort of memoization 
>>could be encapsulated into its own context, leaving the original data in a 
>>raw state.
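A cache context of that sort might look like the following (a sketch; the class names are mine):

```python
# Sketch: a specialized context holding pre-calculated derived data.
# The raw context is never touched; all memoization lives in its own
# derived context, as described above.

class Context:
    def __init__(self, data):
        self._data = dict(data)

    def get(self, key):
        return self._data[key]

class CachingContext:
    def __init__(self, source, derive):
        self.source = source   # the raw context, left in a raw state
        self.derive = derive   # the (possibly expensive) transformation
        self._cache = {}

    def get(self, key):
        if key not in self._cache:  # compute once, then serve the cache
            self._cache[key] = self.derive(self.source.get(key))
        return self._cache[key]
```

The trade-off is the usual one: the derived context may go stale, but the original data stays authoritative and untouched.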
>>
>>
>>
>>Configuration would again be data, grabbed from a context somewhere, as 
>>would all of the presentation and window-dressing. Given that the user 
>>starts in a context (basically their home screen), they would always be 
>>rooted in some way. Another logical extension would be for the data to be 
>>'things' that reference other data in other contexts (recursively), in the 
>>same way that the web-based technologies work.
>>
>>
>>Three other points I think worth noting are:
>>
>>
>>- All of the data issues (like ACID) are encapsulated within the 'context'.
>>- All of the flow issues (like distributed and concurrency) are encapsulated 
>>within the kernel.
>>- All of the formatting and domain issues are encapsulated within the 
>>transformations. 
>>
>>
>>
>>That would make it fairly easy to know where to place or find any of the 
>>technical or domain issues.
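That three-way split could be sketched like this (all names hypothetical, just to make the boundaries concrete):

```python
# Sketch of the separation: contexts own the data issues, the kernel
# owns the flow, and transformations own formatting/domain logic.

class Context:
    """Data issues (persistence, ACID, ...) would live here."""
    def __init__(self):
        self.store = {}

    def read(self, key):
        return self.store[key]

    def write(self, key, value):
        self.store[key] = value

def to_display(celsius):
    """Transformation: pure formatting/domain logic, nothing else."""
    return f"{celsius * 9 / 5 + 32:.1f} F"

class Kernel:
    """Flow issues (distribution, concurrency, ...) would live here."""
    def run(self, src, key, transform, dst, out_key):
        dst.write(out_key, transform(src.read(key)))

persistent, screen = Context(), Context()
persistent.write("temp", 100)
Kernel().run(persistent, "temp", to_display, screen, "temp_view")
```

Each concern has exactly one home, so a technical fix goes in the context or kernel and a domain fix goes in a transformation.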
>>
>>
>>
>>
>>Paul.
>>
>>
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
