On Wed, Jul 27, 2011 at 1:30 PM, BGB <[email protected]> wrote:

> one does need recursion/... for many things to work.
Even if we do have recursion, it does not imply being Turing-powerful. Primitive recursion and total recursion both terminate. But we don't need recursive functions. We only need a convenient and accessible solution to the problems for which recursive functions are traditionally applied:

* incremental computation or behavior over time
* modeling and processing large structures

There are many non-recursive solutions to those problems. For example, temporal semantics and modeling of state can handle the first bullet, and various collections-oriented programming approaches can support the second.

> otherwise, one has to develop software in a way completely devoid of
> recursion, which would be a major hindrance.

Why would it be a major hindrance?

> typically, IME, the main place looping/recursion/... is needed is actually
> in the lower-levels.

I'm discussing the level that developers experience: source code. I acknowledge that, at lower levels - under the hood, or such - we may use loops to implement these programs above certain popular hardware architectures. That said, modeling a register machine rather than a stack machine can be an effective and efficient basis for avoiding explicit loops even at the lowest levels.

> a simple example is drawing a scene using a BSP tree:
> the BSP is itself a recursive structure, and so generally requires
> recursion to draw the thing (even though something is terribly wrong if the
> BSP drawing doesn't finish in finite time).

BSP would require only primitive recursion (which can be guaranteed to terminate, and is not Turing-powerful).

> it is not desirable that code should go into infinite loops or overflow the
> stack, but there are ways for a compiler to "safely" deal with both

Safe for whom? What safety properties can your developers reason about with regard to a program killed or halted in the middle of a loop?
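To make the BSP point concrete, here is a minimal sketch in Python (all names hypothetical, not from any particular engine) of a painter's-algorithm traversal. Every call descends to a strictly smaller subtree, so this is structural (primitive) recursion: it always terminates and requires nothing Turing-powerful.

```python
# Sketch: back-to-front drawing of a (hypothetical) BSP tree.
# The recursion is structural -- each call works on a strictly smaller
# subtree -- so termination is guaranteed by construction.

class BspNode:
    def __init__(self, plane, front=None, back=None, polygons=()):
        self.plane = plane              # (normal, distance) of splitting plane
        self.front = front              # subtree on the front side
        self.back = back                # subtree on the back side
        self.polygons = list(polygons)  # polygons lying on this plane

def side_of(plane, eye):
    # signed distance of the eye point from the plane
    normal, dist = plane
    return sum(n * e for n, e in zip(normal, eye)) - dist

def draw_back_to_front(node, eye, emit):
    """Painter's algorithm: draw the far subtree, then this node's
    polygons, then the near subtree."""
    if node is None:
        return
    if side_of(node.plane, eye) >= 0:   # eye is on the front side
        draw_back_to_front(node.back, eye, emit)
        for poly in node.polygons:
            emit(poly)
        draw_back_to_front(node.front, eye, emit)
    else:                               # eye is on the back side
        draw_back_to_front(node.front, eye, emit)
        for poly in node.polygons:
            emit(poly)
        draw_back_to_front(node.back, eye, emit)
```

Nothing here can loop forever: the only "loop" is the shape of the tree itself, which is finite.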
> sandboxing is common, but IMO not really necessary, as for sanely written
> code, the vast majority of runtime checks can be skipped, leaving runtime
> checks mostly for operations that can't be statically proven (typically
> constructions which would be either removed or disallowed in more "safe"
> languages).

I don't mind this sort of 'soft typing'. I pursued it myself for a couple of years. I now prefer static checks within a given application, and use dynamic checks only as an extra security precaution, mostly at the serialization layers, where I have clean 'disruption' semantics I can apply. But there are many issues with soft typing that I learned to dislike:

* proofs tend to consume a large amount of resources (space, time, energy)
* poor applicability to embedded or mobile systems
* the resources needed to achieve a good 'proof' are often unpredictable
* depending on the quality of the proof, performance is often unpredictable
* poor applicability to real-time systems
* the quality of proofs on 'neighboring' code (e.g. separate libraries) affects the quality of the local proof
* with dynamic composition (e.g. a pluggable architecture), it is often very difficult to prove anything
* poor performance isolation; it is difficult to debug performance problems

> basically, programming languages becoming amalgamated masses of features,
> with the choice of "style" being more of an immediate preference, rather
> than something imposed by the language.

I think this sort of 'feature creep' is common for a language approaching maturity - i.e. we add new features because the prior model was inadequate or inefficient for some useful abstractions or frameworks. The real improvements in language design, however, require identifying the 'root causes' for which these extra features and frameworks are symptoms, then simplifying. And we ultimately achieve this by taking 'unnecessary power' away from the language. A common theme today is that we can, in fact, evolve much of our language from within the language, i.e.
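The "dynamic checks only at the serialization layers" discipline can be sketched as follows (Python; all names are hypothetical, and the 'disruption' semantics are reduced here to raising an exception that lets the caller drop the message cleanly). Inside the application, the decoded value is trusted; the only runtime checks sit at the boundary where untrusted bytes enter.

```python
# Sketch: confine dynamic checks to the serialization boundary.
# 'Disruption' is a hypothetical stand-in for clean drop/disconnect
# semantics on malformed input.

import json

class Disruption(Exception):
    """Raised when an incoming message fails validation; the caller can
    discard the message or disrupt the connection cleanly."""

def decode_point(raw: bytes) -> tuple:
    obj = json.loads(raw)
    # dynamic checks, applied once, at the boundary
    if not (isinstance(obj, dict) and
            isinstance(obj.get("x"), (int, float)) and
            isinstance(obj.get("y"), (int, float))):
        raise Disruption("malformed point")
    # past this line, the rest of the program never re-checks the shape
    return (float(obj["x"]), float(obj["y"]))
```

The design point is that validation happens exactly once, so the interior of the application can rely on static reasoning rather than scattered runtime checks.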
with extensible syntax and some mechanism to 'flatten' the resulting tower of abstractions (staging, specialization, partial evaluation, etc.). This allows a simpler language to model more features, efficiently. Languages will become simpler over time, with fewer (but more generic) features, not more complex.

> Software is what describes behavior for programmable hardware. "Algorithmic"
> software is software that focuses on the internal computation of a function,
> as opposed to workflows or control relationships.
> a language generally needs to deal with all of these cases to be useful for
> developing software.

I agree. Yet, a language that effectively handles all these cases will look considerably different from one that focuses on algorithmic software. A lot of features one might gloss over or fail to think about (such as the ability to kill an app through your OS) benefit from being modeled more explicitly, especially as we go distributed. Concerns about concurrency and job control will shape how we express the computation of large functions.

> bugs and debugging are likely inevitable though.

That wasn't the issue. To a large degree, we can control which errors express themselves, when, and how they propagate through a *system*. There is a lot wrong with the methodologies today, at the *systemic* level. Secure programming models would allow us to more effectively isolate and track bugs, and limit the harm they can cause. After all, malign and buggy are not easily distinguished.

> I don't personally think that the elimination of local storage would be
> ideal.

I have no objection at all to local storage, and I do not hesitate to take full advantage of it. But I would still like automated redundancy for any information that should survive the loss of the device. And I would like a precise semantic analysis of just which information needs redundancy (e.g. based on sharding of agents and declared dependencies).
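The 'flattening' idea above can be shown in miniature (Python; an illustrative toy, not a real staging system): a generic interpreter for a tiny expression language is slow because it re-traverses the abstraction on every call, while a one-time specialization pass folds the tower away into a plain function. Here `eval` of generated source stands in for whatever code-generation backend a real system would use.

```python
# Sketch: staging / specialization as a way to 'flatten' an
# abstraction tower (toy expression language, hypothetical names).

def interp(expr, env):
    """Generic but slow: re-traverses the expression tree on every call."""
    op = expr[0]
    if op == "lit": return expr[1]
    if op == "var": return env[expr[1]]
    if op == "add": return interp(expr[1], env) + interp(expr[2], env)
    if op == "mul": return interp(expr[1], env) * interp(expr[2], env)
    raise ValueError(op)

def specialize(expr):
    """Stage the interpreter: emit code once, so later calls pay no
    interpretive overhead."""
    def gen(e):
        op = e[0]
        if op == "lit": return repr(e[1])
        if op == "var": return e[1]
        if op == "add": return f"({gen(e[1])} + {gen(e[2])})"
        if op == "mul": return f"({gen(e[1])} * {gen(e[2])})"
        raise ValueError(op)
    return eval(f"lambda x: {gen(expr)}")

poly = ("add", ("mul", ("lit", 3), ("var", "x")), ("lit", 1))  # 3*x + 1
fast = specialize(poly)   # behaves like: lambda x: (3 * x) + 1
```

The interpreter models the "feature"; specialization removes its runtime cost, which is the sense in which a simpler language can model more features efficiently.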
> one adds on new features for the new things, and has them alongside the old
> ones, then people pick which they like more.

People must use libraries, modules, and frameworks developed by other people. It is important that features work well and predictably together; otherwise, development won't scale. You cannot easily remove features from a language, since that breaks existing libraries - though, if you have a good language, you can add a new feature and transparently rewrite the old one as a library. All the real improvements come from simplifying languages. One treats a proliferation of features or frameworks as a symptom of some unmet need in the programming model. We learn from those requirements, and distill a new language or model.

> one needs both more powerful browser apps (more like standalone apps), and
> probably also the ability to make "web-apps" which have a look and feel more
> like traditional standalone apps (traditionally, UIs built inside the
> browser generally suck a lot more than equivalent UIs built as standalone
> apps).

I agree we need more powerful browser apps. As far as look and feel, though... I think the desktop metaphor is pretty awful, and I would like to see more attention to 'secure interaction design'. My vision for the future desktop is, indeed, based on API integration between services (combined with zero-tier architectures).
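The "rewrite the old feature as a library" move can be illustrated with a deliberately tiny Python sketch (names are illustrative only): a more generic feature (`fold`) is added, and a legacy special-purpose feature (here, a built-in-style `sum`) becomes an ordinary library definition on top of it, so old code keeps working without the language keeping the old feature primitive.

```python
# Sketch: a generic new feature subsumes an old special-purpose one,
# which survives as a library definition (illustrative names).

from functools import reduce

def fold(f, init, xs):
    # the new, more generic feature
    return reduce(f, xs, init)

def old_sum(xs):
    # the legacy feature, now just one line of library code
    return fold(lambda acc, x: acc + x, 0, xs)
```

Existing callers of `old_sum` are unaffected, while the language itself only needs to provide the generic primitive.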
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
