On Wed, Apr 4, 2012 at 2:44 PM, Miles Fidelman <[email protected]> wrote:
>> Outside of mainstream, there are a lot more options. Lightweight time
>> warp. Synchronous reactive. Temporal logic. Event calculus. Concurrent
>> constraint. Temporal concurrent constraint. Functional reactive
>> programming.
>
> Few of which, however, are conceptually simple (maybe elegant).

Complicated but complete is often better than simplistic. We should
account for the complexity of those architectures, design patterns,
biological metaphors, etc. that make up for deficiencies in the initial
abstraction. Though, even without that measuring stick, reactive demand
programming is the conceptually simplest model I know - it's a sort of
OOP with eventless FRP behaviors for all messages and responses.

> I wouldn't consider anything that's inherently synchronous as an
> example of concurrency. Parallel yes.

I suspect a misunderstanding with terminology here. People with an
imperative background become familiar with the relationship between
`asynchronous call` and concurrency (which isn't strongly implied, but
is common) and so they think of regular procedural calls as
`synchronous`. Which they aren't.

Synchronous means things happen at the same time. If procedure calls
were synchronous, many procedure calls would happen at the same logical
time. Instead, what we have are asynchronous calls vs. sequential
calls. And I would agree that sequential is not concurrent. Procedures
generally compose sequentially (do x, then do y, then doozie).

Synchronous programming actually refers to a very different style
[1][2]. In this style we have a logical clock, and there is concurrency
when things happen at the same time according to the clock, i.e. on the
same `tick` or instant. Sequencing is then performed across multiple
instants.

Anyhow, concurrent (many things happen) does not contradict synchronous
(at the same time). To the contrary, `synchronous` implies concurrency
just like `asynchronous` does.
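To make the distinction concrete, here is a minimal sketch in Python
(the names and the update rule are my own invention, just for
illustration) of the synchronous style: every cell reacts to the same
instant of a logical clock, so updates are concurrent *within* a tick,
while sequencing happens only *across* ticks.

```python
# A minimal synchronous "logical clock": every cell reacts at the same
# instant. Within a tick all updates are concurrent (each cell reads
# the frozen pre-tick state); sequencing happens only across ticks.

def tick(cells):
    # Every cell reads the same snapshot of the current instant, so no
    # cell gains a special advantage from update order within the tick.
    snapshot = list(cells)
    n = len(snapshot)
    # Example per-cell rule: take the max of yourself and your left
    # neighbor (on a ring). Any rule over the snapshot would do.
    return [max(snapshot[i], snapshot[(i - 1) % n]) for i in range(n)]

def run(cells, instants):
    for _ in range(instants):
        cells = tick(cells)
    return cells

if __name__ == "__main__":
    # After enough instants the maximum has propagated to every cell,
    # and the result is independent of any per-cell evaluation order.
    print(run([3, 1, 4, 1, 5, 9, 2, 6], 8))
```

Note that the outcome is fully deterministic even though, conceptually,
all eight cells fire concurrently at every instant.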
[1] http://en.wikipedia.org/wiki/Synchronous_programming_language
[2] http://www.ibr.cs.tu-bs.de/cm/events/tubs.CITY-symposium-2009/slides/Mendler_Michael.pdf

Synchronous is orthogonal to parallelism, as is asynchronous (we can
get asynchronous calls without parallelism by use of a queue, for
example). But, like asynchronous programming, synchronous programming
does offer opportunity for parallelism if we can find volumes of
computation that don't communicate much.

> Re. LOS calculations: right off the bat, from a global viewpoint I
> see an n-squared problem reduced to an n-factorial problem (if I can
> see you, you can see me - don't need to calculate that twice). But
> that's an optimization that's independent of concurrency.

Reducing N^2 to N! is the wrong direction! But I suspect that was just
a brain glitch from all those asynchronous neurons. ;) In practice we
can take advantage of the fact that most vehicles are obscured by
relatively static terrain, so we can prepare a coarse-grained index of
which volumes can see which other volumes. We could also do some
dynamic optimizations to avoid redundant computation, e.g. by adding a
frame-counter to each region that is updated to the current
frame-number whenever its contents change.

>> But your particular choice for massive concurrency - asynchronous
>> processes or actors - does introduce many additional problems.
>
> And eliminates many more - at least if one takes a shared-nothing,
> message passing approach.

Compared to sequential, shared-memory imperative programming, I agree
that shared-nothing actors do offer some advantages. Though, in
practice, shared-nothing doesn't happen - you'll have actors
representing shared queues and databases in no time. The coarse-grained
updates are nice, but eventually you'll reach a point where you need to
update state in two different actors. Then consistency becomes a
problem.
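To see why, here is a small sketch in Python (all names are mine, and
mailboxes stand in for real message passing): two "actors" each own
half of a bank balance, a transfer is two separate messages, and an
observer who reads between the two deliveries sees the invariant
(total == 100) violated even though neither actor misbehaved.

```python
# Sketch: state split across two actors breaks a cross-actor invariant.
# A transfer is two messages (debit one actor, credit the other); there
# is no instant at which both have necessarily been processed.

class Actor:
    def __init__(self, balance):
        self.balance = balance
        self.mailbox = []

    def send(self, delta):
        self.mailbox.append(delta)

    def step(self):
        # Process one pending message, if any.
        if self.mailbox:
            self.balance += self.mailbox.pop(0)

def total(a, b):
    # An "observer" reading both actors' state.
    return a.balance + b.balance

if __name__ == "__main__":
    a, b = Actor(100), Actor(0)
    a.send(-30)         # transfer: debit actor a ...
    b.send(+30)         # ... and credit actor b, as two messages
    a.step()            # only the debit has been delivered so far
    print(total(a, b))  # observer sees 70 - the invariant is broken
    b.step()
    print(total(a, b))  # 100 again, but only after both deliveries
```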
So the solution is either to abandon modularity and combine the two
stateful actors, or to try to solve a pervasive consistency problem. A
common experience is that trying to solve these problems will often
undermine many of the advantages you sought by choosing actors or
multiple processes in the first place.

>>> But is consistency the issue at hand?
>>
>> Yes. Of course, it wasn't an issue until you discarded it in pursuit
>> of your `simple` concurrency model.
>
> We're talking about "fundamentals of new computing" - and as far as I
> can tell, Ungar and Hewitt have it about right in pointing out that
> consistency goes out the window as systems scale up in size and
> complexity. The question to me is what are the design patterns that
> are useful in designing systems in a world where inconsistency is a
> given. Probabilistic and biological models seem to apply.

Ungar and Hewitt have it only partially right. There will be
indeterminism and inconsistency (very different concepts) in
large-scale programming systems. But that does not mean consistency
`goes out the window`. We can do much to control inconsistency and how
it is experienced, even at very large scales.

A hardware analogy to the position those men take is: "any large-scale
computer system will experience water vapor, so we need water-tolerant
computers, so we must embrace water and build computers that work in
the bathtub." It is a ridiculous conclusion. But the conclusion might
be appealing to many people regardless. You rush to answer the
question of how to program in inconsistent systems; it is clear their
conclusion appeals to you.

>>> You seem to be making the case for sequential techniques that
>>> maintain consistency.
>>
>> Pipelines are a fine sequential technique, of course, and I think we
>> should use them often. But more generally I'd say what we need is
>> effective support for synchronous concurrent behavior - i.e. to
>> model two or more things happening at the same time.
> And I find that synchronous behavior, at least at the micro level,
> leads to huge amounts of problems, and very brittle systems.

It is unclear to me why you believe this. Could you offer some examples
of the problems and brittleness? (Even if you meant `sequential`, it is
unclear to me.)

>> I also disagree with Hewitt (most recently at
>> http://lambda-the-ultimate.org/node/4453). Ungar and Hewitt both
>> argue "we need indeterminism, so let's embrace it". But they forget
>> that every medicine is a poison if overdosed or misapplied, and it
>> doesn't take much indeterminism to become poisonous.
>>
>> To accept and tolerate indeterminism where necessary does not mean
>> to embrace it. It should be controlled, applied carefully and
>> explicitly.
>
> Then we're in violent disagreement. I've yet to see a complex system
> where indeterminism isn't the norm, and where attempts to impose
> determinism cause huge, often insurmountable problems (think
> societies, or mono-culture agriculture).

Societies? Mono-culture agriculture? Programming should be easier than
politics or farming and pest control. Anyhow, keeping indeterminism
explicit and under control doesn't mean it would be abnormal. I plan to
similarly control state, yet I expect many apps will either have some
state or utilize services that are stateful. The same will be true for
indeterminism.

>> The basic reason for this is that you: (a) want to model lots of
>> things happening at once (not `asynchronously` but truly `at the
>> same time`), (b) you don't want any participant to have special
>> advantage by ordering in a turn, (c) you want consistency, freedom
>> from glitches and anomalies, and the ability to debug and
>> regression-test your model, (d) you want precise real-time
>> reactions - i.e. as opposed to delaying messages indefinitely.
> Not when you're building things like wargames - things are very
> probabilistic at the macro level - if you want to understand a range
> of outcomes, you have to re-run the exercise, possibly many times,
> using Monte Carlo techniques.

I disagree. First, I may want a probabilistic game, but that doesn't
contradict any of my reasons. I would be well served by a deterministic
approach to probability, such as a pseudo-random number generator.
Second, it is a severe mistake to confuse indeterminism with
probability; one should not depend on message arrival order to provide
sufficient entropy.

> Not at all. The world is asynchronous. Email provides a very good
> model for the environment that we're building systems in today.

Your argument is analogous to: "I can't even begin to imagine any
four-sided triangles." E-mail is asynchronous by its definition and
protocol, so of course any parallelism it exhibits would also be called
asynchronous.

It is easy to implement asynchronous parallelism atop synchronous
parallelism. A representation of each asynchronous task is stored in a
database. At each instant, you step each task forward in parallel. Some
tasks will complete before others. You can queue them, if you wish, so
you're only running a few in parallel at a time. Patterns like these
are described in the Dedalus papers.

Regards,

Dave

--
bringing s-words to a pen fight
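P.S. The async-atop-synchronous stepping above can be sketched in a few
lines of Python (the names and structure are mine, not taken from the
Dedalus papers): each asynchronous task is a generator held in a table
standing in for the database, every logical instant steps each live
task once, tasks complete at different instants, and a seeded PRNG
keeps any probabilistic behavior deterministic.

```python
# Asynchronous tasks atop a synchronous stepper. The dict of generators
# stands in for the task database; each pass of the loop is one logical
# instant, stepping every live task once (conceptually in parallel).
import random

def counter(name, steps):
    # A toy asynchronous task: it takes `steps` instants to finish.
    for i in range(steps):
        yield f"{name} step {i}"

def run(tasks, max_instants=100):
    log = []
    for instant in range(max_instants):
        if not tasks:
            break
        still_alive = {}
        for name, task in tasks.items():  # one step per task per instant
            try:
                log.append((instant, next(task)))
                still_alive[name] = task
            except StopIteration:
                pass                      # this task completed early
        tasks = still_alive
    return log

if __name__ == "__main__":
    # Deterministic probability: entropy comes from a seeded PRNG, not
    # from message arrival order, so every re-run is reproducible.
    rng = random.Random(42)
    tasks = {"a": counter("a", 2), "b": counter("b", rng.randint(1, 4))}
    for instant, event in run(tasks):
        print(instant, event)
```

Re-running with the same seed replays the identical schedule, which is
exactly what makes regression-testing and Monte Carlo re-runs tractable.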
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
