David Barbour wrote:
On Wed, Apr 4, 2012 at 2:44 PM, Miles Fidelman
<[email protected]> wrote:
Outside of mainstream, there are a lot more options.
Lightweight time warp. Synchronous reactive. Temporal logic.
Event calculus. Concurrent constraint. Temporal concurrent
constraint. Functional reactive programming.
Few of which, however, are conceptually simple (maybe elegant).
Complicated but complete is often better than simplistic. We should
account for the complexity of those architectures, design patterns,
biological metaphors, etc. that make up for deficiencies in the
initial abstraction.
Though, even without that measuring stick, reactive demand programming
is the conceptually simplest model I know - it's a sort of an OOP with
eventless FRP behaviors for all messages and responses.
David, I'm sorry to say, but every time I see a description of reactive
demand programming, I'm left scratching my head trying to figure out
what it is you're talking about. Do you have a set of slides, or a
paper, or something that boils down the model? (I mean, Von Neumann
architectures are clearly defined, Actors are clearly defined, but what
the heck does "sort of an OOP with eventless FRP behaviors for all
messages and responses" actually mean?).
I wouldn't consider anything that's inherently synchronous as an
example of concurrency. Parallel yes.
I suspect a misunderstanding with terminology here. People with
imperative background get familiar with the relationship between
`asynchronous call` and concurrency (which isn't strongly implied, but
is common) and so they think of regular procedural calls as
`synchronous`. Which they aren't. Synchronous means things happen at
the same time. If procedure calls were synchronous, many procedure
calls would happen at the same logical time.
I'm using things in the hardware sense, as in synchronous with a clock -
as seems to be the model with quite a few parallel computing approaches
(e.g., pipelined graphics processors).
Asynchronous or concurrent as in processes executing on different cores
(or machines), or the virtual equivalent thereof (e.g., Unix processes).
Re. LOS calculations: right off the bat, from a global viewpoint I
see an n-squared problem reduced to an n-factorial problem (if I
can see you, you can see me - don't need to calculate that twice).
But that's an optimization that's independent of concurrency.
Reducing N^2 to N! is the wrong direction! But I suspect that was just
a brain glitch from all those asynchronous neurons. ;)
Probably not enough coffee. Meant to say N^N reduced to N! (if one
takes the really simple case).
In practice we can take advantage of the fact that most vehicles are
obscured by relatively static terrain, so we can prepare a
coarse-grained index of which volumes can see which other volumes. We
could also do some dynamic optimizations to avoid redundant
computation, e.g. by adding a frame-counter to each region that is
updated to the current frame-number whenever its contents change.
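The coarse-grained index and frame-counter trick described above can be sketched roughly as follows. This is a minimal illustration, not code from any particular simulator; all names (`Region`, `VisibilityIndex`) are invented for the example.

```python
class Region:
    """A coarse terrain volume; tracks the frame of its last change."""
    def __init__(self, name):
        self.name = name
        self.last_changed = 0   # frame number of the most recent change

class VisibilityIndex:
    """Precomputed map of which volumes can possibly see which others.
    Pairs absent from the index are blocked by static terrain and never
    need a fine-grained LOS test; visible pairs are re-tested only when
    one of the regions has changed since the cached check."""
    def __init__(self, visible_pairs):
        self.visible = set(visible_pairs)   # {("A", "B"), ...}
        self.cache = {}                     # (a, b) -> (frame_checked, result)

    def needs_recheck(self, a, b):
        key = (a.name, b.name)
        if key not in self.visible and (b.name, a.name) not in self.visible:
            return False                    # terrain blocks this pair entirely
        cached = self.cache.get(key)
        if cached is None:
            return True                     # never tested yet
        frame_checked, _ = cached
        # Redundant computation is skipped unless something moved.
        return a.last_changed > frame_checked or b.last_changed > frame_checked

    def record(self, a, b, frame, result):
        self.cache[(a.name, b.name)] = (frame, result)
```

The point is only that the expensive fine-grained test runs for the (usually small) subset of pairs the static index admits, and even then only when a frame counter shows something changed.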
As I recall, the practice is to build some databases with some rather
funky geospatial indices, and do some tree pruning types of analyses.
But my memory is fairly foggy on this. I expect that LOS calculations
are amenable to attack by special-purpose hardware (high degrees of
parallelism).
But your particular choice for massive concurrency -
asynchronous processes or actors - does introduce many
additional problems.
And eliminates many more - at least if one takes a shared-nothing,
message passing approach.
Compared to sequential, shared-memory imperative programming, I agree
that shared-nothing actors do offer some advantages.
Though, in practice, shared-nothing doesn't happen - you'll have
actors representing shared queues and databases in no time. The
coarse-grained updates are nice, but eventually you'll reach a point
where you need to update state in two different actors. Then
consistency becomes a problem. So the solution is to either abandon
modularity and combine the two state actors, or try to solve a
pervasive consistency problem.
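A minimal sketch of that problem, with all names hypothetical: two mailbox-driven actors each own one piece of state, and an update that spans both arrives as two independent messages, so an observer between the two steps sees an inconsistent total.

```python
from queue import Queue

class StateActor:
    """A shared-nothing actor owning one piece of state,
    updated only via messages in its mailbox."""
    def __init__(self, value):
        self.value = value
        self.mailbox = Queue()

    def step(self):
        # Process one pending message, if any.
        if not self.mailbox.empty():
            self.value += self.mailbox.get()

a = StateActor(100)
b = StateActor(0)

# "Transfer" 30 from a to b as two independent messages;
# nothing makes the pair atomic.
a.mailbox.put(-30)
b.mailbox.put(+30)

a.step()                        # a has been debited...
snapshot = a.value + b.value    # ...but b not yet credited: total reads 70
b.step()                        # only now does the total return to 100
```

Restoring atomicity here means either merging the two actors into one (losing modularity) or layering a coordination protocol on top, which is exactly the pervasive consistency problem mentioned above.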
A common experience is that trying to solve these problems will often
undermine many of the advantages you sought by choosing actors or
multiple processes in the first place.
Which is where MVCC and other eventual consistency models come into
play, and where starting with an assumption of "never consistent" leads
down interesting paths.
But is consistency the issue at hand?
Yes. Of course, it wasn't an issue until you discarded it in
pursuit of your `simple` concurrency model.
We're talking about "fundamentals of new computing" - and as far as
I can tell, Ungar and Hewitt have it about right in pointing out
that consistency goes out the window as systems scale up in size
and complexity. The question to me is what are design patterns
that are useful in designing systems in a world where
inconsistency is a given. Probabilistic and biological models
seem to apply.
Ungar and Hewitt have it only partially right. There will be
indeterminism and inconsistency (very different concepts) in
large-scale programming systems. But that does not mean consistency
`goes out the window`. We can do much to control inconsistency and how
it is experienced, even at very large scales.
A hardware analogy to the position those men take is: "any large-scale
computer system will experience water vapor, so we need water-tolerant
computers, so we must embrace water and build computers that work in
the bathtub." It is a ridiculous conclusion. But the conclusion might
be appealing to many people regardless.
You rush to answer the question of how to program in inconsistent
systems. It is clear their conclusion appeals to you.
Well yes... it strikes me as a "fundamental of new computing."
You seem to be making the case for sequential techniques that
maintain consistency.
Pipelines are a fine sequential technique, of course, and I
think we should use them often. But more generally I'd say
what we need is effective support for synchronous concurrent
behavior - i.e. to model two or more things happening at the
same time.
And I find that synchronous behavior, at least at the micro level,
leads to huge amounts of problems, and very brittle systems.
It is unclear to me why you would believe this. Could you offer some
examples of problems and brittleness? (Even if you meant
`sequential`, it is unclear to me.)
It's late at night, so bear with me - only two examples come immediately
to mind:
- High availability data storage. The simple case is mirroring (RAID1),
more complicated cases involve a mix of replication and error correction
coding, to make more efficient use of disk space (i.e., so that triple
redundancy doesn't require 3x capacity). Generally, data writes are
synchronous. All of which is well and good until a drive fails and has
to be replaced - suddenly there's a huge amount of processing required
to rebuild all the data structures - which can take hours, or days, and
not infrequently doesn't work quite as advertised. To me, that's brittle.
- The aforementioned example of CGF simulations. What I know from
experience is that adding a new entity type (object class), or property,
or behavior invariably seems to involve lots of ripple effects that
involve changes to the main control threads, as well as to other object
classes (at least I watched an awful lot of pretty sharp people tearing
their hair out over such things). On the other hand, adding new
entities to a networked simulation generally does not involve changes to
either the protocols in use, or other simulators on the net. Some of
this has to do with good/not-so-good software design - but my sense is
that the differing architectural approaches impose different kinds of
disciplines that affect the degree to which localized changes propagate
to other parts of the system.
I also disagree with Hewitt (most recently at
http://lambda-the-ultimate.org/node/4453). Ungar and Hewitt
both argue "we need indeterminism, so let's embrace it". But
they forget that every medicine is a poison if overdosed or
misapplied, and it doesn't take much indeterminism to become
poisonous.
To accept and tolerate indeterminism where necessary does not
mean to embrace it. It should be controlled, applied carefully
and explicitly.
Then we're in violent disagreement. I've yet to see a complex
system where indeterminism isn't the norm, and where attempts to
impose determinism cause huge, often insurmountable problems
(think societies, or mono-culture agriculture).
Societies? Mono-culture agriculture? Programming should be easier than
politics or farming and pest control.
Ummm... why would you say that?
Systems that address complicated problems involve similar levels of
complexity.
Developing a domain-specific language for a particular discipline
requires an in-depth understanding of that discipline, if it's to be
useful. And probably requires a broader understanding than that of an
individual practitioner (we all do a lot of things by "feel," but run
into trouble when asked to describe how we reached a conclusion -
building tools requires getting very concrete about what we're doing,
and how we're doing it.)
Developing a useful command-and-control system requires a pretty
in-depth knowledge of military doctrine, strategy, tactics - probably
beyond that of an individual practitioner (a lieutenant, commanding an
infantry platoon, needs to understand small-group tactical command);
someone developing a tactical command and control system has to
understand all the types of units involved in combat operations, how
they interact, their information needs, and so forth. Maybe an
individual coder, working on a particular graphic display doesn't need
that broad range of knowledge, but the system architects certainly do.
Not when you're building things like wargames - things are very
probabilistic at the macro level - if you want to understand a range
of outcomes, you have to re-run the exercise, possibly many times,
using Monte Carlo techniques.
I disagree. First, I may want a probabilistic game, but that doesn't
contradict any of my reasons. I would be well served by a
deterministic approach to probability, such as a pseudo-random number
generator.
Second, it is a severe mistake to confuse indeterminism with
probability; one should not depend on message arrival order to provide
sufficient entropy.
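A deterministic approach to probability can be as simple as giving each run its own explicitly seeded generator. The sketch below (the `run_engagement` name and the engagement model are invented for illustration) shows a Monte Carlo trial that is fully reproducible from its seed, with no dependence on message arrival order for entropy.

```python
import random

def run_engagement(seed, trials=1000, hit_prob=0.3):
    """Monte Carlo of a toy engagement: count hits over many trials.
    Deterministic given the seed, so any run can be replayed exactly."""
    rng = random.Random(seed)   # private generator; no shared global state
    return sum(1 for _ in range(trials) if rng.random() < hit_prob)

# Re-running the exercise with different seeds gives the range of
# outcomes; re-running with the SAME seed reproduces a run exactly,
# which is invaluable for debugging and after-action analysis.
```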
Ok... you disagree, and I don't think we're going to converge.
Not at all. The world is asynchronous. Email provides a very
good model for the environment that we're building systems in today.
Your argument was analogous to: "I can't even begin to imagine any
four-sided triangles." E-mail is asynchronous by its definition and
protocol, so of course any parallelism it exhibits would also be
called asynchronous.
It is easy to implement asynchronous parallelism atop synchronous
parallelism. A representation for each asynchronous task is stored in
a database. At each instant, you step each task forward in parallel.
Some tasks will complete before others. You can queue them, if you
wish, so you're only running a few in parallel at a time. Patterns
like these are described in the Dedalus papers.
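That pattern might be sketched like this, with Python generators standing in for the database-stored task representations (a simplification of the Dedalus-style approach, not taken from the papers themselves):

```python
def make_task(name, steps):
    """An 'asynchronous' task represented as a generator:
    it advances one step per logical instant."""
    def task():
        for i in range(steps):
            yield f"{name}: step {i}"
    return task()

def run_synchronously(tasks):
    """At each logical instant, step every live task once, in parallel
    conceptually. Tasks finish at different instants, so the overall
    behavior looks asynchronous even though the loop is synchronous."""
    instant = 0
    log = []
    while tasks:
        instant += 1
        still_running = []
        for t in tasks:
            try:
                log.append((instant, next(t)))
                still_running.append(t)
            except StopIteration:
                pass   # this task completed before the others
        tasks = still_running
    return log

# Task "a" takes one instant; task "b" takes three.
log = run_synchronously([make_task("a", 1), make_task("b", 3)])
```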
But why would I want to, except in cases where special purpose hardware
comes into play? (That's a rhetorical question, by the way. This
interaction is really going nowhere at this point.)
Miles
--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc