On Wed, Apr 4, 2012 at 9:37 PM, Miles Fidelman
<[email protected]> wrote:
>
> David,  I'm sorry to say, but every time I see a description of reactive
> demand programming, I'm left scratching my head trying to figure out what
> it is you're talking about.  Do you have a set of slides, or a paper, or
> something that boils down the model?  (I mean, Von Neumann architectures are
> clearly defined, Actors are clearly defined, but what the heck does "sort
> of an OOP with eventless FRP behaviors for all messages and response"
> actually mean?).


I should probably write up an elevator speech for RDP at some point, but at
the moment you'd need familiarity with arrowized FRP to make much progress
in my descriptions of the subject (because that was my own stepping stone).
If you are familiar with arrowized FRP, try my blog.


> I'm using things in the hardware sense, as in synchronous with a clock -
> as seems to be the model with quite a few parallel computing approaches
> (e.g., pipelined graphics processors).
>

> Asynchronous or concurrent as in processes executing on different cores
> (or machines), or the virtual equivalent thereof (e.g., Unix processes).


I disagree with your terminology. Not worth fighting much over, though.


>
>> Reducing N^2 to N! is the wrong direction! But I suspect that was just a
>> brain glitch from all those asynchronous neurons. ;)
>>
>
> Probably not enough coffee.  Meant to say N^N reduced to N! (if one takes
> the really simple case).


I believe N^2 was the right answer. And we can do better, with spatial
indexing and an index for visibility.
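A minimal sketch of the spatial-indexing idea: bucket entities into a uniform grid so that the expensive line-of-sight test only runs on pairs in the same or adjacent cells, rather than all N^2 pairs. The entity positions and sensor radius here are illustrative assumptions.

```python
# Sketch: prune O(N^2) line-of-sight pairs with a uniform grid index.
from collections import defaultdict

def build_grid(positions, cell):
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def candidate_pairs(positions, radius):
    # Only entities in the same or adjacent cells can be within `radius`,
    # so the expensive LOS test runs on far fewer than N^2 pairs.
    grid = build_grid(positions, cell=radius)
    pairs = set()
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    for i in members:
                        if i < j:
                            pairs.add((i, j))
    return pairs

positions = [(0.0, 0.0), (1.0, 1.0), (50.0, 50.0)]
print(candidate_pairs(positions, radius=5.0))  # the far entity is pruned
```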


>
> As I recall, the practice is to build some databases with some rather
> funky geospatial indices, and do some tree pruning types of analyses.  But
> my memory is fairly foggy on this.   I expect that LOS calculations are
> amenable to attack by special-purpose hardware (high degrees of parallelism).


I expect it would be easy in OpenCL or CUDA. Not much specialization of
hardware needed.
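The reason it maps well to GPUs is that the pairwise visibility test is embarrassingly parallel: one independent check per pair. NumPy stands in for the kernel in this sketch; the range threshold is an illustrative assumption (a real test would also trace terrain occlusion).

```python
# Sketch: data-parallel pairwise visibility, the same structure an
# OpenCL/CUDA kernel would use with one thread per pair.
import numpy as np

def visible_pairs(positions, max_range):
    p = np.asarray(positions, dtype=float)   # shape (N, 2)
    diff = p[:, None, :] - p[None, :, :]     # all N*N displacement vectors
    dist = np.linalg.norm(diff, axis=-1)     # N*N distances, one "thread" each
    mask = (dist > 0) & (dist <= max_range)  # in range, excluding self-pairs
    return np.argwhere(mask)

pairs = visible_pairs([(0, 0), (3, 4), (100, 100)], max_range=10.0)
print(pairs)  # only the two nearby entities see each other
```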


>
>> A common experience is that trying to solve these problems will often
>> undermine many of the advantages you sought by choosing actors or multiple
>> processes in the first place.
>>
>
> Which is where MVCC and other eventual consistency models come into play,
> and where starting with an assumption of "never consistent" leads down
> interesting paths.


I'm sure you'll have an interesting time conceptualizing or debugging the
system, too.

Eventual consistency makes a good safety net for inconsistency tolerance,
but it isn't a good excuse not to pursue actual consistency. In practice,
eventual consistency is too difficult to reason about.


>
> It is unclear to me why you would believe this. Could you offer some
>> examples of problems and brittleness?  (Even if you meant `sequential`, it
>> is unclear to me.)
>>
>
> It's late at night, so bear with me - only two examples come immediately
> to mind:
>
> - High availability data storage.  The simple case is mirroring (RAID1),
> more complicated cases involve a mix of replication and error correction
> coding, to make more efficient use of disk space (i.e., so that triple
> redundancy doesn't require 3x capacity).  Generally, data writes are
> synchronous.  All of which is well and good until a drive fails and has to
> be replaced - suddenly there's a huge amount of processing required to
> rebuild all the data structures - which can take hours, or days, and not
> infrequently doesn't work quite as advertised.  To me, that's brittle.
>

The loss and replacement of a RAID1 drive does not seem to me an issue of
synchronous vs. asynchronous operations. Either way, I would want to have
an error report near real-time rather than waiting until the drive is
rebuilt. (And how does your use of the word `synchronous` here correspond
to your earlier description `synchronous with a clock`, precisely? I get
the impression you've changed meanings on me.)


>
> - The aforementioned example of CGF simulations.  What I know from
> experience is that adding a new entity type (object class), or property, or
> behavior invariably seems to involve lots of ripple effects that involve
> changes to the main control threads, as well as to other object classes (at
> least I watched an awful lot of pretty sharp people tearing their hair out
> over such things).  On the other hand, adding new entities to a networked
> simulation generally does not involve changes to either the protocols in
> use, or other simulators on the net.  Some of this has to do with
> good/not-so-good software design - but my sense is that the differing
> architectural approaches impose different kinds of disciplines that affect
> the degree to which localized changes propagate to other parts of the
> system.


I'm not fond of threads myself. I prefer synchronous concurrency as opposed
to sequential control flow.

Adding entities is an orthogonal concern to the properties we've been
discussing. It is affected by architecture, of course, but there are
synchronous architectures that make it easy to inject new rules or entities
or so on (temporal logic, discrete event calculus, temporal concurrent
constraint programming, and reactive demand programming would all qualify).
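A minimal sketch of that property in a synchronous, tick-based model: entity behaviors are injected as data, every rule sees the same consistent snapshot each tick, and adding an entity touches no central control thread. The rule signatures are illustrative assumptions, not any particular system's API.

```python
# Sketch: synchronous concurrency where new entities/rules are injected as
# data. Each tick, all rules read one consistent snapshot; updates are
# applied atomically at the end of the tick.

def step(world, rules):
    updates = {}
    for name, rule in rules.items():
        updates[name] = rule(world)   # every rule sees the same snapshot
    world.update(updates)             # apply all updates atomically
    return world

# Existing rule.
rules = {"clock": lambda w: w.get("clock", 0) + 1}
world = {}
step(world, rules)

# Adding an entity is just adding a rule; nothing else changes.
rules["echo"] = lambda w: w.get("clock", 0)
step(world, rules)
print(world)  # {'clock': 2, 'echo': 1}
```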



>
>> Societies? Mono-culture agriculture? Programming should be easier than
>> politics or farming and pest control.
>>
> Ummm... why would you say that?
>
> Systems that address complicated problems involve similar levels of
> complexity.
>

Software must be a lot cheaper to build and improve per unit of complexity
than hardware to do the same thing. That's the basic reason software exists.

Software also needs to be effective where humans are not - doing the same
monotonous thing a million times and getting it right every time.


>
> Developing a useful command-and-control system requires a pretty in-depth
> knowledge of military doctrine, strategy, tactics - probably beyond that of
> an individual practitioner (a lieutenant, commanding an infantry platoon,
> needs to understand small-group tactical command); someone developing a
> tactical command and control system has to understand all the types of
> units involved in combat operations, how they interact, their information
> needs, and so forth.  Maybe an individual coder, working on a particular
> graphic display doesn't need that broad range of knowledge, but the system
> architects certainly do.


Sure. But obtaining that expertise is still an easy problem compared to
politics. Often cheaper, too. And it can be achieved incrementally,
concurrent with progress in developing the program.


>
>> It is easy to implement asynchronous parallelism atop synchronous
>> parallelism. A representation for each asynchronous task is stored in a
>> database. At each instant, you step each task forward in parallel. Some
>> tasks will complete before others. You can queue them, if you wish, so
>> you're only running a few in parallel at a time. Patterns like these are
>> described in the Dedalus papers.
>>
>>  But why would I want to, except in cases where special purpose hardware
> comes into play?


There are a couple reasons.
1) You started with the synchronous model and use it for almost everything.
But you want to model some asynchronous communication for problems
involving, say, intermittent connectivity or batch-processing tasks.
2) You started with asynchronous but decide you want more control:
effectively observe progress or kill a task in real-time, control progress
and priority of many tasks, revoke a message that has been sent but not
fully delivered, etc.
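A minimal sketch of the pattern quoted above (assuming tasks are represented as simple step counters rather than rows in a real database): asynchronous tasks live in a table, a synchronous loop steps a bounded number of them each instant, and tasks complete at different instants. Because the table is plain data, observing progress, reprioritizing, or killing a task is trivial.

```python
# Sketch: asynchronous tasks atop a synchronous step loop, as in the
# Dedalus-style pattern. tasks maps a task name to its remaining steps.

def run(tasks, max_active=2):
    finished = {}
    instant = 0
    while tasks:
        instant += 1
        # Step up to max_active tasks "in parallel" this instant; the rest
        # wait in the queue. Killing a task would just delete its row.
        for name in list(tasks)[:max_active]:
            tasks[name] -= 1
            if tasks[name] == 0:
                del tasks[name]
                finished[name] = instant   # tasks finish asynchronously
    return finished

print(run({"a": 1, "b": 3, "c": 2}))  # {'a': 1, 'b': 3, 'c': 3}
```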



> (That's a rhetorical question, by the way.  This interaction is really
> going nowhere at this point.)


I agree. You've already married yourself to your e-mail-as-objects model,
IIRC.

Regards,

Dave


-- 
bringing s-words to a pen fight
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc