I hope this may be of general interest, but I'm personally interested in Dan's
thoughts on whether Disruptors might be a suitable "compilation target" for
Nile. My intuition is that it may be the right way for Nile to run efficiently
on commonly available multicore processors (i.e. by minimizing
branch-misprediction and memory contention, and optimizing cache behavior).
I'm referring to:
http://code.google.com/p/disruptor/
Google will turn up plenty more references, but the StackOverflow topic is
worthwhile:
http://stackoverflow.com/questions/6559308/how-does-lmaxs-disruptor-pattern-work
One limitation that I saw is that it appears to work only for simple linear
dataflows: each pipeline stage consumes and produces a single value. This is
a consequence of each entry in the ring buffer having a fixed size (each entry
is a data structure containing the "scratch space" necessary to record the
input/output of each dataflow stage).
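To make the fixed-size-entry constraint concrete, here is a minimal single-threaded C sketch of the idea (all names like entry_t and ring_claim are mine, not from the Disruptor source, and the real Disruptor additionally needs memory barriers and gating between threads):

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 8            /* must be a power of two */
#define RING_MASK (RING_SIZE - 1)

/* One pre-allocated entry: scratch space for the input/output of every
   pipeline stage, which is why entries must have a fixed size. */
typedef struct {
    double x, y;               /* an upstream stage writes these */
    uint32_t color;            /* a downstream stage fills this in */
} entry_t;

typedef struct {
    entry_t  entries[RING_SIZE];
    uint64_t producer_seq;     /* next slot the producer will claim */
    uint64_t consumer_seq;     /* next slot the consumer will read */
} ring_t;

/* Sequence numbers increase forever and are masked into the buffer. */
static entry_t *ring_claim(ring_t *r) {
    return &r->entries[r->producer_seq & RING_MASK];
}
static void ring_publish(ring_t *r) { r->producer_seq++; }
static entry_t *ring_consume(ring_t *r) {
    if (r->consumer_seq == r->producer_seq) return 0; /* empty */
    return &r->entries[r->consumer_seq++ & RING_MASK];
}
```

The point is that every stage reads and writes the *same* slot, so the slot's layout has to accommodate every stage's data up front.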
This seems to be a serious limitation, since Nile allows you to easily express
dataflows with more complicated topologies. For example, to draw a filled
shape bounded by a sequence of bezier curves, you might write a Nile program
that recursively splits curves until they are "small enough" or "straight
enough" to be approximated by linear segments, and then rasterizes these to
produce pixels to be shaded by downstream Nile elements. The problem is that
you don't know in advance how many outputs each pipeline stage will produce.
A solution that occurred to me this morning is to use multiple ring buffers.
Linear subgraphs of the dataflow (i.e. chained sequences of
one-input-to-one-output elements) can fit into a single ring buffer, but
elements that produce a varying number of outputs would output to a different
ring buffer (or to multiple ring buffers, if an element produces multiple
types of output). This would be extremely cumbersome to program manually, but
not if you compile down to it from Nile.
I don't understand the Nile C runtime very well, so it's possible that it's
already doing something analogous to this (or even smarter).
Thoughts?
Cheers,
Josh
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc