On Tue, Nov 8, 2011 at 11:13 PM, Dan Amelang <daniel.amel...@gmail.com> wrote:

>
> I have never seen input prefixing in a stream-processing/dataflow
> language before. I could only find one passing reference in the
> literature, so unless someone points me to previous art, I'll be
> playing this up as an original contribution in my dissertation :)
>

It's old, old art. Even C stdio streams (`ungetc`) and C++ iostreams
(`get`, `put`, `putback`) let you put something back onto a stream you
just did a `get` from. I've seen the pattern many times - in lexers and
parsers, in Iteratees, and in various other stream-processing models.
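
For what it's worth, a minimal sketch of the pattern in Haskell - the
types and names here are mine, purely illustrative:

    -- A pure stream with "input prefixing": putback simply conses an
    -- element back onto the front, the ungetc/istream::putback pattern.
    data Stream a = Stream (Maybe (a, Stream a))

    fromList :: [a] -> Stream a
    fromList []     = Stream Nothing
    fromList (x:xs) = Stream (Just (x, fromList xs))

    get :: Stream a -> Maybe (a, Stream a)
    get (Stream m) = m

    putback :: a -> Stream a -> Stream a
    putback x s = Stream (Just (x, s))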


> Regarding your question about which processes would map poorly: the
> built-in Nile processes DupZip, SortBy, and Reverse (maybe DupCat,
> too). Many Gezira processes are a problem, such as ExpandSpans,
> CombineEdgeSamples, ClipBeziers, DecomposeBeziers, pretty much all of
> the processes in the file stroke.nl (pen stroking). There's probably
> more, these are off the top of my head.
>

Thanks. I'll peruse these.


>
> The theory behind using Unicode in Nile is that source code is read a
> lot more than it is written. So I'm willing to make code a bit harder
> to write for a payoff in readability. And if Nile becomes what it
> should be, one shouldn't have to write much code anyway.
>

With that philosophy, maybe we should be writing markup. That way we can
read code in a comfortable `document` format. I think Fortress takes that
approach.


>
> He's never taken on pen stroke approximation (which is vital for
> 2D vector graphics).


Why is this vital? I think there are different understandings of the
`image` abstraction here. One can understand images in terms of drawing
arcs and then filling between edges - a model commonly seen in
PostScript and Cairo, and apparently in Gezira. But it is not an
authoritative abstraction. Stroke-then-fill is a very imperative
approach to graphics modeling.

Elliott favors modeling lines in terms of areas. So do I. This seems to
shift pen stroke approximation to a utility role - valuable, but not vital.

Areas seem a more effective basis than a line/fill approach for
scalable scene-graph maintenance, declarative models, occlusion, and
level-of-detail indexing. Assuming a pen stroke is modeled as an area -
perhaps defined by a cubic Bézier path, a width, and a brush (e.g. for
dashes, colors, and flair) - one is still left with the challenge of
building a useful library of glyphs and brushes.
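
As a concrete (and entirely illustrative) sketch of "stroke as area"
in a functional-image style - this is not Gezira's or Elliott's actual
API - a stroke reduces to just another region:

    type Point  = (Double, Double)
    type Region = Point -> Bool

    -- A stroke as an area: all points within halfW of a parametric
    -- path. Distance-to-path is crudely approximated by sampling.
    strokeRegion :: (Double -> Point) -> Double -> Region
    strokeRegion path halfW p = any close samples
      where
        samples = [path (fromIntegral i / 256) | i <- [0 .. 256 :: Int]]
        close q = dist p q <= halfW
        dist (x1, y1) (x2, y2) = sqrt ((x1 - x2)^2 + (y1 - y2)^2)

A brush would then generalize Region to carry color, dash patterns,
and so on.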



> He's never taken on, say, analytical geometry clipping.


Granted. Elliott focuses on the rather generic (Real,Real)->PixelData
abstraction, and doesn't bother with a static ontology of geometries
subject to easy analysis. Clipping is certainly achieved, though.
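
To illustrate, in the same functional-image style (a sketch, not Pan's
actual API), clipping falls out as pointwise masking:

    type Point   = (Double, Double)
    type Image a = Point -> a

    -- Clip by intersecting with a region: inside, sample the image;
    -- outside, be transparent (Nothing).
    clip :: (Point -> Bool) -> Image a -> Image (Maybe a)
    clip region img p
      | region p  = Just (img p)
      | otherwise = Nothing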

One could work with geometry-based analyses, bounding boxes, and the
like. The diagrams package certainly does so.



> there's a lot _after_ rasterization


True. And your ability to squeeze all this stuff into a few hundred
lines of Nile code is certainly a valuable contribution to the STEPS
project.


> > Anti-aliased rasterization can certainly be modeled in
> > a purely functional system,
>
> Easier said than done, I think. Again, I struggled quite a bit to come
> up with the Gezira rasterizer (which is basically purely functional).
> I don't know of any previous anti-aliased rasterizer done in a purely
> functional style, do you? Pointers appreciated.
>

I think the challenge you are imagining is a technical one, not a
logical one. Modeling anti-aliased rasterization in a purely functional
system is quite straightforward, at least if you aren't composing
images in rasterized form. The best anti-aliasing is very much
mathematical (cf. the claims of the Morphic 3 project,
http://www.jvuletich.org/Morphic3/Morphic3-201006.html). The trick is
to make such a model perform well. At the moment, one will still
ultimately compile down to an imperative machine.
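
To make "straightforward" concrete, here is a toy purely functional
formulation - supersampled pixel coverage. It is obviously not high
performance, and it is not Gezira's algorithm:

    type Point  = (Double, Double)
    type Region = Point -> Bool

    -- Anti-aliased coverage of a pixel: the fraction of an n-by-n
    -- grid of sample points within the pixel that fall in the region.
    coverage :: Int -> Region -> (Int, Int) -> Double
    coverage n region (px, py) = fromIntegral hits / fromIntegral (n * n)
      where
        offs = [(fromIntegral i + 0.5) / fromIntegral n | i <- [0 .. n - 1]]
        hits = length [ () | dx <- offs, dy <- offs
                           , region (fromIntegral px + dx, fromIntegral py + dy) ]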


>
> You could just reproduce Foley et al., but that's such an imperative
> algorithm, I would think you'd end up with C-in-Haskell looking code.
> If so, I wouldn't count that.
>

It is true that some pure functions are best implemented with
imperative algorithms. Haskell offers facilities for doing this (the ST
monad, the State monad). But while writing such an algorithm may be
imperative, using it can still be purely functional. So I guess the
question is whether you'll be spending more time writing them or using
them. ;)
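
For example (standard Haskell, nothing exotic): the loop below mutates
an STRef, but runST seals it behind a pure interface, so callers can
never observe the mutation:

    import Control.Monad.ST
    import Data.STRef

    -- Imperative on the inside, pure on the outside.
    sumSquares :: [Int] -> Int
    sumSquares xs = runST $ do
        acc <- newSTRef 0
        mapM_ (\x -> modifySTRef acc (+ x * x)) xs
        readSTRef acc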


> > My own interest in this: I've been seeking a good graphics model for
> > reactive systems, i.e. rendering not just one frame, but managing
> > incremental computations and state or resource maintenance for future
> > frames. I don't think Gezira is the right answer for my goals,
>
> I think you're probably right. Gezira is fundamentally about the
> ephemeral process of rendering. Managing state and resources is a
> whole other ball game. At Viewpoints, I think the Lesserphic project
> is closer to what you're looking for.
>

Thanks for the suggestion.

One of the `features` that interests me for reactive systems is
properly modeling per-frame motion blur, possibly in a shader.
According to the studies I've read, frame rates must be much higher to
avoid perceptual `jerkiness` unless motion blur is included.
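
Roughly, per-frame motion blur is just temporal averaging over the
shutter interval. In the functional-image style, that is a few lines
(a sketch with made-up types, not a shader):

    type Point = (Double, Double)
    type Anim  = Double -> Point -> Double  -- time -> point -> intensity

    -- Average n samples of the animation across the shutter interval
    -- [t0, t0 + shutter) to approximate motion blur for one frame.
    motionBlur :: Int -> Double -> Double -> Anim -> Point -> Double
    motionBlur n t0 shutter anim p =
        sum [anim (t0 + shutter * k) p | k <- ks] / fromIntegral n
      where
        ks = [(fromIntegral i + 0.5) / fromIntegral n | i <- [0 .. n - 1]]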

Regards,

Dave
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
