On Wed, Nov 9, 2011 at 1:31 AM, David Barbour <[email protected]> wrote:
>
> On Tue, Nov 8, 2011 at 11:13 PM, Dan Amelang <[email protected]>
> wrote:
>>
>> I have never seen input prefixing in a stream-processing/dataflow
>> language before. I could only find one passing reference in the
>> literature, so unless someone points me to previous art, I'll be
>> playing this up as an original contribution in my dissertation :)
>
> It's old, old art. Even C file streams and C++ iostreams allow get, put,
> putback - where `putback` means put something back onto a stream you just
> `get` from. I've seen this pattern many times - often in lexers and parsers,
> Iteratees, and various other stream-processing models.
Of course I'm aware of these :) There's a Nile parser written in
OMeta, and there's one in Maru now. Both put objects on their input.
And I'm familiar with C++ streams; note that I based the Nile ">>"
and "<<" syntax on them.
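For concreteness, the get/putback pattern in question looks like this (a toy Python sketch of the C++ istream idea, nothing Nile-specific):

```python
class PushbackStream:
    """A stream with `get` and `putback`, analogous to C++
    istream::putback: items put back are returned by the next get."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._pushed = []  # LIFO: the last item put back comes out first

    def get(self):
        if self._pushed:
            return self._pushed.pop()
        return next(self._it)

    def putback(self, item):
        self._pushed.append(item)

s = PushbackStream("abc")
first = s.get()       # 'a'
s.putback(first)      # prefix it back onto the input
assert s.get() == 'a' # seen again
assert s.get() == 'b'
```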
Notice the first sentence of the paragraph that you quoted. I'm
pointing out that, as useful as input prefixing is, it doesn't appear
at all in stream processing languages. Furthermore, it doesn't appear
in stream processing models of computation.
Here's a bit of background. Take the early research, such as Duane
Adams' "A Computation Model with Data Flow Sequencing" in 1968.
(Strachey used streams to model I/O before that, much as UNIX uses
file handles). Around this time, you also had Seror's DCPL, and Scott's
"Outline of a Mathematical Theory of Computation".
If you start there, and go through Karp and Miller "Properties of a
Model for Parallel Computations", Kahn's process network papers,
Dennis' dataflow work (esp. Id and VAL), Wadge and Ashcroft's dataflow
(particularly GLU), McGraw's SISAL, Lee's Dataflow Process Networks,
up to recent work like Streamit and GRAMPS, you won't find a single
one that even proposes input prefixing (corrections welcome).
My point is that introducing this feature into a stream processing
language and demonstrating its utility might be a research
contribution.
I do appreciate your interest in Nile/Gezira, and you've brought up
interesting questions. Due to time constraints, though, I'm going to
have to put less effort into comments like the above that strike me as
somewhat glib. I hope not to offend anyone or dismiss truly informed
comments, though. I just have a lot on my plate right now.
>> Regarding your question about which processes would map poorly: the
>> built-in Nile processes DupZip, SortBy, and Reverse (maybe DupCat,
>> too). Many Gezira processes are a problem, such as ExpandSpans,
>> CombineEdgeSamples, ClipBeziers, DecomposeBeziers, pretty much all of
>> the processes in the file stroke.nl (pen stroking). There's probably
>> more, these are off the top of my head.
>
> Thanks. I'll peruse these.
As you look those over, it might help to know that the double arrow
"⇒" is for "process substitution", which is analogous to Kahn's
"reconfiguration" (see Kahn and MacQueen, 1976). That is, the effect
of the statement is to dynamically replace the current process with
the newly created sub-network following the arrow.
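If it helps, here is a rough analogy in a mainstream language (a toy Python sketch of my own; the process names are made up, and Nile's actual semantics differ):

```python
def double_until_negative(inp):
    """A stream process that, on seeing a negative value, replaces
    itself with a different process over the rest of its input --
    a rough analogy to Nile's "⇒" process substitution."""
    for x in inp:
        if x < 0:
            # "⇒ Negate": hand the remaining input to a newly
            # created sub-process; this process never runs again.
            # (The triggering item itself is consumed.)
            yield from negate(inp)
            return
        yield x * 2

def negate(inp):
    for x in inp:
        yield -x

list(double_until_negative(iter([1, 2, -1, 3, 4])))  # → [2, 4, -3, -4]
```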
>> The theory behind using Unicode in Nile is that source code is read a
>> lot more than it is written. So I'm willing to make code a bit harder
>> to write for a payoff in readability. And if Nile becomes what it
>> should be, one shouldn't have to write much code anyway.
>
> With that philosophy, maybe we should be writing markup. That way we can
> read code in a comfortable `document` format. I think Fortress takes that
> approach.
Yes, similar idea. Though as Alan points out, markup is very weak, and
we can do better with interactive, graphical environments. Thus, I've
always felt that my games with Nile syntax are somewhat futile.
>> He's never taken on pen stroke approximation (which is vital for
>> 2D vector graphics).
>
> Why is this vital? I think there are different understandings of the `image`
> abstraction here. One can understand images in terms of drawing arcs then
> filling between edges - and such a model is commonly seen in PostScript and
> Cairo and apparently Gezira. But it is not an authoritative abstraction.
> Pen-strokes with fill is a very imperative approach to graphics modeling.
> Elliott favors modeling lines in terms of areas. So do I. This seems to
> shift pen stroke approximation to a utility role - valuable, but not vital.
Is this conclusion really important enough to argue for? That
rendering lines should be considered valuable but not vital? I think
graphic designers would generally disagree. Regardless, just replace
all instances of "vital" with "valuable" in my original argument, and
I still stand by it.
I'm sorry, but at this point I think you're grasping at straws. I can
address one more comment, then I have to move on:
> Pen-strokes with fill is a very imperative approach...
This is just too much. Let's go over the details. In Gezira, I use the
"stroke-to-path" approach to pen stroking. This means that the
stroking pipeline takes a stream of Beziers that define the path of
the pen, and emits a stream of Beziers that define the outline of a
shape approximating the stroked path. Yes, I am aware that there are
other ways to do this, but let's focus on the "very imperative" claim
for now.
The output of the stroking pipeline, being just another Bezier
outlined shape, is fed to the same rasterize pipeline I use for, say,
glyph shapes. The output of the rasterize pipeline is then (typically)
sent to the ApplyTexture pipeline ("texture" here is any mapping of
(x,y) to colors, like a "brush" or "fill" in other frameworks), then
to the WriteToImage process.
In Nile, it looks pretty close to this:
StrokeBeziers → Rasterizer → ApplyTexture → WriteToImage
(I'm eliding the likely-used ClipBeziers and TransformBeziers in the
pipeline for simplicity's sake.)
Given your description, Gezira uses the "pen-strokes with fill"
approach that is "very imperative." And yet, every Nile process (and
sub-process) from StrokeBeziers down to WriteToImage is implemented in
a purely functional way (without even fancy imperative-ish
constructs like monads). And they are composed with just plain old
function composition.
So I don't get how a purely functional program, with no imperative-ish
constructs, can be the embodiment of a "very imperative"
algorithm/approach.
OK, I cannot give any more detailed responses to comments like this.
My employer, my PhD advisor, probably members of this list, and of
course myself, need me to spend my time elsewhere.
> Areas seem an effective basis for scalable scene-graph maintenance,
> declarative models, occlusion, and level-of-detail indexing compared to a
> line/fill approach.
> With the assumption that a pen stroke is modeled as an area - perhaps
> defined by a cubic bezier path, a width, and a brush (e.g. for dashes
> and colors and flair) - one is still left with a challenge of building
> a useful library of glyphs and brushes.
You are apparently partial to implicit surfaces. OK, why not give your
ideas a shot and show us how it turned out? Maybe try rendering
something similar to VPRI's Frank as a benchmark of functionality?
>> He's never taken on, say, analytical geometry clipping.
>
> Granted.
> Elliott focuses on the rather generic (Real,Real)->PixelData
> abstractions, and doesn't bother with a static ontology of geometries
> subject to easy analysis. Clipping is certainly achieved, though.
> One could work with geometry based analyses, bounding boxes, and the like.
> The diagrams package certainly does so.
Could seem that way. I'd be interested in how it works out for you.
>> there's a lot _after_ rasterization
>
> True. And your ability to squeeze all this stuff into a few hundred lines of
> Nile code is certainly a valuable contribution to the Steps project.
Thank you.
>> > Anti-aliased rasterization can certainly be modeled in
>> > a purely functional system,
>>
>> Easier said than done, I think. Again, I struggled quite a bit to come
>> up with the Gezira rasterizer (which is basically purely functional).
>> I don't know of any previous anti-aliased rasterizer done in a purely
>> functional style, do you? Pointers appreciated.
>
> I think the challenge you are imagining is a technical one, not a logical
> one.
Fine, call it a technical challenge instead of a logical one. I still
stand by my point.
> Modeling anti-aliased rasterization in a purely functional system is
> quite straightforward, at least if you aren't composing images in rasterized
> form.
My challenge to find a previous anti-aliased rasterizer done in a
purely functional style still stands. I issued it to counter what I
felt were unsubstantiated claims. You just gave me more claims :)
With more time, I'd dive into the "...at least if you aren't composing
images in rasterized form" part.
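To be concrete about what I mean by "purely functional form", here's the kind of thing that's easy to write but far from a real rasterizer (a naive supersampling toy, nothing like the published Gezira coverage formula):

```python
def coverage(inside, px, py, n=4):
    """Fraction of an n x n subsample grid of pixel (px, py) that
    falls inside the shape: a naive, purely functional estimate of
    anti-aliased coverage via supersampling."""
    hits = sum(
        inside(px + (i + 0.5) / n, py + (j + 0.5) / n)
        for i in range(n)
        for j in range(n)
    )
    return hits / (n * n)

# An implicit disk of radius 3 centered at (3, 3):
disk = lambda x, y: (x - 3) ** 2 + (y - 3) ** 2 <= 9.0

# A row of pixel coverages crossing the disk's edge -- pure
# functions all the way down, no mutation anywhere:
row = [coverage(disk, px, 3.0) for px in range(7)]
```

The hard part isn't this; it's doing good anti-aliasing analytically, on streams, without buffering the scene.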
> The best anti-aliasing is very much mathematical (cf. claims by
> Morphic 3 project, http://www.jvuletich.org/Morphic3/Morphic3-201006.html).
Juan's anti-aliasing approach, though better than Gezira's in certain
ways, isn't any more mathematical than Gezira's in terms of form (see
the published Gezira anti-aliasing coverage formula). In terms of
signal processing theory, though, his is more interesting. But this
doesn't relate to the main point.
> One of the `features` that interests me for reactive systems is properly
> modeling motion-blur per frame, possibly in a shader. According to the
> studies I've read, framerates must be a lot higher to avoid perceptual
> `jerkiness` unless motion blur is included.
Sounds interesting, I would give it a go and see what happens.
Beware of brain crack (a bit of fun pop philosophy):
http://www.youtube.com/watch?v=24prm3XjVgk&feature=player_detailpage#t=14s
Dan
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc