Re: [fonc] Morphic 3 defensive disclosure

2014-09-24 Thread Dan Amelang
Hi Juan,

Yes, that is some of the best TTF non-hinted rendering I've seen. Nice work!

And, yes, it does look like the bug is gone, thanks!

It will be interesting to look through a simplified, stand-alone(ish)
version of the code to fully grasp the details of your approach. Again, no
rush, though.

Dan

On Tue, Sep 23, 2014 at 6:50 PM, J. Vuletich (mail lists) 
juanli...@jvuletich.org wrote:

  Hi Dan,

 Quoting Dan Amelang daniel.amel...@gmail.com:

   Hi Juan,

 Thanks for the screenshots, that helps a lot! Now, it would be ideal to
have a visual like this for the comparison:
 http://typekit.files.wordpress.com/2013/05/jensonw-900.png. But, I know
 that you've got limited time to work on this, and such a thing wouldn't be
 very high priority. Maybe down the road.


 Please take a look at
 https://dl.dropboxusercontent.com/u/13285702/Morphic3-TimesNewRomanSample.png
 I used Times New Roman for the sample. It is similar but not identical to
 the font in the Adobe demo image. I did it by converting the text to SVG in
 Inkscape, then using Morphic 3 to draw the svg file.

 There is no hinting at all here! Just better rasterization. The shape and
 weight are truer and more uniform (especially at smaller sizes), and most
 glyphs look sharper. Starting from the third line, the quality is
 consistently better.

   Also, comparing your renderer+stroke font to the recently open sourced
 Adobe font rasterizer would be interesting, too (
 http://blog.typekit.com/2013/05/01/adobe-contributes-cff-rasterizer-to-freetype/).
 As far as I can tell, Adobe's rasterizer is pretty much the
 state-of-the-art rasterizer for outline font rasterization. If you're
 making the case that outline fonts are intrinsically unable to match the
 quality of your stroke font, this comparison would be a convincing way to
 do so.


 I think the real contribution of Morphic 3 here is better rasterization,
 which doesn't need hinting to give very crisp and detailed results.

   Going back to the topic of Morphic 3 rendering TrueType fonts, I'm
 attaching a few unfiltered zooms from your M3-TTF.png (your more recent
 M3-TTF-5.png looks the same in these areas). Notice the saturated colors in
 the middle of the black text. You mentioned that you have color fringing
 problems with point sizes below 9, but this font is about 12pt and the
 problem doesn't look like color fringing (i.e., the coloring isn't light,
 nor is it just on the fringes; see
 http://typekit.files.wordpress.com/2010/10/gdi-cleartype.png for what I
 understand color fringing to look like). Maybe something else is going on
 here?

 ... snip ...

 Dan


 Yes. There was a bug there. It only happened for curve segments shorter
 than one pixel, affecting only very small point sizes. Thanks for pointing
 it out! The sample I prepared today clearly shows that the bug was fixed.

 Cheers,
 Juan Vuletich



Re: [fonc] Morphic 3 defensive disclosure

2014-09-18 Thread Dan Amelang
Hi Juan,

Thanks for the screenshots, that helps a lot! Now, it would be ideal to
have a visual like this for the comparison:
http://typekit.files.wordpress.com/2013/05/jensonw-900.png. But, I know
that you've got limited time to work on this, and such a thing wouldn't be
very high priority. Maybe down the road.

Also, comparing your renderer+stroke font to the recently open sourced
Adobe font rasterizer would be interesting, too (
http://blog.typekit.com/2013/05/01/adobe-contributes-cff-rasterizer-to-freetype/).
As far as I can tell, Adobe's rasterizer is pretty much the
state-of-the-art rasterizer for outline font rasterization. If you're
making the case that outline fonts are intrinsically unable to match the
quality of your stroke font, this comparison would be a convincing way to
do so.

Going back to the topic of Morphic 3 rendering TrueType fonts, I'm
attaching a few unfiltered zooms from your M3-TTF.png (your more recent
M3-TTF-5.png looks the same in these areas). Notice the saturated colors in
the middle of the black text. You mentioned that you have color fringing
problems with point sizes below 9, but this font is about 12pt and the
problem doesn't look like color fringing (i.e., the coloring isn't light,
nor is it just on the fringes; see
http://typekit.files.wordpress.com/2010/10/gdi-cleartype.png for what I
understand color fringing to look like). Maybe something else is going on
here?

Back to your comments...I also like the idea of having a single rasterizer
for text and general graphics, or at least one that can be parameterized or
extended to handle text nicely as needed.

Yes, there is no question that one can improve on the visual output of the
popular rasterizers (cairo, skia, antigrain, qt, etc.). The question has
always been at what cost to software complexity and at what cost to
performance.

I wasn't able to mentally separate your rasterization code from the rest of
the Morphic 3 code (I'm not a big Smalltalker, so maybe it's just me), so I
couldn't evaluate the complexity cost. It also looked like there were
several optimizations mixed in that could have thrown off my understanding.

Would you be interested in creating a clean, completely unoptimized (and
thus slow), stand-alone version of the rasterizer just for exposition
purposes? Something for people like me to learn from? Again, I know you
have very limited time. No rush.

Dan

On Thu, Sep 18, 2014 at 6:38 AM, J. Vuletich (mail lists) 
juanli...@jvuletich.org wrote:

  Hi Dan,

 Quoting Dan Amelang daniel.amel...@gmail.com:

  Hi Juan,

 Glad that you're making progress! One question: how hard would it be to
 use a TrueType font (or any fill-based font) with your rasterizer?


 It is some work, as the TrueType font needs to be imported. I already did
 this for DejaVu, printing a text sample to pdf, then converting that to svg
 with Inkscape, and then loading the svg in Cuis / Morphic 3 and using a
 CodeGeneratingCanvas to write the Smalltalk code for me. The attachment is a
 sample image using just that font.

  And, I would be interested in comparing the visual results of rendering
 1) a TrueType font via FreeType, 2) a TrueType font via your Morphic 3
 rasterizer, 3) your stroke font via the Morphic 3 rasterizer.


 Taking a look at the attachment, and the original attachment in the mail linked
 below, and comparing with FreeType samples (for example, the regular Cuis
 fonts), I think that (sorted by visual quality):

 a) For pointSize <= 14
   1) Morphic 3 / StrokeFont with autohinting
   2) FreeType / TrueType with autohinting
   3) Morphic 3 / TrueType (no autohinting possible yet)
 Note 1: For M3/TTF I could take the autohinting algorithm from FreeType,
 and quality would be at least on par with it, for point sizes >= 9.
 Note 2: For point sizes < 9 (fills less than one pixel), M3/TTF produces
 color fringes. I think this can be improved with some work.
 I didn't spend much time on these issues, as I focused on StrokeFonts,
 which give the best results, at least for a programming environment.
 Applications might need TTF, and there are possible enhancements to be done.

 b) Rotated text. Here the difference in quality is rather small.
   1) Morphic 3 / StrokeFont (autohinting off)
   2) FreeType / TrueType
   3) Morphic 3 / TrueType

 c) Point sizes > 14. Here I think the three alternatives look really good,
 no autohinting is needed, and there is no clear winner. (The same would go
 for most point sizes on a Retina or other hi-dpi display, such as phones.)

  I know option 3) produces the best quality, I'm just interested in
 the visual details. Such a comparison might also be helpful to showcase
 and explain your work to others.


 It is also worth noting that the usual Cairo + FreeType (or Cairo + Pango
 + FreeType) combo uses different algorithms for text and graphics, as
 FreeType can do much better than Cairo but cannot do general vector
 graphics. But Morphic 3 gives the same top quality for vector graphics too,
 as text is done simply

Re: [fonc] Morphic 3 defensive disclosure

2014-09-17 Thread Dan Amelang
Hi Juan,

Glad that you're making progress! One question: how hard would it be to use
a TrueType font (or any fill-based font) with your rasterizer? And, I would
be interested in comparing the visual results of rendering 1) a TrueType
font via FreeType, 2) a TrueType font via your Morphic 3 rasterizer, 3)
your stroke font via the Morphic 3 rasterizer.

I know option 3) produces the best quality, I'm just interested in the
visual details. Such a comparison might also be helpful to showcase and
explain your work to others.

Dan

On Wed, Sep 17, 2014 at 6:25 AM, J. Vuletich (mail lists) 
juanli...@jvuletich.org wrote:

 Hi Dan, Folks,

 I finally published the Morphic 3 code in its current state. It is still
 unfinished, and in need of cleanup. I hope you are still interested in this
 stuff.

 See http://jvuletich.org/pipermail/cuis_jvuletich.org/2014-September/001692.html
 I attached there a demo image with some SVG
 drawings, and some text at rather small sizes, and some rotated text too.
 This took me a lot of time, because for maximum text quality I had to
 design a new font, based on pen strokes (and not fills!). I based it on the
 technical lettering I learned at high school.

 I think I'm now close to the limit of what is possible on regular LCDs
 when trying to optimize crispness, absence of pixellation and absence of
 color fringes. What I need to do now is to fill in some details, then
 optimization and a VM plugin. Then it could become the default graphics
 engine for Cuis ( www.cuis-smalltalk.org ).

 Cheers,
 Juan Vuletich

 Quoting Dan Amelang daniel.amel...@gmail.com:

  Hi Juan,

 I think it's great that you are sharing your rasterization approach.
 So far it sounds pretty interesting. FWIW, after you've released the
 code, I would be interested in using this approach to create a higher
 quality, drop-in replacement for the current Rasterize stage in the
 Gezira rendering pipeline.

 Best,

 Dan

 On Tue, Dec 3, 2013 at 6:24 PM, J. Vuletich (mail lists)
 juanli...@jvuletich.org wrote:

 Hi Folks,

 The first defensive disclosure about Morphic 3 has been accepted and
 published at
 http://www.defensivepublications.org/publications/prefiltering-antialiasing-for-general-vector-graphics
 and http://ip.com/IPCOM/000232657.

 Morphic 3 is described at
 http://www.jvuletich.org/Morphic3/Morphic3-201006.html

 This paves the way for releasing all the code, as no one will be able to
 patent it.

 Cheers,
 Juan Vuletich








Re: [fonc] Morphic 3 defensive disclosure

2013-12-04 Thread Dan Amelang
Hi Juan,

I think it's great that you are sharing your rasterization approach.
So far it sounds pretty interesting. FWIW, after you've released the
code, I would be interested in using this approach to create a higher
quality, drop-in replacement for the current Rasterize stage in the
Gezira rendering pipeline.

Best,

Dan

On Tue, Dec 3, 2013 at 6:24 PM, J. Vuletich (mail lists)
juanli...@jvuletich.org wrote:
 Hi Folks,

 The first defensive disclosure about Morphic 3 has been accepted and
 published at
 http://www.defensivepublications.org/publications/prefiltering-antialiasing-for-general-vector-graphics
 and http://ip.com/IPCOM/000232657.

 Morphic 3 is described at
 http://www.jvuletich.org/Morphic3/Morphic3-201006.html

 This paves the way for releasing all the code, as no one will be able to
 patent it.

 Cheers,
 Juan Vuletich



Re: [fonc] Efficiency in Nile Was: Thoughts on disruptor pattern

2012-08-08 Thread Dan Amelang
Hi Shawn,

On Sun, Jul 15, 2012 at 12:00 PM, Shawn Morel shawnmo...@me.com wrote:

 The runtime also is designed to minimize L1 cache misses, more on that
 if there is interest.

 I would be interested in some of the details.

I'll write more about this in a separate email.

 Regarding the Nile C runtime, inter-thread communication is currently
 not a bottleneck.

 what are the current Nile bottlenecks?

It depends on the Nile program. For several graphics pipelines, the
SortBy stage (a runtime-supplied Nile process) often takes a good part
of the time. This is because rasterization often involves sorting many
stream elements, twice. I have a handful of ideas for optimizing
SortBy that I just haven't implemented yet.

The important point here, though, is that most often the performance
profiles show that most time is spent performing the essential
computations of the Nile program, rather than incidental runtime work.

  Queuing is a relatively infrequent
 occurrence (compared to computation and stream data read/writing),

 How is that so? I would have assumed that joins in the stream processor
 effectively become a reader-writer problem.

I'm not sure I understand the question, but I'll try to answer anyway.

The Nile model is about stream processing (rather than say,
fine-grained/reactive dataflow). So the data is processed in batches
(i.e., data is buffered). Queuing/dequeuing stream data is done at
batch granularity, rather than individual data element granularity.
Thus, queuing is a relatively infrequent occurrence compared to
regular computation and individual stream element reading/writing.

Regarding joins in the process network, a single process performs the
work of combining the two incoming streams. So we have basically a
two-writer-one-reader scenario, but the two writers do not share a
queue. The joining process (the reader) pulls from two queues, and
produces output on a single queue.

It might help to know that joins in Nile are limited to zipping
(combining input elements from the two streams in an alternating
fashion) and concatenating (appending the entire input stream of one
branch to the entire stream of the other). There is no "first element
to arrive" on either stream as in reactive/event systems.
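
To make the two join kinds concrete, here is a toy sketch, with
hypothetical Python generators standing in for Nile streams (an
illustration of the semantics above, not the actual Nile runtime):

    # Toy models of Nile's two join kinds. Hypothetical Python, for
    # illustration only -- not the actual Nile C runtime.

    def zip_join(left, right):
        # Combine elements from the two input streams in alternating fashion.
        left, right = iter(left), iter(right)
        while True:
            try:
                yield next(left)
                yield next(right)
            except StopIteration:
                return

    def cat_join(left, right):
        # Append the entire stream of one branch to the entire other stream.
        yield from left
        yield from right

    print(list(zip_join([1, 3, 5], [2, 4, 6])))  # [1, 2, 3, 4, 5, 6]
    print(list(cat_join([1, 2], [3, 4])))        # [1, 2, 3, 4]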

  plus the queues are kept on a per-process basis (there are many more
 Nile processes than OS-level threads running) which scales well.


 Could this arbitrary distinction between process scheduling, OS threads and
 app threads (greenlets, Nile processes, etc.) be completely removed with a
 pluggable hierarchical scheduler? For example, at the top level you might
 have a completely fair scheduler (4 processes = 1/4 of the time to your
 process, assuming you can make use of it). Within that, it's up to you to
 divvy up time. I'm visualizing this kind of like how the Mach kernel had
 external memory pagers that you could plug in if you ever had better page
 eviction models for your domain.

 Then there's obviously how that interacts with the HW model for thread /
 process switching and memory barriers, but that seems like a separate
 problem.

Again, I'm not sure I follow you here. But here goes:

The Nile runtime (the multithreaded C-based one) uses OS threads only
to get access to multiple cores, not because I want to use the OS
scheduler to do anything for me regarding load balancing or Nile
process scheduling. Ideally, the number of OS threads used in the
runtime equals (nearly) the number of cores (or virtual cores if SMT
is present), and the OS will assign each OS thread to a separate
(virtual) core.

The main scheduling and load balancing is done by the runtime at the
Nile process level (as in "green threads"). Scheduling is very specific
to the Nile computational model, so I don't see how having a pluggable
scheduler might help.
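
A minimal sketch of that division of labor, with Python generators
standing in for Nile processes and a shared ready queue standing in for
the real scheduler (hypothetical; the actual runtime is C and far more
careful):

    # Hypothetical sketch: OS threads exist only to occupy the cores;
    # scheduling happens at the green-process level via a shared ready
    # queue. Not the actual Nile C runtime.
    import os, queue, threading

    ready = queue.Queue()  # runnable "Nile processes" (generators here)

    def worker():
        while True:
            try:
                proc = ready.get_nowait()
            except queue.Empty:
                return            # simplification: quit when queue is empty
            try:
                next(proc)        # advance the process by one step
                ready.put(proc)   # still has work, so reschedule it
            except StopIteration:
                pass              # process finished

    def counter(name, n):
        for i in range(n):
            print(name, i)
            yield                 # hand the core back to the scheduler

    ready.put(counter("a", 3))
    ready.put(counter("b", 3))
    threads = [threading.Thread(target=worker)
               for _ in range(os.cpu_count() or 1)]
    for t in threads: t.start()
    for t in threads: t.join()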

Sorry for the late response -- was on vacation.

Dan


Re: [fonc] Any thoughts on Disruptor pattern for high-throughput, low latency concurrency?

2012-07-13 Thread Dan Amelang
Hi Josh, good to hear from you again. Hope Google is treating you well :)

The Disruptor looks to me like it could be an interesting approach
to optimizing inter-thread communication. Their technical paper is
very readable. For their problem domain, it makes a lot of sense.

Regarding the Nile C runtime, inter-thread communication is currently
not a bottleneck. So even if you created a C-based disruptor for Nile,
or generated Java code from Nile (in order to use theirs, which is
Java-based), I'm pretty sure you wouldn't see a performance
improvement.

The Nile runtime currently does several things to keep inter-thread
communication/contention down. It uses queues of fixed-size buffers
for batched stream data. Once a Nile process gets hold of a buffer
for reading/writing, it has exclusive access to it, so no
synchronization is needed during reading/writing. Once the process is
done with the buffer, then it might contend with another when queueing
the buffer, but this is unlikely. Queuing is a relatively infrequent
occurrence (compared to computation and stream data read/writing),
plus the queues are kept on a per-process basis (there are many more
Nile processes than OS-level threads running) which scales well.

On top of that, the buffers (and other managed memory objects) are
laid out carefully to avoid false sharing (of cache lines).
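
A toy sketch of that batching scheme (hypothetical Python; the buffer
size and the use of Python lists and queues are invented for
illustration, and the cache-line-aware layout is not modeled):

    # Hypothetical sketch of the buffer-batching scheme: synchronization
    # happens only when a whole buffer is queued or dequeued, never per
    # element. Not the actual Nile C runtime.
    import queue

    BUFFER_SIZE = 128
    inbox = queue.Queue()        # a per-process queue of filled buffers

    def produce(items):
        buf = []
        for x in items:
            buf.append(x)        # exclusive access: no locking per element
            if len(buf) == BUFFER_SIZE:
                inbox.put(buf)   # possible contention only here
                buf = []
        if buf:
            inbox.put(buf)

    def consume():
        while not inbox.empty():
            buf = inbox.get()    # possible contention only here
            for x in buf:        # exclusive access again: lock-free loop
                yield x

    produce(range(1000))
    assert list(consume()) == list(range(1000))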

The runtime also is designed to minimize L1 cache misses, more on that
if there is interest.

I apologize that as usual with my stuff, none of the Nile C runtime is
documented (and there aren't even program comments). But, I'm
currently writing all this up for my dissertation, which will be
available in case these details are of interest to anyone.

Dan

On Fri, Jul 13, 2012 at 8:46 AM, Josh Gargus j...@schwa.ca wrote:
 I hope this may be of general interest, but I'm personally interested in
 Dan's thoughts on whether Disruptors might be a suitable compilation
 target for Nile.  My intuition is that it may be the right way for Nile to
 run efficiently on commonly available multicore processors (i.e. by
 minimizing branch-misprediction and memory contention, and optimizing cache
 behavior).

 I'm referring to:

 http://code.google.com/p/disruptor/


 Google will turn up plenty more references, but the StackOverflow topic is
 worthwhile:
 http://stackoverflow.com/questions/6559308/how-does-lmaxs-disruptor-pattern-work

 One limitation that I saw was that it appears to work only for simple linear
 dataflows: each pipeline stage consumes and produces a single value.  This
 is a consequence of each entry in the ring-buffer having a fixed size (each
 entry is a data structure containing the scratch space necessary to record
 the input/output of each dataflow stage).

 This seems to be a serious limitation, since Nile allows you to easily
 express dataflows with more complicated topologies.  For example, to draw a
 filled shape bounded by a sequence of bezier curves, you might write a Nile
 program that recursively splits curves until they are small enough or
 straight enough to be approximated by linear segments, and then rasterizes
 these to produce pixels to be shaded by downstream Nile elements.
 The problem is that you don't know how many outputs will be produced by each
 pipeline stage.

 A solution that occurred to me this morning is to use multiple ring buffers.
 Linear subgraphs of the dataflow (i.e. chained sequences of
 one-input-to-one-output elements) can fit into a single ring buffer, but
 elements that produce a varying number of outputs would output to a
 different ring buffer (or multiple ring buffers if it can produce multiple
 types of output).  This would be extremely cumbersome to program manually,
 but not if you compile down to it from Nile.

 I don't understand the Nile C runtime very well, so it's possible that it's
 already doing something analogous to this (or even smarter).

 Thoughts?

 Cheers,
 Josh


Re: [fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-09 Thread Dan Amelang
On Wed, Nov 9, 2011 at 1:31 AM, David Barbour dmbarb...@gmail.com wrote:

 On Tue, Nov 8, 2011 at 11:13 PM, Dan Amelang daniel.amel...@gmail.com
 wrote:

 I have never seen input prefixing in a stream-processing/dataflow
 language before. I could only find one passing reference in the
 literature, so unless someone points me to previous art, I'll be
 playing this up as an original contribution in my dissertation :)

 It's old, old art. Even C file streams and C++ iostreams allow get, put,
 putback - where `putback` means put something back onto a stream you just
 `get` from. I've seen this pattern many times - often in lexers and parsers,
 Iteratees, and various other stream-processing models.

Of course I'm aware of these :) There's a Nile parser written in
OMeta, and there's one in Maru now. Both put objects on their input.
And I'm familiar with C++ streams; notice how I based the Nile <<
and >> syntax on them.

Notice the first sentence of the paragraph that you quoted. I'm
pointing out that, as useful as input prefixing is, it doesn't appear
at all in stream processing languages. Furthermore, it doesn't appear
in stream processing models of computation.

Here's a bit of background. Take the early research, such as Duane
Adams' "A Computation Model with Data Flow Sequencing" in 1968.
(Strachey used streams to model I/O before that, like UNIX uses file
handles). Around this time, you also had Seror's DCPL, and Scott's
"Outline of a Mathematical Theory of Computation".

If you start there, and go through Karp and Miller's "Properties of a
Model for Parallel Computations", Kahn's process network papers,
Dennis' dataflow work (esp. Id and VAL), Wadge and Ashcroft's dataflow
(particularly GLU), McGraw's SISAL, Lee's "Dataflow Process Networks",
up to recent work like StreamIt and GRAMPS, you won't find a single
one that even proposes input prefixing (corrections welcome).

My point is that introducing this feature into a stream processing
language and demonstrating its utility might be a research
contribution.

I do appreciate your interest in Nile/Gezira, and you've brought up
interesting questions. Due to time constraints, though, I'm going to
have to put less effort into comments like the above that strike me as
somewhat glib. I hope not to offend anyone or dismiss truly informed
comments, though. I just have a lot on my plate right now.

 Regarding your question about which processes would map poorly: the
 built-in Nile processes DupZip, SortBy, and Reverse (maybe DupCat,
 too). Many Gezira processes are a problem, such as ExpandSpans,
 CombineEdgeSamples, ClipBeziers, DecomposeBeziers, pretty much all of
 the processes in the file stroke.nl (pen stroking). There's probably
 more, these are off the top of my head.

 Thanks. I'll peruse these.

As you look those over, it might help to know that the double arrow
⇒ is for process substitution, which is analogous to Kahn's
"reconfiguration" (see Kahn and MacQueen, 1976). That is, the effect
of the statement is to dynamically replace the current process with
the newly created sub-network following the arrow.

 The theory behind using Unicode in Nile is that source code is read a
 lot more than it is written. So I'm willing to make code a bit harder
 to write for a payoff in readability. And if Nile becomes what it
 should be, one shouldn't have to write much code anyway.

 With that philosophy, maybe we should be writing markup. That way we can
 read code in a comfortable `document` format. I think Fortress takes that
 approach.

Yes, similar idea. Though as Alan points out, markup is very weak, and
we can do better with interactive, graphical environments. Thus, I've
always felt that my games with Nile syntax are somewhat futile.

 He's never taken on pen stroke approximation (which is vital for
 2D vector graphics).

 Why is this vital? I think there are different understandings of the `image`
 abstraction here. One can understand images in terms of drawing arcs then
 filling between edges - and such a model is commonly seen in PostScript and
 Cairo and apparently Gezira. But it is not an authoritative abstraction.
 Pen-strokes with fill is a very imperative approach to graphics modeling.
 Elliott favors modeling lines in terms of areas. So do I. This seems to
 shift pen stroke approximation to a utility role - valuable, but not vital.

Is this conclusion really important enough to argue for? That
rendering lines should be considered "valuable, but not vital"? I think
graphic designers would generally disagree. Regardless, just replace
all instances of "vital" with "valuable" in my original argument, and
I still stand by it.

I'm sorry but at this point, I think you're grasping at straws. I can
address one more comment, then I have to move on:

 Pen-strokes with fill is a very imperative approach...

This is just too much. Let's go over the details. In Gezira, I use the
"stroke-to-path" approach to pen stroking. This means that the
stroking pipeline takes a stream of Beziers

Re: [fonc] Fibonacci Machine

2011-11-09 Thread Dan Amelang
Hi Dale,

You are right, that's a mistake in the paper. You should switch
"start-0" and "start-1" on the top branch. Believe it or not, I did not
write this section :) It's not really about Nile, but about dataflow
in Nothing.

In fact, beware of the sentences: "The way the Nile runtime works was
generalized. Instead of expecting each kernel to run only when all of
its input is ready, then run to completion and die, the Nothing
version keeps running any kernel that has anything to do until they
have all stopped." This makes it sound like Nile runtimes in general
wait for all the input to be ready before running a process. This is
only true of the Squeak and Javascript versions of the Nile runtime.
The C-based multithreaded one does not. Sorry I didn't catch this
before publication.

Regardless, the Nothing work strayed a bit from the Nile model of
computation, and not in directions I would take it, so don't take too
much about Nile from that section. Also, I wouldn't advocate writing
Fibonacci like this in Nile. Nile was designed for coarse-grained
dataflow, not fine-grained dataflow.

The main reason for this was my opinion that 1) mathematical
statements are often more readable than their visual, fine-grained
dataflow equivalents* and 2) coarse-grained dataflow can be quite
readable due to fewer communication paths, and thus easier
composition, and in many cases they contain only a simple
left-to-right flow.

On top of that, it is easier to efficiently parallelize coarse-grained
dataflow because the communication between components is much less,
allowing parallel hardware to operate more independently.

* For very simple statements, this may not be so true, but when
scaling up to more practical examples, I think fine-grained dataflow
gets messy fast.

Regards,

Dan

On Wed, Nov 9, 2011 at 6:45 AM, Dale Schumacher
dale.schumac...@gmail.com wrote:
 Thanks for disseminating the latest report.  It is, as always, an
 inspiration to see all the fine work being done.  I can hardly wait to
 play with the final system, and perhaps extend and build on it.

 One of the first things I did was re-create (in Humus) the Fibonacci
 Machine from the Nile data-flow model.  I think that the start-0 and
 start-1 processes in the upper branch (b1, b2, b3) should be
 reversed.  It seems that the add process should first receive 0
 (from b3) and 1 (from b5), then the 1 from b2 can be forwarded
 and combined with the 1 from the feedback loop (b7).  I like how the
 pair of forwarders in the upper branch form a kind of delay-line
 effect to stagger the previous and next results.

 I appreciate the opportunity to explore and experiment with the ideas
 here.  It will be even better when I can do it in the same environment
 that you do, rather than translating into my own system.

 On Mon, Nov 7, 2011 at 5:08 PM, karl ramberg karlramb...@gmail.com wrote:
 http://www.vpri.org/pdf/tr2011004_steps11.pdf

 Karl



Re: [fonc] Fibonacci Machine

2011-11-09 Thread Dan Amelang
My original reply had a mistake in it, please disregard it. The
following completely replaces it:

Hi Dale,

You are right, that's a mistake in the paper. You should switch
"start-0" and "start-1" on the top branch. Believe it or not, I did not
write this section :) It's not really about Nile, but about a
particular approach to dataflow in Nothing.

In fact, beware of the sentences:

"The way the Nile runtime works was generalized. Instead of expecting
each kernel to run only when all of its input is ready, then run to
completion and die, the Nothing version keeps running any kernel that
has anything to do until they have all stopped."

The above makes it sound like Nile runtimes in general wait for all
the input to be ready before running a process. This is only true of
the Squeak and Javascript versions of the Nile runtime. The C-based
multithreaded one does not. Sorry I didn't catch this before
publication.

Regardless, the Nothing work strayed a bit from the Nile model of
computation, and not in directions I would take it, so don't take too
much about Nile from that section. Also, I wouldn't advocate writing
Fibonacci like this in Nile. Nile was designed for coarse-grained
dataflow, not fine-grained dataflow. The main reason for this was my
opinion that 1) mathematical statements are often more readable than
their visual, fine-grained dataflow equivalents* and 2) coarse-grained
dataflow can be quite readable due to fewer communication paths, and
thus easier composition, and in many cases they contain only a simple
left-to-right flow.

On top of that, it is easier to efficiently parallelize coarse-grained
dataflow because the communication between components is much less,
allowing parallel hardware to operate more independently.

* For very simple statements, this may not be so true, but when
scaling up to more practical examples, I think fine-grained dataflow
gets messy fast.
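
For the curious, the feedback network described above collapses to a
few lines of sequential code; here is a hypothetical Python sketch
(not Nothing/Nile code), with comments mapping the pieces to the
diagram:

    # Hypothetical sketch of the Fibonacci feedback network. The two
    # injected constants play the roles of "start-0" and "start-1"; the
    # tuple assignment plays the "add" kernel plus the one-element
    # delay line in the feedback loop. Not Nothing/Nile code.
    def fib_machine(n):
        prev, cur = 0, 1                  # "start-0" and "start-1"
        for _ in range(n):
            yield cur                     # value leaving the network
            prev, cur = cur, prev + cur   # "add" kernel + delay line

    print(list(fib_machine(8)))  # [1, 1, 2, 3, 5, 8, 13, 21]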

Regards,

Dan



Re: [fonc] new document

2011-11-08 Thread Dan Amelang
On Tue, Nov 8, 2011 at 4:22 PM, David Barbour dmbarb...@gmail.com wrote:
 Can you elucidate the distinctions between Nile and Gezira?

Nile is the programming language. Its syntax is a bit like Haskell.
The high-level model of computation is a variation of Kahn process
networks. The low-level part is a single-assignment,
mathematics-oriented language for specifying the internal behavior of
a process.

 Based on the
 (undocumented) code, I guess that Nile is more of a process model (queues,
 heaps, threads)

You're looking at the implementation details of one of the Nile
execution environments (i.e., runtimes), the multithreaded C-based
one. The queues, threads, etc. are used for implementing the process
network part of Nile on multithreaded CPUs.

 and Gezira is more of the rendering.

Gezira is a 2D vector graphics renderer written in Nile.

 In that case, it may be
 Gezira I was thinking would compile well to shaders on a GPU.

Certain parts of Gezira belong to the subset of Nile that could be
efficiently executed on a GPU.

 OpenCL is certainly one approach to leveraging a GPGPU to a reasonable
 degree. Might be worth pursuing that route. But I've been surprised what can
 be done with just the rendering pipelines. Pure functional graphics convert
 to shaders + uniforms quite well.

Certain stages of Gezira's rendering pipeline would not convert to
shaders very well. Gezira covers different territory than, say, Pan,
Vertigo, etc. None of Conal Elliott's "pure functional graphics"
projects ever tried to perform, say, anti-aliased rasterization
(AFAIK). They always relied on non-pure functional systems
underneath to do the heavy lifting.

Gezira, on the other hand, strives to do it all. And in a mostly
functional way. The processes of Nile are side-effect free, with the
exception of the final WriteToImage stage.

Regards,

Dan



[fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-08 Thread Dan Amelang
Hi David,

On Tue, Nov 8, 2011 at 6:23 PM, David Barbour dmbarb...@gmail.com wrote:

 The high-level model of computation is a variation of Kahn process
 networks. The low-level part is a single-assignment,
 mathematics-oriented language for specifying the internal behavior of
 a process.

 I've been reading through Nile and Gezira code and understand the model
 better at this point. It's basically pure functional stream processing,
 consuming and generating streams. I understand that `` generates one
 output, and `` seems to push something back onto the input stream for
 re-processing.

Yes, you are correct. The spatial metaphor here is that streams flow
from left to right, so >> x pushes x to the right, onto the tail of
the output stream. << x pushes (pulls? :) x to the left, onto the
head of the input stream. Pipeline construction works this way too,
e.g., ClipBeziers → Rasterize → ApplyTexture → WriteToImage. A bit
silly, perhaps, but it works.
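
In hypothetical Python terms, that left-to-right construction is just
composition of stream transformers (the stage names come from the
Gezira pipeline above; the bodies are invented placeholders):

    # Hypothetical stand-ins for Gezira stages: each is a stream
    # transformer (iterator in, iterator out). Bodies are invented;
    # only the composition pattern is the point.
    def ClipBeziers(stream):   return (b for b in stream)          # placeholder
    def Rasterize(stream):     return (("span", b) for b in stream)
    def ApplyTexture(stream):  return ((s, "rgba") for s in stream)
    def WriteToImage(stream):  return list(stream)                 # final sink

    pipeline = lambda s: WriteToImage(ApplyTexture(Rasterize(ClipBeziers(s))))
    print(pipeline(["bez1", "bez2"]))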

"Input prefixing" is what I call this pushing of data onto the input
stream, though I'm not set on that term. You used the term "pushback",
which I like, but the problem is that we're pushing onto the _front_
of the input stream, and "pushfront" just doesn't have the same ring
:)

Whatever the name, this feature is vital to writing expressive
programs in Nile. It provides a recursion-like capability. For
example, the DecomposeBeziers process successively decomposes Beziers
until they are small enough to process. This is done by splitting the
Bezier into two parts (à la De Casteljau), and pushing each sub-Bezier
onto the input stream.
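
Here is a small sketch of that recursion-by-pushback pattern; it is
hypothetical Python with quadratic Beziers and a crude one-pixel size
test, not the real DecomposeBeziers:

    # Hypothetical sketch of "input prefixing" as recursion: a process
    # splits a too-large Bezier (De Casteljau, t = 1/2) and pushes both
    # halves back onto the FRONT of its own input stream.
    from collections import deque

    def midpoint(p, q):
        return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

    def split(bez):
        a, b, c = bez                     # quadratic Bezier control points
        ab, bc = midpoint(a, b), midpoint(b, c)
        m = midpoint(ab, bc)
        return (a, ab, m), (m, bc, c)

    def small_enough(bez):
        (ax, ay), _, (cx, cy) = bez       # crude test: endpoints within 1px
        return abs(cx - ax) <= 1 and abs(cy - ay) <= 1

    def decompose(beziers):
        stream = deque(beziers)
        while stream:
            bez = stream.popleft()
            if small_enough(bez):
                yield bez                 # ">> bez": emit downstream
            else:
                first, second = split(bez)
                stream.appendleft(second) # "<< second" then "<< first"
                stream.appendleft(first)  # keeps the halves in order

    print(len(list(decompose([((0, 0), (8, 16), (16, 0))]))))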

I have never seen input prefixing in a stream-processing/dataflow
language before. I could only find one passing reference in the
literature, so unless someone points me to previous art, I'll be
playing this up as an original contribution in my dissertation :)

 Which Nile operators do you anticipate would translate poorly to shaders? I
 guess `zip` might be a problem. SortBy and pushback operators - at least if
 finite - could be modeled using shader global state, but that would be a bit
 of a hack (e.g. receive some sort of EOF indicator to emit final elements).
 Hmmm

Yes, this is the beginning of the difficulties. Just taking the input
prefixing issue, it's problematic to model the unbounded input stream
as global state. You have issues because of the finiteness of the
global state, and because of the inefficiency of global write/read
access in the shader (see GPU docs).

And as I brought up before, even if one can get something to run on
the GPU, that's very different from getting something to run much
faster than on the CPU.

Regarding your question about which processes would map poorly: the
built-in Nile processes DupZip, SortBy, and Reverse (maybe DupCat,
too). Many Gezira processes are a problem, such as ExpandSpans,
CombineEdgeSamples, ClipBeziers, DecomposeBeziers, pretty much all of
the processes in the file stroke.nl (pen stroking). There's probably
more, these are off the top of my head.

 I think I'd be in trouble actually writing Nile code... I don't have a text
 editor with easy Unicode macros. Which do you use?

I use vim. So I hit ctrl-v u2200 for ∀.

Ideally, we'd have a Nile IDE with keyboard macros in addition to a
little char map to click on (Bert built one for Frank).

The theory behind using Unicode in Nile is that source code is read a
lot more than it is written. So I'm willing to make code a bit harder
to write for a payoff in readability. And if Nile becomes what it
should be, one shouldn't have to write much code anyway.

 I agree that Conal Elliott's focus has certainly been on composable,
 morphable, zoomable graphics models

I'm glad we agree...wait a second...when did I say the above?

 - primarily, everything that happens
 before rasterization.

Ah, well, now I don't agree that his focus has been on everything
that happens before rasterization. He's left out a lot. He's never
taken on pen stroke approximation (which is vital for 2D vector
graphics). I had to struggle a bit to come up with my functional
approach to pen stroking (if I missed prior art, let me know!). He's
never taken on, say, analytical geometry clipping. On top of that,
there's a lot _after_ rasterization, and he doesn't address that
territory much either.

I like Conal's work, really. I read all his papers on functional
graphics several years ago, and it probably subconsciously influenced
my research. I'm just objecting to the idea that he covered very much
functionality in the computer graphics space. I think he took on the
easiest niche to model in a purely functional language.

 Anti-aliased rasterization can certainly be modeled in
 a purely functional system,

Easier said than done, I think. Again, I struggled quite a bit to come
up with the Gezira rasterizer (which is basically purely functional).
I don't know of any previous anti-aliased rasterizer done in a purely
functional style, do you? Pointers appreciated.

You could 

Re: [fonc] Other interesting projects?

2010-05-10 Thread Dan Amelang
Hi Chris, glad to have you around!

On Sun, May 9, 2010 at 9:50 PM, Chris Double chris.dou...@double.co.nz wrote:
 On 10/05/10 04:59, Alan Kay wrote:

 There are already
 quite a few Smalltalk elements in Factor (and the postfix language
 itself (for most things) could be used as the byte-code engine for a
 Smalltalk (looking backwards) and for more adventurous designs (looking
 forward)).

 Factor already has a Smalltalk implementation (a parser and compiler to
 Factor code) that Slava did a while back as a proof of concept. I'm not sure
 how performant or complete it is however.

 Dan Amelang has been moving Nile to a really nice place, and it would be
 relatively easy to retarget the OMeta compiler for this (particularly
 the JS grounded one) to ground in Factor.

 Is there a Nile grammar somewhere? I tried searching for it and didn't come
 up with anything. I see Dan's github repository but it doesn't seem to
 include the Ometa definition.

There is a preliminary version of the Nile grammar embedded in the
OMeta-based Nile-to-C compiler in my nile repository. I hope to
finalize (i.e., remove the ugly warts from) the Nile syntax in the
next couple weeks. In addition, Alex and I have been working on the
formal semantics of Nile. In the end, I hope to have both a small,
clean language and a small, clean compiler for others to play with. I
hope to pique your interest!

Dan



Re: [fonc] Other interesting projects?

2010-05-10 Thread Dan Amelang
FYI, there are some failed experiments left over in this grammar, like
the [ exprs:xs ] syntax for tuples and the whole idea of tuple
reductions (e.g., ∧[ expr:x ]).

Dan

On Sun, May 9, 2010 at 11:02 PM, Alessandro Warth alexwa...@gmail.com wrote:
 Hi Chris,
 Here's the nile parser that I wrote in OMeta/Squeak.
 Cheers,
 Alex

 On Sun, May 9, 2010 at 9:50 PM, Chris Double chris.dou...@double.co.nz
 wrote:

 On 10/05/10 04:59, Alan Kay wrote:

 There are already
 quite a few Smalltalk elements in Factor (and the postfix language
 itself (for most things) could be used as the byte-code engine for a
 Smalltalk (looking backwards) and for more adventurous designs (looking
 forward)).

 Factor already has a Smalltalk implementation (a parser and compiler to
 Factor code) that Slava did a while back as a proof of concept. I'm not sure
 how performant or complete it is however.

 Dan Amelang has been moving Nile to a really nice place, and it would be
 relatively easy to retarget the OMeta compiler for this (particularly
 the JS grounded one) to ground in Factor.

 Is there a Nile grammar somewhere? I tried searching for it and didn't
 come up with anything. I see Dan's github repository but it doesn't seem to
 include the Ometa definition.

 Chris.
 --
 http://bluishcoder.co.nz



Re: [fonc] Fonc on Mac Snow Leopard?

2010-05-10 Thread Dan Amelang
On Mon, May 10, 2010 at 10:46 AM, John Zabroski johnzabro...@gmail.com wrote:
 Alan,

 If I took the time to write an idea memo, would you (and possibly others)
 at VPRI take the time to comment on it?  It would mainly be example-driven
 and use an interweaving storywriting style, since I am not well-versed in
 scientific writing style, but, if necessary, I could cite and compare to
 various academic work.

I'll read it.

Dan



Re: [fonc] Reading Maxwell's Equations

2010-02-28 Thread Dan Amelang
On Sun, Feb 28, 2010 at 8:50 AM, Reuben Thomas r...@sc3d.org wrote:

 Think of a software project as like Plato's model of the soul as a
 charioteer with two horses, one immortal and one mortal, only without
 the goal of reaching heaven. The mortal horse is the imperatives of
 the real world: developers, money, users, releases and so on, while
 the immortal horse represents elegance, simplicity, performance,
 design perfection. A successful project usually manages to keep the
 two horses in relative harmony, making something good and practical.
 VPRI seems to have started off with just the immortal horse

This could well be. How else should an ambitious research project start off?

Research in general involves incubating fragile ideas that might not
be ready to face what you call the "real world" (assuming earth is
more real than heaven :)): money, users, releases, and so on.

 In other words, I think you have it the wrong way round: it is
 precisely by caring about one's public that one fixes the rough edges

One man's rough edge is another's great idea in the making :)

 I think it's scandalous that a publically-funded non-secret project
 does not have far stricter requirements for public engagement than are
 apparent here.

Scandalous! :) Actually, in my experience, many publically (sic)
-funded projects don't have public repositories that are updated in
real-time (like many of ours are). So the scandal may be more
widespread than we initially suspected!

 I would add that the reason I care is because I have a great deal of
 respect for Ian Piumarta in particular: I was blown away by his
 Virtual Virtual Machine work when I went to INRIA Rocquencourt in
 1999, greatly impressed by his code generation work on Smalltalk (at
 least that did get out the door), and really excited when I first came
 across COLA. This stuff should be out there!

Ian does do great stuff. And much of his work is out there:

http://piumarta.com/software/

And there is more coming. But please consider what I said about
incubating great ideas.

Dan



Re: [fonc] Reading Maxwell's Equations

2010-02-28 Thread Dan Amelang
On Sun, Feb 28, 2010 at 9:53 AM, Andrey Fedorov anfedo...@gmail.com wrote:
 Considering the ambition of the project relative to its resources, I think
 it's reasonable for STEPS to keep a low profile and spend less effort on
 educating than one might like.

Thank you :) We do have limited resources and wild ambitions. And I
won't be able to answer emails as thoroughly as I am today for that
reason.

 That said, I'd appreciate a simple suggested reading list for independent
 study - in my case, for someone with an undergrad in CS.

A reasonable suggestion. Besides the list on the VPRI website, you
could also look at the references in the writings. Also, Alan likes to
give people references to read, so you could try him, and report back
here (with his permission).

Dan



Re: [fonc] Reading Maxwell's Equations

2010-02-28 Thread Dan Amelang
On Sun, Feb 28, 2010 at 1:48 PM, Reuben Thomas r...@sc3d.org wrote:
 On 28 February 2010 17:53, Andrey Fedorov anfedo...@gmail.com wrote:
 Considering the ambition of the project relative to its resources, I think
 it's reasonable for STEPS to keep a low profile and spend less effort on
 educating than one might like.

 A software research project that does not aggressively push its code
 out is a waste of time.

We'll have to agree to disagree, then. My understanding of the history
of computer science does not seem to line up with this assertion,
though.

Dan



Re: [fonc] Reading Maxwell's Equations

2010-02-28 Thread Dan Amelang
On Sun, Feb 28, 2010 at 2:21 PM, Reuben Thomas r...@sc3d.org wrote:
 On 28 February 2010 22:16, Dan Amelang daniel.amel...@gmail.com wrote:
 (standard disclaimer: I don't represent the official stance of VPRI or Alan 
 Kay)

 On Sun, Feb 28, 2010 at 6:37 AM, Reuben Thomas r...@sc3d.org wrote:

 and the projects directly linked to on the "Our work" page
 did not originate at VPRI (Squeak, Etoys & Croquet).

 It was pretty much the same group of people, but the group has
 been hosted by different organizations over the years (Disney, HP,
 etc.)

 Indeed, but all this stuff is rather old now. That it's still in the
 headlines is worrying.

Obviously one's definition of old factors into the discussion.

I'm more worried about how all the supposedly new stuff dominates headlines :)

Dan



Re: [fonc] Reading Maxwell's Equations

2010-02-27 Thread Dan Amelang
Hi John,

Although I am a VPRI employee and work on the STEPS project, the
following is not an official position of the organization nor a
definitive guide to Alan Kay's views.

That said, I hope I can help clarify things somewhat.

On Fri, Feb 26, 2010 at 3:15 PM, John Zabroski johnzabro...@gmail.com wrote:

 ...one of the three key stumbling blocks to building real
 software engineering solutions -- size.

 But I am not convinced VPRI really has a solution to the remaining two
 stumbling blocks: complexity and trustworthiness.

I don't think anyone on the project is interested in reducing size
w/out reducing complexity. We're far more interested in the latter,
and how the former helps us gauge and tame the latter.

 I've read about Smalltalk and the history of its development, it appears the
 earliest version of Smalltalk I could read about/heard of, Smalltalk-72,
 used an actors model for message passing.  While metaobjects allow
 implementation hiding, so do actors.  Actors seems like a far better
 solution

FWIW, Alan likes (somewhat) the Erlang process model of execution and
has said how in some ways it is closer to his original idea of how
objects should behave.

(Regarding your puzzling over Alan's views, though, you might want to
try emailing him directly. After you've done due diligence reading up
on the subject, of course.)

 But it seems
 way more pure than AMOP because a model-driven compiler necessarily will
 bind things as late as necessary, in part thanks to a clockless, concurrent,
 asynchronous execution model.

See above.

 UNIX hit a blocking point almost immediately due
 to its process model, where utility authors would tack on extra functions to
 command-line programs like cat.  This is where Kernighan and Pike coined the
 term "cat -v Considered Harmful", because cat had become way more than just
 a way to concatenate two files.  But I'd argue what K&P miss is that the
 UNIX process model, with pipes and filters as composition mechanisms on
 unstructured streams of data, not only can't maximize performance,

The ability of a given programming model to maximize performance is
not a major draw for me. I just want "fast enough", which rarely
requires maximum performance, in my experience.

 it can't
 maximize modularity,

Ditto. Both of these are important, but the idea of maximizing one
attribute of a system is not so appealing to me.

 because once a utility hits a performance wall, a
 programmer goes into C and adds a new function to a utility like cat so that
 the program does it all at once.

If only cat itself were designed in a more modular way, using a more
modular programming model. Then maybe adding optimizations as
necessary wouldn't be so bad. In that case, maybe the UNIX process
model and pipes aren't to blame?

Regardless, even in an ideal system, the need to peel away layers to
get better performance might only be reduced and never fully
eliminated.

  So utilities naturally grow to become
 monolithic.  Creating the Plan9 and Inferno operating systems seems
 incredibly pointless from this perspective, and so does Google's Go
 Programming Language (even the tools for Go are monolithic).

Interesting related work: Butler Lampson on monolithic software
components. This stuff is worth drinking deeply from, IMO (as
opposed to skimming)

http://research.microsoft.com/en-us/um/people/blampson/Slides/ReusableComponentsAbstract.htm

http://research.microsoft.com/en-us/um/cambridge/events/needhambook/videos/1032/head.wmv

 Apart from AMOP, Alan has not really said much about what interests him and
 what doesn't interest him.  He's made allusions to people writing OSes in
 C++.

I think this is a red herring. I don't think that Alan really thinks
that writing an OS in C++ is a good idea. But you should go to the
source to understand what he meant.

 So I've been looking around, asking, Who is
 competing with VPRI's FONC project?

So the projects you mention are interesting, but they seem to be
missing a major component of the STEPS project: to actually build a
real, practical personal computing system.

 What do FONC people like Alan and Ian have to
 say?

I may have disappointed you, as I am not a FONC person like Alan or Ian.
But I hope I was helpful.

Dan



Re: [fonc] s3 slides

2008-06-13 Thread Dan Amelang
On Fri, Jun 13, 2008 at 9:27 AM, Ted Kaehler [EMAIL PROTECTED] wrote:
 Folks,
  On slide #42, "Gezira", shouldn't the top line be "max" and not "min"?
 Otherwise every edge contributes at least 1 to every pixel, no matter where
 it is!

Hi Ted,

No, "min" is correct. The intention is that every edge contributes at
most 1. Using "max" instead would allow a contribution to exceed one
(which would be incorrect).

Perhaps there is confusion about what the min function does. It
returns the minimum of its two arguments. It might erroneously be
thought of as returning at least the magnitude of the arguments. I'm
guessing that that stems from our use of the word "minimum" in the
English language, for example "At a minimum, you just do this..." for
setting a lower bound. On the contrary, the min function actually sets
an upper bound. It sets the upper bound by restricting the result to
be the minimum of the arguments, where one of the arguments usually is
an upper bound. Strange, isn't it :)
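
Concretely, with hypothetical numbers:

    # min clamps an edge's per-pixel contribution from above: it can
    # never exceed full coverage. max would let it overshoot.
    raw = 1.3                 # hypothetical overshooting contribution
    print(min(raw, 1))        # 1    -- at most full coverage (correct)
    print(max(raw, 1))        # 1.3  -- would exceed full coverage (wrong)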

Hope that helps,

Dan



Re: [fonc] Compilation problem: undefined reference to `GC_malloc'

2007-11-25 Thread Dan Amelang
On Nov 22, 2007 11:28 PM, Antoine van Gelder [EMAIL PROTECTED] wrote:
 ...
 I've made a small modification to your patch which:

   * escapes the opening '(' on the awk expression


 Patch tested against a clean revision 362 tree.

Nice job, guys. All the libjolt-related parts of the patch look great.
Thanks for the help.

Dan



Re: [fonc] tutorial

2007-11-25 Thread Dan Amelang
On Nov 23, 2007 2:39 AM, Stéphane Conversy [EMAIL PROTECTED] wrote:
 ...
 for example, there are a lot of canvas examples in the function directory,
 none in the object directory.
 Why is that? Can't I program OO graphical stuff with LOLA?

You can. The fact that the canvas stuff was done in jolt is not a sign
of what you can or can't do with the various pieces of the system. The
canvas stuff was created initially to support some jolt+javascript
work that needed graphical capabilities. I wouldn't try to infer
anything profound from its existence/implementation.

Dan



[fonc] Retrieving an object's base

2007-06-18 Thread Dan Amelang
Anyone know how to retrieve an object's base? I think all I want is
the delegate member of the object's vtable, but I can't seem to get
that from within jolt. _vtable objects don't respond to the delegate
message.

Basically, I just want to walk one step up the clone family. Similar
existing code (the various implementations of isKindOf) all have some
way to get their grubby little paws on the delegate member of the
vtable. Besides poking around in memory, I don't see a way to do this
in jolt.

Dan


Re: [fonc] Copying a pepsi object

2007-06-18 Thread Dan Amelang
On 6/17/07, Ian Piumarta [EMAIL PROTECTED] wrote:
 On Jun 16, 2007, at 10:51 PM, Dan Amelang wrote:

  Anyone know how I can make a copy of a pepsi object?

 Here's one cheap, cheerful and potentially dangerous (it assumes
 'self _sizeof' returns the correct value, which may not be true for
 objects that implement 'inline indexable fields') way to do it...

 Object shallowCopy
 [
  | _size clone |
  _size := self _sizeof.
  clone := self _vtable _alloc: _size.
  { memcpy(v_clone, v_self, (long)v__size); }.
  ^clone
 ]

Beautiful, thanks!

Dan