On Sunday, 26 May 2019 at 16:39:53 UTC, Manu wrote:
On Sun, May 26, 2019 at 4:10 AM NaN via Digitalmars-d-announce
<digitalmars-d-announce@puremagic.com> wrote:
What? ... this thread is bizarre. Why would a high quality SVG renderer decide to limit to 16x AA? Are you suggesting that they use hardware super-sampling to render the SVG?
They do both super-sampling and multi-sampling.
https://developer.nvidia.com/nv-path-rendering
Why would you use SSAA to render an SVG that way?
I can't speak for their implementation, which you can only possibly speculate about if you read the source code... but I would, for each pixel, calculate the distance from the line, and use that as the falloff value relative to the line weighting property.
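As an illustration of the distance-based falloff idea being described (this is my own sketch, not NVIDIA's or any browser's actual code; the function name and the 1-pixel linear ramp are assumptions for the example):

```python
import math

def line_coverage(px, py, x0, y0, x1, y1, half_width):
    """Illustrative sketch: anti-aliased coverage of a stroked line at one pixel.

    Computes the perpendicular distance from the pixel centre (px, py)
    to the infinite line through (x0, y0)-(x1, y1), then applies a
    linear 1-pixel falloff at the stroke edge (half_width is half the
    line weight in pixels). Returns coverage in [0.0, 1.0].
    """
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # Perpendicular distance via the 2D cross product, normalised.
    dist = abs(dy * (px - x0) - dx * (py - y0)) / length
    # Fully covered well inside the stroke; linear ramp over the last pixel.
    return max(0.0, min(1.0, half_width + 0.5 - dist))
```

For a horizontal line of weight 2 through the origin, a pixel on the line gets coverage 1.0 and a pixel 2 units away gets 0.0, with a smooth ramp between. As the next reply points out, this only handles an isolated stroke, not filled paths.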
Because a "path" in vector graphics terms is not just a line with thickness; it's like a glyph: it has inside and outside areas defined by the winding rule, it can be self-intersecting, and so on. Working out how far a pixel is from a given line doesn't tell you whether you should fill the pixel or not, or by how much. Whether a pixel should be filled or not depends on everything that has happened either to its left or its right, depending on which way you're processing. It's not GPU friendly, apparently.
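To make the winding-rule point concrete, here's a minimal sketch of the standard non-zero winding test (my own illustrative code, not from any renderer mentioned in the thread; the path representation as a flat list of segments is an assumption):

```python
def winding_number(px, py, segments):
    """Non-zero winding number of point (px, py) against a closed path.

    segments: list of (x0, y0, x1, y1) line segments forming closed
    contour(s). Each upward edge the point passes to the left of adds 1;
    each downward edge it passes to the right of subtracts 1.
    """
    wn = 0
    for x0, y0, x1, y1 in segments:
        side = (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
        if y0 <= py < y1 and side > 0:    # upward crossing, point is left
            wn += 1
        elif y1 <= py < y0 and side < 0:  # downward crossing, point is right
            wn -= 1
    return wn

def filled_nonzero(px, py, segments):
    # A pixel is inside the path iff the winding number is non-zero.
    return winding_number(px, py, segments) != 0
```

This is exactly why per-pixel distance-to-line isn't sufficient: fill depends on every crossing along the scanline, not just the nearest edge.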
You could decompose the path into triangles, which would be more GPU friendly, but that's actually quite an involved problem. To put it in perspective, decomposing a glyph into triangles so the GPU can render it is probably going to take a lot longer than just rendering it on the CPU.
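To give a feel for the triangulation step being alluded to, here's a bare-bones ear-clipping sketch (my own illustrative code; it assumes a simple, non-self-intersecting counter-clockwise polygon, so a real glyph would first need curve flattening and self-intersection handling, which is where much of the cost lies):

```python
def cross(o, a, b):
    # 2D cross product: > 0 if o->a->b turns counter-clockwise.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_tri(p, a, b, c):
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    # Inside (or on edge) iff all three cross products share a sign.
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def ear_clip(poly):
    """Triangulate a simple CCW polygon by repeatedly clipping 'ears'."""
    verts = list(poly)
    tris = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:
                continue  # reflex vertex: not an ear
            if any(point_in_tri(p, a, b, c)
                   for p in verts if p not in (a, b, c)):
                continue  # another vertex inside: not an ear
            tris.append((a, b, c))
            del verts[i]
            break
        else:
            break  # degenerate input; give up rather than loop forever
    tris.append(tuple(verts))
    return tris
```

Even this toy version is O(n²) per clip pass, which hints at why triangulating a glyph on the fly can cost more than just rasterising it on the CPU.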
How is the web browser's SVG renderer even relevant? I have
absolutely no idea how this 'example' (or almost anything in
this thread) could be tied to the point I made way back at the
start before it went way off the rails. Just stop, it's killing
me.
Somebody said browsers have been doing 2D on the GPU for years; I just pointed out that it's more complicated than that. I wasn't replying to anything you said, and I don't really know why what I've said has got your hackles up.