Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-09 Thread Gregg Tavares
I know this doesn't address your CD-ROM/USB stick situation but FYI...

for the dev situation there are many *SUPER* simple web servers

https://greggman.github.io/servez/

https://github.com/cortesi/devd/

https://github.com/indexzero/http-server/

https://docs.python.org/2/library/simplehttpserver.html (not recommended,
haven't tried the python 3 one)

https://chrome.google.com/webstore/detail/web-server-for-chrome/ofhbbkphhbklhfoeikjpcbhemlocgigb?hl=en
 (soon to be deprecated)

more here
http://stackoverflow.com/questions/12905426/what-is-a-faster-alternative-to-pythons-http-server-or-simplehttpserver
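If Python is already installed, the Python 3 equivalent of the SimpleHTTPServer link above (untested here, as noted) is a one-liner. The sketch below starts it in the background, fetches the directory listing, and shuts it down again (assumes `python3` and `curl` are available):

```shell
# Serve the current directory on localhost:8765 with Python 3's built-in
# server, fetch the directory listing, then stop the server.
python3 -m http.server 8765 --bind 127.0.0.1 &
SERVER_PID=$!
sleep 1
curl -s http://127.0.0.1:8765/ > listing.html
kill "$SERVER_PID"
```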

On Mon, Apr 10, 2017 at 4:36 AM, Jan Tosovsky 
wrote:

> On 2017-04-09 David Kendal wrote:
> >
> > ... there are many possible uses for local static files accessing
> > other local static files: the one I have in mind is shipping static
> > files on CD-ROM or USB stick...
>
> In this case the file structure is fixed, so it can be exported as a JSON
> file and then linked via the HTML header in every HTML file where it is
> needed. This structure is then directly available for further
> processing.
>
> However, I am not sure this covers your use case.
>
> Jan
>
>


Re: [whatwg] Reviving ImageBitmap options: Intend to spec and implement

2016-02-10 Thread Gregg Tavares
Is there a reason that in the proposal many of the options default to
"implementation-specific behavior"?

If the point of ImageBitmap is to get the data (use Image if you don't
care), then it seems like having any "implementation defined" options,
especially as the default, is just asking for lurking bugs in websites.

Re: [whatwg] OffscreenCanvas from a worker and synchronization with DOM/CSS

2016-01-23 Thread Gregg Tavares
Never mind me. For whatever reason my mind blanked out.

You can transfer to the main thread and then apply to a canvas.


[whatwg] OffscreenCanvas from a worker and synchronization with DOM/CSS

2016-01-23 Thread Gregg Tavares
I just noticed Firefox shipped an OffscreenCanvas implementation. Looking
at the spec, it seems there is no way to synchronize updates from a worker
with DOM/CSS manipulations.

Was this already discussed? There are web apps that synchronize HTML DOM
elements with canvas updates. I'm sure they'd all love to gain the benefits
of being able to render to their canvas from a worker. But if they can't
synchronize the canvas update with their DOM element position updates,
there will be unacceptable skewing/judder issues.

Maybe it was already decided, but I couldn't find the discussion. It seems
like a pretty bold thing to do for HTML, because it basically encourages
using as little HTML/DOM/CSS as possible, rather than encouraging doing
just the fancy rendering in a worker and the rest in HTML.

A few examples of apps that would love to get the benefit of offscreen
rendering in a worker but could not without some way to synchronize:

Apple Maps
https://youtu.be/bBs3sqH27Kk

Baidu Maps
https://youtu.be/dT-k-xI5UYw

Yahoo Japan Maps
https://youtu.be/DYVEILUCRZQ

Worse, things that could have been HTML but are no longer HTML can't
be used in standard ways. For example, because Yahoo Japan's Maps use HTML,
a translation extension, Rikaikun, is able to provide translations. The
current OffscreenCanvas spec effectively discourages using HTML elements in
these cases, making features like these impossible.

Rikaikun on Yahoo Japan Maps
https://youtu.be/sQ68V8ggwB0

Another example, which would seem really relevant to WebVR, is allowing
various parts of a scene to be presented as a fully functional web page,
like this example:


http://learningthreejs.com/blog/2013/04/30/closing-the-gap-between-html-and-webgl/

but that won't work if you can't synchronize the DOM and an OffscreenCanvas.
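A sketch of the pattern these apps would want, and where the skew comes from (browser-only; the worker file name and the `label` element are hypothetical, and nothing here runs outside a page):

```javascript
// Hand a visible canvas to a worker for rendering.
function startWorkerRendering(canvas) {
  const worker = new Worker('map-render-worker.js'); // hypothetical file
  const offscreen = canvas.transferControlToOffscreen();
  worker.postMessage({ canvas: offscreen }, [offscreen]);
  return worker;
}

// Called on the main thread whenever the map camera moves.
function onCameraMove(worker, label, camera) {
  // The worker renders the new view on its own schedule...
  worker.postMessage({ camera });
  // ...but this DOM overlay moves immediately, so for a frame or more
  // the label and the canvas can disagree: the skew described above.
  label.style.transform = `translate(${-camera.x}px, ${-camera.y}px)`;
}
```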


Re: [whatwg] Challenging canvas.supportsContext

2013-06-20 Thread Gregg Tavares
On Wed, Jun 19, 2013 at 3:24 PM, Kenneth Russell  wrote:

> On Wed, Jun 19, 2013 at 3:06 PM, James Robinson  wrote:
> > On Wed, Jun 19, 2013 at 3:04 PM, Kenneth Russell  wrote:
> >>
> >> On Wed, Jun 19, 2013 at 2:20 PM, Brandon Benvie 
> >> wrote:
> >> > On 6/19/2013 2:05 PM, James Robinson wrote:
> >> >>
> >> >> What would a page using Modernizr (or other library) to feature
> detect
> >> >> WebGL do if the supportsContext('webgl') call succeeds but the later
> >> >> getContext('webgl') call fails?
> >> >
> >> >
> >> > I don't have an example, I was just explaining how Mozernizr is often
> >> > used.
> >> >
> >> >
> >> >> I'm also failing to see the utility of the supportsContext() call.
> >> >> It's
> >> >> impossible for a browser to promise that supportsContext('webgl')
> >> >> implies
> >> >> that getContext('webgl') will succeed without doing all of the
> >> >> expensive
> >> >> work, so any correctly authored page will have to handle a
> >> >> getContext('webgl') failure anyway.
> >> >
> >> >
> >> > Given this, it would seem supportsContext is completely useless. The
> >> > whole
> >> > purpose of a feature detection check is to detect if a feature
> actually
> >> > works or not. Accuracy is more important than cost.
> >>
> >> supportsContext() can give a much more accurate answer than
> >> !!window.WebGLRenderingContext. I can only speak for Chromium, but in
> >> that browser, it can take into account factors such as whether the GPU
> >> sub-process was able to start, whether WebGL is blacklisted on the
> >> current card, whether WebGL is disabled on the current domain due to
> >> previous GPU resets, and whether WebGL initialization succeeded on any
> >> other page. All of these checks can be done without the heavyweight
> >> operation of actually creating an OpenGL context.
> >
> >
> > That's true, but the answer still doesn't promise anything about what
> > getContext() will do.  It may still return null and code will have to
> check
> > for that.  What's the use case for calling supportsContext() without
> calling
> > getContext()?
>
> Any application which has a complex set of fallback paths. For example,
>
>   - Preference 1: supportsContext('webgl', { softwareRendered: true })
>   - Preference 2: supportsContext('2d', { gpuAccelerated: true })
>   - Preference 3: supportsContext('webgl', { softwareRendered: false })
>   - Fallback: 2D canvas
>

How would those checks work? In Chrome, for example, there are opaque
heuristics for GPU-accelerated 2D. So now you'd need

   supportsContext('2d', {gpuAccelerated: true, width: someWidth, height:
someHeight, intendedUse: "line drawing" });
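Whatever supportsContext reports, a correctly authored page still ends up probing in order and handling failure, something like this sketch (`tryCreate` is a stand-in for `canvas.getContext(name, attributes)`, so the chain logic can be shown without a browser):

```javascript
// Walk a preference list; the first context that actually creates wins.
function pickContext(preferences, tryCreate) {
  for (const pref of preferences) {
    const ctx = tryCreate(pref);
    if (ctx) return pref; // caller uses ctx; pref identifies the path taken
  }
  return null; // final fallback, e.g. a static image
}

// Example with a stub tryCreate: only the 2D paths "succeed".
const chosen = pickContext(
  [
    { name: 'webgl', attributes: { softwareRendered: true } },
    { name: '2d', attributes: { gpuAccelerated: true } },
    { name: 'webgl', attributes: { softwareRendered: false } },
    { name: '2d', attributes: {} },
  ],
  (pref) => (pref.name === '2d' ? {} : null)
);
console.log(chosen.name); // → 2d
```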




>
> I agree that ideally, if supportsContext returns true then -- without
> any other state changes that might affect supportsContext's result --
> getContext should return a valid rendering context. It's simply
> impossible to guarantee this correspondence 100% of the time, but if
> supportsContext's spec were tightened somehow, and conformance tests
> were added which asserted consistent results between supportsContext
> and getContext, would that address your concern?
>
> -Ken
>


Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-06-19 Thread Gregg Tavares
On Wed, Jun 19, 2013 at 4:03 PM, Rik Cabanier  wrote:

>
>
> On Wed, Jun 19, 2013 at 3:24 PM, Gregg Tavares  wrote:
>
>>
>>
>>
>> On Wed, Jun 19, 2013 at 3:13 PM, Rik Cabanier  wrote:
>>
>>>
>>> On Wed, Jun 19, 2013 at 2:47 PM, Gregg Tavares  wrote:
>>>
>>>> In order for ImageBitmap to be useful for WebGL we need more options
>>>>
>>>> Specifically
>>>>
>>>> premultipliedAlpha: true/false (default true)
>>>> Nearly all GL games use non-premultiplied alpha textures. So all those
>>>> games people want to port to WebGL will require non-premultiplied
>>>> textures.
>>>> Often in games the alpha might not even be used for alpha, but rather for
>>>> glow maps or specular maps or other kinds of data.
>>>>
>>>
>>> When would premultipliedAlpha ever be true?
>>> 2D Canvas always works with non-premultiplied alpha in its APIs.
>>>
>>
>> AFAIK the canvas API expects all images to be premultiplied. Certainly in
>> WebKit and Blink, images used in the canvas and displayed in the img tag
>> are loaded premultiplied, which is why we had to add the option on WebGL,
>> since we needed those images before the data was lost.
>>
>>
>>>
>>>
>>>>
>>>> flipY: true/false (default false)
>>>> Nearly all 3D modeling apps expect the bottom left pixel to be the first
>>>> pixel in a texture so many 3D engines flip the textures on load. WebGL
>>>> provides this option but it takes time and memory to flip a large image
>>>> therefore it would be nice if that flip happened before the callback
>>>> from
>>>> ImageBitmap
>>>>
>>>
>>> Couldn't you just draw upside down?
>>>
>>
>> No, games often animate texture coordinates and other things, making it
>> far more complicated. There are ways to work around this issue in code,
>> yes, but they often require a ton of work.
>>
>> Most professional game engines pre-process the textures and flip them
>> offline but that doesn't help when you're downloading models off say
>> http://sketchup.google.com/3dwarehouse/
>>
>>
>>>
>>>
>>>>
>>>> colorspaceConversion: true/false (default true)
>>>> Some browsers apply color space conversion to match monitor settings.
>>>> That's fine for images with color but WebGL apps often load heightmaps,
>>>> normalmaps, lightmaps, global illumination maps and many other kinds of
>>>> data through images. If the browser applies a colorspace conversion, the
>>>> data is no longer suitable for its intended purpose, so many WebGL
>>>> apps turn off color conversions. As it is now, when an image is
>>>> uploaded to
>>>> WebGL, if colorspace conversion is
>>>> off<
>>>> http://www.khronos.org/registry/webgl/specs/latest/#PIXEL_STORAGE_PARAMETERS
>>>> >,
>>>
>>>
> OK, I see what you're trying to accomplish. You want to pass
> non-premultiplied data and color converted (from sRGB to monitor) pixels to
> WebGL
> I think your API looks fine, except that the defaults should all be
> false...
>

Yes, that's what I meant. I think I chose bad labels. The intent of the
colorspaceConversion flag is:

colorspaceConversion: true   = the browser does whatever it thinks is best
for color images.
colorspaceConversion: false  = give me the bits in the image file. Don't
manipulate them with embedded color data, local machine gamma
corrections, or anything else.

So maybe a better name?

For premultipliedAlpha, again, maybe there are 3 options needed:

1) do whatever is best for drawing with drawImage for perf
2) give me the data with premultiplied alpha
3) give me the data with non-premultiplied alpha.

It's possible that #1 is not needed, as maybe GPU code can use different
blend modes for drawImage with non-premultiplied alpha. It's just my
understanding that at least in Chrome all images are loaded premultiplied.
In fact, I don't think you can get non-premultiplied data from canvas. At
least this does not make it appear that way:

c = document.createElement("canvas");
ctx = c.getContext("2d");
i = ctx.getImageData(0, 0, 1, 1);  // a fresh canvas is all zeros, alpha included
i.data[0] = 255;                   // set red but leave alpha at 0
ctx.putImageData(i, 0, 0);
i2 = ctx.getImageData(0, 0, 1, 1);
console.log(i2.data[0])  // prints 0 on both FF and Chrome: the red was
                         // premultiplied by alpha 0 and lost

I mean, I know you get unpremultiplied data from getImageData, but the data
in the canvas is premultiplied, which means if

Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-06-19 Thread Gregg Tavares
On Wed, Jun 19, 2013 at 3:13 PM, Rik Cabanier  wrote:

>
> On Wed, Jun 19, 2013 at 2:47 PM, Gregg Tavares  wrote:
>
>> In order for ImageBitmap to be useful for WebGL we need more options
>>
>> Specifically
>>
>> premultipliedAlpha: true/false (default true)
>> Nearly all GL games use non-premultiplied alpha textures. So all those
>> games people want to port to WebGL will require non-premultiplied textures.
>> Often in games the alpha might not even be used for alpha, but rather for
>> glow maps or specular maps or other kinds of data.
>>
>
> When would premultipliedAlpha ever be true?
> 2D Canvas always works with non-premultiplied alpha in its APIs.
>

AFAIK the canvas API expects all images to be premultiplied. Certainly in
WebKit and Blink, images used in the canvas and displayed in the img tag are
loaded premultiplied, which is why we had to add the option on WebGL, since
we needed those images before the data was lost.
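The loss is easy to see with a toy 8-bit premultiply round-trip (a minimal sketch, not any browser's actual code path):

```javascript
// 8-bit premultiply: the stored channel is round(color * alpha / 255).
function premultiply(color, alpha) {
  return Math.round((color * alpha) / 255);
}

// Attempted inverse; at low alpha many colors collapse to the same
// stored value, so the original data is unrecoverable.
function unpremultiply(stored, alpha) {
  return alpha === 0 ? 0 : Math.min(255, Math.round((stored * 255) / alpha));
}

// With alpha 10, colors 250 and 255 both store as 10...
console.log(premultiply(250, 10), premultiply(255, 10)); // → 10 10
// ...and both come back as 255 after unpremultiplying.
console.log(unpremultiply(10, 10)); // → 255
```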


>
>
>>
>> flipY: true/false (default false)
>> Nearly all 3D modeling apps expect the bottom left pixel to be the first
>> pixel in a texture so many 3D engines flip the textures on load. WebGL
>> provides this option but it takes time and memory to flip a large image
>> therefore it would be nice if that flip happened before the callback from
>> ImageBitmap
>>
>
> Couldn't you just draw upside down?
>

No, games often animate texture coordinates and other things, making it far
more complicated. There are ways to work around this issue in code, yes,
but they often require a ton of work.

Most professional game engines pre-process the textures and flip them
offline but that doesn't help when you're downloading models off say
http://sketchup.google.com/3dwarehouse/


>
>
>>
>> colorspaceConversion: true/false (default true)
>> Some browsers apply color space conversion to match monitor settings.
>> That's fine for images with color but WebGL apps often load heightmaps,
>> normalmaps, lightmaps, global illumination maps and many other kinds of
>> data through images. If the browser applies a colorspace conversion, the
>> data is no longer suitable for its intended purpose, so many WebGL
>> apps turn off color conversions. As it is now, when an image is uploaded
>> to
>> WebGL, if colorspace conversion is
>> off<
>> http://www.khronos.org/registry/webgl/specs/latest/#PIXEL_STORAGE_PARAMETERS
>> >,
>>
>> WebGL has to synchronously re-decode the image. It would be nice if
>> ImageBitmap could handle this case so it can decode the image without
>> applying any colorspace manipulations.
>>
>
> Shouldn't the color space conversion happen when the final canvas bit are
> blitted to the screen?
> It seems like you should never do it during compositing since you could
> get double conversions.
>

Maybe, but that's not relevant to ImageBitmap, is it? The point here is we
want ImageBitmap to give us the data in the format we need. It's
designed to be async so it can do this, but as it is we can't prevent it
from applying colorspace conversions.
Some browsers did that for regular img tags, which pointed out the original
problem. The browser can't guess how the image is going to be used, and
since it's a lot of work to decode an image, you'd like to be able to tell
it what you really need before it guesses wrong.


>
>
>>
>> If it were up to me I'd make createImageBitmap take an object with
>> properties so that new options can be added later, as in
>>
>> createImageBitmap(src, callback, {
>>premultipliedAlpha: false,
>>colorspaceConversion: false,
>>x: 123,
>> });
>>
>> But I'm not sure if there is a common way to make APIs take options
>> like this, except for the XHR way, which is to create a request, set
>> properties on the request, and finally execute the request.
>
>
>  Like Tab said, it's fine to implement it that way.
> Be aware that you might have to do some work in your idl compiler since I
> *think* there are no other APIs (in Blink) that take a dictionary.
>
>


[whatwg] Adding features needed for WebGL to ImageBitmap

2013-06-19 Thread Gregg Tavares
In order for ImageBitmap to be useful for WebGL we need more options

Specifically

premultipliedAlpha: true/false (default true)
Nearly all GL games use non-premultiplied alpha textures. So all those
games people want to port to WebGL will require non-premultiplied textures.
Often in games the alpha might not even be used for alpha, but rather for
glow maps or specular maps or other kinds of data.

flipY: true/false (default false)
Nearly all 3D modeling apps expect the bottom-left pixel to be the first
pixel in a texture, so many 3D engines flip the textures on load. WebGL
provides this option, but it takes time and memory to flip a large image,
so it would be nice if that flip happened before the callback from
ImageBitmap.

colorspaceConversion: true/false (default true)
Some browsers apply color space conversion to match monitor settings.
That's fine for images with color, but WebGL apps often load heightmaps,
normalmaps, lightmaps, global illumination maps, and many other kinds of
data through images. If the browser applies a colorspace conversion, the
data is no longer suitable for its intended purpose, so many WebGL
apps turn off color conversions. As it is now, when an image is uploaded to
WebGL, if colorspace conversion is
off <http://www.khronos.org/registry/webgl/specs/latest/#PIXEL_STORAGE_PARAMETERS>,
WebGL has to synchronously re-decode the image. It would be nice if
ImageBitmap could handle this case so it can decode the image without
applying any colorspace manipulations.

If it were up to me I'd make createImageBitmap take an object with
properties so that new options can be added later, as in

createImageBitmap(src, callback, {
   premultipliedAlpha: false,
   colorspaceConversion: false,
   x: 123,
});

But I'm not sure if there is a common way to make APIs take options
like this, except for the XHR way, which is to create a request, set
properties on the request, and finally execute the request.
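The options-object shape above is easy to keep forward compatible; a minimal sketch of how an implementation might merge caller options over defaults (option names taken from this proposal, merging logic hypothetical):

```javascript
// Defaults match the proposal: existing callers keep today's behavior,
// and new keys can be added later without breaking anyone.
const defaultOptions = {
  premultipliedAlpha: true,
  flipY: false,
  colorspaceConversion: true,
};

function resolveImageBitmapOptions(options) {
  return Object.assign({}, defaultOptions, options);
}

const resolved = resolveImageBitmapOptions({ premultipliedAlpha: false });
console.log(resolved);
// → { premultipliedAlpha: false, flipY: false, colorspaceConversion: true }
```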

thoughts?


Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-04-03 Thread Gregg Tavares
On Wed, Apr 3, 2013 at 5:09 PM, James Robinson  wrote:

> Fonts are not vector art
>

O RLY?   So you're saying the following 250pt ampersand is stored as a
bitmap in the font file?

 &




>  and are not rendered as paths at commonly read sizes.  I don't think
> anyone is using or would be tempted to use LCD subpixel AA for anything
> other than text.
>

I think Google Docs, as one example, would be happy to have graphs in
spreadsheets and drawings look as beautiful as possible.

Why do you think the AA hint should be overly specific? I don't see the
downside.


>
> - James
>
>
> On Wed, Apr 3, 2013 at 5:07 PM, Gregg Tavares  wrote:
>
>> On Wed, Apr 3, 2013 at 5:04 PM, Rik Cabanier  wrote:
>>
>> >
>> >
>> > On Wed, Apr 3, 2013 at 9:04 AM, Gregg Tavares  wrote:
>> >
>> >> On Wed, Apr 3, 2013 at 8:41 AM, Stephen White <
>> senorbla...@chromium.org
>> >> >wrote:
>> >>
>> >> > Would Mozilla (or other browser vendors) be interested in
>> implementing
>> >> the
>> >> > hint as Gregg described above?
>> >> >
>> >> > If so, we could break out the LCD text issue from canvas opacity, and
>> >> > consider the latter on its own merits, since it has benefits apart
>> from
>> >> LCD
>> >> > text (i.e., performance). Regarding that, if I'm reading correctly,
>> >> > Vladimir Vukicevic has expressed support on webkit-dev for the
>> >> > ctx.getContext('2d', { alpha: false }) proposal (basically, a
>> syntactic
>> >> > rewrite of ). Does this indeed have traction with
>> other
>> >> > browser vendors?
>> >> >
>> >> > As for naming, I would prefer that it be something like
>> >> ctx.fontSmoothing
>> >> > or ctx.fontSmoothingHint, to align more closely with canvas's
>> >> > ctx.imageSmoothingEnabled and webkit's -webkit-font-smoothing CSS
>> >> property.
>> >> >  -webkit-font-smoothing has "none", "antialiased" and
>> >> > "subpixel-antialiased" as options. I think it's ok to explicitly call
>> >> out
>> >> > subpixel antialiasing, even if the platform (or UA) does not support
>> it,
>> >> > especially if the attribute explicitly describes itself as a hint.
>> >> >
>> >>
>> >>
>> >> Why call it "Font" smoothing? Shouldn't a UA be able to also render
>> >> paths using the same hint?
>> >>
>> >
>> > I have not heard of anyone using sub-pixel antialiasing for vector art.
>> It
>> > might look weird...
>> >
>>
>> ??? Fonts are vector art.  Why should this flag be specific to fonts?  So
>> if I decide tomorrow that I want vector art to be prettier than the
>> competition by implementing LCD anti-aliasing, I'll have to lobby for a
>> new flag to turn it on? Why?
>>
>>
>>
>>
>> >
>> >
>> >>
>> >>
>> >> >
>> >> > Stephen
>> >> >
>> >> >
>> >> > On Sun, Mar 17, 2013 at 11:17 PM, Gregg Tavares 
>> >> wrote:
>> >> >
>> >> >> On Sun, Mar 17, 2013 at 1:40 PM, Robert O'Callahan <
>> >> rob...@ocallahan.org
>> >> >> >wrote:
>> >> >>
>> >> >> > On Sat, Mar 16, 2013 at 5:52 PM, Gregg Tavares 
>> >> wrote:
>> >> >> >
>> >> >> >> Let me ask again in a different way ;-)  Specifically about LCD
>> >> style
>> >> >> >> antialiasing.
>> >> >> >>
>> >> >> >> What about a context attribute "antialiasRenderingQualityHint"
>> for
>> >> now
>> >> >> >> with
>> >> >> >> 2 settings "default" and "displayDependent"
>> >> >> >>
>> >> >> >>context.antialiasRenderingQualityHint = "displayDependent"
>> >> >> >>
>> >> >> >
>> >> >> > How would this interact with canvas opacity? E.g. if the author
>> uses
>> >> >> > displayDependent and then draws text over transparent pixels in
>> the
>> >> >> canvas,
>> >> >> > what is the U

Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-04-03 Thread Gregg Tavares
On Wed, Apr 3, 2013 at 5:04 PM, Rik Cabanier  wrote:

>
>
> On Wed, Apr 3, 2013 at 9:04 AM, Gregg Tavares  wrote:
>
>> On Wed, Apr 3, 2013 at 8:41 AM, Stephen White > >wrote:
>>
>> > Would Mozilla (or other browser vendors) be interested in implementing
>> the
>> > hint as Gregg described above?
>> >
>> > If so, we could break out the LCD text issue from canvas opacity, and
>> > consider the latter on its own merits, since it has benefits apart from
>> LCD
>> > text (i.e., performance). Regarding that, if I'm reading correctly,
>> > Vladimir Vukicevic has expressed support on webkit-dev for the
>> > ctx.getContext('2d', { alpha: false }) proposal (basically, a syntactic
>> > rewrite of ). Does this indeed have traction with other
>> > browser vendors?
>> >
>> > As for naming, I would prefer that it be something like
>> ctx.fontSmoothing
>> > or ctx.fontSmoothingHint, to align more closely with canvas's
>> > ctx.imageSmoothingEnabled and webkit's -webkit-font-smoothing CSS
>> property.
>> >  -webkit-font-smoothing has "none", "antialiased" and
>> > "subpixel-antialiased" as options. I think it's ok to explicitly call
>> out
>> > subpixel antialiasing, even if the platform (or UA) does not support it,
>> > especially if the attribute explicitly describes itself as a hint.
>> >
>>
>>
>> Why call it "Font" smoothing? Shouldn't a UA be able to also render paths
>> using the same hint?
>>
>
> I have not heard of anyone using sub-pixel antialiasing for vector art. It
> might look weird...
>

??? Fonts are vector art.  Why should this flag be specific to fonts?  So if
I decide tomorrow that I want vector art to be prettier than the competition
by implementing LCD anti-aliasing, I'll have to lobby for a new flag to
turn it on? Why?




>
>
>>
>>
>> >
>> > Stephen
>> >
>> >
>> > On Sun, Mar 17, 2013 at 11:17 PM, Gregg Tavares 
>> wrote:
>> >
>> >> On Sun, Mar 17, 2013 at 1:40 PM, Robert O'Callahan <
>> rob...@ocallahan.org
>> >> >wrote:
>> >>
>> >> > On Sat, Mar 16, 2013 at 5:52 PM, Gregg Tavares 
>> wrote:
>> >> >
>> >> >> Let me ask again in a different way ;-)  Specifically about LCD
>> style
>> >> >> antialiasing.
>> >> >>
>> >> >> What about a context attribute "antialiasRenderingQualityHint" for
>> now
>> >> >> with
>> >> >> 2 settings "default" and "displayDependent"
>> >> >>
>> >> >>context.antialiasRenderingQualityHint = "displayDependent"
>> >> >>
>> >> >
>> >> > How would this interact with canvas opacity? E.g. if the author uses
>> >> > displayDependent and then draws text over transparent pixels in the
>> >> canvas,
>> >> > what is the UA supposed to do?
>> >> >
>> >>
>> >> Whatever the UA wants. It's a hint. From my POV, since the spec doesn't
>> >> say anything about anti-aliasing, it really doesn't matter.
>> >>
>> >> My preference, if I were programming a UA, would be: if the user sets
>> >> "displayDependent" and the UA is running on a lo-dpi machine, I'd
>> >> unconditionally render LCD-AA with the assumption that the canvas is
>> >> composited on white. If they want some other color they'd fill the
>> >> canvas with a solid color first. Personally I don't think that needs to
>> >> be specced, but it would be my suggestion. As I mentioned, even without
>> >> this hint the spec doesn't prevent a UA from unconditionally using
>> >> LCD-AA.
>> >>
>> >> Very few developers are going to run into issues. Most developers that
>> >> use canvas aren't going to set the hint. Most developers that use canvas
>> >> don't make it transparent, nor do they CSS rotate/scale it. For those
>> >> few developers that do happen to blend and/or rotate/scale AND set the
>> >> hint, they'll probably get some fringing, but (a) there was no guarantee
>> >> they wouldn't already have that problem since, as pointed out, the spec
>> >> doesn't specify AA nor what kind, and (b) if they care they'll either
>> >> stop using the hint or they'll search for "why is my canvas fringy" and
>> >> the answer will pop up on Stack Overflow and they can choose one of the
>> >> solutions.
>> >>
>> >>
>> >>
>> >> >
>> >> > Rob
>> >> > --
>> >> > Wrfhf pnyyrq gurz gbtrgure naq fnvq, “Lbh xabj gung gur ehyref bs gur
>> >> > Tragvyrf ybeq vg bire gurz, naq gurve uvtu bssvpvnyf rkrepvfr
>> nhgubevgl
>> >> > bire gurz. Abg fb jvgu lbh. Vafgrnq, jubrire jnagf gb orpbzr terng
>> nzbat
>> >> > lbh zhfg or lbhe freinag, naq jubrire jnagf gb or svefg zhfg or lbhe
>> >> fynir
>> >> > — whfg nf gur Fba bs Zna qvq abg pbzr gb or freirq, ohg gb freir,
>> naq gb
>> >> > tvir uvf yvsr nf n enafbz sbe znal.” [Znggurj 20:25-28]
>> >> >
>> >>
>> >
>> >
>>
>
>


Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-04-03 Thread Gregg Tavares
On Wed, Apr 3, 2013 at 8:41 AM, Stephen White wrote:

> Would Mozilla (or other browser vendors) be interested in implementing the
> hint as Gregg described above?
>
> If so, we could break out the LCD text issue from canvas opacity, and
> consider the latter on its own merits, since it has benefits apart from LCD
> text (i.e., performance). Regarding that, if I'm reading correctly,
> Vladimir Vukicevic has expressed support on webkit-dev for the
> ctx.getContext('2d', { alpha: false }) proposal (basically, a syntactic
> rewrite of ). Does this indeed have traction with other
> browser vendors?
>
> As for naming, I would prefer that it be something like ctx.fontSmoothing
> or ctx.fontSmoothingHint, to align more closely with canvas's
> ctx.imageSmoothingEnabled and webkit's -webkit-font-smoothing CSS property.
>  -webkit-font-smoothing has "none", "antialiased" and
> "subpixel-antialiased" as options. I think it's ok to explicitly call out
> subpixel antialiasing, even if the platform (or UA) does not support it,
> especially if the attribute explicitly describes itself as a hint.
>


Why call it "Font" smoothing? Shouldn't a UA be able to also render paths
using the same hint?


>
> Stephen
>
>
> On Sun, Mar 17, 2013 at 11:17 PM, Gregg Tavares  wrote:
>
>> On Sun, Mar 17, 2013 at 1:40 PM, Robert O'Callahan > >wrote:
>>
>> > On Sat, Mar 16, 2013 at 5:52 PM, Gregg Tavares  wrote:
>> >
>> >> Let me ask again in a different way ;-)  Specifically about LCD style
>> >> antialiasing.
>> >>
>> >> What about a context attribute "antialiasRenderingQualityHint" for now
>> >> with
>> >> 2 settings "default" and "displayDependent"
>> >>
>> >>context.antialiasRenderingQualityHint = "displayDependent"
>> >>
>> >
>> > How would this interact with canvas opacity? E.g. if the author uses
>> > displayDependent and then draws text over transparent pixels in the
>> canvas,
>> > what is the UA supposed to do?
>> >
>>
>> Whatever the UA wants. It's a hint. From my POV, since the spec doesn't
>> say anything about anti-aliasing, it really doesn't matter.
>>
>> My preference, if I were programming a UA, would be: if the user sets
>> "displayDependent" and the UA is running on a lo-dpi machine, I'd
>> unconditionally render LCD-AA with the assumption that the canvas is
>> composited on white. If they want some other color they'd fill the canvas
>> with a solid color first. Personally I don't think that needs to be
>> specced, but it would be my suggestion. As I mentioned, even without this
>> hint the spec doesn't prevent a UA from unconditionally using LCD-AA.
>>
>> Very few developers are going to run into issues. Most developers that use
>> canvas aren't going to set the hint. Most developers that use canvas don't
>> make it transparent, nor do they CSS rotate/scale it. For those few
>> developers that do happen to blend and/or rotate/scale AND set the hint,
>> they'll probably get some fringing, but (a) there was no guarantee
>> they wouldn't already have that problem since, as pointed out, the spec
>> doesn't specify AA nor what kind, and (b) if they care they'll either stop
>> using the hint or they'll search for "why is my canvas fringy" and the
>> answer will pop up on Stack Overflow and they can choose one of the
>> solutions.
>>
>>
>>
>> >
>> > Rob
>> > --
>> > Wrfhf pnyyrq gurz gbtrgure naq fnvq, “Lbh xabj gung gur ehyref bs gur
>> > Tragvyrf ybeq vg bire gurz, naq gurve uvtu bssvpvnyf rkrepvfr nhgubevgl
>> > bire gurz. Abg fb jvgu lbh. Vafgrnq, jubrire jnagf gb orpbzr terng nzbat
>> > lbh zhfg or lbhe freinag, naq jubrire jnagf gb or svefg zhfg or lbhe
>> fynir
>> > — whfg nf gur Fba bs Zna qvq abg pbzr gb or freirq, ohg gb freir, naq gb
>> > tvir uvf yvsr nf n enafbz sbe znal.” [Znggurj 20:25-28]
>> >
>>
>
>


Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-03-17 Thread Gregg Tavares
On Sun, Mar 17, 2013 at 1:40 PM, Robert O'Callahan wrote:

> On Sat, Mar 16, 2013 at 5:52 PM, Gregg Tavares  wrote:
>
>> Let me ask again in a different way ;-)  Specifically about LCD style
>> antialiasing.
>>
>> What about a context attribute "antialiasRenderingQualityHint" for now
>> with
>> 2 settings "default" and "displayDependent"
>>
>>context.antialiasRenderingQualityHint = "displayDependent"
>>
>
> How would this interact with canvas opacity? E.g. if the author uses
> displayDependent and then draws text over transparent pixels in the canvas,
> what is the UA supposed to do?
>

Whatever the UA wants. It's a hint. From my POV, since the spec doesn't say
anything about anti-aliasing, it really doesn't matter.

My preference, if I were programming a UA, would be: if the user sets
"displayDependent" and the UA is running on a lo-dpi machine, I'd
unconditionally render LCD-AA with the assumption that the canvas is
composited on white. If they want some other color they'd fill the canvas
with a solid color first. Personally I don't think that needs to be
specced, but it would be my suggestion. As I mentioned, even without this
hint the spec doesn't prevent a UA from unconditionally using LCD-AA.

Very few developers are going to run into issues. Most developers that use
canvas aren't going to set the hint. Most developers that use canvas don't
make it transparent, nor do they CSS rotate/scale it. For those few
developers that do happen to blend and/or rotate/scale AND set the hint,
they'll probably get some fringing, but (a) there was no guarantee
they wouldn't already have that problem since, as pointed out, the spec
doesn't specify AA nor what kind, and (b) if they care they'll either stop
using the hint or they'll search for "why is my canvas fringy" and the
answer will pop up on Stack Overflow and they can choose one of the
solutions.



>
> Rob
> --
> Wrfhf pnyyrq gurz gbtrgure naq fnvq, “Lbh xabj gung gur ehyref bs gur
> Tragvyrf ybeq vg bire gurz, naq gurve uvtu bssvpvnyf rkrepvfr nhgubevgl
> bire gurz. Abg fb jvgu lbh. Vafgrnq, jubrire jnagf gb orpbzr terng nzbat
> lbh zhfg or lbhe freinag, naq jubrire jnagf gb or svefg zhfg or lbhe fynir
> — whfg nf gur Fba bs Zna qvq abg pbzr gb or freirq, ohg gb freir, naq gb
> tvir uvf yvsr nf n enafbz sbe znal.” [Znggurj 20:25-28]
>


Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-03-15 Thread Gregg Tavares
Let me ask again in a different way ;-)  Specifically about LCD style
antialiasing.

What about a context attribute "antialiasRenderingQualityHint" for now with
2 settings "default" and "displayDependent"

   context.antialiasRenderingQualityHint = "displayDependent"

I'm thinking of it like this. The canvas spec does not say how antialiasing
works or even that it exists so right now a UA is free to antialias in
anyway it sees fit. It can do no antialiasing. It can do LCD antialiasing.
It can do alpha antialiasing. It can use different algorithms. In fact, the
software rasterizers between Firefox and Chrome already antialias different
as do different GPUs.

All we're looking for is some way to hint that we'd prefer LCD antialiasing
if the UA thinks it's best for a given situation. We already can't count on
a certain quality or algorithm.

   context.antialiasRenderingQualityHint = "displayDependent"

The advantage to this hint is that

   (a) a UA is free to ignore it and rendering will not be any worse/better
than it is now

 and

   (b) as the world moves to HD-DPI everywhere UAs will pick alpha-AA and
things just magically work.

As for rotating, scaling, or blending a canvas: it's up to the app to opt
into this hint and up to the UA when to honor it.

I'm not seeing the downside here. You're not breaking anything because the
app already has no idea what kind of AA a UA is using. The hint is forward
compatible as well.

The only place I see an issue is UA zooming. But if the app really cares
and if we really care we can provide an API to figure out the zoom level.
Then an app that cares can change the size of its canvas's backing store
so it's 1:1 with device pixels for a given zoom level and re-render. Lots of apps
would like to do that with or without the proposed "hint" as it would let
them zoom in a way that matches the text and svg on the page.

Everybody wins! :-)
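Since the whole point is that the hint is optional, a page could set it defensively. A minimal sketch in plain JS; note that `antialiasRenderingQualityHint` is the name proposed in this thread, not a shipped API, and `applyAAHint` is a made-up helper:

```javascript
// Sketch only: "antialiasRenderingQualityHint" is the *proposed* property
// from this thread, so feature-detect it before setting it.
function applyAAHint(ctx, quality) {
  if ('antialiasRenderingQualityHint' in ctx) {
    ctx.antialiasRenderingQualityHint = quality;
    return true;
  }
  // UA doesn't know the hint: rendering is no worse/better than today.
  return false;
}
```

A UA that ignores the hint simply leaves the page with today's behavior, which is exactly the forward-compatibility argument above.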


(A UA is already free to 1) not antialias, or 2) antialias in any way it
sees fit: it could happily implement LCD-style AA and still be spec
compliant.)

Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-03-13 Thread Gregg Tavares
On Wed, Mar 13, 2013 at 1:18 PM, Robert O'Callahan wrote:

> On Thu, Mar 14, 2013 at 8:04 AM, Gregg Tavares  wrote:
>
>> It seems like an opaque canvas should be an orthogonal issue to
>> subpixel-aa. Subpixel AA seems like it should be a Canvas2DRenderingContext
>> setting though maybe with a name like
>>
>>ctx.antialiasingRenderQuality =
>>
>> With options of
>>
>>none
>>grayscale
>>bestForDeviceIfAxisAlignedAndNotScaledOrBlended
>>
>
My mistake. They should be

none
alpha

bestForDeviceIfNotCanvasIsNotRotatedAndCanvasIsNotScaledAndCanvasIsOpaque

;-)

Yes, I know that's a horrible name, but it spells out the limitations of the
higher-quality AA needed on some devices. A dev can opt in (since the
default is alpha, which is what happens today).

If they opt in

(a) it will look good if they follow the rules

and

(b) as the world transitions to HD-DPI it will end up being alpha so it's
forward compatible.




> Ugh!
>
>
>> This would let the developer choose. It would be clear what the limits
>> are, when to use it, and would let the developer choose what they need,
>> even in an opaque canvas.
>>
>
> Then we would need to come up with a spec for what happens when you
> composite subpixel AA over non-opaque pixels, including how the per-channel
> alpha values are combined to form a single alpha value. IIRC in some cases
> (D2D) you just can't do it.
>
> If we said that in a non-opaque canvas, subpixel AA is treated as
> grayscale, that would be OK.


sure.


>
>
> Rob
>


Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-03-13 Thread Gregg Tavares
Another question: And I see this brought up above

It seems like an opaque canvas should be an orthogonal issue to
subpixel-aa. Subpixel AA seems like it should be a Canvas2DRenderingContext
setting though maybe with a name like

   ctx.antialiasingRenderQuality =

With options of

   none
   grayscale
   bestForDeviceIfAxisAlignedAndNotScaledOrBlended

This would let the developer choose. It would be clear what the limits are,
when to use it, and would let the developer choose what they need, even in
an opaque canvas.

As a developer I'd like to be able to chose an opaque canvas for perf since
compositing an opaque canvas on the page (with GPU blending off) is
significantly faster than with it on. If I choose opaque for perf, I shouldn't
suddenly get anti-aliasing that doesn't fit my use case. A typical
example is to scale a canvas that is smaller than the size it will be
displayed to get more perf or to get a pixelated retro look. It would be
less than desirable if I also mark the canvas as opaque to get perf and
suddenly my scaled canvas has color fringing all over the place.


Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-03-13 Thread Gregg Tavares
Sorry for only mentioning this so late, but is there any chance to steer
this to be more in line with WebGL?

WebGL already has the option to have an opaque canvas using context
creation parameters. In WebGL it's

   gl = canvas.getContext("webgl", {alpha: false});

If we go forward with an "opaque" attribute now you have 2 conflicting
settings.

   canvas.opaque = true;
   gl = canvas.getContext("webgl", {alpha: true});

Who wins that conflict? Yea, I know we could come up with rules. (&& the 2
settings, etc...)

But, there are other context creation attributes we'd like to see on a 2d
canvas. One that comes to mind is 'preserveDrawingBuffer'.
preserveDrawingBuffer: false in WebGL means that the canvas is double
buffered. This is a performance win since most browsers using GPU
compositing need to copy the contents of the canvas when compositing.
Setting preserveDrawingBuffer: false (which is the default in WebGL) means
the browser can double buffer and avoid the copy. We'd like to see that
same attribute for 2D canvas/contexts to get the same perf benefit for
canvas games, etc.

So, given we want more creation attributes and given WebGL already has a
way to declare opaqueness why not follow the existing method and add
context creation parameters to 2d canvas to solve this issue rather than
make a new and conflicting 'opaque' attribute?
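For what it's worth, the "who wins" conflict could be answered with the "&&" rule alluded to above. A purely hypothetical sketch; `resolveAlpha` is a made-up helper and none of this resolution logic is specced:

```javascript
// Hypothetical "&& the 2 settings" rule: the canvas keeps an alpha channel
// only if the element is NOT marked opaque AND the context creation
// attributes don't request alpha: false (alpha defaults to true, as in WebGL).
function resolveAlpha(canvasOpaqueAttr, contextAttrs) {
  const wantsAlpha = !contextAttrs || contextAttrs.alpha !== false;
  return !canvasOpaqueAttr && wantsAlpha; // true => canvas has alpha
}
```

Under this rule, either setting alone is enough to make the canvas opaque, which is the conservative reading of the conflict.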


[whatwg] Polling APIs in JavaScript vs Callbacks

2013-02-07 Thread Gregg Tavares
 Anyone have time to give me some advice, tips, pointers?

In the next WebGL we have Query/Sync objects. In C/C++ land the user can
insert some drawing commands that will happen asynchronously on the GPU.
They can also insert a query. They can then poll that query (did this
finish?).

We'd like to expose that to JavaScript. I've been pushing to making it a
callback in JavaScript rather than polling. Most graphics developers I talk
to hate this idea.

My assumptions have been

(a) polling is kind of maybe not allowed in JS (though I have no proof,
only hearsay)

Are there any "rules" or things in the spec that make polling APIs
untenable?


(b) polling is often error-prone; those errors can be avoided using
callbacks, and I'm assuming designing an API that is hard or impossible to
use wrong is a good thing.

To give an example, in GL I can render in 2 threads. Let's say in one
thread I'm rendering to a texture and in the other thread I'd like to use
that texture to render something else.

I can insert a query in the first thread, and poll that query in the second
thread. When the query says the texture is ready the second thread can
render with it.

The problem is, if I forget to query, things won't fail; I'll just get the
wrong results. But I might not. It might be that rendering on thread #1
takes 1ms and it takes 2ms before thread 2 tries to render so things just
appear to work, no queries needed. But then something (another tab, another
app) slows the system down. Thread #1 takes 4ms now, and Thread #2, which
is just assuming things take 2ms, renders and sees old or half-rendered
results.

We can encapsulate some of this. In WebGL we can require that the texture
being used in both threads be a wrapped object, a SharedTexture, and we can
require SharedTextures can be used in only 1 thread at a time. You acquire
and release it on a particular thread. And so now comes the issue:

Should acquire block, poll or use a callback

block = evil

poll has the issue that I can try to use the texture even when I haven't
acquired it. See the example above. It allows not polling and assuming the
texture will be available. That might work most of the time but on some
machine or under certain circumstances the acquire will fail. Apps that
forget to check for success because the API let them ignore failure will
then fail in strange and random ways. It basically exposes race conditions.

callback seems to make it much harder to use wrong. You get a callback when
the texture has been acquired. Now you can't use it without calling acquire,
period.

But that comes back to (a) above. Most devs I've talked to really don't
want to use callbacks. They're all good devs and they know what they're
doing and they don't want to deal with what they consider callback/closure
hell. Especially when they need to acquire multiple things at once.

I'm in the callback camp. (1) to protect programmers from mistakes they
won't find until the exceptional case happens, and (2) because I'm under the
impression that browser vendors would say "no" to polling. But, maybe it
doesn't matter and polling is fine.

note: this issue already exists in the Web API today with images. Example:

img.src = "someimage.jpg"
setTimeout(function() {
   ctx.drawImage(img, 0, 0);
}, 1000);

Assumes the image will be available in 1 second which may or may not be
true. The API could have been designed to make this much harder to get
wrong (though not impossible): if you had to get some kind of ImageHandle to
call drawImage, and you could only get that ImageHandle in the 'load' event
of an image.

Images are not quite analogous, though, as there is no polling built in to
find out whether an image has finished loading.
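The callback-style acquire described above can be sketched in plain JS. Everything here is an assumption for illustration: `SharedTexture`, `acquire`, and `release` are made-up names, not a real WebGL API, and real GPU synchronization is elided. The point is only the shape of the API: code receives a handle exclusively inside the acquire callback, so "use before ready" has nowhere to hide:

```javascript
// Sketch: a shared resource whose handle is only ever handed out via an
// acquire callback; callers queue until the current owner releases.
class SharedTexture {
  constructor() {
    this._owner = null;   // callback currently holding the resource
    this._waiters = [];   // callbacks waiting their turn
  }
  // acquire(fn): fn receives a handle only once the resource is free.
  acquire(fn) {
    if (this._owner === null) {
      this._owner = fn;
      fn({ release: () => this._release() });
    } else {
      this._waiters.push(fn);
    }
  }
  _release() {
    this._owner = null;
    const next = this._waiters.shift();
    if (next) this.acquire(next);
  }
}
```

Compare this with a polling design, where nothing stops code from touching the texture after a poll that returned "not ready" (or after no poll at all).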


Re: [whatwg] Canvas in Workers

2013-01-08 Thread Gregg Tavares
On Tue, Jan 8, 2013 at 11:12 AM, Ian Hickson  wrote:

> On Thu, 3 Jan 2013, Gregg Tavares wrote:
> > On Tue, Dec 11, 2012 at 9:04 AM, Ian Hickson  wrote:
> > > On Tue, 11 Dec 2012, Gregg Tavares (社用) wrote:
> > > >
> > > > discussion seems to have died down here but I'd like to bring up
> > > > another issue
> > > >
> > > > In WebGL land we have creation attributes on the drawingbuffer made
> > > > for a canvas. Example
> > > >
> > > > gl = canvas.getContext("webgl", { preserveDrawingBuffer: false
> });
> > > >
> > > > We're working out the details on how to set those options for the
> > > > case where we have 1 context and multiple canvases.
> > > >
> > > > The particular option above would apparently be a huge perf win for
> > > > canvas 2d for mobile. Which suggests that whatever API is decided on
> > > > it would be nice if it worked for both APIs the same.
> > >
> > > What does it do?
> >
> > Effectively it makes the canvas double buffered.
> >
> > Right now 2d canvases are effectively single buffered. At the
> > appropriate time a copy of the canvas is made and passed to the
> > compositor. This copy is slow, especially on mobile.
> >
> > Apple requested that for WebGL the default be for double buffering. When
> > double buffered, when the canvas is composited (when the current
> > JavaScript event exits) the canvas's buffer is given to the compositor
> > and the canvas is given a new buffer (or an old one). That new buffer is
> > cleared, meaning the contents are gone. It's up to the app to draw stuff
> > into it again. If nothing is drawn the compositor will continue to use the
> > buffer it acquired earlier.
>
> I think you mean page flipping, not double buffering.
>
> Supporting page flipping in 2D canvas would be fine too, but I don't see
> why it would need a change to the API... you would just make "commit()"
> flip which page was active for the context API and clear the newly active
> page in one operation.
>

How would you choose flip vs copy with just commit?

Just to be clear we're on the same page. I want to be able to do this (not
related to workers)

   // create a 2d context that flips buffers instead of copies them
   var ctx = canvas.getContext("2d", { preserveDrawingBuffer: false });

But, related to workers, if CanvasProxy is truly a proxy for the canvas
then I could do this

   // create a 2d context that flips buffers instead of copies them
   var ctx = canvasProxy.getContext("2d", { preserveDrawingBuffer: false });



>
> On Thu, 3 Jan 2013, Gregg Tavares wrote:
> >
> > So I've been thinking more about this and I'm wondering, for the time
> > being, why have canvas.setContext and why expose the
> > Canvas2DRenderingContext constructor?
>
> Well the constructor is needed so that there's a way to do an entirely
> off-screen bitmap, for when you just want to do some image work that isn't
> going to be displayed.
>

Agreed but that's a separate problem

Problem #1) Allow a worker to render to a canvas
Problem #2) Allow a worker to render offscreen (without communicating with
the main page)

I'm suggesting we only solve problem #1 for now. To do that, all we need is
CanvasProxy to truly be "a proxy for the canvas".


>
> setContext() is only needed so that you can use one context with multiple
> canvases, which is primarily intended to address the WebGL case of having
> one context used to render to multiple views with different settings (the
> settings being themselves set on the canvas or canvas proxy).
>
>
Right, but since it doesn't seem to work for WebGL's needs, why spec
it now when we can solve problem #1 today and worry about the other
problems later?


> > That means we can solve the 1 context multiple canvases issue later
> > making this a minimal api change?
>
> I thought the "1 context multiple canvases issue" was a higher priority
> than the "canvas on workers" issue. Is this wrong?
>

I don't know if it's higher priority. It seemed to inform the worker
related design so it was important to look at.


>
>
> > Is there some reason that won't work?
>
> Well I'd rather not design something that doesn't address a known issue
> and then find we have painted ourselves into a corner with respect to that
> other issue. Hence trying to solve all the issues at once, or at least
> solve them in a way that is compatible with future solutions

Re: [whatwg] Canvas in Workers

2013-01-03 Thread Gregg Tavares
On Tue, Dec 11, 2012 at 9:04 AM, Ian Hickson  wrote:

> On Tue, 11 Dec 2012, Gregg Tavares (社用) wrote:
> >
> > discussion seems to have died down here but I'd like to bring up another
> > issue
> >
> > In WebGL land we have creation attributes on the drawingbuffer made for a
> > canvas. Example
> >
> > gl = canvas.getContext("webgl", { preserveDrawingBuffer: false });
> >
> > We're working out the details on how to set those options for the case
> > where we have 1 context and multiple canvases.
> >
> > The particular option above would apparently be a huge perf win for
> > canvas 2d for mobile. Which suggests that whatever API is decided on it
> > would be nice if it worked for both APIs the same.
>
> What does it do?
>

Effectively it makes the canvas double buffered.

Right now 2d canvases are effectively single buffered. At the appropriate
time a copy of the canvas is made and passed to the compositor. This copy is
slow, especially on mobile.

Apple requested that for WebGL the default be double buffering. When double
buffered, when the canvas is composited (when the current JavaScript event
exits) the canvas's buffer is given to the compositor and the canvas is
given a new buffer (or an old one). That new buffer is cleared, meaning the
contents are gone. It's up to the app to draw stuff into it again. If
nothing is drawn the compositor will continue to use the buffer it acquired
earlier.

In WebGL you can opt into the slower "copy" path. For Canvas 2D, while the
default has to remain the slow "copy" path, it would be nice to be able to
opt into the faster "swap" double-buffered path.
>
> In the 2D canvas, whenever you bind to a new canvas, the context is reset
> to its default state, the context's hit region list is reset, and the
> context's bitmap is reset. The next time the context is flushed, the
> canvas itself is always reset (since flushing the context causes the
> bitmap and hit region list to be pushed to the canvas, replacing whatever
> was there before).
>
> --
> Ian Hickson   U+1047E)\._.,--,'``.fL
> http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
>


Re: [whatwg] Full Screen API Feedback

2011-05-20 Thread Gregg Tavares (wrk)
On Wed, May 11, 2011 at 11:27 AM, Jer Noble  wrote:

> WebKit is in the process of implementing Mozilla's proposed Full Screen API
> .  Basic full screen support
> is available in WebKit Nightlies  on Mac and
> Windows (other ports are adding support as well), and can be enabled through
> user defaults (WebKitFullScreenEnabled=1).  To test the feasibility of this
> API, we have mapped the full screen button in the default controls in
>  elements to this new API.  The webkit-only webkitenterfullscreen()
> method on HTMLMediaElement has also been mapped to this new API.  In so
> doing, we have been able to collect test case results from live websites.
>  In this process, I believe we have uncovered a number of issues with the
> API proposal as it currently stands that I'd like to see addressed.
>
> 1. Z-index as the primary means of elevating full screen elements to the
> foreground.
>
> The spec suggests that a full screen element is given a z-index of BIGNUM
> in order to cause the full screen element to be visible on top of the rest
> of page content.  The spec also notes that  "it is possible for a document
> to position content over an element with the :full-screen pseudo-class, for
> example if the :full-screen element is in a container with z-index not
> 'auto'."  In our testing, we have found that this caveat causes extreme
> rendering issues on many major video-serving websites, including Vimeo and
> Apple.com.  In order to fix rendering under the new full-screen API to be on
> par with WebKit's existing full-screen support for video elements, we chose
> to add a new pseudo-class and associated style rule to forcibly reset
> z-index styles and other stacking-context styles.  This is of course not
> ideal, and we have only added this fix for full screen video elements.  This
> rendering "quirk" makes it much more difficult for authors to elevate a
> single element to full-screen mode without modifying styles on the rest of
> their page.
>
> Proposal: the current API proposal simply recommends a set of CSS styles.
>  The proposal should instead require that no other elements render above the
> current full-screen element and its children, and leave it up to
> implementers to achieve that requirement.  (E.g., WebKit may implement this
> by walking up the ancestors of the full-screen element disabling any styles
> which create stacking contexts.)
>

That does seem more reasonable. A conceptual way to think of it is that the
fullscreen element and all its children get a temporary z-index boost.


>
> 2. Animating into and out of full screen.
>
> WebKit's current video full-screen support will animate an element between
> its full-screen and non-full-screen states.  This has both security and user
> experience benefits.  However, with the current z-index-based rendering
> technique recommended by the proposed Full Screen API, animating the
> full-screen transition is extremely difficult.
>
> Proposal: The full-screen element should create a new view, separate from
> its parent document's view.  This would allow the UA to resize and animate
> the view separate from the parent document's view. This would also solve
> issue 1 above.
>

I'm not sure what "view" means but I can see what I think are Robert's
issues. The DOM still has to be connected and CSS still has to flow. So if I
have body(a(b(c))) and I fullscreen c then changes to CSS of body, a, and b
should all still effect c even when c is in fullscreen mode. Of course maybe
your idea of "detaching" into a separate view means doesn't imply that other
elements.

I guess conceptually I thought all fullscreen does is (1) make the browser
window fullscreen with no chrome and (2) stretch the fullscreen element to
fill that space. That implies that if I set the background color of the
fullscreen element to rgba(0,0,0,0.5) I can see back to the non-fullscreen
elements behind it.


>
> 3. "fullscreenchange" events and their targets.
>
> The current proposal states that a "fullscreenchange" event must be
> dispatched when a document enters or leaves full-screen. Additionally, "when
> the event is dispatched, if the document's current full-screen element is an
> element in the document, then the event target is that element, otherwise
> the event target is the document."  This has the side effect that, if an
> author adds an event listener for this event to an element, he will get
> notified when an element enters full screen, but never when that element
> exits full-screen (if the current full screen element is cleared, as it
> should be, before the event is dispatched.)  In addition, if the current
> full-screen element is changed while in full screen mode (e.g. by calling
> requestFullScreen() on a different element) then an event will be dispatched
> to only one of the two possible targets.
>
> Proposal: split the "fullscreenchange" events into two: "fullscreenentered"
> and "fullscree

Re: [whatwg] CORS requests for image and video elements

2011-05-20 Thread Gregg Tavares (wrk)
How about updating the CORS spec so that a server can send an

    Access-Control-Allow-Origin: *

header even when not specifically requested, and the browser can then
allow those resources to be used cross-origin where they otherwise
wouldn't be?

This would mean sites like Picasa and Flickr could just add that
static string to their headers and things would just work: no HTML
or JS changes required, no having to tag images with cross-origin
unless you're dealing with a really strict server that actually wants
to check credentials.
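The server-side choice this implies can be sketched as a tiny pure function (the name `corsHeaders` and its signature are mine, not part of any spec): a host that doesn't care about credentials can send one static header on every response, while a credentialed response must echo a specific origin because `*` is not valid there.

```javascript
// Sketch: build the CORS response header for a static image host.
// With credentials, "Access-Control-Allow-Origin: *" is not accepted,
// so the server must echo the specific requesting origin instead.
function corsHeaders(origin, { credentials = false } = {}) {
  if (credentials) {
    return { 'Access-Control-Allow-Origin': origin };
  }
  return { 'Access-Control-Allow-Origin': '*' };
}
```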


On Tue, May 17, 2011 at 6:11 PM, Ian Hickson  wrote:

> On Tue, 17 May 2011, Kenneth Russell wrote:
> >
> > Last week, a proof of concept of a previously theoretical timing attack
> > against WebGL was published which allows theft of cross-domain images'
> > content.
> >
> > To address this vulnerability it appears to be necessary to ban the use
> > of cross-domain images and videos in WebGL. Unfortunately, doing so will
> > prevent entire classes of applications from being written, and break a
> > not insignificant percentage of current applications.
> >
> > We would like to use CORS to solve this problem; if the server grants
> > access to the image or video, WebGL can use it. Initial discussions with
> > image hosting services have been positive, and it seems that CORS
> > support could be enabled fairly quickly. Many such services already
> > support other access control mechanisms such as Flash's crossdomain.xml.
> > Unfortunately, experimentation indicates that it is not possible to
> > simply send CORS' Origin header with every HTTP GET request for images;
> > some servers do not behave properly when this is done.
> >
> > We would like to propose adding a new Boolean property, useCORS, to
> > HTMLImageElement and HTMLMediaElement, defaulting to false. If set to
> > true, then HTTP requests sent for these elements will set the Origin
> > header from the page's URL. If the Access-Control-Allow-Origin header in
> > the response grants access, then the content's origin will be treated as
> > the same as the page's.
>
> On Tue, 17 May 2011, Jonas Sicking wrote:
> >
> > Does setting "useCORS" make the CORS implementation execute with the
> > "supports credentials" flag set to true or false?
> >
> > When set to true, the request to the server will contain the normal
> > cookies which the user has set for that domain. However, the response
> > from the server will have to contain "Access-Control-Allow-Origin:
> > ". In particular "Access-Control-Allow-Origin:*" will not be
> > treated as a valid response.
> >
> > If the "supports credentials" flag is set to false, the request will be
> > made without cookies, and the server may respond with either
> > "Access-Control-Allow-Origin: *" or "Access-Control-Allow-Origin:
> > <origin>".
> >
> > I propose that the latter mode is used as it will make servers easier to
> > configure as they can just add a static header to all their responses.
>
> On Tue, 17 May 2011, Glenn Maynard wrote:
> >
> > This could be specified, e.g. <img> without credentials and <img
> > cors="credentials"> with.  I don't know if there are use cases to
> > justify it.
>
> On Tue, 17 May 2011, Kenneth Russell wrote:
> >
> > In general I think we need to enable as close behavior to the normal
> > image fetching code path as possible. For example, a mashup might
> > require you to be logged in to a site in order to display thumbnails of
> > movie trailers. If normal image fetches send cookies, then it has to be
> > possible to send them when doing a CORS request. I like the idea of <img
> > cors> vs. <img>.
>
> I've added a content attribute to <img>, <audio>, and <video> that makes
> the image or media resource be fetched with CORS and have the origin of
> the page if CORS succeeded.
>
> The attribute is "cross-origin" and it has two allowed values,
> "use-credentials" and "anonymous". The latter is the default, so you can
> just say <img cross-origin>.
>
> This is only a first draft, I'm not sure it's perfect. In particular,
> right now cross-origin media is not allowed at all without this attribute
> (this is not a new change, but I'm not sure it's what implementations do).
> Also, right now as specced if you give a local URL that redirects to a
> remote URL, I don't have CORS kick in, even if you specified cross-origin.
> (This is mostly an editorial thing, I'm going to wait for Anne to get back
> and then see if he can help me out with some editorial changes to CORS to
> make it easier to make that work generally.)
>
> Implementation and author experience feedback would be very welcome on
> this.
>
>
> On Tue, 17 May 2011, Kenneth Russell wrote:
> >
> > Perhaps an API could also be added to find out whether the server
> > granted CORS access to the resulting media, though this is less
> > important. (Note that the canvas element does not have an explicit API
> > for querying the origin-clean flag.)
>
> I haven't exposed this. You can work around it by trying to use the image
> in a canvas, then rereading the canvas, and seeing if yo
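For readers following this thread today: the attribute discussed above eventually shipped with the hyphen dropped, as `crossorigin`. A minimal usage sketch (the URL is made up):

```html
<!-- As shipped: crossorigin="anonymous" (the default when the attribute is
     present) or crossorigin="use-credentials". The URL is illustrative. -->
<img crossorigin="anonymous" src="https://photos.example/thumb.jpg">
```

An image loaded this way with a successful CORS check no longer taints a canvas it is drawn into.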

Re: [whatwg] Embedding custom hierarchical data

2011-03-23 Thread Gregg Tavares (wrk)
Is X3DOM an example of solution to what you are trying to do? They embed
hierarchical data directly in HTML docs through namespaces and have
JavaScript use that data

http://www.x3dom.org/

I believe Angular does this as well.

http://angularjs.org/


Re: [whatwg] Canvas and drawWindow

2011-03-14 Thread Gregg Tavares (wrk)
Someone pointed out that once you have HTML5->Canvas->WebGL, even though you
can't call readPixels or toDataURL or getImageData because of cross origin
issues you can write a shader that takes longer depending on the color and
then just time draw calls to figure out what's in the texture.

In other words, if you want to prevent security issues you could only do
this on same origin content.

But then you open another can of worms. Once you can put content in a
texture you want to be able to let the user interact with it (like they can
with 3d css) but then you run into the issue that you don't know what the
user's shaders are doing so you have to let JavaScript translate mouse
coordinates which is probably another security issue on top of being a PITA
to implement.


On Fri, Mar 11, 2011 at 8:35 AM, Erik Möller  wrote:

> I bet this has been discussed before, but I'm curious as to what people
> think about breathing some life into a more general version of Mozillas
> canvas.drawWindow() that draws a snapshot of a DOM window into the canvas?
> https://developer.mozilla.org/en/drawing_graphics_with_canvas#section_9
>
> I know there are some security considerations (for example listed in the
> source of drawWindow):
>
>  // We can't allow web apps to call this until we fix at least the
>  // following potential security issues:
>  // -- rendering cross-domain IFRAMEs and then extracting the results
>  // -- rendering the user's theme and then extracting the results
>  // -- rendering native anonymous content (e.g., file input paths;
>  // scrollbars should be allowed)
>
> I'm no security expert, but it seems to me there's an easy way to at least
> cater for some of the use-cases by always setting origin-clean to false when
> you use drawWindow(). Sure it's a bit overkill to always mark it dirty, but
> it's simple and would block you from reading any of the pixels back which
> would address most (all?) of the security concerns.
>
> I'm doing a WebGL demo, so the use-case I have for this would be to render
> a same-origin page to a canvas and smack that on a monitor in the 3d-world.
> Intercept mouse clicks, transform them into 2d and passing them on would of
> course be neat as well and probably opens up the use-cases you could dream
> up.
>
> So, I'm well aware its a tad unconventional, but perhaps someone has a
> better idea of how something like this could be accomplished... i.e. via SVG
> and foreignObject or punching a hole in the canvas and applying a transform
> etc. I'd like to hear your thoughts.
>
> --
> Erik Möller
> Core Developer
> Opera Software
>


Re: [whatwg] Workers feedback

2011-02-14 Thread Gregg Tavares (wrk)
On Mon, Feb 14, 2011 at 1:37 AM, Ian Hickson  wrote:

> On Fri, 11 Feb 2011, Gregg Tavares (wrk) wrote:
> > On Fri, Feb 11, 2011 at 5:45 PM, Ian Hickson  wrote:
> > > On Fri, 11 Feb 2011, Gregg Tavares (wrk) wrote:
> > > > > On Fri, 7 Jan 2011, Berend-Jan Wever wrote:
> > > > > >
> > > > > > 1) To give WebWorkers access to the DOM API so they can create
> their
> > > > > > own elements such as img, canvas, etc...?
> > > > >
> > > > > It's the API itself that isn't thread-safe, unfortunately.
> > > >
> > > > I didn't see the original thread but how is a WebWorker any different
> > > > from another webpage? Those run just fine in other threads and use
> the
> > > > DOM API.
> > >
> > > Web pages do not run in a different thread.
> >
> > Oh, sorry. I meant they run in a different process. At least in some
> > browsers.
>
> The goal here is interoperability with all browsers, not just some.
>

I guess I don't understand. There are lots of things all browsers didn't do
at some point in the past: the video tag, CSS animation, the canvas tag,
etc. We don't say that because something hasn't been done yet we therefore
can't try or can't spec it.


>
>


Re: [whatwg] Workers feedback

2011-02-11 Thread Gregg Tavares (wrk)
On Fri, Feb 11, 2011 at 5:45 PM, Ian Hickson  wrote:

> On Fri, 11 Feb 2011, Gregg Tavares (wrk) wrote:
> > > On Fri, 7 Jan 2011, Berend-Jan Wever wrote:
> > > >
> > > > 1) To give WebWorkers access to the DOM API so they can create their
> > > > own elements such as img, canvas, etc...?
> > >
> > > It's the API itself that isn't thread-safe, unfortunately.
> >
> > I didn't see the original thread but how is a WebWorker any different
> > from another webpage? Those run just fine in other threads and use the
> > DOM API.
>
> Web pages do not run in a different thread.
>

Oh, sorry. I meant they run in a different process. At least in some
browsers.


>
> --
> Ian Hickson   U+1047E)\._.,--,'``.fL
> http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
>


Re: [whatwg] Workers feedback

2011-02-11 Thread Gregg Tavares (wrk)
On Fri, Feb 4, 2011 at 3:43 PM, Ian Hickson  wrote:

> On Sat, 16 Oct 2010, Samuel Ytterbrink wrote:
> >
> > *What is the problem you are trying to solve?*
> > To create sophisticated single file webpages.
>
> That's maybe a bit vaguer than I was hoping for when asking the question.
> :-)
>
> Why does it have to be a single file? Would multipart MIME be acceptable?
>
> A single file is a solution, not a problem. What is the problem?
>
>
> > [...] trying to build a more optimal standalone DAISY player (would be
> > nice if i could rewrite it with web workers).
>
> Now that's a problem. :-)
>
> It seems like what you need is a package mechanism, not necessarily a way
> to run workers without an external script.
>
>
> On Fri, 15 Oct 2010, Jonas Sicking wrote:
> >
> > Allowing both blob URLs and data URLs for workers sounds like a great
> > idea.
>
> I expect we'll add these in due course, probably around the same time we
> add cross-origin workers. (We didn't add them before because exactly how
> we do them depends on how we determine origins.)
>
>
> On Sat, 16 Oct 2010, Samuel Ytterbrink wrote:
> >
> > But then i got another problem, why is not
> > "file:///some_directory_where_the_html_are/" not the same domain as
> >
> "file:///some_directory_where_the_html_are/child_directory_with_ajax_stuff/".
> > I understand if it was not okay to go closer to root when ajax,
> > "file:///where_all_secrete_stuff_are/" or "/../../".
>
> That's not a Web problem. I recommend contacting your browser vendor about
> it. (It's probably security-related.)
>
>
> On Thu, 30 Dec 2010, Glenn Maynard wrote:
> > On Thu, Dec 30, 2010 at 7:11 PM, Ian Hickson  wrote:
> > >
> > > Unfortunately we can't really require immediate failure, since there'd
> > > be no way to test it or to prove that it wasn't implemented -- a user
> > > agent could always just say "oh, it's just that we take a long time to
> > > launch the worker sometimes". (Performance can be another hardware
> > > limitation.)
> >
> > Preferably, if a Worker is successfully created, the worker thread
> > starting must not block on user code taking certain actions, like
> > closing other threads.
>
> How can you tell the difference between "the thread takes 3 seconds to
> start" and "the thread waits for the user to close a thread", if it takes
> 3 seconds for the user to close a thread?
>
> My point is from a black-box perspective, one can never firmly say that
> it's not just the browser being slow to start the thread. And we can't
> disallow the browser from being slow.
>
>
> > That doesn't mean it needs to start immediately, but if I start a thread
> > and then do nothing, it's very bad for the thread to sit in limbo
> > forever because the browser expects me to take some action, without
> > anything to tell me so.
>
> I don't disagree that it's bad. Hopefully browser vendors will agree and
> this problem will go away.
>
>
> > If queuing is really necessary, please at least give us a way to query
> > whether a worker is queued.
>
> It's queued if you asked it to start and it hasn't yet started.
>
>
> On Fri, 31 Dec 2010, Aryeh Gregor wrote:
> >
> > I've long thought that HTML5 should specify hardware limitations more
> > precisely.
>
> We can't, because it depends on the hardware. For example, we can't say
> "you must be able to allocate a 1GB string" because the system might only
> have 500MB of storage.
>
>
> > Clearly it can't cover all cases, and some sort of general escape clause
> > will always be needed -- but in cases where limits are likely to be low
> > enough that authors might run into them, the limit should really be
> > standardized.
>
> It's not much of a standardised limit if there's still an escape clause.
>
> I'm happy to put recommendations in if we have data showing certain
> specific limits are needed for interop with real content.
>
>
> > > Unfortunately we can't really require immediate failure, since there'd
> > > be no way to test it or to prove that it wasn't implemented -- a user
> > > agent could always just say "oh, it's just that we take a long time to
> > > launch the worker sometimes". (Performance can be another hardware
> > > limitation.)
> >
> > In principle this is so, but in practice it's not.  In real life, you
> > can easily tell an algorithm that runs the first sixteen workers and
> > then stalls any further ones until one of the early ones exit, from an
> > algorithm that just takes a while to launch workers sometimes.  I think
> > it would be entirely reasonable and would help interoperability in
> > practice if HTML5 were to require that the UA must run all pending
> > workers in some manner that doesn't allow starvation, and that if it
> > can't do so, it must return an error rather than accepting a new worker.
> > Failure to return an error should mean that the worker can be run soon,
> > in a predictable timeframe, not maybe at some indefinite point in the
> > future.
>
> All workers should run soon, not maybe in the future. 

Re: [whatwg] Canvas element image scaling

2010-09-24 Thread Gregg Tavares (wrk)
As others have pointed out, the canvas scaling algorithm is not specified and
is different in each browser.

http://greggman.com/downloads/examples/canvas-test/test-01/canvas-test-01-results.html

http://greggman.com/downloads/examples/canvas-test/test-01/canvas-test-01.html


On Sat, Sep 18, 2010 at 7:51 PM, Rob Evans  wrote:

> Thanks I'll give that a go in the morning!
>
> All the best,
>
> Rob
>
> On 19 Sep 2010 03:42, "Boris Zbarsky"  wrote:
>
> On 9/18/10 9:57 PM, Rob Evans wrote:
> >
> > Thanks for the reply. I’m already using high resolution ima...
>
> Gecko will scale canvas images in one of two ways: either using a
> nearest-neighbor algorithm or using a more complicated (bilinear, bicubic,
> may depend on other details) algorithm which is slower but usually gives
> better results.  You can control which is happening by setting
> mozImageSmoothingEnabled on the canvas 2d context (set to false to get
> nearest-neighbor and set to true to get the other).
>
> The default value there is true.  Does setting it to false give you the
> Chrome 6 behavior, perchance?  I'd be a little surprised if it does, but
> worth trying.
>
> -Boris
>
>


Re: [whatwg] Scriptable interface for video element FullScreen mode

2010-09-22 Thread Gregg Tavares (wrk)
On Wed, Sep 22, 2010 at 9:09 AM, Shiv Kumar  wrote:

>  I’ve changed the subject of this post in the hopes that it receives the
> correct attention…
>
>
>
> As per the current spec:
>
>
>
> 
>
> WARNING!
>
> User agents should not provide a public API to cause videos to be shown
> full-screen. A script, combined with a carefully crafted video file, could
> trick the user into thinking a system-modal dialog had been shown, and
> prompt the user for a password. There is also the danger of "mere"
> annoyance, with pages launching full-screen videos when links are clicked or
> pages navigated. Instead, user-agent-specific interface features may be
> provided to easily allow the user to obtain a full-screen playback mode.
>
> 
>
>
>
> In order for anyone to be able to provide their own skin/player, we’ll need
> to provide a scriptable way to switch the video element to full screen and
> out including events to support the same.
>
>
>
> I know Webkit folks have provided,
> webkitEnterFullScreen/webkitExitFullScreen methods to allow this. I don’t
> believe there are any events supporting the change in state, however
>
>
>
> I think it’s important that the video element provide a scriptable way to
> do this. Internally, the UA can determine if the call was made using a user
> gesture as I believe Webkit is doing.
>
>
>
> Can we agree to change the current spec to allow for this?
>
>
>
> Shiv
>
> http://exposureroom.com
>
>
>

Is this proposal not good enough?
https://wiki.mozilla.org/index.php?title=Gecko:FullScreenAPI

It handles the case I think you want. I think it's also useful for other
things like HTML5 games. And it's secure, or at least as secure as Flash:
the browser will not go fullscreen without either a prompt or a mouse click.


Re: [whatwg] Canvas: clarification of compositing operations needed

2010-07-29 Thread Gregg Tavares (wrk)
Even Firefox's implementation is inconsistent.

drawShape uses the "infinite transparent black bitmap" but drawImage does
not.

I believe even many at Mozilla would like Firefox to switch to the
Chrome/Safari method because it's more easily GPU accelerated.

To that end it would be nice if two things in the spec changed:

#1) Get rid of the "infinite transparent black bitmap" stuff and change it
to something that says only pixels inside the shape/image are affected

#2) Change the globalCompositeOperation spec from referencing Porter-Duff
to referencing OpenGL

source-over
   glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

source-in
   glBlendFunc(GL_DST_ALPHA, GL_ZERO);

source-out
   glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ZERO);

source-atop
   glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

destination-over
   glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);

destination-in
   glBlendFunc(GL_ZERO, GL_SRC_ALPHA)

destination-out
   glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);

destination-atop
   glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_SRC_ALPHA);

lighter
   glBlendFunc(GL_ONE, GL_ONE);

darker
   deprecated

copy
   glBlendFunc(GL_ONE, GL_ZERO);

xor
   glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
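The correspondence in the table above can be sanity-checked numerically. The sketch below is mine, not part of the proposal: a generic glBlendFunc-style blend on premultiplied-alpha pixels, evaluated for the "source-over" row (the names `blend`, `ONE`, and `ONE_MINUS_SRC_ALPHA` are illustrative only).

```javascript
// Generic glBlendFunc-style blend: out = src * srcFactor + dst * dstFactor,
// with the same factor applied to every channel (as plain glBlendFunc does).
// Pixels are {r, g, b, a} with premultiplied color, components in 0..1.
function blend(src, dst, srcFactor, dstFactor) {
  const sf = srcFactor(src, dst);
  const df = dstFactor(src, dst);
  const mix = (s, d) => s * sf + d * df;
  return { r: mix(src.r, dst.r), g: mix(src.g, dst.g),
           b: mix(src.b, dst.b), a: mix(src.a, dst.a) };
}

// Factors for source-over: GL_ONE and GL_ONE_MINUS_SRC_ALPHA.
const ONE = () => 1;
const ONE_MINUS_SRC_ALPHA = (src) => 1 - src.a;

// 50%-opaque red over opaque green (both premultiplied):
const src = { r: 0.5, g: 0, b: 0, a: 0.5 };
const dst = { r: 0, g: 1, b: 0, a: 1 };
const out = blend(src, dst, ONE, ONE_MINUS_SRC_ALPHA);
// out = { r: 0.5, g: 0.5, b: 0, a: 1 } -- the usual Porter-Duff "over".
```

The same harness with the other factor pairs from the list reproduces the remaining operators.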


Re: [whatwg] An API to resize and rotate images client-side

2010-07-26 Thread Gregg Tavares (wrk)
On Mon, Jul 26, 2010 at 6:34 PM, Ian Hickson  wrote:

>
> On Thu, 20 May 2010, David Levin wrote:
> >
> > Twice when this was brought up on whatwg developers out of the blue
> > mentioned that the image resizing was a useful thing for them (once early
> in
> > this thread and once long ago when canvas in workers was brought up).
> >
> > In addition to that anecdotal evidence, here are several other places
> this
> > comes up which I can list quickly:
> >
> > - For example, take Facebook. If I upload a huge photo to Facebook, it
> > seems to upload the whole thing and then resizes it on the server (down
> > to something much smaller than 1600 X 1200).
> >
> > - This is similar for other social sites like dating sites or Orkut that
> > only allow a maximum size of photo. Typically, either they force the
> > user to resize the image (which is a horrible experience) or they resize
> > the image on the client using gears (with workers and canvas) or flash,
> > etc. (or canvas but for more than one browser that may hang the UI).
> >
> > - Similarly Gmail now allows dragging images into email
> > (http://gmailblog.blogspot.com/2010/05/drag-images-into-messages.html).
> > The full resolution image isn't necessary for this. It would be better
> > to have a resized image.
> >
> > - Something like Google Docs or Wave which show real time participation
> > of other people typing would benefit from getting a thumbnail of an
> > inserted image to other people in the conversation. (One could envision
> > this for any real time chat/communication website.)
> >
> > - When you upload photos to picasaweb from the Picasa client, it offers
> > to resize them to 1600X1200 before uploading them. Also, it offers an
> > option to upload a thumbnail first before uploading the bigger picture,
> > so the album can appear even quicker (just at a really low resolution).
> > Ideally, a website could do something similar.
>
> On Tue, 25 May 2010, David Levin wrote:
> >
> > http://webkit.org/demos/canvas-perf/canvas.html
> >
> > Firefox 3.7a4 (no D2D)
> >
> > Direct image copy: 39ms
> > Indirect copy with (via ImageData): 160ms
> > Copy with 2x scale: 646.5ms
> > Copy with 0.5x scale: 42.5ms
> > Copy with rotate: 358ms
> >
> > Firefox 3.7a4 (D2D)
> >
> > Direct image copy: 115ms
> > Indirect copy with (via ImageData): 365.5ms
> > Copy with 2x scale: 246ms
> > Copy with 0.5x scale: 48.5ms
> > Copy with rotate: 100.5ms
> >
> > Chrome 4.1.249.1064 (45376)
> >
> > Direct image copy: 32.5ms
> > Indirect copy with (via ImageData): 207.5ms
> > Copy with 2x scale: 378.5ms
> > Copy with 0.5x scale: 27.5ms
> > Copy with rotate: 367ms
> >
> > While the GPU does help in some scenarios, unfortunately it must still
> > take some time to do its work, so it doesn't enable us to do sync apis
> > that don't hang the UI.
>
> The logical conclusion is that we should make one of the following choices:
>
>  1. Provide dedicated asynchronous APIs for this use case.
>For example, a method to go from an image URL to a Blob representing
>that image at a different size and/or rotation.
>
>  2. Provide generic APIs for that can handle this use case amongst others.
>For example, porting the 2D canvas API to workers.
>
>
I'd like to add I think there's a lot more enthusiasm for #2
because there are many other situations that need to do
CPU intensive computing without bogging down the
main browser thread.

I'm not knocking #1. I'm just suggesting that there are
more champions for #2 since it solves problems for
more people.

Any pointers to why this hasn't happened already?
I'm guessing there have already been giant debates
on how to share data between workers and the
main thread and/or how to expose more services
to workers.



>  3. Not address this use case (yet).
>
> For #1, my preference would be to add two methods to , one to make
>  resize and rotate the image and then fire 'load' again, and one to
> obtain the current image as a Blob, much like toDataURL() on 
> (where a similar toBlob() would also be useful).
>
> For #2, we'd need to provide an Image object (a non-DOM version of
> HTMLImageElement) and a Canvas object (a non-DOM version of
> HTMLCanvasElement) in workers.
>
> But unless we have a critical mass of browser vendors agreed on one of
> these approaches, we have to default to #3. Currently it seems only the
> Chrome team is especially interested in either #1 or #2.
>
> My recommendation, in the absence of enthusiasm from other browser
> vendors, would be for the Chrome team to experiment with #1 or #2. If I
> have misread the current situation and there is a willingness to implement
> #1 or #2 in more browsers, then please let me know. I'd be happy to spec
> one of those options. (#1 would be easy to do; #2 might take longer, since
> it is significantly more work.)
>
> --
> Ian Hickson   U+1047E)\._.,--,'``.fL
> http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
> Things that are 

Re: [whatwg] Image resize API proposal

2010-05-13 Thread Gregg Tavares
This really seems like the wrong solution. Taken to an extreme, next you'll
need to add VideoResizer, AudioRecompressor, and anything else JavaScript
can't do without freezing the browser.

It seems like it would be better to figure out a way to get Web Workers to
be able to do this. Even if they have to XHR the binary down, decompress it
into a TypedArray (see WebGL), and read the data themselves so they can keep
the EXIF stuff. Bloating the browser for one small use doesn't seem like the
right solution.

You can post an open source library and the usage can be as simple as this
proposal.


On Tue, May 11, 2010 at 11:58 AM, Sterling Swigart wrote:

> I'm working with David Levin, and based on the feedback received regarding
> offscreen canvas, the proposal has been changed to address more specific
> scenarios. The main use case was resizing images, so we are proposing an
> asynchronous image resizing API. If you are curious about how we arrived at
> our API below, take a look at the "appendix" to view the alternatives we
> considered.
>
> Let us know what you think. Thanks!
> Sterling
>
> Use Cases:
>
> Begin with a user giving a local image file to a webpage. Then:
>
> 1. In real-time chat, quickly give other users a thumbnail view of the
> image file.
>
> 2. Or, limit the size of an image file before uploading it to a web server.
>

This use case is already handled (minus the EXIF).

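For reference, the canvas-based path alluded to above ("already handled") looks roughly like this. It is a sketch, not the proposed API: `fitWithin` and `resizeToDataURL` are illustrative names; only the second function uses real browser APIs (`drawImage`, `toDataURL`), and the sizing step is pure.

```javascript
// Pure sizing step: scale (w, h) down to fit inside maxW x maxH while
// preserving aspect ratio; never upscales.
function fitWithin(w, h, maxW, maxH) {
  const scale = Math.min(1, maxW / w, maxH / h);
  return { width: Math.round(w * scale), height: Math.round(h * scale) };
}

// Browser-only part, assuming `img` is an already-loaded HTMLImageElement:
function resizeToDataURL(img, maxW, maxH, mimeType) {
  const { width, height } = fitWithin(img.width, img.height, maxW, maxH);
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  canvas.getContext("2d").drawImage(img, 0, 0, width, height);
  return canvas.toDataURL(mimeType || "image/jpeg");
}
```

With the proposal's own numbers, a 700x700 source constrained to 300x350 comes out 300x300 — the same answer the sample code expects. What this path can't do is preserve EXIF, which is the gap the proposal is really about.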
> Proposed Solution:
>
> We propose adding image.getBlob. getBlob will be an instance function of
> the javascript Image object which asynchronously gets a blob of the image,
> resized to the given width and height, encoded into jpeg or png. The
> function declaration will be:
>
> getBlob(mimeType /* req */, width /* req */, height /* req */, successEvent
> /* req */, errorEvent /* op */, qualityLevel /* op */, preserveAspectRatio
> /* op */, rotateExif /* op */);
>
> The blob will be passed as an argument to the success callback function, or
> upon error, error data will be passed into the error callback function as an
> argument. Quality level should be between 0.0 and 1.0, and any value outside
> of that range will be reverted to the default, 0.85. If MIME type does not
> equal "image/jpeg", then quality level is ignored. If null (or a negative
> value) is passed in for the width or height, then the function will use the
> source's measurement for that dimension. Default values for
> preserveAspectRatio and rotateExif are true.
>
> All EXIF metadata will be retained except for any saved thumbnails, and the
> EXIF rotation property will be appropriately modified.
>
> Security:
>
> If the image source is of a different origin than the script context, then
> getBlob raises a SECURITY_ERR exception.
>
> Sample Code:
>
> // url contains location of an image file
>
> Image i = new Image();
>
> i.src = url;
>
> var successEvt = function (newBlob) { myDiv.innerHTML += ""; };
>
> var errEvt = function (err) { alert(err); };
>
> i.getBlob("image/jpeg", 300, 350, successEvt, errEvt, .55);
>
> // Image will retain aspect ratio and correct for EXIF rotation. If the
> source image was 700x700,
>
> // the blob will represent a new image that is 300x300.
>
> That's all!
>
> Appendix: Alternatives considered
>
> For reference, we've also included a list of other designs that we thought
> of along with the reasons why they were dropped
>
> Creating a new object for resizing
>
> Summary of approach:
>
> [NamedConstructor=ImageResizer(),
>
> NamedConstructor=ImageResizer(blob, onsuccess),
>
> NamedConstructor=ImageResizer(blob, onsuccess, onerror),
>
> NamedConstructor=ImageResizer(blob, onsuccess, onerror, type),
>
> NamedConstructor=ImageResizer(blob, onsuccess, onerror, type, width,
> height)]
>
> interface ImageResizer {
>
> void start(); // starts resize operation
>
> void abort(); // aborts operation
>
>  attribute Blob blob;
>
> attribute DOMString type; // default "image/png"
>
> attribute unsigned long width;
>
> attribute unsigned long height;
>
> attribute float qualityLevel; // default 1.0, must be 0.0 to 1.0, else
> reverts to default
>
>  readonly attribute unsigned short started; // default 0
>
>  attribute Function onsuccess;
>
> attribute Function onerror;
>
> };
>
> Why it wasn't chosen:
>
> Creating an entirely new object for this task made the task seem more
> complicated and involved than necessary, and this problem could be solved
> via modifications to the Image object.
>
> Returning a SizelessBlob immediately from a method on image
>
> Summary of approach:
>
> var streamingBlob = image.toStreamingBlob(mimeType /* req */, width /* req
> */, height /* req */, qualityLevel /* op */, preserveAspectRatio /* op */,
> rotateExif /* op */);
>
> New Blob Interfaces:
>
> interface SizelessBlob {
>
> // moved from Blob
>
> readonly attribute DOMString type;
>
> readonly attribute DOMString url; // whatever name -- URL, urn, URN, etc.
>
> }
>
> interface StreamingBlob : SizelessBlob {
>
> // at most one of the following functions will be 

Re: [whatwg] Canvas tag - single or multiple contexts?

2009-12-21 Thread Gregg Tavares
On Mon, Dec 21, 2009 at 11:56 AM, Anne van Kesteren wrote:

> On Mon, 21 Dec 2009 20:50:33 +0100, Gregg Tavares  wrote:
>
>> What is the intent of the getContext function on the  tag?
>>
>> Should it be possible to get multiple simultaneous different contexts as
>> in?
>>
>> var ctx2d = canvas.getContext("2d");
>> var ctxText = canvas.getContext("fancy-text-api");
>> var ctxFilter = canvas.getContext("image-filter-api");
>>
>> ctx2d.drawImage(someImage, 0, 0);
>> ctxText.drawText(0, 0, "hello world");
>> ctxFilter.radialBlur(0.1);
>>
>> ?
>>
>> OR
>>
>> is canvas only allowed 1 context at a time?
>>
>
> In theory multiple contexts should be possible. E.g. we supported 2d,
> opera-2dgame, and opera-3d for a while. However it seems that for certain
> contexts, in particular webgl, using it together with other contexts is not
> possible (for now anyway).
>
>
Is disallowing other contexts when certain contexts, e.g. "webgl", are in
use okay, or is that really an incompatible extension of the canvas tag?

Can portable code be written if some browsers let me get both a "2d" context
and a "3d" context and others don't?
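Until that question is settled, portable code more or less has to feature-detect. A defensive sketch of my own (not from any spec): ask for each context in turn and treat a null return — which is what getContext gives for an unsupported or incompatible type — as "this combination isn't available here".

```javascript
// Try to obtain several context types on one canvas; return null if any
// of them is refused, so callers can fall back to a single-context path.
function getContexts(canvas, types) {
  const contexts = {};
  for (const type of types) {
    const ctx = canvas.getContext(type);
    if (!ctx) return null; // unsupported type, or incompatible combination
    contexts[type] = ctx;
  }
  return contexts;
}

// Usage (browser): fall back if "2d" + "webgl" can't coexist.
// const both = getContexts(document.createElement("canvas"), ["2d", "webgl"]);
// if (!both) { /* render text/filters into the 2d canvas yourself */ }
```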


>
> --
> Anne van Kesteren
> http://annevankesteren.nl/
>


[whatwg] Canvas tag - single or multiple contexts?

2009-12-21 Thread Gregg Tavares
What is the intent of the getContext function on the  tag?

Should it be possible to get multiple simultaneous different contexts as in?

var ctx2d = canvas.getContext("2d");
var ctxText = canvas.getContext("fancy-text-api");
var ctxFilter = canvas.getContext("image-filter-api");

ctx2d.drawImage(someImage, 0, 0);
ctxText.drawText(0, 0, "hello world");
ctxFilter.radialBlur(0.1);

?

OR

is canvas only allowed 1 context at a time?


[whatwg] Passing mouse events through the transparent parts of a tag

2009-12-07 Thread Gregg Tavares
Excuse me if this has already been discussed.

Has there been a proposal for allowing mouse events to go through a canvas
element where it is transparent to the element below?

As an example, assume you have a canvas element with a triangle rendered
into the top left corner so that half the canvas is opaque and half the
canvas is 100% transparent (Alpha = 0).  That canvas is zIndexed to be above
other HTML. It would be nice if it was possible to interact with the visible
html under the transparent part of the canvas but as it is now all browsers
treat the canvas as a rectangle and ignore its transparency so that if the
user attempts to interact with the clearly visible HTML nothing happens.

One solution that comes to mind is to add an option (css?) that tells the
browser "if alpha = 0 at the place the user clicks then pass the event
through to the element below"

As a CSS option this could really apply to any tag, for example.
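A script-level approximation of that option can be sketched (my sketch, not a spec proposal — the function names are made up). The per-pixel test reads the clicked pixel's alpha via getImageData; the forwarding trick briefly sets CSS `pointer-events: none` so `elementFromPoint` can see what is underneath.

```javascript
// Pure part: is the pixel at (x, y) fully transparent? ImageData stores
// RGBA bytes, so the alpha of (x, y) is at index (y * width + x) * 4 + 3.
function isTransparentAt(imageData, x, y) {
  return imageData.data[(y * imageData.width + x) * 4 + 3] === 0;
}

// Browser-only wiring, assuming `canvas` overlays other content:
function passThroughTransparentClicks(canvas) {
  canvas.addEventListener("click", (e) => {
    const ctx = canvas.getContext("2d");
    const pixel = ctx.getImageData(e.offsetX, e.offsetY, 1, 1);
    if (isTransparentAt(pixel, 0, 0)) {
      // Let the hit-test fall through, find the element below, restore.
      canvas.style.pointerEvents = "none";
      const under = document.elementFromPoint(e.clientX, e.clientY);
      canvas.style.pointerEvents = "";
      if (under) under.dispatchEvent(new MouseEvent("click", e));
    }
  });
}
```

CSS `pointer-events: none` by itself covers the all-or-nothing case; the per-pixel case asked for here still needs script along these lines.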


Re: [whatwg] window.setInterval if visible.

2009-10-16 Thread Gregg Tavares
On Thu, Oct 15, 2009 at 1:53 PM, Markus Ernst  wrote:

> Gregg Tavares schrieb:
>
>> I was wondering if there as been a proposal for either an optional
>> argument to setInterval that makes it only callback if the window is visible
>> OR maybe a window.setRenderInterval.
>>
>> Here's the issue that seems like it needs to be solved.
>>
>> Currently, AFAIK, the only way to do animation in HTML5 + JavaScript is
>> using setInterval. That's great but it has the problem that even when the
>> window is minimized or the page is not the front tab, JavaScript has no way
>> to know to stop animating.  So, for a CPU heavy animation using canvas 2d or
>> canvas 3d, even a hidden tab uses lots of CPU. Of course the browser does
>> not copy the bits from the canvas to the window but JavaScript is still
>> drawing hundreds of thousands of pixels to the canvas's internal image
>> buffer through canvas commands.
>>
>
> [...]
>
>>
>> There are probably other possible solutions to this problem but it seems
>> like the easiest would be either
>>
>> *) adding an option to window.setInterval or only callback if the window
>> is visible
>>
>> *) adding window.setIntervalIfVisible (same as the previous option really)
>>
>> A possibly better solution would be
>>
>> *) element.setIntervalIfVisible
>>
>> Which would only call the callback if that particular element is visible.
>>
>
> From a performance point of view it might even be worth thinking about the
> contrary: Allow UAs to stop the execution of scripts on non-visible windows
> or elements by default, and provide a method to explicitly specify if the
> execution of a script must not be stopped.
>
> If you provide methods to check the visibility of a window or element, you
> leave it up to the author to use them or not. I think performance issues
> should rather be up to the UA.
>

I agree that would be ideal. Unfortunately, current web pages already expect
setInterval to function even when they are not visible; web-based chat and
mail clients come to mind as examples. So, unfortunately, it doesn't seem
like a problem a UA can solve on its own.

On the other hand, if the solution is as simple as adding a flag to
setInterval, then it's at least a very simple change for those apps that
want to avoid hogging the CPU when not visible.
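For what it's worth, the gated loop the proposed flag implies can be sketched directly (my sketch; `makeAnimator` is an illustrative name). The core is pure — draw only while visible — and the commented browser wiring assumes a Page Visibility-style hidden flag, which is how this eventually shipped (`document.hidden` plus requestAnimationFrame throttling in background tabs).

```javascript
// Pure core: returns a step() that draws only while visible, and reports
// whether it actually drew. isHidden and draw are injected so the logic
// can be exercised without a browser.
function makeAnimator(isHidden, draw) {
  return function step() {
    if (isHidden()) return false;
    draw();
    return true;
  };
}

// Browser wiring (assumes a draw() function exists):
// const step = makeAnimator(() => document.hidden, draw);
// (function loop() { step(); requestAnimationFrame(loop); })();
// Or restart the loop from a "visibilitychange" listener instead of polling.
```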


[whatwg] window.setInterval if visible.

2009-10-15 Thread Gregg Tavares
I was wondering if there has been a proposal for either an optional argument
to setInterval that makes it only call back if the window is visible, OR
maybe a window.setRenderInterval.

Here's the issue that seems like it needs to be solved.

Currently, AFAIK, the only way to do animation in HTML5 + JavaScript is
using setInterval. That's great but it has the problem that even when the
window is minimized or the page is not the front tab, JavaScript has no way
to know to stop animating.  So, for a CPU heavy animation using canvas 2d or
canvas 3d, even a hidden tab uses lots of CPU. Of course the browser does
not copy the bits from the canvas to the window but JavaScript is still
drawing hundreds of thousands of pixels to the canvas's internal image
buffer through canvas commands.

To see an example run this sample in any browser

http://mrdoob.com/projects/chromeexperiments/depth_of_field/

Minimize the window or switch to another tab and notice that it's still
taking up a bunch of CPU time.

Conversely, look at this flash page.

http://www.alissadean.com/

While it might look simple there is actually a lot of CPU based pixel work
required to composite the buttons with alpha over the scrolling clouds with
alpha over the background.

Minimize that window or switch to another tab and, unlike HTML5 + JavaScript,
Flash has no problem knowing that it no longer needs to render.

There are probably other possible solutions to this problem but it seems
like the easiest would be either

*) adding an option to window.setInterval or only callback if the window is
visible

*) adding window.setIntervalIfVisible (same as the previous option really)

A possibly better solution would be

*) element.setIntervalIfVisible

Which would only call the callback if that particular element is visible.

It seems like this will become an issue as more and more HTML5 pages start
using canvas to do stuff they would have been doing in Flash, like ads or
games. Without a solution those ads and games will continue to eat CPU even
when not visible, which will make the user experience very poor.

Am I making any sense?

-gregg


Re: [whatwg] An BinaryArchive API for HTML5?

2009-08-04 Thread Gregg Tavares
On Tue, Aug 4, 2009 at 6:43 PM, Ian Hickson  wrote:

>
> On Thu, 30 Jul 2009, Sebastian Markbåge wrote:
> >
> > This suggestion seems similar to Digg's Stream project that uses
> multipart
> > documents: http://github.com/digg/stream
> >
> > While it would be nice to have a way to parse and handle this in
> > JavaScript, it shouldn't be JavaScript's responsibility to work with
> > large object data and duplicating it as in-memory data strings.
> >
> > The real issue here is the overhead of each additional HTTP request for
> > those thousands of objects. But that's useful for all parts of the spec
> > if you can download it as a single package even without JavaScript.
> > Images, CSS, background-images, JavaScript, etc. Currently you can
> > include graphics as data URLs in CSS. Using a package you could package
> > whole widgets (or apps) as a single request.
> >
> > I'd suggest that this belongs in a lower level API such as the URIs and
> > network stack for the tags. You could specify a file within an archive
> > by adding an hash with the filename to the URI:
> >
> > <img src="http://someplace.com/somearchive.tgz#myimage.jpg" />
> >
> > <style>
> > #id { background-image: url(
> > http://someplace.com/somearchive.tgz#mybackgroundimage.jpg); }
> > </style>
> >
> > <script src="http://someplace.com/somearchive.tgz#myscript.js"
> > type="text/javascript"></script>
> >
> > var img = new Image();
> > img.src = "http://someplace.com/somearchive.tgz#myimage.png";
> >
> > Now which packaging format to use would be a discussion on it's own. An
> > easy route would be to use multipart/mixed that is already used for this
> > in e-mails and can also be gzipped using Content-Encoding.
>
> This is out of scope for HTML5; I would recommend bringing this up in the
> context of the IETF.
>
>
> On Thu, 30 Jul 2009, Kenneth Russell wrote:
> >
> > In the context of the 3d canvas discussions, it looks like there is a
> > need to load binary blobs of vertex data and feed them to the graphics
> > card via a JavaScript call. Here is some hypothetical IDL similar to
> > what is being considered:
> >
> > [IndexGetter, IndexSetter]
> > interface CanvasFloatArray {
> > readonly attribute unsigned long length;
> > };
> >
> > interface CanvasRenderingContextGL {
> > ...
> > typedef unsigned long GLenum;
> > void glBufferData(in GLenum target, in CanvasFloatArray data,
> > in GLenum usage);
> > ...
> > };
> >
> > Do you have some suggestions for how the data could be transferred most
> > efficiently to the glBufferData call? As far as I know there is no tag
> > which could be used to refer to the binary file within the archive. If
> > there were then presumably it could provide its contents as a
> > CanvasFloatArray or other type.
>
> We are waiting for the File API specification to be stable, but one that
> exists, I would expect it to be used for this kind of thing:


I'm a little confused. Are you saying the File API is part of HTML5 or not?

Without archive support the File API is not sufficient for the above use
case because a typical WebGL app will need to download hundreds of these
types of files and it would want to download them compressed.
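The shape of the solution that eventually shipped for this use case — typed arrays plus binary downloads, with HTTP Content-Encoding handling the compression — can be sketched. The URL and helper name below are made up for illustration.

```javascript
// Interpret a byte buffer as 32-bit floats (platform-endian, which is
// little-endian everywhere the web runs) -- the layout a
// glBufferData-style call expects.
function floatsFromBytes(buffer) {
  return new Float32Array(buffer);
}

// Browser-only part:
// const resp = await fetch("/models/mesh.bin");           // served gzipped
// const verts = floatsFromBytes(await resp.arrayBuffer());
// gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);
```

The quoted CanvasFloatArray IDL is essentially what became Float32Array; the "hundreds of compressed files" problem is what the archive/packaging question in this thread is about.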



>
>
>   http://dev.w3.org/2006/webapi/FileUpload/publish/FileAPI.xhtml
>
> --
> Ian Hickson   U+1047E)\._.,--,'``.fL
> http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] autobuffer on "new Audio" objects

2009-07-31 Thread Gregg Tavares
On Fri, Jul 31, 2009 at 3:06 PM, Robert O'Callahan wrote:

> On Sat, Aug 1, 2009 at 12:20 AM, David Wilson  wrote:
>
>> I still don't understand the 'why' of this, whereas the 'why not'
>> seems clear.
>
>
> Because for the 99% use case of "new Audio()" --- scripts loading sounds,
> and then playing them in response to events --- it's what you want. And if
> authors forget to set "autobuffer", then under some network conditions (fast
> networks), short sounds will play fine when play() is called because the
> sound data will have arrived with the metadata before the download is
> throttled, but under other network conditions (slow networks), the same
> sounds will not play smoothly because not all the data will have been
> preloaded. So probably authors will forget to set "autobuffer" and not
> notice, and users with slow networks will suffer.
>
> This is not hypothetical, I suggested this change precisely because I
> noticed this problem happening while testing Firefox.
>
> It might be useful (in a "saving an extra line of code"
>> kind of way), but the fact it implicitly causes potentially high
>> bandwidth IO seems more wasteful than convenient.
>
>
> For the 99% use case, you want to incur that I/O.
>
> If you never want to incur it, use a browser that lets you disable
> autobuffer or otherwise manage bandwidth the way you want.
>
> Rob
>

Agreed. If you want sounds on your UI, or you want to create a game using
the canvas tag, you need to be able to count on your sounds being loaded the
same way you count on your images being loaded. I suspect those 2 use cases
will be far more common than the streaming use case.


>
> --
> "He was pierced for our transgressions, he was crushed for our iniquities;
> the punishment that brought us peace was upon him, and by his wounds we are
> healed. We all, like sheep, have gone astray, each of us has turned to his
> own way; and the LORD has laid on him the iniquity of us all." [Isaiah
> 53:5-6]
>


[whatwg] An BinaryArchive API for HTML5?

2009-07-29 Thread Gregg Tavares
If this has already been covered just point me in that direction.

Assuming it hasn't...

What are people's feelings on adding a Binary Archive API to HTML5?

I'm sure for many that sets off alarms so let me try to describe what I mean
and a case for it.

It seems like it would be useful if there was browser API that let you
download something like gzipped tar files.

The API would look something like

var request = createArchiveRequest();
request.open("GET", "http://someplace.com/somearchive.tgz");
request.onfileavailable = doSomethingWithEachFileAsItArrives;
request.send();

function doSomethingWithEachFileAsItArrives(binaryBlob) {
  // Load every image in archive
  if (binaryBlob.url.substr(-3) == ".jpg") {
 var image = new Image();
 image.src = binaryBlob.toDataURL();  // or something;
 ...
  }
  // Look for a specific text file
  else if (binaryBlob.url === "myspecial.txt") {
// getText only works if binaryBlob is valid utf-8 text.
var text = binaryBlob.getText();
document.getElementById("content").innerHTML = text;
  }
}

Hopefully from the example above you can see that a .tgz file is downloaded
and as each file becomes available it is handed to the JavaScript as binary
blobs through an onfileavailable callback. A blob can be passed to an img,
video, audio, assuming its in the correct format. It can also be gotten as a
string assuming it is valid utf-8

Why is this needed?  Because with the canvas tag and the upcoming 3dweb
(canvas 3d) it will be common for an application to need to download thousands of
small files. A typical canvas 3d application will need all kinds of small
pieces of geometry data as well as hundreds of textures and sound files to
make even a modest game.

As it is now, each tag is apparently required to implement its own network
stack for getting data. Image does things its way (progressively loading),
Video and Audio both support streaming. That's all great. But it seems like
as more and more types get added, some support for a more centrally
implemented system would go a long way to helping some of these new APIs.

Thoughts?
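For concreteness, here is a sketch of the header parsing such an API would
have to do internally for plain (decompressed) tar input. Offsets follow the
POSIX ustar header layout (name at byte 0, 100 bytes; size at byte 124, 12
bytes of octal); the function and field names below are my own illustration,
not part of the proposal:

```javascript
// Parse one 512-byte tar header block into { name, size }.
function parseTarHeader(block) {
  // Read a NUL-terminated ASCII field out of the header.
  const ascii = (start, len) => {
    let s = '';
    for (let i = start; i < start + len && block[i] !== 0; i++) {
      s += String.fromCharCode(block[i]);
    }
    return s;
  };
  const name = ascii(0, 100);          // file name field
  const size = parseInt(ascii(124, 12).trim(), 8); // size field, octal
  return { name, size };
}

// Build a fake header for "myspecial.txt" with an 11-byte payload.
const header = new Uint8Array(512);
const put = (s, at) => {
  for (let i = 0; i < s.length; i++) header[at + i] = s.charCodeAt(i);
};
put('myspecial.txt', 0);
put('00000000013', 124); // 11 decimal, written in octal

const entry = parseTarHeader(header);
// entry.name === 'myspecial.txt', entry.size === 11
```

The file data follows each header in 512-byte blocks, so an implementation
could hand each completed entry to an onfileavailable callback as it streams
in, which is the incremental behavior the proposal asks for.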


Re: [whatwg] Canvas context.drawImage clarification

2009-07-29 Thread Gregg Tavares
On Tue, Jul 28, 2009 at 4:07 AM, Aryeh Gregor

> wrote:

> On Tue, Jul 28, 2009 at 1:41 AM, Gregg Tavares wrote:
> > It's ambiguous because images have a direction.  An image that starts at
> 10
> > with a width of -5 is not the same as an image that starts at 6 with a
> width
> > of +5 any more than starting in SF and driving 5 miles south is not the
> same
> > as starting in Brisbane and driving 5 miles north.
> >
> > The spec doesn't say which interpretation is correct.
>
> I think it's extremely clear.  The spec gives four points which
> determine a rectangle, which are in no particular order.  The image is
> rectangular, and is mapped into that rectangle.  Rectangles have no
> orientation, and the operation "paint the source region onto the
> destination region" couldn't possibly be interpreted as requiring
> reorientation of any kind.


If it's so clear, why do you think 2 of the 4 browsers that implemented it
apparently got it wrong?

Would making the spec more explicit have avoided their misinterpretation?




>
>
> I think you got misled by the diagram, and now aren't reading the
> normative text of the spec carefully enough -- it's *very* specific
> (like most of HTML 5).
>


Re: [whatwg] Canvas context.drawImage clarification

2009-07-27 Thread Gregg Tavares
On Mon, Jul 27, 2009 at 4:14 PM, Ian Hickson  wrote:

> On Mon, 27 Jul 2009, Gregg Tavares wrote:
> >
> > The diagram in the docs
> >
> http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#images
> >
> > Clearly shows SX maps to DX, SY maps to DY
> >
> > But that is not the interpretation that is implemented. The
> > interpretation that is implemented is Source Top/Left maps to Dest
> > Top/Left regardless of whether SX/SY define top left or SX + WIDTH, SY +
> > HEIGHT define top left.
> >
> > That seems pretty ambiguous to me.
>
> Ignore the diagram. It's not normative. The text is the only thing that
> matters. I've moved the diagram up to the intro section to make this
> clearer.
>
>
> > I'd argue that based on the spec as currently written, all current
> > canvas implementations are wrong. Hence the suggestion to make it
> > unambiguous or get the implementation to match the spec.
>
> Could you explain what other interpretations of the following you think
> are reasonable?:
>
> # The source rectangle is the rectangle whose corners are the four points
> # (sx, sy), (sx+sw, sy), (sx+sw, sy+sh), (sx, sy+sh).
> # [...]
> # The destination rectangle is the rectangle whose corners are the four
> # points (dx, dy), (dx+dw, dy), (dx+dw, dy+dh), (dx, dy+dh).
> #
> # When drawImage() is invoked, the region of the image specified by the
> # source rectangle must be painted on the region of the canvas specified
> # by the destination rectangle [...]



It's ambiguous because images have a direction.  An image that starts at 10
with a width of -5 is not the same as an image that starts at 6 with a width
of +5 any more than starting in SF and driving 5 miles south is not the same
as starting in Brisbane and driving 5 miles north.

The spec doesn't say which interpretation is correct.

The one where SrcX maps to DstX and from there width can be positive or
negative OR the one as currently implemented in 2 of the 4 browsers which is
that Source Left maps to Dest Left regardless of the starting values.

Without the diagram, both of those interpretations match the text. With the
diagram only 1 interpretation matches, it just happens the be the one no one
has implemented.
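To make the two readings concrete, here is a sketch of how each
interpretation resolves a rectangle given a negative width (the function
names are mine, purely illustrative):

```javascript
// Interpretation 1: SrcX maps to DstX, so a negative width keeps its sign
// and the image draws backward (flipped).
function directionalRect(x, y, w, h) {
  return { x, y, w, h }; // w or h may be negative; the sign carries direction
}

// Interpretation 2 (as implemented in 2 of the 4 browsers): normalize to
// top-left plus a positive size, which discards direction, so nothing flips.
function normalizedRect(x, y, w, h) {
  return {
    x: Math.min(x, x + w),
    y: Math.min(y, y + h),
    w: Math.abs(w),
    h: Math.abs(h),
  };
}

// sx = 10, sw = -5: the same set of points, but opposite orientations.
const flipped = directionalRect(10, 0, -5, 5); // spans x = 10 back to 5
const upright = normalizedRect(10, 0, -5, 5);  // { x: 5, y: 0, w: 5, h: 5 }
```

Both functions describe the same rectangle the spec text defines; the
difference is only whether the sign of the width survives into the mapping.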


>
>
> It seems pretty unambigious to me.
>
> --
> Ian Hickson   U+1047E)\._.,--,'``.fL
> http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
>


Re: [whatwg] Canvas context.drawImage clarification

2009-07-27 Thread Gregg Tavares
On Mon, Jul 27, 2009 at 3:12 PM, Ian Hickson  wrote:

> On Thu, 9 Jul 2009, Gregg Tavares wrote:
> >
> > The specific ambiguity I'd like to bring up has to do with the several
> > versions of a function, context.drawImage. They take width and height
> > values.  The spec does not make it clear what is supposed to happen with
> > negative values.
> >
> > My personal interpretation and preference is that negative values should
> >
> > (a) be legal and
> > (b) draw backward, flipping the image.
> >
> > The specification currently says:
> >
> > "The source rectangle is the rectangle whose corners are the four points
> > (sx, sy), (sx+sw, sy), (sx+sw, sy+sh), (sx, sy+sh).
> >
> > ...
> >
> > The destination rectangle is the rectangle whose corners are the four
> > points (dx, dy), (dx+dw, dy), (dx+dw, dy+dh), (dx, dy+dh)."
> >
> > Well, simple math would suggest that if sx = 10, and sw = -5 then it
> still
> > defines a valid rectangle.
>
> Correct. Why is this ambiguous? The rectangle is well-defined, it just
> happens that its points are given in a different order than normally.


The diagram in the docs
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#images

Clearly shows SX maps to DX, SY maps to DY

But that is not the interpretation that is implemented. The interpretation
that is implemented is Source Top/Left maps to Dest Top/Left regardless of
whether SX/SY define top left or SX + WIDTH, SY + HEIGHT define top left.

That seems pretty ambiguous to me.

I'd argue that based on the spec as currently written, all current canvas
implementations are wrong. Hence the suggestion to make it unambiguous or
get the implementation to match the spec.






>
>
>
> > I'd like to make a passionate plea that the spec say "implementations
> > must support negative widths and negative heights and draw the image
> > backward effectively flipping the result".
>
> If you want to flip the image, use a transform.
>
>
> > Also, I'd like to suggest that a widths and heights of 0 for source
> > should be valid as well as rectangles outside of the source also be
> > valid and that this part of the spec.
> >
> > "If the source rectangle is not entirely within the source image, or if
> > one of the sw or sh arguments is zero, the implementation must raise an
> > INDEX_SIZE_ERR exception."
> >
> > be changed to reflect that.
>
> If height or width is zero, how do you scale the bitmap up to a non-zero
> size?
>
> We could use transparent black for the pixels outside the image, but this
> is already interoperably implemented, so I don't want to change it.
>
>
> > Coming from a graphics background I see no reason why if I let my user
> > size an image in a canvas I should have to special case a width or
> > height of zero. Just draw nothing if the width or height is zero.
> > Similarly, if I was to provide a UI to let a user choose part of the
> > source to copy to the dest and I let them define a rectangle on the
> > source and drag it such that all or part of it is off the source I see
> > no reason why I should have to do extra math in my application to make
> > that work when simple clipping of values in drawImage would make all
> > that extra work required by each app disappear.
>
> I agree that this may have made sense when the API was being designed a
> few years ago.
>
>
> > The next issue related to drawImage is that the spec does not specify
> > how to filter an image when scaling it. Should it use bi-linear
> > interpolation? Nearest Neighbor? Maybe that should stay implementation
> > dependent? On top of that the spec does not say what happens at the
> > edges and the different browsers are doing different things. To give you
> > an example, if you take a 2x2 pixel image and scale it to 256x256 using
> > drawImage. All the major browsers that currently support the canvas tag
> > will give you an image where the center of each pixel is around center
> > of each 128x128 corner of the 256x256 result. The area inside the area
> > defined by those 4 points is rendered very similar on all 4 browsers.
> > The area outside though, the edge, is rendered very differently. On
> > Safari, Chrome and Opera the colors of the original pixels continue to
> > be blended all the way to the edge of the 256x256 area. On Firefox
> > though, the blending happens as though the source image was actually 4x4
> > pixels instead of 2x2 where the edge pixels are all set to an RGBA value
> > of 0, 0, 0, 0. It then draws that scaled

Re: [whatwg] Canvas context.drawImage clarification

2009-07-10 Thread Gregg Tavares
On Thu, Jul 9, 2009 at 6:25 PM, Oliver Hunt  wrote:

>  Inconsistency doesn't lead to no one depending on a behaviour, it just
>> means sites only work in one browser.  Your suggestion would result in sites
>> being broken in all browsers -- the only options from here on out are either
>> nothing gets drawn (as in gecko and presto), or the destination is
>> normalised (as in webkit).
>>
>
> Or making it consistent when the DOCTYPE is set to something.
>
> API behaviour is not effected by the DOCTYPE, only parsing.  Unfortunately
> you can't change a DOM API that has existed for years to something
> contradictory.
>

I guess I don't understand. I'm new to the list so forgive me but I thought
HTML5 was still a working draft and that the canvas tag was part of that
draft. How is a draft immutable?

Also, I don't follow the logic here: "Your suggestion would result in sites
being broken in all browsers -- the only options from here on out are either
nothing gets drawn (as in gecko and presto), or the destination is
normalised (as in webkit)."

I don't see how breaking some very small percentage of WebKit sites, or
breaking some very small percentage of Gecko/Presto sites, is better than
breaking some very small percentage of sites in all of them to make the
function useful and the spec specific.

(1) The number of sites that use canvas is exceedingly small at this point,
and the number of those that count on negative width and height behavior
being one way or the other is an exceedingly small percentage of those.

(2) Breaking X% of apps one way is much the same as breaking Y% the other
way. So what if X > Y when both X and Y are less than 0.01% of websites?

Consistency and usefulness should win in this case. There is the chance to
make the spec unambiguous and more useful before canvas becomes widely used.



>
>  Image scaling is implementation dependent everywhere else, why would it
>>> be spec defined in the case of canvas?
>>
>>
>> There are 2 issues here I brought up
>>
>> 1) What happens at the edges.
>>
>> The results are VASTLY different now. Unless this works consistently it
>> would be hard to make canvas graphics work across browsers and expect to get
>> reproducible results.  The 2x2 pixel example I gave, one browser ends up
>> scaling with translucency even though there is no translucent pixels in the
>> source image.
>>
>>
>> This is just an artifact of scaling, and you agree below that scaling is
>> implementation dependent.
>>
>
> I disagree. When I scale a rectangular opaque image I expect rectangular
> opaque results.  The Firefox implementation does not do this. If I take a
> 1x1 pixel image and attempt to use it to cover up something in another image
> by scaling it it will not cover up that other image. Only the very center
> pixel will be opaque, all other pixels will be some percentage translucent,
> showing whatever was previously drawn on the canvas.  That's a much bigger
> issue than whether the scaled pixels are blocky or smooth.
>
> If you believe that to be the case then you can always file a bug at
> bugs.webkit.org .
>

I can't claim it's a bug if the spec doesn't define what the correct
behavior is.

Here's a webpage showing the issue.

http://greggman.com/downloads/examples/canvas-test/test-01/canvas-test-01-results.html



>
> --Oliver
>
>


Re: [whatwg] Canvas context.drawImage clarification

2009-07-09 Thread Gregg Tavares
On Thu, Jul 9, 2009 at 4:28 PM, Oliver Hunt  wrote:

>
> On Jul 9, 2009, at 4:19 PM, Gregg Tavares wrote:
>
>
>
> On Thu, Jul 9, 2009 at 4:11 PM, Oliver Hunt  wrote:
>
>>  I'd like to make a passionate plea that the spec say "implementations
>>> must
>>> support negative widths and negative heights and draw the image backward
>>> effectively flipping the result".
>>>
>>
>> We'd need to be fairly sure that such a change would not break existing
>> content -- this is a change that would result in substantially different
>> rendering in some scenarios.
>>
>
> Given that it's inconsistent in the various browsers it's hard to see how
> this would break something since it's broken in 2 browsers one way or the
> other currently.
>
>
> Inconsistency doesn't lead to no one depending on a behaviour, it just
> means sites only work in one browser.  Your suggestion would result in sites
> being broken in all browsers -- the only options from here on out are either
> nothing gets drawn (as in gecko and presto), or the destination is
> normalised (as in webkit).
>

Or making it consistent when the DOCTYPE is set to something.


>
> Image scaling is implementation dependent everywhere else, why would it be
>> spec defined in the case of canvas?
>
>
> There are 2 issues here I brought up
>
> 1) What happens at the edges.
>
> The results are VASTLY different now. Unless this works consistently it
> would be hard to make canvas graphics work across browsers and expect to get
> reproducible results.  The 2x2 pixel example I gave, one browser ends up
> scaling with translucency even though there is no translucent pixels in the
> source image.
>
>
> This is just an artifact of scaling, and you agree below that scaling is
> implementation dependent.
>

I disagree. When I scale a rectangular opaque image I expect rectangular
opaque results.  The Firefox implementation does not do this. If I take a
1x1 pixel image and attempt to use it to cover up something in another image
by scaling it it will not cover up that other image. Only the very center
pixel will be opaque, all other pixels will be some percentage translucent,
showing whatever was previously drawn on the canvas.  That's a much bigger
issue than whether the scaled pixels are blocky or smooth.





>
>
> 2) How it does the scaling.
>
> I agree that it being implementation dependent is probably fine.
>
>
>
>>
>> --Oliver
>>
>>
>>
>
>


Re: [whatwg] Canvas context.drawImage clarification

2009-07-09 Thread Gregg Tavares
On Thu, Jul 9, 2009 at 4:11 PM, Oliver Hunt  wrote:

> I'd like to make a passionate plea that the spec say "implementations must
>> support negative widths and negative heights and draw the image backward
>> effectively flipping the result".
>>
>
> We'd need to be fairly sure that such a change would not break existing
> content -- this is a change that would result in substantially different
> rendering in some scenarios.
>

Given that it's inconsistent in the various browsers it's hard to see how
this would break something since it's broken in 2 browsers one way or the
other currently.



>
>
>  Also, I'd like to suggest that a widths and heights of 0 for source should
>> be
>> valid as well as rectangles outside of the source also be valid and that
>> this
>> part of the spec.
>>
>> "If the source rectangle is not entirely within the source image, or if
>> one of
>> the sw or sh arguments is zero, the implementation must raise an
>> INDEX_SIZE_ERR
>> exception."
>>
>> be changed to reflect that.
>>
> The issues of when exceptions should be thrown in the canvas API have been
> discussed repeatedly on this list, you should search the archives and see if
> there are any arguments you can make that have not already been made. (I
> note that i am also all for exceptions not being thrown in many of these
> cases)


>
>  The next issue related to drawImage is that the spec does not specify how
>> to
>> filter an image when scaling it. Should it use bi-linear interpolation?
>> Nearest
>> Neighbor? Maybe that should stay implementation dependent?
>>
> Image scaling is implementation dependent everywhere else, why would it be
> spec defined in the case of canvas?


There are 2 issues here I brought up

1) What happens at the edges.

The results are VASTLY different now. Unless this works consistently it
would be hard to make canvas graphics work across browsers and expect to get
reproducible results.  The 2x2 pixel example I gave, one browser ends up
scaling with translucency even though there is no translucent pixels in the
source image.

2) How it does the scaling.

I agree that it being implementation dependent is probably fine.



>
> --Oliver
>
>
>


[whatwg] Canvas context.drawImage clarification

2009-07-09 Thread Gregg Tavares
Hello, I'm new to the list so I hope this is the right place and format.

I've been having a look at the canvas tag API specification and I noticed at
least one ambiguity. (I'm guessing those that have been on the list for a
while are laughing)

The specific ambiguity I'd like to bring up has to do with the several
versions
of a function, context.drawImage. They take width and height values.  The
spec
does not make it clear what is supposed to happen with negative values.

My personal interpretation and preference is that negative values should

(a) be legal and
(b) draw backward, flipping the image.

The specification currently says:

"The source rectangle is the rectangle whose corners are the four points
(sx, sy), (sx+sw, sy), (sx+sw, sy+sh), (sx, sy+sh).

...

The destination rectangle is the rectangle whose corners are the four
points (dx, dy), (dx+dw, dy), (dx+dw, dy+dh), (dx, dy+dh)."

Well, simple math would suggest that if sx = 10, and sw = -5 then it still
defines a valid rectangle.

Unfortunately since the spec is ambiguous the current browsers that
implement
this all do it differently. Firefox and Opera draw nothing with a negative
width or height. Safari and Chrome draw to the rectangle defined by negative

widths and heights but do not flip the image.

I'd like to make a passionate plea that the spec say "implementations must
support negative widths and negative heights and draw the image backward
effectively flipping the result".

Coming from a graphics and game development background we use the ability to
flip images all the time. I know that I can achieve similar results by using a
transform matrix but still it would be much easier to just make negative
widths
and heights specifically part of the spec.
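The transform-matrix workaround mentioned above would look something like
this (a sketch assuming an existing 2D context `ctx`; the helper name is my
own):

```javascript
// Draw an image horizontally flipped using a transform instead of a
// negative destination width. After scale(-1, 1), x coordinates are
// mirrored, so the destination x must be re-expressed as -(dx + dw).
function drawFlippedX(ctx, img, dx, dy, dw, dh) {
  ctx.save();
  ctx.scale(-1, 1);
  ctx.drawImage(img, -(dx + dw), dy, dw, dh);
  ctx.restore();
}
```

It works, but every application has to rederive the `-(dx + dw)` bookkeeping
itself, which is the extra friction a spec-defined negative width would
remove.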

Also, I'd like to suggest that a widths and heights of 0 for source should
be
valid as well as rectangles outside of the source also be valid and that
this
part of the spec.

"If the source rectangle is not entirely within the source image, or if one
of
the sw or sh arguments is zero, the implementation must raise an
INDEX_SIZE_ERR
exception."

be changed to reflect that.

Coming from a graphics background I see no reason why if I let my user size
an
image in a canvas I should have to special case a width or height of zero.
Just
draw nothing if the width or height is zero. Similarly, if I was to provide
a UI
to let a user choose part of the source to copy to the dest and I let them
define
a rectangle on the source and drag it such that all or part of it is off the
source I see no reason why I should have to do extra math in my application
to
make that work when simple clipping of values in drawImage would make all
that
extra work required by each app disappear.

Another way to look at that is in OpenGL, if my texture coordinates are set
less than 0 or greater than 1 the GPU does not fail. Why should drawImage
act any differently?

The next issue related to drawImage is that the spec does not specify how to
filter an image when scaling it. Should it use bi-linear interpolation?
Nearest
Neighbor? Maybe that should stay implementation dependent? On top of that
the spec
does not say what happens at the edges and the different browsers are doing
different things. To give you an example, if you take a 2x2 pixel image and
scale it to 256x256 using drawImage. All the major browsers that currently
support the canvas tag will give you an image where the center of each pixel
is
around center of each 128x128 corner of the 256x256 result. The area inside
the
area defined by those 4 points is rendered very similar on all 4 browsers.
The
area outside though, the edge, is rendered very differently. On Safari,
Chrome
and Opera the colors of the original pixels continue to be blended all the
way to
the edge of the 256x256 area. On Firefox though, the blending happens as
though
the source image was actually 4x4 pixels instead of 2x2 where the edge
pixels
are all set to an RGBA value of 0, 0, 0, 0. It then draws that scaled image
as though the source rectangle was sx = 1, sy = 1, sw = 2, sh = 2 so that
you
get a progressively more and more translucent color towards the edge of the
rectangle.

I don't know which is right, but with low resolution source images the 2 give
vastly different results.

Could these ambiguities be clarified in the spec?
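The edge difference described above amounts to two texel addressing policies
at the image border: extending the edge color (the Safari/Chrome/Opera
behavior) versus reading transparent black outside the image (the Firefox
behavior). A minimal single-channel sketch, my own illustration rather than
anything from the spec:

```javascript
// A 2x2 single-channel "image"; img[y][x].
const img = [[255, 0], [0, 255]];

// Policy A: clamp coordinates to the image, extending the edge color.
function sampleClamp(x, y) {
  const cx = Math.min(Math.max(x, 0), 1);
  const cy = Math.min(Math.max(y, 0), 1);
  return img[cy][cx];
}

// Policy B: texels outside the image read as transparent black (0).
function sampleBorder(x, y) {
  return x >= 0 && x <= 1 && y >= 0 && y <= 1 ? img[y][x] : 0;
}

// Inside the image both policies agree:
sampleClamp(0, 0) === sampleBorder(0, 0); // 255 either way
// At the border they diverge, which is what shows up after scaling:
sampleClamp(-1, 0);  // 255: edge color extended to the rectangle edge
sampleBorder(-1, 0); // 0: blends toward transparent near the edge
```

When the scaled output interpolates across these border samples, policy A
stays opaque to the edge while policy B fades out, matching the two
renderings described for the 2x2-to-256x256 case.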

