Re: [whatwg] I believe source rectangles for HTML5 Canvas drawImage are specified incorrectly

2012-09-10 Thread Vladimir Vukicevic
This is pretty tricky to get right -- there's a general graphics
problem underlying this case.  There are valid use cases both for
sampling outside the source rectangle and for not doing so, as well as
implementation issues with doing source rectangle clamping.  For
example, should you be able to take a source image, draw it scaled up
using 4 rectangles (one for each quadrant), and have the result be
identical to doing it in one draw?  Or take any arbitrary subimage
(for example, for efficient updates of some destination) and draw it
in?
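
A minimal sketch of the quadrant case (ctx and img are assumed to be a
2D context and a loaded 100x100 image; sizes are illustrative):

  // One draw: scale the whole 100x100 image up to 200x200.
  ctx.drawImage(img, 0, 0, 100, 100, 0, 0, 200, 200);

  // Four draws, one 50x50 source rectangle per quadrant.  If the
  // filter clamps to each source rectangle, texels along the interior
  // seams are sampled differently than in the single draw above.
  ctx.drawImage(img,  0,  0, 50, 50,   0,   0, 100, 100);
  ctx.drawImage(img, 50,  0, 50, 50, 100,   0, 100, 100);
  ctx.drawImage(img,  0, 50, 50, 50,   0, 100, 100, 100);
  ctx.drawImage(img, 50, 50, 50, 50, 100, 100, 100, 100);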

I do agree that the spec needs some clarity here, but I don't think
that simply stating that drawImage must always sample within the
source rectangle is the right thing.  At best, I think a new mode
toggle or flag would be needed to let you choose the behaviour.

Additionally, I think there's a related bug filed from a while ago about
defining how to sample pixels that are outside of the source bounds -- do
you clamp to edge, do you sample transparent black, etc.

- Vlad

On Mon, Aug 20, 2012 at 10:09 AM, Justin Novosad ju...@chromium.org wrote:

 Hi Kevin,

 The same artifact used to be present in Chrome not that long ago. When we
 fixed it, we chose to interpret "original image data" as meaning the part
 of the image data that is within the bounds of the source rectangle.
 It also makes more sense to do it that way. I agree that the spec could
 use more clarity here.
 I support your case that it is preferable for the filtering algorithm to
 clamp to the border of the source rectangle rather than to the border of
 the source image.  This is essential for implementing sprite maps
 without having to waste pixels padding the borders between tiles.
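
 A minimal sketch of the sprite-map pattern at stake (ctx and sheet are
 assumed; the 16x16 tile size is illustrative):

   // Draw the 16x16 tile at (col, row) from a sprite sheet, scaled 4x.
   // If the filter may sample outside the source rectangle, edge
   // texels of neighbouring tiles bleed in unless every tile carries a
   // duplicated one-pixel border.
   function drawTile(ctx, sheet, col, row, dx, dy) {
     ctx.drawImage(sheet, col * 16, row * 16, 16, 16, dx, dy, 64, 64);
   }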

  -Justin Novosad

 On Mon, Aug 20, 2012 at 9:38 AM, Kevin Gadd kevin.g...@gmail.com wrote:

  Hi, I've been digging into an inconsistency between various browsers'
  Canvas implementations and I think the spec might be allowing
  undesirable behavior here.
 
  The current version of the spec says
  (http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-drawimage):
 
  If the original image data is a bitmap image, the value painted at a
  point in the destination rectangle is computed by filtering the
  original image data. The user agent may use any filtering algorithm
  (for example bilinear interpolation or nearest-neighbor). When the
  filtering algorithm requires a pixel value from outside the original
  image data, it must instead use the value from the nearest edge pixel.
  (That is, the filter uses 'clamp-to-edge' behavior.)
 
  While clamp-to-edge is desirable, the way this is specified means that
  it only ever clamps to the edges of the source bitmap, not to the
  source rectangle. That means that attempting to do the equivalent of
  CSS sprites or video-game-style 'tile sets' - where a single source
  image contains many smaller images - is not possible, because the spec
  allows implementations to read pixels from outside the source
  rectangle.
 
  Unfortunately, at present Internet Explorer and Firefox both read
  pixels from outside the source rectangle, as demonstrated by this test
  case:
  https://dl.dropbox.com/u/1643240/canvas_artifacts.html
  Worse still, in implementations with imageSmoothingEnabled available,
  turning off image smoothing is not sufficient to eliminate the
  artifacts.
 
  Google Chrome appears to implement this the way you would probably
  want it to work - by clamping to the edges of the source rectangle,
  instead of the source image. I can't think of a good reason to prefer
  the current behavior over what Chrome does, and I haven't been able to
  find a reliable way to compensate for the current behavior.
 
  Thanks,
  -kg
 



[whatwg] [canvas] getContext multiple contexts

2010-04-29 Thread Vladimir Vukicevic

Hey folks,

A while ago questions came up in the WebGL WG about using a canvas with 
multiple rendering contexts, and synchronization issues that arise 
there.  Here's our suggested change to getContext.  It essentially 
allows for multiple contexts but adds no synchronization primitives 
other than the requirement that rendering must be visible to all 
contexts (that is, that they're rendered to the same destination space).


This also adds the 'attributes' parameter which can customize the 
context that's created, as defined by the context itself.  WebGL has its 
own context attributes object, and I'd suggest that the 2D context gain 
at least an attribute to specify whether the context should be opaque or 
not; but that's a separate suggestion from the below text.


- Vlad

  object getContext(in DOMString contextId, in optional any attributes)

  A canvas may be rendered to using one or more contexts, each named by
  a string context ID. For each canvas, there is a set of zero or more
  active contexts. The getContext() method is used to obtain a
  particular rendering context for the canvas.

  'contextId' must be a string naming a canvas rendering context to be
  returned. For example, this specification defines the '2d' context,
  which, if requested, will return either a reference to an object
  implementing CanvasRenderingContext2D or null, if a 2D context cannot
  be created at this time. Other specifications may define their own
  contexts, which would return different objects.

  The optional 'attributes' parameter must be either unspecified or an
  object specific to the context being requested. An unspecified value
  indicates a default set of attributes, as defined by the context
  ID. Unknown attributes must be ignored by the context.

  If getContext() is called with a context ID that the implementation
  does not support, it must return null.

  If there are no active contexts for the canvas, the implementation
  must create the specified context for the canvas.

  If a context ID that is already an active context for the canvas is
  requested, then any passed attributes must be ignored, and a reference
  to the existing context object must be returned.

  If there are one or more active contexts and a context ID that is not
  currently active is requested, it is up to the implementation to
  determine whether the requested context can be used simultaneously
  with all currently active canvas contexts. If simultaneous rendering
  with the requested context is not possible, getContext() must return
  null. Otherwise the implementation must create the specified context
  for the canvas.

  Certain context types may not support all combinations of
  context-specific attributes. If an unsupported set of attributes is
  requested during context creation, but the context ID is otherwise
  compatible with all existing contexts, then the implementation must
  create the new context with a set of attributes that best satisfies
  those requested. The caller is responsible for using context-specific
  APIs to determine whether the attributes used to create the context
  satisfy the requirements of the caller's code.

  If a new context is successfully created, a reference to an object
  implementing the context API is returned and the new context is added
  to the list of active contexts for the canvas.

  If multiple rendering contexts are active, they all render to the same
  canvas bitmap; they are not layered or otherwise isolated. Changes
  made to the canvas bitmap with one context must be immediately visible
  to any other active contexts on the canvas. The implementation must
  manage synchronization issues associated with rendering with different
  contexts to the same canvas.  Supporting different rendering contexts
  within the same canvas is not recommended due to the significant cost
  of synchronization.  Instead, each context API is encouraged to support
  generic interoperability with other canvases. For example, the 2D
  canvas API provides a drawImage method that can render the contents of
  another canvas.
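
A hedged sketch of how the proposed signature might be used (the
attribute name and the second context ID below are illustrative, not
defined by this text):

  // First request creates the context; the attributes object is
  // interpreted by the context itself.
  var ctx = canvas.getContext('2d', { opaque: true });

  // Requesting an already-active context ID ignores the attributes
  // and returns the existing object.
  var same = canvas.getContext('2d');       // same === ctx

  // A different context ID may return null if it cannot render to
  // the same bitmap as the active '2d' context.
  var other = canvas.getContext('some-3d-context');
  if (other === null) {
    // simultaneous rendering with '2d' is not supported
  }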


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-15 Thread Vladimir Vukicevic

On 3/15/2010 4:22 AM, Maciej Stachowiak wrote:


On Mar 15, 2010, at 3:46 AM, Philip Taylor wrote:

On Mon, Mar 15, 2010 at 7:05 AM, Maciej Stachowiak m...@apple.com 
wrote:

Copying from one canvas to another is much faster than copying to/from
ImageData. To make copying to a Worker worthwhile as a responsiveness
improvement for rotations or downscales, in addition to the
OffscreenCanvas proposal we would need a faster way to copy image data
to a Worker. One possibility is to allow an OffscreenCanvas to be
copied to and from a background thread. It seems this would be much
much faster than copying via ImageData.


Maybe this indicates that implementations of getImageData/putImageData
ought to be optimised? e.g. do the expensive multiplications and
divisions in the premultiplication code with SIMD. (A seemingly
similar thing at http://bugzilla.openedhand.com/show_bug.cgi?id=1939
suggests SSE2 makes things 3x as fast). That would avoid the need to
invent new API, and would also benefit anyone who wants to use
ImageData for other purposes.


It might be possible to make getImageData/putImageData faster than 
they are currently; certainly the browsers at the slower end of the 
ImageData performance spectrum must have a lot of headroom. But they 
probably also have room to optimize drawImage. (Looking back at my 
data I noticed that getImageData + putImageData in Safari is about as 
fast as or faster than two drawImage calls in the other browsers 
tested.)


In the end, though, I doubt that it's possible for getImageData or 
putImageData to be as fast as drawImage, since drawImage doesn't have 
to do any conversion of the pixel format.


This is true -- getImageData/putImageData unfortunately saddled us with 
two performance-killing bits:


1) clamping on assignment.  Not so bad, but doesn't help.

2) Unpremultiplied alpha.  This is the biggest chunk.  We have more 
optimized code in nightly builds of Firefox now that uses a lookup table 
and gets a pretty significant speedup for this part of put/get, but it's 
not going to be as fast as drawImage.
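
A rough sketch of the lookup-table idea -- not the actual Firefox
code: precompute the unpremultiply results once, so the per-pixel get
path is a table load instead of a multiply and a divide.

  // unpremul[a * 256 + v] undoes premultiplication for alpha a > 0;
  // premultiplied data guarantees v <= a, so only that range is filled.
  var unpremul = new Array(256 * 256);
  for (var a = 1; a < 256; a++)
    for (var v = 0; v <= a; v++)
      unpremul[a * 256 + v] = Math.round(v * 255 / a);

  function unpremultiply(v, a) {
    return a === 0 ? 0 : unpremul[a * 256 + v];
  }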


Also, canvas is often (or can be) backed by actual hardware surfaces, 
and drawImage from one to another is going to be much faster than 
reading the data into system memory and then drawing from there back to 
the hardware surface.


If we wanted to support this across workers (and I think it would be 
helpful to figure out how to do so), something like saying that if a 
canvas object was passed (somehow) between workers, it would be a copy 
-- and internally it could be implemented using copy-on-write semantics.


- Vlad


Re: [whatwg] Canvas performance issue: setting colors

2008-11-10 Thread Vladimir Vukicevic

On 10/3/08 4:37 PM, Oliver Hunt wrote:

<thinking out loud>
Just had a thought (no idea how original) -- how about if fillStyle were
able to accept a 3- or 4-number array? e.g. fillStyle = [0, 0.3, 0.6, 1.0]?

That might work well if people are using arrays as vectors/colours.
</thinking out loud>


I actually have a patch sitting around that starts to do this -- I never 
got around to proposing it to the list.  Parsing CSS-style color names 
is certainly flexible, but given that 99.9% of uses are going to be rgb 
or rgba, I'd agree that [r, g, b] or [r, g, b, a] should be accepted as 
colors (with components in the range 0.0 to 1.0) wherever colors are 
accepted in the canvas API.
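
A sketch of the proposed usage (the array form is a proposal, not part
of the spec; the string form is today's equivalent):

  // Proposed: components in the 0.0..1.0 range, no string parsing.
  ctx.fillStyle = [0.0, 0.3, 0.6];         // opaque rgb
  ctx.strokeStyle = [0.0, 0.3, 0.6, 0.5];  // rgba, 50% alpha

  // The current string equivalent of the rgba case:
  ctx.strokeStyle = 'rgba(0, 77, 153, 0.5)';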


   - Vlad


Re: [whatwg] canvas shadow compositing oddities

2008-08-04 Thread Vladimir Vukicevic


On Aug 4, 2008, at 2:29 PM, Eric Butler wrote:


Philip Taylor wrote:
On Sun, Jul 27, 2008 at 8:06 PM, Eric Butler [EMAIL PROTECTED]  
wrote:



[...]
However, following the spec's drawing model, there are a few
operators that behave rather unexpectedly if the shadow color is left
at its default value. For instance, since 'A in B' always results in
transparency if either A or B is fully transparent, 'source-in' will
always simply clear the clipping region to fully transparent no matter
what the source and destination are.



Oops - that does seem quite broken. (It's probably my fault - I didn't
notice that problem when I was looking at how shadows should work...)


The need to be able to disable shadows explicitly seems clear. But I
also believe that the spec should provide for a means to disable normal
drawing and only draw shadows, to increase the usefulness of shadows.

As it stands, if you draw with shadows, you'll end up getting some of
the shadows drawn on top of some of the actual shapes. But perhaps the
developer wants to have all shadows behind all shapes for a particular
set of shapes. The only way to accomplish that would be to create a
second canvas, do all the drawing without shadows on that, then draw
the canvas with its shadow back to the original, which seems cumbersome
to use and is terribly inefficient.


I think that'll cause problems as well -- for example, let's say you
had two overlapping paths that you wanted to draw a shadow behind.
The two paths are both solid and are supposed to be rendered as a
single shape to the user.  If you drew them separately with shadows,
as it stands now, the shadows would end up adding and would be denser
in the overlap areas, which isn't what the author would intend.  I
would suggest:

- special-case opacity 0, (0,0) offset, and 0 blur radius as 'shadows
off', as Oliver suggested, to preserve current usage


- if shadows aren't off, draw them normally -- one shadow per drawing  
operation


- go the whole way and add beginLayer/endLayer, akin to
CGContextBeginTransparencyLayer[WithRect]/EndTransparencyLayer.  Could
also call it pushGroup/popGroup.  As a side benefit, this would
provide a simple way to implement double-buffered rendering without
needing to use two canvases.
(http://developer.apple.com/documentation/GraphicsImaging/Reference/CGContext/Reference/reference.html#//apple_ref/c/func/CGContextBeginTransparencyLayer)
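
A hedged sketch of how such a (hypothetical) layer API could give one
shadow for a compound shape:

  // beginLayer/endLayer are hypothetical, per the suggestion above.
  ctx.shadowColor = 'rgba(0, 0, 0, 0.5)';
  ctx.shadowOffsetX = 4;
  ctx.shadowOffsetY = 4;

  ctx.beginLayer();              // drawing now composites into a group
  ctx.fillRect(10, 10, 60, 60);
  ctx.fillRect(40, 40, 60, 60);  // overlaps the first rect
  ctx.endLayer();                // group drawn once, with one shadow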


   - Vlad



Re: [whatwg] Audio canvas?

2008-07-23 Thread Vladimir Vukicevic


On Jul 16, 2008, at 11:25 AM, Dave Singer wrote:


At 20:18  +0200 16/07/08, Dr. Markus Walther wrote:


get/setSample(samplePoint t, sampleValue v, channel c).

For the sketched use case - in-browser audio editor - functions on
sample regions from {cut/add silence/amplify/fade} would be nice
and were mentioned as an extended possibility, but that is optional.


I don't understand the reference to MIDI, because my use case has
no connection to musical notes; it's about arbitrary audio data, on
which MIDI has nothing to say.


get/set sample are 'drawing primitives' that are the equivalent of  
get/setting a single pixel in images.  Yes, you can draw anything a  
pixel at a time, but it's mighty tedious.  You might want to lay  
down a tone, or some noise, or shape the sound with an envelope, or  
do a whole host of other operations at a higher level than
sample-by-sample, just as canvas supports drawing lines, shapes, and
so on.  That's all I meant by the reference to MIDI.


I think an interesting approach for an audio canvas would be to allow
you both to manipulate audio data directly (through a getSampleData/
putSampleData type interface) and to build up an audio filter graph,
both with some predefined filters/generators and with the ability to
write filters in javascript.  Would make for some interesting
possibilities, esp. if it's able to take audio as input.
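
A loose sketch of what that might look like (every name below is
hypothetical):

  var actx = audioCanvas.getContext('audio');  // hypothetical context ID

  // Direct manipulation, by analogy with get/putImageData:
  var buf = actx.getSampleData(0 /* channel */, 0, 44100);  // 1s @ 44.1kHz
  for (var i = 0; i < buf.length; i++)
    buf[i] *= 0.5;                             // attenuate
  actx.putSampleData(buf, 0, 0);

  // Or a filter graph: a generator feeding a javascript-defined filter.
  var tone = actx.createToneGenerator(440);
  tone.connect(actx.createScriptFilter(function (samples) {
    // arbitrary per-buffer processing
  }));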


   - Vlad



Re: [whatwg] createImageData

2008-06-02 Thread Vladimir Vukicevic


Sorry it took me a bit to respond here... so, ok, based on the  
discussion, I'd suggest:


- user-created ImageData-like objects should be supported, e.g. with  
language such as:


The first argument to the method must be an ImageData object returned  
by createImageData(), getImageData(), or an object constructed with  
the necessary properties by the user.  If the object was constructed  
by the user, its width and height dimensions are specified in device  
pixels (which may not map directly to CSS pixels).  If null or any  
other object is given that does not present the ImageData interface,  
then the putImageData() method must raise a TYPE_MISMATCH_ERR exception.


- ImageData objects returned by createImageData or getImageData should  
behave as currently specified; that is, they should explicitly clamp  
on pixel assignment.


That gives users a choice over which approach they want to take, and  
whether they want clamping or not.
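
A sketch of the two paths this would allow (sizes illustrative):

  // Implementation-created: clamps on pixel assignment, and the UA
  // knows the backing-store resolution.
  var imgData = ctx.createImageData(16, 16);
  imgData.data[0] = 300;   // clamped; stored as 255

  // User-constructed: width/height are device pixels, no clamping.
  var custom = { width: 16, height: 16, data: new Array(16 * 16 * 4) };
  for (var i = 0; i < custom.data.length; i++) custom.data[i] = 255;
  ctx.putImageData(custom, 0, 0);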


How's that sound?

- Vlad



[whatwg] [canvas] imageRenderingQuality property

2008-06-02 Thread Vladimir Vukicevic


I'd like to propose adding an imageRenderingQuality property on the
canvas 2D context to allow authors to choose speed vs. quality when
rendering images (especially transformed ones).  This is modeled on
the SVG image-rendering property, at
http://www.w3.org/TR/SVG/painting.html#ImageRenderingProperty :


  attribute string imageRenderingQuality;

'auto' (default): The user agent shall make appropriate tradeoffs to  
balance speed and quality, but quality shall be given more importance  
than speed.


'optimizeQuality': Emphasize quality over rendering speed.

'optimizeSpeed': Emphasize speed over rendering quality.

No specific image sampling algorithm is specified for any of these
values, with the exception that, at a minimum, nearest-neighbour
resampling should be used.  One alternative is to specify 'best',
'good', and 'fast', with 'good' being the default, as opposed to the
SVG names; I think those names are more descriptive, but there might
be value in keeping the names consistent with SVG, especially if that
property bubbles up into general CSS usage.
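
Intended usage would be along these lines (the property itself is a
proposal, not part of the current spec; tiles, photo, x, and y are
assumed):

  ctx.imageRenderingQuality = 'optimizeSpeed';    // e.g. a tile-based game
  ctx.drawImage(tiles, 0, 0, 16, 16, x, y, 64, 64);

  ctx.imageRenderingQuality = 'optimizeQuality';  // e.g. a one-time downscale
  ctx.drawImage(photo, 0, 0, 2048, 1536, 0, 0, 256, 192);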


- Vlad



Re: [whatwg] [canvas] imageRenderingQuality property

2008-06-02 Thread Vladimir Vukicevic


Sure; bilinear filtering is slower than nearest neighbour sampling,
and in many cases the app author would like to be able to decide that
tradeoff (or, at least, to be able to say "I want this to go as fast
as possible, regardless of quality").  Some apps might also render to
a canvas just once, and would prefer to do it at the highest quality
filtering available even if it's more expensive than the default.


- Vlad

On Jun 2, 2008, at 12:25 PM, Oliver Hunt wrote:
Um, could you actually give some kind of reasoning for these?  I am  
not aware of any significant performance issues in Canvas that  
cannot be almost directly attributed to JavaScript itself rather  
than the canvas.


--Oliver

On Jun 2, 2008, at 12:19 PM, Vladimir Vukicevic wrote:



I'd like to propose adding an imageRenderingQuality property on the  
canvas 2D context to allow authors to choose speed vs. quality when  
rendering images (especially transformed ones).  This is modeled on  
the SVG image-rendering property, at http://www.w3.org/TR/SVG/painting.html#ImageRenderingProperty :


attribute string imageRenderingQuality;

'auto' (default): The user agent shall make appropriate tradeoffs  
to balance speed and quality, but quality shall be given more  
importance than speed.


'optimizeQuality': Emphasize quality over rendering speed.

'optimizeSpeed': Emphasize speed over rendering quality.

No specific image sampling algorithm is specified for any of these
values, with the exception that, at a minimum, nearest-neighbour
resampling should be used.  One alternative is to specify 'best',
'good', and 'fast', with 'good' being the default, as opposed to the
SVG names; I think those names are more descriptive, but there might
be value in keeping the names consistent with SVG, especially if that
property bubbles up into general CSS usage.


  - Vlad







Re: [whatwg] [canvas] imageRenderingQuality property

2008-06-02 Thread Vladimir Vukicevic


Yeah, I agree -- I thought that there was some plan somewhere to
uplift a bunch of these SVG CSS properties into general usage?  I know
that Gecko uplifted text-rendering; we should figure out what else
makes sense to pull up.  (If image-rendering were uplifted, it would
apply to canvas, for the scaling/transformation of the canvas
element itself as opposed to the canvas rendering content.)


- Vlad

On Jun 2, 2008, at 2:26 PM, David Hyatt wrote:
I like the idea of this property.  I actually would love to see the  
SVG property applied to HTML img as well. :)


dave

On Jun 2, 2008, at 4:15 PM, Vladimir Vukicevic wrote:



Sure; bilinear filtering is slower than nearest neighbour sampling,
and in many cases the app author would like to be able to decide
that tradeoff (or, at least, to be able to say "I want this to go
as fast as possible, regardless of quality").  Some apps might also
render to a canvas just once, and would prefer to do it at the
highest quality filtering available even if it's more expensive
than the default.


  - Vlad

On Jun 2, 2008, at 12:25 PM, Oliver Hunt wrote:
Um, could you actually give some kind of reasoning for these?  I  
am not aware of any significant performance issues in Canvas that  
cannot be almost directly attributed to JavaScript itself rather  
than the canvas.


--Oliver

On Jun 2, 2008, at 12:19 PM, Vladimir Vukicevic wrote:



I'd like to propose adding an imageRenderingQuality property on  
the canvas 2D context to allow authors to choose speed vs.  
quality when rendering images (especially transformed ones).   
This is modeled on the SVG image-rendering property, at http://www.w3.org/TR/SVG/painting.html#ImageRenderingProperty :


attribute string imageRenderingQuality;

'auto' (default): The user agent shall make appropriate tradeoffs  
to balance speed and quality, but quality shall be given more  
importance than speed.


'optimizeQuality': Emphasize quality over rendering speed.

'optimizeSpeed': Emphasize speed over rendering quality.

No specific image sampling algorithm is specified for any of these
values, with the exception that, at a minimum, nearest-neighbour
resampling should be used.  One alternative is to specify 'best',
'good', and 'fast', with 'good' being the default, as opposed to the
SVG names; I think those names are more descriptive, but there might
be value in keeping the names consistent with SVG, especially if that
property bubbles up into general CSS usage.


- Vlad











Re: [whatwg] [canvas] imageRenderingQuality property

2008-06-02 Thread Vladimir Vukicevic


On Jun 2, 2008, at 2:39 PM, Oliver Hunt wrote:
That's exactly what i would be afraid of people doing.  If I have a  
fast system why should i have to experience low quality rendering?   
It should be the job of the platform to determine what level of  
performance or quality can be achieved on a given device.  Typically  
such a property would be considered a hint, and as such would  
likely be ignored.


If honouring this property was _required_ rather than being a hint  
you would hit the following problems:
* Low power devices would have a significant potential for poor  
performance if a developer found that their desktop performed well  
so set the requirement to high quality.
* High power devices would be forced to use low quality rendering  
modes when perfectly capable of providing better quality without  
significant performance penalty.
Neither of these apply if the property were just a hint, but now you
have to think about what happens to content that uses this property
in 18 months' time.  You've told the UA to use a low-quality
rendering when it may no longer be necessary, so now the UA has a
choice: either it always obeys the property, meaning lower quality
than is necessary, so that new content performs well; or it ignores
the property, in which case new content performs badly.


If web apps misuse the property, then bugs should be filed on those  
apps that incorrectly use the property, and the app developer should  
fix them.  The web platform shouldn't prevent developers from  
exercising control over how their content is rendered; most  
developers, as you say, probably shouldn't change anything from the  
default 'auto'.  But the capability should be there.  Arbitrarily  
deciding what developers can and can't do isn't interesting from the  
perspective of creating a full-featured platform, IMO.


No matter how fast smooth/bilinear filtering is, something more  
complex is always going to be slower, and something less complex is  
always going to be faster.  If those perf differences are significant  
to the web app, no matter how small, you're going to want to be able  
to have that control.  If they're not, then you should just be using  
'auto' and let the UA handle it.


- Vlad


On Jun 2, 2008, at 2:15 PM, Vladimir Vukicevic wrote:



Sure; bilinear filtering is slower than nearest neighbour sampling,
and in many cases the app author would like to be able to decide
that tradeoff (or, at least, to be able to say "I want this to go
as fast as possible, regardless of quality").  Some apps might also
render to a canvas just once, and would prefer to do it at the
highest quality filtering available even if it's more expensive
than the default.


  - Vlad

On Jun 2, 2008, at 12:25 PM, Oliver Hunt wrote:
Um, could you actually give some kind of reasoning for these?  I  
am not aware of any significant performance issues in Canvas that  
cannot be almost directly attributed to JavaScript itself rather  
than the canvas.


--Oliver

On Jun 2, 2008, at 12:19 PM, Vladimir Vukicevic wrote:



I'd like to propose adding an imageRenderingQuality property on  
the canvas 2D context to allow authors to choose speed vs.  
quality when rendering images (especially transformed ones).   
This is modeled on the SVG image-rendering property, at http://www.w3.org/TR/SVG/painting.html#ImageRenderingProperty :


attribute string imageRenderingQuality;

'auto' (default): The user agent shall make appropriate tradeoffs  
to balance speed and quality, but quality shall be given more  
importance than speed.


'optimizeQuality': Emphasize quality over rendering speed.

'optimizeSpeed': Emphasize speed over rendering quality.

No specific image sampling algorithm is specified for any of these
values, with the exception that, at a minimum, nearest-neighbour
resampling should be used.  One alternative is to specify 'best',
'good', and 'fast', with 'good' being the default, as opposed to the
SVG names; I think those names are more descriptive, but there might
be value in keeping the names consistent with SVG, especially if that
property bubbles up into general CSS usage.


- Vlad











Re: [whatwg] createImageData

2008-05-13 Thread Vladimir Vukicevic


On May 10, 2008, at 4:53 PM, Vladimir Vukicevic wrote:
I would amend the spec to state that if an object is passed to
putImageData with the necessary properties, but without having been
created by create/getImageData beforehand, its dimensions are
always in device pixels.


Some suggested language in section 3.12.11.1.11(!):

Instead of:

If the first argument to the method is null or not an ImageData
object that was returned by createImageData() or getImageData(), then
the putImageData() method must raise a TYPE_MISMATCH_ERR exception.


I would suggest:

The first argument to the method must be an ImageData object returned  
by createImageData(), getImageData(), or an object constructed with  
the necessary properties by the user.  If the object was constructed  
by the user, its width and height dimensions are specified in device  
pixels (which may not map directly to CSS pixels).  If null or any  
other object is given that does not present the ImageData interface,  
then the putImageData() method must raise a TYPE_MISMATCH_ERR exception.


- Vlad



Re: [whatwg] createImageData

2008-05-13 Thread Vladimir Vukicevic


On May 13, 2008, at 2:58 PM, Oliver Hunt wrote:


On May 13, 2008, at 1:53 PM, Vladimir Vukicevic wrote:
The first argument to the method must be an ImageData object  
returned by createImageData(), getImageData(), or an object  
constructed with the necessary properties by the user.  If the  
object was constructed by the user, its width and height dimensions  
are specified in device pixels (which may not map directly to CSS  
pixels).  If null or any other object is given that does not  
present the ImageData interface, then the putImageData() method  
must raise a TYPE_MISMATCH_ERR exception.


If we were to add that we should include a note to indicate that  
using a custom object is not recommended -- Any code that uses a  
custom created object will never benefit from improvements in  
ImageData performance made by the UA.


I'm fine with adding that language (the first part, anyway); something
like "Using a custom object is not recommended, as the UA may be able
to optimize operations using ImageData objects if they were created
via createImageData() or getImageData()."


That said, I still don't believe custom objects should be allowed;
aside from the resolution (which may or may not be relevant) and
performance issues, a custom object with a generic JS array, rather
than an ImageData object, will have different behaviour -- a proper
ImageData will clamp on assignment, and throw in cases where a custom
object won't.


That verification seems odd; doing those checks (clamping, conversion  
to number) on every single pixel assignment is going the wrong  
direction for performance -- you really want to validate everything at  
once.


- Vlad



Re: [whatwg] createImageData

2008-05-13 Thread Vladimir Vukicevic


On May 13, 2008, at 3:37 PM, Oliver Hunt wrote:
That said I still don't believe custom objects should be allowed,  
aside from the resolution (which may or may not be relevant) and  
performance issues, a custom object with a generic JS array,  
rather than an ImageData object will have different behaviour -- a  
proper ImageData will clamp on assignment, and throw in cases that  
a custom object won't.


That verification seems odd; doing those checks (clamping,  
conversion to number) on every single pixel assignment is going the  
wrong direction for performance -- you really want to validate  
everything at once.
But by delaying clamping, etc. you are requiring that the backing
store be an array of boxed values, leading to increased memory
usage, increased indirection, and an increased cost for the final
blit.


That's an implementation detail, I guess...

My experience implementing this in WebKit showed a pure byte array  
backing store was significantly faster than using boxed values.


Faster for which operation, though?  The put, or the actual  
manipulation?  It's a tradeoff, really; if you're doing limited pixel  
manipulation, but lots of putImageData, you can optimize that directly  
by just calling putImageData once to an offscreen canvas and then  
blitting that with drawImage.  If you're doing lots of pixel  
manipulation but only one putImageData, I guess you can use a JS array  
for your intermediate ops to avoid the checking overhead, and set the  
image data pixels all at once (though again paying the checking  
penalty per pixel), but having cheap putImageData.
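
A sketch of that offscreen-canvas pattern (imgData, ctx, x, and y are
assumed):

  // Pay the put/validation cost once, into an offscreen canvas...
  var off = document.createElement('canvas');
  off.width = imgData.width;
  off.height = imgData.height;
  off.getContext('2d').putImageData(imgData, 0, 0);

  // ...then blit cheaply and repeatedly with drawImage.
  ctx.drawImage(off, x, y);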


Throwing the error at putImageData time lets the implementation  
optimize in whatever way is most convenient/performant (either at  
pixel operation time by setting an error bit in the ImageData object  
which is checked by putImageData, or at putImageData time), and is  
(IMO) more flexible.. given that errors are an exceptional case, I  
don't think the spec should force the checking per pixel.


   - Vlad



Re: [whatwg] createImageData

2008-05-13 Thread Vladimir Vukicevic


On May 13, 2008, at 4:10 PM, Oliver Hunt wrote:


My experience implementing this in WebKit showed a pure byte array  
backing store was significantly faster than using boxed values.


Faster for which operation, though?  The put, or the actual  
manipulation?  It's a tradeoff, really; if you're doing limited  
pixel manipulation, but lots of putImageData, you can optimize that  
directly by just calling putImageData once to an offscreen canvas  
and then blitting that with drawImage.  If you're doing lots of  
pixel manipulation but only one putImageData, I guess you can use a  
JS array for your intermediate ops to avoid the checking overhead,  
and set the image data pixels all at once (though again paying the  
checking penalty per pixel), but having cheap putImageData.


Throwing the error at putImageData time lets the implementation  
optimize in whatever way is most convenient/performant (either at  
pixel operation time by setting an error bit in the ImageData  
object which is checked by putImageData, or at putImageData time),  
and is (IMO) more flexible.. given that errors are an exceptional  
case, I don't think the spec should force the checking per pixel.


I found it faster in general across quite a few tests.  I would
argue that if you are using ImageData in a way that leads to you
writing to the same pixel multiple times, you should improve your
algorithms (this is just the generic overpainting issue).


I dunno, some kind of iterative algorithm that you want to visualize  
at random timesteps.  You could keep the output in a separate array  
and copy over when you want to render it.


A very real issue to consider, though, is the case where I've been
very careful to only update those pixels that need to be updated.
If the ImageData is not clamped, etc. on put, then *every* blit must
do a complete revalidation of the entire ImageData data buffer.


Yep, that's true.

I think we need a list of use cases for ImageData, off the top of my  
head i can think of:
* filters -- in general a single write per pixel, potentially  
multiple reads

* Generated images -- still arguably single write per pixel
* I'm not sure what to call this -- but things like 
http://jsmsxdemo.googlepages.com/jsmsx.html

I honestly can't think of something that would (sanely) expect to be
writing multiple times to the same pixel between blits, but I should
note I haven't actively spent any significant time trying to come up
with these.  That said, in all of the above cases the cost of
immediate clamping is technically the same as delaying the clamp,
although it also has the benefit of allowing reduced memory usage.


Yeah, those are all good use cases -- it just seems like requiring  
immediate clamping is basically specifying for a specific  
implementation, when the overall goal is checking for invalid data.   
Specifying that the error should come from putImageData would give  
implementations more flexibility, without limiting error checking.   
(You could argue that it's easier to get a precise error location by  
checking on pixel assignment, but I don't think that the potential  
cost and loss of flexibility is worth it.  Once authors know that they  
have an error in their data, they can take other action to track it  
down.)


- Vlad



Re: [whatwg] createImageData

2008-05-10 Thread Vladimir Vukicevic


On May 9, 2008, at 5:53 PM, Ian Hickson wrote:

On Fri, 9 May 2008, Vladimir Vukicevic wrote:

I don't think the restriction that putImageData must only work with
objects returned by create/get is a good one


This restriction was made because it allows for dramatic (many orders
of magnitude) optimisations. With createImageData(), the use cases for
custom imageData objects should be catered for -- what are the cases
where you would need another solution? (Note that hand-rolling
imageData objects is dangerous since you don't know what resolution
the backing store is using, necessarily, which is something else that
createImageData() solves.)


Well, I don't agree that it's dangerous; canvas resolution
independence has always been hard to pin down, and I still maintain
that it shouldn't be treated any differently than an image is
treated.  Canvas isn't supposed to replace SVG.  However, regardless
of that, I don't think there's a reason to disallow custom-created
data objects, perhaps with a caveat that there may be issues.  We
shipped get/putImageData in Firefox 2, so adding that restriction may
unnecessarily break existing code that uses putImageData with a
hand-constructed ImageData object.  I would amend the spec to state
that if an object is passed to putImageData with the necessary
properties, but without having been created by create/getImageData
beforehand, its dimensions are always in device pixels.


One problem with the desired goal of resolution independence is that  
it only really makes sense if the target resolution is an integer  
multiple of a CSS pixel.  For example, with a 144dpi output device,  
that's exactly 1.5 times CSS resolution.  If I call createImageData  
with dimensions 10,10, I would get an ImageData object with width 15  
and height 15.  What do I get if I call it with 11,11 though?  That's  
16.5 device pixels, and you're going to lose data either way you go,  
because at putImageData time you're going to get seams no matter what  
direction you round.  This can maybe be solved with language in the  
spec that specifies that a canvas should use a different ratio of CSS  
pixels to device pixels only if one is an integer multiple of the  
other, but that seems like an odd limitation (and it still requires  
the implementation to decide what to do if a clean ratio isn't  
possible).


Another approach would be to not try to solve this in canvas at all,  
and instead specify that by default, all canvas elements are 96dpi,  
and provide authors a way to explicitly override this -- then using a  
combination of CSS Media Queries and other CSS, the exact dpi desired  
could be specified.  (You can sort of do this today, given that the
canvas width/height attributes are in CSS pixels, and that if CSS
dimensions are present a canvas is scaled like an image... so canvas
{ width: 100px; height: 100px; } ... <canvas width=200 height=200/>
would give a 192dpi canvas today, no?)
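
Spelled out (assuming the UA scales a CSS-sized canvas like an image):

  var c = document.createElement('canvas');
  c.width = 200;              // backing store: 200x200 pixels
  c.height = 200;
  c.style.width = '100px';    // displayed at 100x100 CSS pixels, i.e.
  c.style.height = '100px';   // 2x density -- 192dpi at a 96dpi baseline
  document.body.appendChild(c);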



but it would be good to have some way to mark sections of the spec as
stable/unstable --


I've gone through and added annotations for each of the canvas
sections to distinguish the stable parts from the unstable parts. Does
that work?


otherwise someone's liable to take a snapshot and implement it, and
then have it change under them if a portion is still in flux.


In general, the spec is unlikely to change significantly _before_ the
first implementation. We get more feedback from the first
implementation of anything than from people just looking at the spec.
I agree that the first implementation should know what it's getting
itself into, though. :-)


Well, it depends what you mean by spec -- I think that what gets put  
down as the initial spec is likely to change significantly from when  
the feature is first proposed to where it's added to the spec; I agree  
that there would be more feedback after a first implementation, but I  
don't think that means that the first proposal-to-spec
discussion/feedback period should be skipped.


The annotations do help make it clear what's in what state though,  
thanks!


- Vlad



Re: [whatwg] createImageData

2008-05-10 Thread Vladimir Vukicevic


On May 10, 2008, at 5:44 PM, Oliver Hunt wrote:


On May 10, 2008, at 4:53 PM, Vladimir Vukicevic wrote:

Another approach would be to not try to solve this in canvas at  
all, and instead specify that by default, all canvas elements are  
96dpi, and provide authors a way to explicitly override this --  
then using a combination of CSS Media Queries and other CSS, the  
exact dpi desired could be specified.  (You can sort of do this  
today, given that the canvas width/height attributes are in CSS  
pixels, and that if CSS dimensions are present a canvas is scaled  
like an image... so canvas { width: 100px; height: 100px; } ...  
<canvas width=200 height=200/> would give a 192dpi canvas
today, no?)


Canvas was designed with the intent of allowing resolution
independence; removing that intent in the name of a feature that is
not used in the general case seems to be a fairly substantial step
back from that goal.  Unfortunately the solution of using a larger
canvas scaled to fit a smaller region isn't a real solution.  For
lower resolution displays it results in higher memory usage and
greater computational cost than is otherwise necessary, and for high
dpi displays it results in either the same issues as the low dpi case
(if the canvas resolution is still too high) or a lower resolution
display than the display is capable of.


Eh?  The resolution used should be whatever the appropriate resolution
is; I'm certainly not suggesting that everyone unilaterally create
canvases with 2x pixel resolution, I'm saying that the features exist
to allow authors to (dynamically) create a canvas at whatever the
appropriate resolution is relative to CSS resolution.  Canvas was
designed to allow for programmatic 2D rendering for web content;
resolution independence would certainly be nice, but it was never a
goal of the canvas spec.  In fact, the spec explicitly states that the
canvas element represents a resolution-dependent bitmap canvas.


- Vlad



Re: [whatwg] Text APIs on canvas

2008-05-09 Thread Vladimir Vukicevic


On May 5, 2008, at 8:10 PM, Ian Hickson wrote:


I have introduced the following APIs:

  context.font


I think this should be textStyle -- both to match the existing
fillStyle/strokeStyle, and for consistency with the rest of the text
functions.


I haven't provided a way to render text to or along a path, nor a
way to do vertical text, nor a way to measure anything but the nominal
layout width of text (e.g. there's no way to measure bounding boxes or
get baseline metrics). I also haven't provided a way to render document
fragments straight to a canvas.


Rendering text to a path or along a path are both useful operations;  
why were they omitted?  Bitmap fonts can pose problems here, but their  
bitmaps can be traced if necessary (and would need to be for  
strokeText anyway).  Text to a path allows for clipping to text, which  
is useful, and text along a path allows for effects that couldn't be  
obtained any other way.


I'm not super excited about maxWidth and the implementation getting to  
condense the font, but roc convinced me that it's a useful feature to  
have.


I'm happy to see text added to canvas (and createImageData, though I  
don't think the restriction that putImageData must only work with  
objects returned by create/get is a good one), but it would be good to  
have some way to mark sections of the spec as stable/unstable --  
otherwise someone's liable to take a snapshot and implement it, and  
then have it change under them if a portion is still in flux.


- Vlad



Re: [whatwg] Geolocation API Proposal

2008-03-17 Thread Vladimir Vukicevic

Hi Aaron,

On Mar 7, 2008, at 1:03 AM, Aaron Boodman wrote:


I've posted this to the W3C WebAPI mailing list as well. Still looking
forward to feedback on the actual content of the proposal, in either
place.


I agree with the previously stated comments that this probably doesn't  
belong in HTML5, but, as you say, there isn't a better place to  
discuss it at the moment -- the people who would be interested  
intersect with the people who are interested in HTML5.


So, some feedback on the proposal... overall, I think that this API  
should be kept as simple as possible.  To that end, I would suggest:


- remove Address from Position; a separate API/spec/web service/
whichever can be used to turn a Position into an Address, without the
awkward requestAddress boolean flag or similar.  I think this also
removes the awkward gearsLocationProviderUrls?  (If I'm understanding
right, these are the services that would convert position to address?)


- altitude/horizontalAccuracy/verticalAccuracy should probably use -1  
if not available (using null can be awkward, since it'll become 0 in  
some contexts)


- Geolocation.lastPosition should, IMO, be the only interface here
(probably as Geolocation.position).  It already has a timestamp, so
apps can determine when the fix was obtained.  There's no need for
watchPosition/clear given that we have setInterval/setTimeout already.
An updateInterval can be added with the minimum interval between
position updates, as a hint to applications about how often they
should be updating.


- I understand the desire for optionally obtaining a high-accuracy
fix; I would have a separate method for that.  There, I can see that a
callback-based interface would make sense, as acquiring the fix would
take time.


- I would move heading/speed off into a separate Direction interface,  
though I don't have a strong opinion about that


So, I'd suggest:

interface Position {
  readonly double latitude;
  readonly double longitude;
  readonly double altitude;

  readonly double horizontalAccuracy;
  readonly double verticalAccuracy;

  readonly bool valid; // true if the fix is valid and exists;
                       // if false, an error message is available

  readonly string errorMessage;
};

interface Geolocation {
  readonly Position position;
  readonly int updateInterval; // in ms

  void requestHighAccuracyPosition (callback);
};
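
Usage under this sketch might look like the following (where the
Geolocation object lives is left open -- navigator.geolocation is an
assumption here, and showOnMap/reportError are app-defined):

  var geo = navigator.geolocation;
  setInterval(function () {
    var pos = geo.position;
    if (pos.valid)
      showOnMap(pos.latitude, pos.longitude);
    else
      reportError(pos.errorMessage);
  }, geo.updateInterval);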

- Vlad

 
 


Re: [whatwg] Compatibility problems with HTML5 Canvas spec.

2007-09-26 Thread Vladimir Vukicevic

Oliver Hunt wrote:


On 25/09/2007, at 2:19 PM, Philip Taylor wrote:


On 25/09/2007, Oliver Hunt [EMAIL PROTECTED] wrote:

Firefox 2/3 and Safari 2 clear the context's path on strokeRect/
fillRect, this violates the spec -- but there are many websites that
now rely on such behaviour despite the behaviour defined in hmtl5.
This means that those browsers that match the current draft (eg.
Safari 3 and Opera 9.x) fail to render these websites correctly.


How hard would it be to get those sites fixed? If there are problems
in something like PlotKit or Reflection.js, which lots of people copy
onto their own servers, then it would be a pain to break
compatibility. If it's just sites like canvaspaint.org where there is
a single copy of the code and the developer still exists and can
update it, it seems a much less significant problem to break
compatibility.


I've only seen it on major sites -- it just appears that FFX3 is unlikely
to be updated to match the correct behaviour, which is worrying in terms 
of compatibility.


Certainly I would prefer that FFX behaviour was fixed as the spec'd 
behaviour is much more technically sane.


We can certainly fix it; I'm just wondering what makes the most sense to 
do so.  Like I said, there's a patch sitting in our (Mozilla's) bugzilla 
that implements the spec-compatible behaviour.  I'd be happy to fix it 
and relnote that it was fixed, while providing a simple workaround 
(which is basically calling beginPath() after calling fill/strokeRect etc.)



Unfortunately it isn't really an edge case as it's a relatively
common occurrence -- people expect that the rect drawing function (for
example) will clear the path, so expect clearRect(0, 0,
myCanvasElement.width, myCanvasElement.height) to clear the rect and
reset the path, and other similarly exciting things :-/


Firefox also resets the path on drawImage and putImageData, unlike
Opera and Safari 3 - do people depend on that behaviour too?


That honestly never occurred to me :-O

I can't see why people would expect it to, but I wouldn't have thought 
they'd think that about fill/strokeRect :-/


Yeah, we do the same thing on drawImage/putImageData that we do on 
fill/stroke (because in the underlying code they're all implemented 
using paths, and there's just one path :).  So, like I said, we can 
certainly fix it, and it sounds like that would be the best way to go.


- Vlad



Re: [whatwg] Compatibility problems with HTML5 Canvas spec.

2007-09-25 Thread Vladimir Vukicevic

Hi,

Oliver Hunt wrote:

Hi All,

We've encountered a number of website compatibility issues in WebKit due 
to our adherence to the new Canvas specifications -- a good example of 
this is rect drawing at http://canvaspaint.org


The most obvious issues can be shown if you use the draw rect tool and 
resize the rect repeatedly.


The first problem is the repeated drawing of old rects, this is due to 
the context path not being cleared by draw rect and fill rect which is 
the behaviour present in Safari 2 and Firefox 2.  While I've discussed 
the issue with Hixie in the past (and to an extent agree with him) the 
Firefox 3 nightlies do not appear to have adopted this behaviour, 
leaving us in a position where we have to choose between compatibility 
and compliance which is awkward.


For Firefox 3, there is a patch in our bugzilla, at 
https://bugzilla.mozilla.org/show_bug.cgi?id=296904 that could fix this 
issue -- that is, strokeRect/fillRect/drawImage would no longer reset 
the current path.  However, I'm confused by your comment -- Firefox 2 
and current Firefox 3 trunk's behaviour is identical, as far as I know; 
that is, the current path is being reset on strokeRect/fillRect.  (Did 
you mean "due to the context path being cleared..."?)


Given that this is somewhat of an edge case, would it make sense to 
alter the spec here?


The second problem is that the rules for drawing rects/adding rects to 
the path require us to throw an exception on negative width/height, once 
again Firefox 3 does not match this behaviour, putting us in a position 
where we need to choose between compatibility and compliance.  In this 
case however it is relatively easy to make the argument that an 
exception should _not_ be thrown, as it means webapp developers either 
need to litter their code with exception handlers or add significant 
logic to ensure that their apps do not unexpectedly terminate.


The possible responses to drawing a rect with negative dimensions are 
(excluding the unappealing exception behaviour currently defined) to 
cull/ignore them (as we do with 0-sized rects), to normalise them (the 
current behaviour of firefox, and the behaviour expected by those apps 
that are affected by it), or to normalise them and treat the negative 
dimensions as an implicitly reversing the winding direction.


Both Opera and Safari 3 match the specification behaviour in both these 
cases, which results in multiple sites failing to render.


I agree that throwing an exception is probably unnecessary, as there are 
very few other places in the API where such exceptions are thrown 
(except when the input is clearly of the wrong type).  Normalizing seems 
to be the most useful approach -- that is, the functions that take 
x,y,w,h would be defined to create a rectangle with its two corners 
being (x,y) and (x+w,y+h), regardless of the sign of w/h, but I would be 
ok with making such rectangles also be ignored.  It's also fairly easy 
for authors to implement their own versions of strokeRect/fillRect with 
any of these semantics (well, harder if they were to preserve the 
current path).  This is less of an edge case than the previous issue, 
IMO, so we should try to figure out what the most useful (and most 
straightforward) thing is to happen here.


I think that it would be important to ship compatible canvas 
implementations in the next versions of Firefox, Safari, and Opera; so 
if we have to break existing users to do so, I hope that they will 
forgive us for the compliance bumps and put in workarounds (e.g., to 
call beginPath() after every strokeRect/fillRect), but it would be 
better if we could avoid having to do that.


- Vlad


[whatwg] canvas 2d context additions proposal (fillRule, lineDash)

2007-05-24 Thread Vladimir Vukicevic

Howdy,

I'd like to propose adding a few simple 2D canvas context 
methods/parameters:


  attribute string fillRule;

Specifies the algorithm used for filling paths.  The options are 
'winding' or 'even-odd'; the default is 'winding'.  Good descriptions 
for these are in the SVG spec: 
http://www.w3.org/TR/SVG/painting.html#FillProperties .  Hmm, they use 
'nonzero' and 'evenodd'; we could use those instead for consistency.


A more complicated addition is a mechanism for dashed line drawing; I 
see this was discussed a few weeks ago:


  attribute variant lineDash; // array of numbers or a single number; 
default 0.0

  attribute float dashOffset;

lineDash specifies an array of lengths of the 'on' and 'off' portions 
of a stroked path.  Each value must be >= 0.0.  A single value of 0.0 
disables line dashing.


The array repeats, and if there are an odd number of values specified, 
the sense of each value is inverted on each repetition; that is, if 
[5.0, 1.0, 3.0] is passed in, it is interpreted to mean the same as 
[5.0, 1.0, 3.0, 5.0, 1.0, 3.0] -- on 5, off 1, on 3, off 5, on 1, off 
3.  (I'm doing a horrible job of explaining this, but hopefully it'll 
make sense.)  A single value is interpreted the same as an array with a 
single value, so 5.0 -> [5.0] -> [5.0, 5.0], meaning a dash of 5.0 
length followed by a space of 5.0 length.  These values are in 
user-space units.


It's an error to specify any negative dash values, or to provide an 
array with all values of 0.0.


Line caps are to be applied to dash ends.  (This means that given a dash 
pattern of [0.0, 5.0], a line width of 5.0, and a line cap of 'round', 
the result should be a series of 5px diameter round dots -- the 0.0 'on' 
segment has no length, but it still has the round end caps drawn.)


Each subpath is treated independently for purposes of dashing.  The dash 
pattern restarts with each subpath.


The dashOffset specifies an offset into the dash pattern at which the 
stroke starts.  (That is, with a lineDash of [5.0, 5.0], and a 
dashOffset of 5.0, the first 5 units of the stroke will be a space, and 
not a dash.)
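
Putting the proposal together (these names are as proposed here; the
API that was eventually standardized uses setLineDash() and
lineDashOffset instead):

  // 'on 5, off 1, on 3, off 5, on 1, off 3', repeating, with the
  // stroke starting 5 user-space units into the pattern:
  ctx.lineDash = [5.0, 1.0, 3.0];
  ctx.dashOffset = 5.0;
  ctx.stroke();

  // Round dots via the zero-length-dash rule:
  ctx.lineWidth = 5.0;
  ctx.lineCap = 'round';
  ctx.lineDash = [0.0, 5.0];
  ctx.stroke();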


Thoughts?

- Vlad


Re: [whatwg] Apple Proposal for Timed Media Elements

2007-04-05 Thread Vladimir Vukicevic

Maciej Stachowiak wrote:


On Apr 4, 2007, at 7:31 PM, Vladimir Vukicevic wrote:
1. 'media-loop-count' is an awkward name, especially with "The default 
value of 1 means the item will play through once but will not loop."  
We went through this with APNG, and ended up renaming that member.  I 
would suggest 'media-play-count' instead -- that way there is no 
ambiguity with what the number means.


We considered 'media-repeat-count' instead of 'media-loop-count', but 
that turned out to be more confusing. We really wanted all the 
looping-related properties to have consistent naming, and I don't think 
'play' would work in the other places mentioned.


The problem is that 'media-loop-count' with a value of 1, as defined, 
doesn't have anything to do with looping... play-count is much more 
descriptive of its actual purpose, IMO, despite not containing 'loop' in 
the name.  The others should definitely stay loop-, though.


2. The descriptions for 'media-loop-start-time' and 'media-loop-end-
time' don't match; start-time says "sets the time at which the media 
item begins playing after looping", and end-time says "sets the time 
at which the media item loops for the second and subsequent repetitions".


I would suggest that start-time say "sets the time index at which the 
media item starts playing for the second and subsequent repetitions", 
and that end-time say "sets the time index at which the media item 
ends playing for the second and subsequent repetitions".  The language 
for end-time is still a little awkward, since "ends playing" could 
imply that it simply stops playing (and does not loop), but it's 
clearer than before.


I think the language might have ended up actually defining it wrong. The 
intent of 'media-loop-end-time' is that this is the point where you end 
when repeating, but on the last iteration you go all the way to 
'media-end-time'. So if 'media-loop-count' has a value of 3, the three 
repetitions would go as follows:

[...]


Hmmm.  I see how that would be useful, ok.  So if I just wanted to loop 
the first 5 seconds of a video clip, I would just set media-start-time 
to 0s and media-end-time to 5s, and the count to infinite, right? 
Clarifying this (perhaps with some examples) would be good.


3. 'media-timing' I would get rid of completely; while a shorthand 
would be useful, I don't think that media-timing as specified really 
works.  Shorthands for properties such as 'background' are 
understandable on their own; 'media-timing: playing 0s -0.5s 2 2s -4s 
1' is very opaque.  If it's still desirable, I would remove the 
setting of start/end times and change the volume shorthand to only 
accept the symbolic names, e.g. 'media-timing: playing high 4;' -- but 
I think that removing the shorthand entirely would be preferable.


I'll reply in more detail about media-timing in a later message.


Sounds good.

   - Vlad



Re: [whatwg] on codecs in a 'video' tag.

2007-04-04 Thread Vladimir Vukicevic


If video supports fallback though, that 20% is enough to bootstrap and 
build support, especially as we all hope that that 20% continues to grow.


However, I do agree that the codec discussion should be tabled and that 
we should get back to the spec discussion... I've been ignoring much of 
the video discussion because it's mostly been off in the codec weeds. 
 I'll see if I can find some time to read over the proposals this 
weekend and give some constructive comments.


   - Vlad

David Hyatt wrote:
I agree with this.  The tag isn't worth much to the Web if it's not 
interoperable among *all* Web browsers.  That includes, unfortunately, 
Internet Explorer.  That is why I think trying to pick a baseline format 
in the WhatWG is premature.  Until the video element moves to the HTML 
WG and we find out what Microsoft's opinion is on this subject, I'm not 
really sure what the point is of this codec debate.  Even if the browser 
vendors of the WhatWG all agreed to support Theora tomorrow, Mozilla + 
Opera + Safari constitute only 20% of total browser market share.


That percentage is not even remotely compelling enough for content 
authors to want to use the video element over proprietary alternatives 
like Flash.


dave
([EMAIL PROTECTED])

 On Apr 3, 2007, at 9:50 PM, Håkon Wium Lie wrote:


Seriously, though, I think this group is concerned that having a
polished video interface isn't worth much in terms of
interoperability unless there is a baseline format.






Re: [whatwg] Canvas - globalCompositeOperation

2007-04-04 Thread Vladimir Vukicevic

Philip Taylor wrote:
 [...]

Cool stuff!  I'll look through your tests and fix up the mozilla 
implementation as much as possible.



I would be happy if darker was removed from the spec - there isn't
an obvious definition for it, and it's not interoperably implemented
at all and it sounds like it never will be. Existing implementations
can add apple-plusdarker, moz-saturate, etc, if they still want to
provide the old functionality.


I'd be happy with getting rid of it.


lighter seems much easier to define, and more useful, so I think
it's perhaps worth keeping - but it looks like a pain for those using
Qt/Java/etc libraries which don't support anything other than the
standard Porter-Duff operators, and I don't know if it's a difficulty
for Opera to fix their implementation of it. Does anyone have views on
this or on darker?


Well, if we have lighter, we should keep darker; I think that for 
mozilla at least, we can implement this using some slow-boat fallback 
mechanism -- basically, render the path/image to a separate surface, 
then manually do the requested math if things don't map directly to one 
of our operators; this is what our SVG impl does now for many of the SVG 
filters.
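
A rough sketch of that fallback, assuming the getImageData/putImageData
pixel access discussed elsewhere on this list; the per-channel formula
(a clamped "plus darker" style op) is only one plausible reading of
'darker', which is exactly the definitional problem:

  function compositeDarker(destCtx, width, height, drawFn) {
    // Render the new content to a scratch surface...
    var scratch = document.createElement("canvas");
    scratch.width = width;
    scratch.height = height;
    var srcCtx = scratch.getContext("2d");
    drawFn(srcCtx);

    // ...then manually do the requested math on the two pixel buffers.
    var dst = destCtx.getImageData(0, 0, width, height);
    var src = srcCtx.getImageData(0, 0, width, height);
    for (var i = 0; i < dst.data.length; i += 4) {
      for (var c = 0; c < 3; c++)   // r, g, b
        dst.data[i + c] = Math.max(0, dst.data[i + c] + src.data[i + c] - 255);
      dst.data[i + 3] = Math.min(255, dst.data[i + 3] + src.data[i + 3]);
    }
    destCtx.putImageData(dst, 0, 0);
  }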


- Vlad


Re: [whatwg] Apple Proposal for Timed Media Elements

2007-04-04 Thread Vladimir Vukicevic

Maciej Stachowiak wrote:
CSS Timed Media Module proposal - http://webkit.org/specs/Timed_Media_CSS.html


Some feedback on my initial reading...  The CSS properties specified seem 
like a good set that will cover most common functionality.  Some 
comments about the spec, though:


1. 'media-loop-count' is an awkward name, especially with "The default 
value of 1 means the item will play through once but will not loop."  We 
went through this with APNG, and ended up renaming that member.  I would 
suggest 'media-play-count' instead -- that way there is no ambiguity 
with what the number means.


2. The descriptions for 'media-loop-start-time' and 
'media-loop-end-time' don't match; start-time says "sets the time at 
which the media item begins playing after looping", and end-time says 
"sets the time at which the media item loops for the second and 
subsequent repetitions."


I would suggest that start-time say "sets the time index at which the 
media item starts playing for the second and subsequent repetitions", 
and that end-time say "sets the time index at which the media item ends 
playing for the second and subsequent repetitions".  The language for 
end-time is still a little awkward, since "ends playing" could imply 
that it simply stops playing (and does not loop), but it's clearer than 
before.


3. 'media-timing' I would get rid of completely; while a shorthand would 
be useful, I don't think that media-timing as specified really works. 
Shorthands for properties such as 'background' are understandable on 
their own; 'media-timing: playing 0s -0.5s 2 2s -4s 1' is very opaque. 
If it's still desirable, I would remove the setting of start/end 
times and change the volume shorthand to only accept the symbolic names, 
e.g. 'media-timing: playing high 4;'... but I think that removing the 
shorthand entirely would be preferable.



I've yet to read over the HTML part of the proposal, but I'll send along 
feedback when I've had a chance to do so.


- Vlad


Re: [whatwg] Video proposals

2007-03-19 Thread Vladimir Vukicevic

Håkon Wium Lie wrote:

Also sprach Robert Brodrecht:

  As I said before, I think we have a lot better chance at getting a common,
  cross-browser, cross-platform format with MPEG 4.  The reason WHAT WG
  proposed Theora is *because* it is FOSS, not for quality, size, ease of
  implementation, or anything else (as far as I know).


Quality, size, etc. have all been goals of the Theora project, so it's 
not really fair to say that they haven't been considered.  There is no 
technical reason why Theora shouldn't be specified as a baseline format.



Due to software patents, MPEG 4 costs money. Also, it requires more
processing power than many devices have. Who will pay for licenses for
OLPC's machines? And how will they get the power to decode?

I think it's vital that we find an open format that the free world can
use.

If MPEG4 is the alternative, we might as well continue using Flash and
object. But it's not a world I want to live in.


I see no problem with an implementation supporting MPEG4 etc. in 
addition to Theora (provided they can legally do so).  If providing 
content in non-Theora formats is important, the client should list the 
supported video formats in the Accept header, and the server can send 
back the right thing.  Arguing over which format is supported isn't 
really productive, because due to legal realities, there are very few 
high quality options for a common baseline format.  Theora is probably 
the best of that bunch.  (The BBC format whose name I can't think of atm 
might be another, but I think it's much earlier in its development 
process.)


- Vlad


Re: [whatwg] Comments on the video element

2007-03-19 Thread Vladimir Vukicevic

Martin Atkins wrote:

Mihai Sucan wrote:


For Youtube, a site which provides bloggers an easy way to integrate 
videos, this would prove even ... hard. Here's the simple code users 
have to copy/paste:


<object width="425" height="350">
<param name="movie" value="http://www.youtube.com/v/id"></param>
<param name="wmode" value="transparent"></param>
<embed src="http://www.youtube.com/v/id"
type="application/x-shockwave-flash" wmode="transparent" width="425"
height="350"></embed>
</object>

Switching to the video element would require a script element, 
and technically, for the developers at Youtube, this would mean a lot 
more work. The script must be carefully coded so that it doesn't 
break the myriad of blog systems, etc. Many blogs don't even allow 
scripts to be run (last time I heard). Even if Youtube wanted to do 
this, users themselves would no longer like it. What? Am I going to 
put a potentially risky script within my site?


[...]
However, if it requires any scripting to use it'll never work because 
LiveJournal absolutely cannot allow scripting.


So allow me to offer this as another vote to video being, by default, a 
completely standalone element with browser-provided UI. By all means 
allow authors to override it if they want to do something neat.


I don't think the video element, as currently specified, is supposed to 
be the end-all be-all video specification.  There's nothing to prevent 
the specification of a UI attribute later on, when more of the issues 
around the core "get video in an HTML element" problem are better 
understood.


Specifying a UI at this point would end up with the spec being bogged 
down with what the UI must and must not support.  What's the 
disadvantage to doing this in multiple steps?  Let's get video up in an 
element in a cross-browser way first.  This means that it won't be 
immediately usable for all potential use cases (but even with UI it 
wouldn't be anyway; I don't see youtube jumping to provide 
theora/mpeg4/whatever streams), but once it's better understood how it 
-does- solve some use cases, those can be built on.


   - Vlad



Re: [whatwg] Canvas 2d methods

2006-07-03 Thread Vladimir Vukicevic

Even without using |with|, why not just create a simple JS wrapper for
the context object that can have return-this or any other desired
semantics?  This would avoid a change that would have some apps
require canvas 2D 2.0 or some such, and require authors to do
version checks to see which version of canvas is supported -- and
still write old code for quite some time.  Adding a different way
to do the same things that can be done now without much benefit in
simplicity or efficiency doesn't seem useful.
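
Such a wrapper could be as simple as this sketch (names illustrative
only):

  // Proxy every context method so it returns the wrapper, giving
  // return-this chaining with no change to the canvas spec itself.
  function chainable(ctx) {
    var wrapper = {};
    for (var name in ctx) {
      if (typeof ctx[name] === "function") {
        (function (method) {
          wrapper[method] = function () {
            ctx[method].apply(ctx, arguments);
            return wrapper;
          };
        })(name);
      }
    }
    return wrapper;
  }

  // Usage:
  //   chainable(canvas.getContext("2d")).moveTo(0, 0).lineTo(10, 10).stroke();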

Canvas supports multiple contexts for a reason; if there are
compelling arguments for a complete rev of the 2D API, then a new
context can be introduced to support that.

  - Vlad


Re: [whatwg] Canvas 2d methods

2006-07-01 Thread Vladimir Vukicevic

On 7/1/06, Benjamin Joffe [EMAIL PROTECTED] wrote:

Each of the methods defined for the canvas 2d context returns null. I think
it would be very convenient if instead they would return a reference to the
2d context for that canvas. This would allow writing such code as
ctx.fill().stroke() or ctx.moveTo(0,0).lineTo(10,10). This is how many of
the native string and array methods work in JavaScript.


This isn't a bad idea; the problem is that the cat's already out of
the bag here, and developers will end up writing ctx.moveTo()
ctx.lineTo() etc. for compatibility.  I'm a fan of 'with' in this
instance:  with (ctx) { moveTo(0,0); lineTo(10,10); } etc.

  - Vlad


Re: [whatwg] strokeRect() with zero heights or widths

2006-05-28 Thread Vladimir Vukicevic

Doesn't a zero-width (or zero-height, as long as it's only one)
rectangle degenerate into a vertical (horizontal) line when stroked,
due to the line width?  A filled rectangle doesn't, because the area
to fill is defined exactly by the rectangular path (which has 0
thickness), whereas a stroked path takes the line width into account
to compute the area to fill.  Now, to be fair, I don't really care
either way, just looking for consistency... should fillRect/strokeRect
be defined as convenience functions doing the same job as creating a
rectangular path and calling fill/stroke?

I'd expect the following to give me a 10 pixel line, the same as if I
had just done moveTo(x, 10); lineTo(x,20); stroke();

beginPath();
moveTo(x, 10);
lineTo(x, 20);
lineTo(x+0, 20);
lineTo(x+0, 10);
closePath();
stroke();

Otherwise, we end up with different results for what is logically the
same operation, I'd think?

   - Vlad

On 5/20/06, Anne van Kesteren [EMAIL PROTECTED] wrote:

I think http://whatwg.org/specs/web-apps/current-work/#strokerect must
have no effect when it has a zero height or width (or both). Currently
Safari, Firefox and Opera act that way when they are both zero and
Safari acts that way for all cases. Firefox and Opera draw a small
line when either is larger than zero but that can easily be changed.
It also makes the method more consistent with the other two.

For those, "If either height or width are zero, this method has no
effect." should probably be changed to "If either height or width are
zero, this method must have no effect."


--
Anne van Kesteren
http://annevankesteren.nl/




Re: [whatwg] proposed canvas 2d API additions

2006-05-16 Thread Vladimir Vukicevic

On 4/26/06, Ian Hickson [EMAIL PROTECTED] wrote:

   ImageData getImageData(in float x, in float y, in float w, in float h);
   void drawImageData(in float x, in float y, in ImageData d);


I'm about to implement this as suggested; however, I'd call the second
function here putImageData instead of drawImageData; draw implies
an actual drawing operation, similar to drawImage, that would be
affected by (at least) the current compositing operator.  What's
actually happening is a direct replacement of the pixel data in the
given region, so that could be confusing.  (If someone does want the
operator to be involved they can use an offscreen canvas to call
putImageData on and drawImage that in.)
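
Concretely, that workaround would look something like this sketch
(pixels is an existing ImageData, ctx the target context, and x/y the
destination; all assumed):

  // Put the raw pixels into an offscreen canvas, then drawImage() it
  // so the current compositing operator applies.
  var off = document.createElement("canvas");
  off.width = pixels.width;
  off.height = pixels.height;
  off.getContext("2d").putImageData(pixels, 0, 0);

  ctx.globalCompositeOperation = "lighter";  // whatever operator is wanted
  ctx.drawImage(off, x, y);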

   - Vlad


Re: [whatwg] proposed canvas 2d API additions

2006-05-04 Thread Vladimir Vukicevic

On 4/28/06, Vladimir Vukicevic [EMAIL PROTECTED] wrote:

interface ImageData {
  readonly attribute string format; /* only rgba is valid for now */
  readonly attribute long int width;
  readonly attribute long int height;
  readonly attribute Array data;
}


Actually, let's step back a second; this may be massive
overengineering.  What if we simply had:

   readonly attribute float deviceScaling;

on the 2D context, which would give the scaling factor between
canvas-space pixels (that is, the space that the canvas width/height
attributes are in) and device-space pixels (the pixels of the actual
backing store).  So if canvas width=200 height=200/ was
represented with a 300x300 backing store, deviceScaling would be 1.5;
if 400x400, it would be 2.0.  (If necessary, we can have
deviceScalingX, deviceScalingY.)

Then getPixels is defined to take parameters in canvas pixel space,
and returns the ARGB array in device space; if you ask for a 50x50
region, you'll get back 100x100x4 samples, with a deviceScaling of
2.0.  putPixels would take coordinates in canvas pixel space again,
but would take the appropriate device-pixel-sized ARGB array.  This
becomes tricky with non-integer deviceScaling; that is, if a 2x2
region becomes a 3x3 region with a deviceScaling of 1.5, what do you
return when you're asked for x=1 y=1 w=1 h=1?  I'd say that you end up
resampling and shifting over your 3x3 device space backing store by .5
pixels so that the region would start on a device pixel boundary. 
This would obviously not be a clean round-trip, but the spec can
inform authors how to ensure a clean round trip (only request regions
where your x/y * deviceScaling are integers).

This removes the need for a separate ImageData object and all the
extra gunk necessary there, but still maintains full resolution
independence.  Any thoughts on this?
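
To make the round-trip rule concrete (every name here is hypothetical,
straight from the sketch above):

  // On a 2.0 backing store, a 50x50 canvas-space request returns
  // 100x100 device-space pixels.
  var scale = ctx.deviceScaling;               // hypothetical attribute
  var pixels = ctx.getPixels(0, 0, 50, 50);    // hypothetical method
  // pixels holds (50 * scale) * (50 * scale) * 4 samples.

  // Clean round trip: only request regions whose x/y/w/h times
  // deviceScaling are integers, then write back exactly what was read.
  ctx.putPixels(pixels, 0, 0, 50, 50);         // hypothetical method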

  - Vlad


Re: [whatwg] proposed canvas 2d API additions

2006-04-28 Thread Vladimir Vukicevic

On 4/26/06, Ian Hickson [EMAIL PROTECTED] wrote:

On Mon, 24 Apr 2006, Vladimir Vukicevic wrote:

 The use case that I'm thinking of is essentially:

 pixels = c.getPixels(x, y, width, height);
 /* manipulate pixels here */
 c.putPixels(pixels, x, y, width, height);

 That is, direct pixel manipulation, for performing some operation that
 can't be done using the context API.

Ok. That is helpful, because there have been several use cases thrown
about and it wasn't clear to me which use case we actually cared about.

It seems to me that a critical requirement of the use case you describe is
that the result of the following script:

   pixels = c.getPixels(x, y, width, height);
   /* do nothing here */
   c.putPixels(pixels, x, y, width, height);

...be a (possibly expensive) no-op. That is, you should not lose image
data -- the above should not corrupt your picture. This means the pixel
data returned must be native resolution data.

How about:

   interface ImageData {
 readonly attribute long int width;
 readonly attribute long int height;
 readonly attribute Array data;
   }


I have a nagging feeling that this is a bad idea, but I can't explain
why, because I do like the idea.  If we do this, let's advance it a
bit:

interface ImageData {
 readonly attribute string format; /* only rgba is valid for now */
 readonly attribute long int width;
 readonly attribute long int height;
 readonly attribute Array data;
}

'format' would specify the type of data that is in 'data'; only "rgba"
would be valid for now, but this gives us a way to extend that
later on.

and also add:

ImageData createImageData(in string format, in long int width, in long int
height, in Array data);

for creating ImageData out of an arbitrary set of generated data
(e.g. evaluating some function, drawing the results).  This would be
especially needed because you can't assign to 'data' in an ImageData
structure (since you have it readonly); you can only change the values
of its members.
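
Usage would then look something like this sketch (proposed names only;
the range of the sample values -- normalized floats here, per the
earlier getPixels discussion -- is still open):

  // Generate a 16x16 opaque red tile and draw it.
  var w = 16, h = 16;
  var data = new Array(w * h * 4);
  for (var i = 0; i < data.length; i += 4) {
    data[i]     = 1.0;  // r
    data[i + 1] = 0.0;  // g
    data[i + 2] = 0.0;  // b
    data[i + 3] = 1.0;  // a
  }
  var img = ctx.createImageData("rgba", w, h, data);  // proposed above
  ctx.drawImageData(10, 10, img);                     // proposed above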


   ImageData getImageData(in float x, in float y, in float w, in float h);
   void drawImageData(in float x, in float y, in ImageData d);


I would keep x/y/w/h as integers, and explicitly specify that they're
not affected by the CTM.  If they are, you can't guarantee a lossless
round-trip (since if you shift over by half a pixel you have to do
lots of resampling, etc.).

  - Vlad


Re: [whatwg] proposed canvas 2d API additions

2006-04-24 Thread Vladimir Vukicevic
Arve's example is how I imagined putPixels working -- basically as a
potential optimization over a bunch of fillRect calls.  Even in the
presence of a higher resolution backing store, this can provide for an
optimization -- load the putPixels data into a bitmap image that's
width*height pixels and draw it to the canvas backing store with the
appropriate resolution scaling.

The use case that I'm thinking of is essentially:

pixels = c.getPixels(x, y, width, height);
/* manipulate pixels here */
c.putPixels(pixels, x, y, width, height);

That is, direct pixel manipulation, for performing some operation that
can't be done using the context API.  An example might be to perform a
desaturate on a region of the canvas to obtain a grayscale region from
a color one.  Any image-type operations (copying a region from one
place to another) should be done using the existing drawImage or other
APIs, with temporary canvases as needed.
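
Fleshed out with the proposed API (normalized float samples in R,G,B,A
order, continuing the snippet above), the desaturate example might read:

  var px = c.getPixels(x, y, width, height);   // proposed, not shipped as-is
  for (var i = 0; i < px.length; i += 4) {
    // Rec. 601 luma weights; any grayscale formula would do here.
    var gray = 0.299 * px[i] + 0.587 * px[i + 1] + 0.114 * px[i + 2];
    px[i] = px[i + 1] = px[i + 2] = gray;      // leave alpha alone
  }
  c.putPixels(px, x, y, width, height);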

Because of this, putPixels will end up losing quality in a
getPixels/putPixels round-trip if the backing store is higher
resolution.  I'm not sure what to do about that; one solution might be
that we specify that a pixel in the canvas backing store must map to
exactly one pixel in canvas-space; that is, that there's always a
cluster of NxN device pixels that correspond to 1 canvas pixel.  We
can then have getPixels return the actual device-resolution pixel
data, along with a resolution multiplier or somesuch.  I don't really
like that, though; I'd much rather leave putPixels as the
fillRect-type optimization, and have getPixels return a simple average
of the color of all the device pixels that compose a single target
pixel.  (Again, as with the putPixels case, this can be optimized by
simply doing a downscaling of the appropriate region of the
higher-resolution backing store into a width*height pixel buffer).

- Vlad

On 4/24/06, Arve Bersvendsen [EMAIL PROTECTED] wrote:
 [ Ian Hickson ]
  I don't understand how these are supposed to work when the underlying
  bitmap's device pixel space does not map 1:1 to the coordinate space.

 [ Vladimir Vukicevic ]
  I'm not sure what you mean -- the coordinates here are explicit canvas
  pixels, and they specifically ignore the current canvas transform.
  So, given
   <canvas width="100" height="200"/>
 
  the coordinates would be 0..99, 0..199.

 Without expressing any other opinion at the moment, I'd just like to
 clarify how Opera's implementation of getPixel/setPixel currently follows
 the coordinate space, as Vlad is suggesting here, disregarding any
 translation and rotation. Given the following script snippet:

gc =
 document.getElementsByTagName('canvas')[0].getContext('opera-2dgame');
for (var y = 50; y < 100; y++) {
  for (var x = 50; x < 100; x++) {
    gc.setPixel(x, y, "blue");
  }
}

 ... with this CSS:

canvas  {
  width: 200px;
  height: 200px;
  border: 1px solid black;
}

 and the following markup:

<canvas width="100" height="100">

 we fill the bottom-right quadrant of the canvas, with a rectangle that is
 comprised of 100x100 CSS pixels.

 --
 Arve Bersvendsen, Opera Software ASA




Re: [whatwg] Audio Interface

2006-04-24 Thread Vladimir Vukicevic
On 4/21/06, Jeff Schiller [EMAIL PROTECTED] wrote:
 2) Can you clarify the mechanism to determine if a user agent supports
 a particular content type?  Otherwise, as a developer do I just assume
 that every browser will support .wav or .mp3 or .ogg or .mid or ...?
  What about a static method on the Audio interface to query content
 types?

A static method would probably be best (or even a string, with "wav mp3
ogg m4a" or something in it, or even the actual mime types for the
formats).  WAV would probably be the baseline required format.
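
(The query method media elements later gained is close to this
suggestion, though per-instance rather than static; a sketch:)

  var a = new Audio();
  // canPlayType() answers "", "maybe", or "probably" for a given type.
  if (a.canPlayType("audio/ogg") !== "") {
    // Ogg is at least possibly supported.
  }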

 3) I think full URIs should be allowed in the Audio constructor.  Why
 must the URI be a relative one?  Is this some crude means of
 preventing leaching of bandwidth?  I feel this is artifically
 constraining what I should be allowed to do as a developer and as a
 service provider.  What if Google wants to start an audio ad program
 for websites?  What if I want to start a web service to let web
 developers use sounds on my server?

I don't see where the spec states that the URI must be relative -- it
merely states that the URI that is used for resolving relative URIs is
the window.location.href URI.

 4) The term "repeat count" is misleading.  The word "repeat" implies a
 re-occurrence, so to repeat once means to play a total of two times.
 Just globally rename "repeat count" to "play count".  This more
 accurately reflects what this number actually is (the number of times
 the sound will play).

I agree; play count would be more descriptive.  I'd also like to see a
pause() method, that pauses playback at whatever current frame is
playing, and then resumes playback there on play().

On the same subject, here's a copy of an email I sent a little while
ago, that I ended up mistakenly not Cc'ing the whatwg list on:

I've been recently thinking about audio as well... however, I'm not
sure about them not having a DOM presence.  This may be totally off
the wall, but what about adding <audio src="foo.wav"> as an element
within <head>?  The default state of an audio element would be
stopped, but we could do something like:

<audio id="background" src="background.wav" state="playing" repeat="true">
<audio id="fx1" src="fx1.wav">
<audio id="fx2" src="fx2.wav">

The state attribute would take a value of "stopped" (frame 0),
"playing", or "paused" (paused/stopped with playback resuming, on
play, at whatever frame it was paused at).  These could be mapped to
CSS -audio-state and -audio-repeat or something.  Having these as
elements would make operations like "save as complete web page" able
to do something useful with the audio elements (even though they could
still be created/loaded purely programmatically).  The default
attributes might be state="playing" with repeat="false"; a bgsound
equivalent would be obtained by state="playing" repeat="true".  The UA
should provide a way to disable audio elements; I'm not a huge fan of
bgsound.

Something else that I think would be useful would be an onrepeat
handler; that is, whenever a looping audio stream repeats, it would
fire the handler.  This could be useful for audio synchronization,
e.g. you want to have something happen every time your alarm klaxon
audio repeats, and timers aren't quite precise enough.
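
For instance (hypothetical names, following the sketch above; no such
event ever shipped):

  var klaxon = document.getElementById("background");
  klaxon.onrepeat = function () {
    flashWarningLights();  // hypothetical page function, run once per loop
  };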

   - Vlad


[whatwg] proposed canvas 2d API additions

2006-04-21 Thread Vladimir Vukicevic
Hi folks,

I'd like to suggest extending the HTML canvas 2d context with a few
additions.  These are variations on some of the methods added to
Opera's opera-2dgame context.  The methods are intended to give
content authors direct pixel access to the canvas, as well as provide
some basic point-in-path testing functionality.

float [] getPixels (in integer x, in integer y, in integer width,
in integer height);

Returns an array of floats representing the color values in the region
of pixels in the canvas whose upper left corner is at (x,y) and which
extends for width,height pixels.  These coordinates are in canvas
pixel space (that is, the same space that the canvas width and height
attributes are specified in).  The color values for each pixel are
returned as 4 floats, each in the range of 0.0 to 1.0, in R,G,B,A
order.  That is, given the parameters (0,0,2,2), the returned array
will be [R00 G00 B00 A00 R10 G10 B10 A10 R01 G01 B01 A01 R11 G11 B11
A11].

Note: we could return the pixels as integers in the range of 0..255,
as 8-bit color is most likely what canvases will be dealing with.
However, using floats allows us to easily extend into a 16-bit
colorspace without any API changes.  In addition, any computation
using these pixels is often done with normalized colors, so the
division by 255 would need to happen anyway.

void putPixels (in float [] pixels, in integer x, in integer y, in
integer width, in integer height);

Does the opposite of getPixels; the given array must be exactly width
* height * 4 elements in length.  The values are to be clamped to
0.0..1.0.

boolean pointInPathFill(in float x, in float y);

pointInPathFill returns true if the given point would be inside the
region filled by the current path, and false otherwise.  The x,y
coordinates are in the current space of the canvas; that is, they are
transformed by the CTM and do not necessarily map directly to pixels.
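
A hit-testing sketch using the proposed method (clickX/clickY are
assumed to be canvas-space click coordinates):

  ctx.beginPath();
  ctx.arc(60, 60, 40, 0, Math.PI * 2);
  if (ctx.pointInPathFill(clickX, clickY)) {   // proposed above
    // the click landed inside the circle
  }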

I'd suggest that these three functions be added directly to the 2d
context; content authors can test for their presence by checking that
the functions are not null on the 2d context object.  We might want a
more comprehensive way of letting authors test whether particular
features are supported, e.g. shadows, pixel access, etc., but maybe
it's not necessary.

How's this sound?

- Vlad


Re: [whatwg] proposed canvas 2d API additions

2006-04-21 Thread Vladimir Vukicevic
On 4/21/06, Ian Hickson [EMAIL PROTECTED] wrote:
 On Fri, 21 Apr 2006, Vladimir Vukicevic wrote:
  boolean pointInPathFill(in float x, in float y);

 This sounds fine to me (though it means you have to spin through creating
 many paths for hit testing, instead of just hanging on to a particular
 path and hit testing a list of paths, which seems silly).

Hm, I'm not sure what you mean -- we have no way of holding on to a
Path as a retained object.  If we did, then you could hit test
through this object; there would be a speedup for some paths, but not
noticeable for most, I would think.  Adding support for retained path
objects would be an additional chunk of work, though, and isn't really
necessary.

  float [] getPixels (in integer x, in integer y, in integer width,
  in integer height);
 
  void putPixels (in float [] pixels, in integer x, in integer y, in
  integer width, in integer height);

 I don't understand how these are supposed to work when the underlying
 bitmap's device pixel space does not map 1:1 to the coordinate space.

I'm not sure what you mean -- the coordinates here are explicit canvas
pixels, and they specifically ignore the current canvas transform. 
So, given

  <canvas width="100" height="200"></canvas>

the coordinates would be 0..99, 0..199.  Are you referring to the case
where on, say, a very high resolution display the canvas might choose
to create a 200x400 pixel canvas and just present it as 100x200, and
quadruple the physical screen space taken up by each pixel?  If so, it
would still map to the original 100x200 pixels; the fact that each of
those takes up 4 physical device pixels should be transparent to the
user.  That is, we have:

  CSS size (width/height style)   --  canvas size  --  device bitmap size

The API would always operate in terms of canvas size.  Does that make
more sense?

- Vlad


Re: [whatwg] [canvas-developers] Opera with support for getPixel/setPixel

2006-03-29 Thread Vladimir Vukicevic
Hi,

On 3/29/06, Arve Bersvendsen [EMAIL PROTECTED] wrote:
 Some of you have requested getPixel and setPixel for the bitmap canvas.
 Well, we have some news for you -- Opera has actually had this support all
 along, but we haven't been able to talk about it until now.

Interesting stuff!  What are some of the use cases for this, other
than providing color filters?  That can be useful, to be sure, but it
seems like a very expensive way to implement that functionality (but
then, expensive is better than not at all)...

 In addition to supporting getting and setting of individual pixels, we
 also support locking of the canvas and better control over the redraw
 process, for optimized performance in games.

What happens if an exception is thrown while you have the canvas locked?
This also seems like it requires you to always copy the current canvas
contents at lock time. Locking seems like just a roundabout way to
achieve double buffering, when explicit double buffering could be
better.  Instead, I've been suggesting that if people need double
buffering (and it's a good idea), they implement it themselves,
with full control over the area that's double buffered, when they
update, etc.:

  var pageCanvas = getElement("canvas");
  var pageCanvas2D = pageCanvas.getContext("2d");

  var backBufferCanvas = new CANVAS({});
  backBufferCanvas.width = pageCanvas.width;
  backBufferCanvas.height = pageCanvas.height;

  var backBuffer2D = backBufferCanvas.getContext("2d");

  // do drawing into backBufferCanvas/backBuffer2D

  // update the front buffer
  pageCanvas2D.globalCompositeOperation = "copy";
  pageCanvas2D.drawImage(backBufferCanvas, 0, 0);

This allows for updating only part of the front buffer region cheaply
(e.g. if you draw a static image/border on the inside of the canvas
and the changing part is only a 50x50 rectangle in the middle, you can
double-buffer just that 50x50 piece), and it also lets you easily
layer elements that are updated at different rates -- e.g. render a
complex infrequently-changing map into canvasA, render a more
frequently changing UI into canvasB, render A and B into the front
buffer, and then render something that's very fast to draw but quickly
changing directly into the front buffer (like a selection rectangle).
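
A sketch of that partial-update variant, reusing pageCanvas2D and the
CANVAS() helper from the snippet above:

  // Double-buffer only the changing 50x50 region, not the whole canvas.
  var region = new CANVAS({});
  region.width = 50;
  region.height = 50;
  var region2D = region.getContext("2d");

  // ... redraw just the dynamic content into region2D each frame ...

  // Blit the buffered region into place on the front buffer.
  pageCanvas2D.drawImage(region, 75, 75);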

 Finally, the opera-2dgame
 context also supports native collision detection, by detecting whether a
 point is within the currently open path on the regular 2d canvas.

This is a useful idea; we'll probably do something similar, though
calling it "collision detection" when it's a point-in-path test is a
little strong. :)  We'll probably add a point-in-path method.

Cc'ing the whatwg list on this, as there are interesting canvas API
suggestions involved.

- Vlad


Re: [whatwg] Text support in canvas

2005-11-08 Thread Vladimir Vukicevic
On 11/3/05, James Graham [EMAIL PROTECTED] wrote:
 Allowing text in canvas has serious accessibility problems. Presumably
 such text would not be resizable and encouraging authors to use
 non-resizable text in their web-apps is not a good idea. I guess there
 would also be (separate) issues with fonts; one assumes font
 substitution on a bitmap drawing would be more problematic than font
 substitution where the text is the main content.

There are accessibility problems for sure; however, accessibility is
not something that can be forced onto content authors.  They have to
design for accessibility; it won't happen for them.  If they don't,
being able to draw text into canvas is a relatively minor issue.

If a DOM or if arbitrary high-resolution scaling of already drawn
content is desired, SVG's the best option.  However, there are a lot
of use cases where text in canvas is highly desirable.  canvas is
already going down a pseudo-CSS path for specifying colors and other
styles; I think it makes sense to extend text in a similar way,
perhaps by using a textStyle that refers to a CSS property (by ID? by
class? somehow), and then letting you render text strings into the
canvas.

I know Ian's busy, so I might send an early suggested spec over to him
for polishing.

- Vlad


Re: [whatwg] canvas tag and animations ?

2005-06-20 Thread Vladimir Vukicevic
On 6/15/05, Charles Iliya Krempeaux [EMAIL PROTECTED] wrote:
 So then what do you do if your code is not amenable to the
 event-driven way of programming?

 What if you have an event loop approach?  How then do you signify
 the ending so that things can actually happen?

I'm confused -- if you're operating within the context of a UA that
implements canvas, then you can't have your own event loop, but
instead have to operate based on event callbacks to your own code (be
they timeouts or whatever) that all happen on the UI thread.  I'm not
sure what other context you'd be able to use canvas in?

In any case, if for some reason a user needs more specific control
over double buffering, or wants to build up long frames over time or
whatever, they can render to a separate offscreen canvas, and then use
drawImage() to present that canvas to the user.

- Vlad