Re: [whatwg] question about the input tag attributes

2011-04-13 Thread Jukka K. Korpela

Nicola Di Fabrizio wrote:


is it possible to set an attribute for checking the element
if it is not empty


Yes, there's the attribute required, see
http://www.whatwg.org/specs/web-apps/current-work/multipage/common-input-element-attributes.html#the-required-attribute


like
<input type=text needed=yes>


Like <input type=text required>.

No value is needed; it's a boolean attribute (the name is somewhat 
misleading, but it is justified by the logic that the _presence_ or absence 
of the attribute in markup implies a boolean value - true or false - for 
the IDL attribute, a property of the object).
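For instance (a minimal markup sketch; the mere presence of the attribute makes the control required, and the IDL property input.required then reflects true):

```html
<!-- All equivalent: the presence of the attribute is what counts. -->
<input type="text" required>
<input type="text" required="">
<input type="text" required="required">
```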



if needed = yes and the element has no value
then a msgbox appears, like
Please input something...


Something like that, though so far this has been implemented in only a few 
browsers, e.g. Opera.



or the msgbox text is also an attribute
where you can put in your own message.


Well, not in markup, but HTML5 specifies the setCustomValidity() method for 
input fields, so you can set your own message using JavaScript.
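A minimal sketch of how that could look (the message text and the helper name are my own; setCustomValidity() itself is from the spec):

```javascript
// Hypothetical helper: choose a custom validation message for a
// required text input. An empty string means "no custom error".
function customMessageFor(input) {
  // validity.valueMissing is true when a required control is empty.
  return input.validity.valueMissing ? "Please input something..." : "";
}

// In a browser, one would install it on each input event:
//   var field = document.querySelector("input[required]");
//   field.addEventListener("input", function () {
//     field.setCustomValidity(customMessageFor(field));
//   });

// The helper itself needs no DOM, only an object with a validity flag:
console.log(customMessageFor({ validity: { valueMissing: true } }));
// → "Please input something..."
```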


--
Yucca, http://www.cs.tut.fi/~jkorpela/ 



Re: [whatwg] PeerConnection feedback

2011-04-13 Thread Harald Alvestrand
Since Ian seems to prefer to jumble all threads on a given group of 
issues together in one message, I'll attempt to use the same format this 
time.


On 04/12/11 04:09, Ian Hickson wrote:

On Tue, 29 Mar 2011, Harald Alvestrand wrote:

A lot of firewalls (including Google's, I believe) drop the subsequent
part of fragmented UDP packets, because it's impossible to apply
firewall rules to fragments without keeping track of all fragmented UDP
packets that are in the process of being transmitted (and keeping track
would open the firewalls to an obvious resource exhaustion attack).

This has made UDP packets larger than the MTU pretty useless.

So I guess the question is do we want to limit the input to a fixed value
that is the lowest used MTU (576 bytes per IPv4), or dynamically and
regularly determine what the lowest possible MTU is?

The former has a major advantage: if an application works in one
environment, you know it'll work elsewhere, because the maximum packet
size won't change. This is a serious concern on the Web, where authors tend
to do limited testing and thus often fail to handle rare edge cases well.

The latter has a major disadvantage: the path MTU might change, meaning we
might start dropping data if we don't keep trying to determine the Path
MTU. Also, it's really hard to determine the Path MTU in practice.

For now I've gone with the IPv4 minimum maximum of 576 minus overhead,
leaving 504 bytes for user data per packet. It seems small, but I don't
know how much data people normally send along these low-latency unreliable
channels.

However, if people want to instead have the minimum be dynamically
determined, I'm open to that too. I think the best way to approach that
would be to have UAs implement it as an experimental extension at first,
and for us to get implementation experience on how well it works. If
anyone is interested in doing that I'm happy to work with them to work out
a way to do this that doesn't interfere with UAs that don't yet implement
that extension.
The practical MTU of the current Internet is the Ethernet MTU: 1500 
bytes minus headers.
The IPv6 minimum maximum of 1280 bytes was chosen to leave some room 
for headers, tunnels and so on.


My suggestion would be to note that applications need to be aware that 
due to firewalls and other types of black holes, you might get 
consistent packet loss for packets larger than a given size, typically 
1280 bytes or 1480 bytes, and leave it at that.


On Tue, 29 Mar 2011, Harald Alvestrand wrote:

On 03/29/11 03:00, Ian Hickson wrote:

On Wed, 23 Mar 2011, Harald Alvestrand wrote:

Is there really an advantage to not using SRTP and reusing the RTP
format for the data messages?

Could you elaborate on how (S)RTP would be used for this? I'm all in
favour of deferring as much of this to existing protocols as possible,
but RTP seemed like massive overkill for sending game status packets.

If data was defined as an RTP codec (application/packets?), SRTP
could be applied to the packets.

It would impose a 12-byte header in front of the packet and the
recommended authentication tag at the end, but would ensure that we
could use exactly the same procedure for key exchange.

We already use SDP for key exchange for the data stream.
Yes, with a means of applying encryption that is completely unique to 
this specification. I'm not fond of novel cryptography designed by 
non-cryptographers; I've seen that done before.
(I've also seen flaws found in novel cryptography designed by 
cryptographers)



multiplexing of multiple data streams on the same channel using SSRC,

I don't follow. What benefit would that have?
If, for instance, an FPS wants one stream of events for bullet 
trajectories and another stream of events for sound-source movements, 
multiple data streams will allow the implementor to avoid inventing his 
own multiplexing layer.



and procedures for identifying the stream in SDP (if we continue to use
SDP) - I believe SDP implicitly assumes that all the streams it
describes are RTP streams.

That doesn't seem to be the case, but I could be misinterpreting SDP.
Currently, the HTML spec includes instructions on how to identify the
stream in SDP; if those instructions are meaningless due to a
misunderstanding of SDP then we should fix it (and in that case, it might
indeed make a lot of sense to use RTP to carry this data).

I'm not familiar with any HTTP-in-SDP spec; can you point out the reference?

I've been told that defining RTP packetization formats for a codec needs
to be done carefully, so I don't think this is a full specification, but
it seems that the overhead of doing so is on the same order of magnitude
as the currently proposed solution, and the security properties then
become very similar to the properties for media streams.

There are very big differences in the security considerations for media
data and the security considerations for the data stream. In particular,
the media data can't be generated by the author in any 

Re: [whatwg] PeerConnection constructor: Init string format

2011-04-13 Thread Harald Alvestrand

On 04/08/11 18:51, Glenn Maynard wrote:
On Fri, Apr 8, 2011 at 4:41 AM, Harald Alvestrand 
har...@alvestrand.no wrote:


My alternate proposal:
--
The initialization string looks like this:

{
  "stun_service": { "host": "stun.example.com",
                    "service": "stun",
                    "protocol": "udp" },
  "turn_service": { "host": "turn.example.com" }
}


FWIW, I thought the block-of-text configuration string was peculiar 
and unlike anything else in the platform.  I agree that using a 
configuration object (of some kind) makes more sense.
I'm a fan of recycling parsers, in particular those that can't result in 
active objects.


Whether the calling side or the callee side calls JSON.parse() to turn 
the string blob into an object that can be accessed using standard 
mechanisms is a question I'm relatively indifferent to.


I do suspect that we're going to have to extend these parameter blobs - 
for example, neither this proposal nor Ian's proposal gives a good 
mechanism for provisioning the security parameters for the TURN service. 
(STUN service is so cheap, it might be reasonable to run it without 
per-client provisioned security parameters).
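For what it's worth, the consuming side of the sketch above is a one-liner either way; assuming the field names from the proposal (they are illustrative, not a settled format), JSON.parse yields a plain object:

```javascript
// The init blob from the proposal above, as a string (the field names
// are from Harald's sketch and are hypothetical).
var init = JSON.stringify({
  stun_service: { host: "stun.example.com", service: "stun", protocol: "udp" },
  turn_service: { host: "turn.example.com" }
});

// Whichever side parses the blob gets an object it can read directly:
var config = JSON.parse(init);
console.log(config.stun_service.protocol);  // "udp"
console.log(config.turn_service.host);      // "turn.example.com"
```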


 Harald




Re: [whatwg] question about the input tag attributes

2011-04-13 Thread Randy Drielinger
See: 
http://www.whatwg.org/specs/web-apps/current-work/multipage/common-input-element-attributes.html#the-required-attribute




-Original Message- 
From: Nicola Di Fabrizio

Sent: Wednesday, April 13, 2011 9:41 AM
To: whatwg@lists.whatwg.org
Subject: [whatwg] question about the input tag attributes

Good day everyone,
I have a question about the new input tag attributes.

is it possible to set an attribute for checking the element
if it is not empty

like
<input type=text needed=yes>

if needed = yes and the element has no value
then a msgbox appears, like
Please input something...

or the msgbox text is also an attribute
where you can put in your own message.

what you think about the idea?
are there any discussion about something like this?

Many thanks and best wishes from Germany

nicola di fabrizio 



Re: [whatwg] question about the input tag attributes

2011-04-13 Thread DiFabrizio
thanks a lot for the fast response, it is exactly what I was looking for

nicola di fabrizio


Re: [whatwg] PeerConnection feedback

2011-04-13 Thread Stefan Håkansson LK
 

-Original Message-
From: Ian Hickson [mailto:i...@hixie.ch] 
Sent: den 12 april 2011 04:09
To: whatwg
Subject: [whatwg] PeerConnection feedback

On Tue, 29 Mar 2011, Stefan Håkansson LK wrote:
 The web application must be able to define the media format to 
 be used for the streams sent to a peer.

Shouldn't this be automatic and renegotiated dynamically via SDP 
offer/answer?
  
   Yes, this should be (re)negotiated via SDP, but what is unclear is 
   how the SDP is populated based on the application's preferences.
  
  Why would the Web application have any say on this? Surely the user 
  agent is in a better position to know what to negotiate, since it will 
  be doing the encoding and decoding itself.

 The best format of the coded media being streamed from UA a to UA b 
 depends on a lot of factors. An obvious one is that the codec used is 
 supported by both UAs. As you say, much of it can be handled without 
 any involvement from the application.
 
 But let's say that the app in UA a does addStream. The application in 
 UA b (the same application as in UA a) has two video elements, one 
 using a large display size, one using a small size. The UAs don't know 
 in which element the stream will be rendered at this stage (that will 
 only be known when the app in UA b connects the stream to one of the 
 elements at onaddstream), so I don't understand how the UAs can select 
 a suitable video resolution without the application giving some input. 
 (Once the stream is being rendered in an element the situation is 
 different - then UA b has knowledge about the rendering and could 
 somehow inform UA a.)

I had assumed that the video would at first be sent with some more or less 
arbitrary dimensions (maybe the native ones), and that the receiving UA 
would then renegotiate the dimensions once the stream was being displayed 
somewhere. Since the page can let the user change the video size 
dynamically, it seems the UA would likely need to be able to do that kind 
of dynamic update anyway.
Yeah, maybe that's the way to do it. But I think the media should be sent with
some sensible default resolution initially. Having a very high resolution could
congest the network, and a very low one would give a bad user experience until
the format has been renegotiated.

//Stefan


[whatwg] Initial video resolution (Re: PeerConnection feedback)

2011-04-13 Thread Harald Alvestrand

On 04/13/11 13:35, Stefan Håkansson LK wrote:



-Original Message-
From: Ian Hickson [mailto:i...@hixie.ch]
Sent: den 12 april 2011 04:09
To: whatwg
Subject: [whatwg] PeerConnection feedback


On Tue, 29 Mar 2011, Stefan Håkansson LK wrote:

The web application must be able to define the media format to
be used for the streams sent to a peer.

Shouldn't this be automatic and renegotiated dynamically via SDP
offer/answer?

Yes, this should be (re)negotiated via SDP, but what is unclear is
how the SDP is populated based on the application's preferences.

Why would the Web application have any say on this? Surely the user
agent is in a better position to know what to negotiate, since it will
be doing the encoding and decoding itself.

The best format of the coded media being streamed from UA a to UA b
depends on a lot of factors. An obvious one is that the codec used is
supported by both UAs. As you say, much of it can be handled without
any involvement from the application.

But let's say that the app in UA a does addStream. The application in
UA b (the same application as in UA a) has two <video> elements, one
using a large display size, one using a small size. The UAs don't know
in which element the stream will be rendered at this stage (that will
only be known when the app in UA b connects the stream to one of the
elements at onaddstream), so I don't understand how the UAs can select
a suitable video resolution without the application giving some input.
(Once the stream is being rendered in an element the situation is
different - then UA b has knowledge about the rendering and could
somehow inform UA a.)

I had assumed that the video would at first be sent with some more or less
arbitrary dimensions (maybe the native ones), and that the receiving UA
would then renegotiate the dimensions once the stream was being displayed
somewhere. Since the page can let the user change the <video> size
dynamically, it seems the UA would likely need to be able to do that kind
of dynamic update anyway.

Yeah, maybe that's the way to do it. But I think the media should be sent with
some sensible default resolution initially. Having a very high resolution could
congest the network, and a very low one would give a bad user experience until
the format has been renegotiated.
One possible initial resolution is 0x0 (no video sent); if the initial 
addStream callback is called as soon as the ICE negotiation concludes, 
the video recipient can set up the destination path so that it knows 
what a sensible resolution is, and can signal that back.


Of course, this means that after the session negotiation and the ICE 
negotiation, we have to wait for the resolution negotiation before we 
have any video worth showing.


Re: [whatwg] Accept full CSS colors in the legacy color parsing algorithm

2011-04-13 Thread Philip Taylor
On Fri, Apr 8, 2011 at 10:26 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 4/8/11 1:54 PM, Tab Atkins Jr. wrote:

 In the legacy color parsing algorithm [...]
 Could we change those two steps to just say "If keyword is a valid CSS
 color value, then return the simple color corresponding to that
 value"?  (I guess, to fully match WebKit, you need to change the
 definition of simple color to take alpha into account.)

 Do you have web compat data here?

I don't know if this is relevant or useful but anyway:
http://philip.html5.org/data/font-colors.txt has some basic data for
<font color> values, http://philip.html5.org/data/bgcolors.txt for
<body bgcolor>. (Each line is the number of URLs that value was found
on (from the set from
http://philip.html5.org/data/dotbot-20090424.txt), followed by the
XML-encoded value.)

-- 
Philip Taylor
exc...@gmail.com


Re: [whatwg] Canvas.getContext error handling

2011-04-13 Thread Kenneth Russell
On Tue, Apr 12, 2011 at 4:32 PM, Glenn Maynard gl...@zewt.org wrote:
 Based on some discussion[1], it looks like a clean way to handle the
 permanent failure case is: If the GPU is blacklisted, or any other
 permanent error occurs, treat "webgl" as an unsupported context.  This means
 instead of WebGL's context creation algorithm executing and returning null,
 it would never be run at all; instead, step 2 of getContext[2] would return
 null.

 For transient errors, eg. too many open contexts, return a WebGL context in
 the lost state as Kenneth described.

 It was mentioned that the GPU blacklist can change as the browser runs.
 That's supported with this method, since whether a context type is
 supported or not can change over time.

 Are there any cases where this wouldn't work?

 (I'm not sure if or how webglcontextcreationerror fits in this.  It would
 either go away entirely, or be wedged between steps 1 and 2 of getContext; I
 don't know how WebGL would specify that.)

Thanks for the pointer to the IRC logs. It looks like it was a useful
discussion.

It's essential to be able to report more detail about why context
creation failed. We have already received a lot of feedback from users
and developers of popular projects like Google Body that doing so will
reduce end user frustration and provide them a path toward getting the
content to work.

At a minimum, we need to either continue to allow the generation of
webglcontextcreationerror at some point during the getContext() call,
throw an exception from getContext() in this case, or do something
else. Do you have a suggestion on which way to proceed?

-Ken

 [1] http://krijnhoetmer.nl/irc-logs/whatwg/20110413#l-77
 [2]
 http://dev.w3.org/html5/spec/the-canvas-element.html#dom-canvas-getcontext

 --
 Glenn Maynard



Re: [whatwg] Canvas.getContext error handling

2011-04-13 Thread Cedric Vivier
On Wed, Apr 13, 2011 at 05:16, Kenneth Russell k...@google.com wrote:
 (...)
 To sum up, in general I think that whenever getContext(webgl)
 returns null, it's unrecoverable in a high quality WebGL
 implementation.

Makes sense.
Applications could detect all possible context creation failure
scenarios with something like this:


var gl = canvas.getContext("webgl");
if (!gl) {
  if (!window.WebGLRenderingContext) {
    // Your browser does not support WebGL. Please upgrade your browser.
  } else {
    // WebGL could not be initialized on your setup. Please check
    // that your GPU is supported and/or upgrade your drivers.
  }
} else if (gl.isContextLost()) {
  // Not enough resources to initialize WebGL. Please try closing
  // tabs/programs...
  // (an application can, but is not required to, listen to
  // webglcontextlost and use statusMessage to give more information)
}


For the use case of detecting context restoration errors, we could
possibly add an 'isRestorable' boolean to the webglcontextlost event to
signal the app when restoration failed (and/or won't ever happen).


Therefore I believe we can get rid of webglcontextcreationerror
entirely, and we do not need to throw an exception either.


Regards,


[whatwg] Proposing canvas.toBlob(contentType)

2011-04-13 Thread Kyle Huey
Hello All,

Gecko 2.0 ships with a non-standard method on <canvas> named
mozGetAsFile(contentType, fileName).  We added this for internal use in our
UI.  It retrieves the contents of the canvas as a File object (at the time
Gecko did not support Blobs), encoded in the contentType according to the
same rules toDataURL uses.

I propose adding a toBlob(contentType) method to the canvas element in the
style of toDataURL.  This would greatly increase the options available to
developers for extracting data from a canvas element (a Blob can be saved to
disk, XHRed, etc.)
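A side note on why a Blob is attractive here: toDataURL hands back base64 text, which encodes every 3 bytes as 4 characters. A rough sketch of that overhead (the helper function is mine, not part of the proposal):

```javascript
// Length of the base64 encoding of byteCount bytes (with padding):
// base64 emits 4 output characters per 3-byte input group.
function base64Length(byteCount) {
  return 4 * Math.ceil(byteCount / 3);
}

// A ~300 kB PNG becomes ~400 k characters of data URL before an
// application can even start turning it back into binary.
console.log(base64Length(300000));  // 400000
```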

- Kyle


Re: [whatwg] Proposing canvas.toBlob(contentType)

2011-04-13 Thread Glenn Maynard
On Wed, Apr 13, 2011 at 6:02 PM, Kyle Huey m...@kylehuey.com wrote:
 Gecko 2.0 ships with a non-standard method on canvas named
 mozGetAsFile(contentType, fileName).  We added this for internal use in our
 UI.  It retrieves the contents of the canvas as a File object (at the time
 Gecko did not support Blobs) encoded in the contentType according to the
 same rules toDataURL uses.

 I propose adding a toBlob(contentType) method to the canvas element in the
 style of toDataURL.  This would greatly increase the options available to
 developers for extracting data from a canvas element (a Blob can be saved to
 disk, XHRed, etc.)

FYI: 
http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2010-December/029517.html

As I mentioned there, I strongly recommend this be an asynchronous
call.  Compressing images can take time, and even a small image that
only takes 300ms is enough to cause a hitch in the browser's UI.  For
example,

r = canvas.getReader();
r.onload = function(blob) { blob = r.result; }
r.readBlob();

following the pattern of FileReader (and probably borrowing from its
spec, after it stabilizes).

This allows browsers to optionally thread compression or (more likely)
run it in slices, and this API would allow Progress Events
(onprogress) to be supported later on, useful when compressing large
images (which may take several seconds).

-- 
Glenn Maynard


Re: [whatwg] Proposing canvas.toBlob(contentType)

2011-04-13 Thread Juriy Zaytsev
I would be in favor of this.

In my recent app — http://mustachified.com — I used `mozGetAsFile` to
retrieve file from canvas, append it to form data and send to an external
service via cross-domain request.

When mozGetAsFile was not available, I had to build the blob manually from
the canvas's data URL. Aside from the fact that it's more code to
transfer/maintain, and (likely) worse performance, the blob building also
relies on the presence of BlobBuilder, ArrayBuffer and Uint8Array — so it is
not always available.
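The fallback described here boils down to base64-decoding the data URL by hand. A sketch (the tiny data URL below is a stand-in for canvas.toDataURL(); BlobBuilder was the vendor-prefixed construction API of the era, so it appears only in comments):

```javascript
// Decode the base64 payload of a data: URL into raw bytes.
function dataUrlToBytes(dataUrl) {
  var base64 = dataUrl.split(",")[1];
  var binary = atob(base64);
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// "AAEC" is base64 for the three bytes 0x00 0x01 0x02.
var bytes = dataUrlToBytes("data:image/png;base64,AAEC");
console.log(bytes.length);  // 3

// In a browser of that era, the bytes then went into a BlobBuilder:
//   var bb = new BlobBuilder();
//   bb.append(bytes.buffer);
//   var blob = bb.getBlob("image/png");
```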

Source: http://mustachified.com/master.js

-- 
kangax

On Wed, Apr 13, 2011 at 6:02 PM, Kyle Huey m...@kylehuey.com wrote:

 Hello All,

 Gecko 2.0 ships with a non-standard method on canvas named
 mozGetAsFile(contentType, fileName).  We added this for internal use in our
 UI.  It retrieves the contents of the canvas as a File object (at the time
 Gecko did not support Blobs) encoded in the contentType according to the
 same rules toDataURL uses.

 I propose adding a toBlob(contentType) method to the canvas element in the
 style of toDataURL.  This would greatly increase the options available to
 developers for extracting data from a canvas element (a Blob can be saved
 to
 disk, XHRed, etc.)

 - Kyle



Re: [whatwg] Proposing canvas.toBlob(contentType)

2011-04-13 Thread David Levin
Shouldn't this API be async?

Returning a blob means that the size is available, which implies a sync
operation.

dave

On Wed, Apr 13, 2011 at 3:02 PM, Kyle Huey m...@kylehuey.com wrote:
 Hello All,

 Gecko 2.0 ships with a non-standard method on canvas named
 mozGetAsFile(contentType, fileName).  We added this for internal use in our
 UI.  It retrieves the contents of the canvas as a File object (at the time
 Gecko did not support Blobs) encoded in the contentType according to the
 same rules toDataURL uses.

 I propose adding a toBlob(contentType) method to the canvas element in the
 style of toDataURL.  This would greatly increase the options available to
 developers for extracting data from a canvas element (a Blob can be saved to
 disk, XHRed, etc.)

 - Kyle



Re: [whatwg] Canvas.getContext error handling

2011-04-13 Thread Glenn Maynard
On Wed, Apr 13, 2011 at 4:21 PM, Kenneth Russell k...@google.com wrote:

 It's essential to be able to report more detail about why context
 creation failed. We have already received a lot of feedback from users
 and developers of popular projects like Google Body that doing so will
 reduce end user frustration and provide them a path toward getting the
 content to work.


Hixie says this is a bad idea, for security reasons, and that the UA should
just tell the user directly:
http://krijnhoetmer.nl/irc-logs/whatwg/20110413#l-1056

That said, the discussion led to another approach:

Calling canvas.getContext("webgl", {async: true}) will cause it to *always*
return an object immediately, without attempting to initialize the
underlying drawing context.  This context starts out in the lost state.
As long as WebGL is supported by the browser, getContext will never return
null, even for blacklisted GPUs.  The context is initialized
asynchronously.  On success, webglcontextrestored is fired, as if the
context had just come back from a normal context loss.  On failure,
webglcontextcreationerror is fired with a statusMessage, and possibly a flag
indicating whether it's a permanent failure (GPU blacklisted) or a
recoverable one (insufficient resources).

If {async: true} isn't specified, then an initial context failure returns
null (using the unsupported contextId approach), and there's no interface
to get an error message--people should be strongly discouraged from using
this API (deprecating it if possible).

(If it's possible to make the backwards-incompatible change to remove sync
initialization entirely, that would be good to do, but I'm assuming it's
not.)

There are other fine details (such as feature detection, and possibly
distinguishing initializing from lost), but I'll wait for people to give
their thoughts before delving in deeper.  Aside from giving a consistent way
to report errors, this allows browsers to initialize WebGL contexts in the
background.
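To make the flow concrete, here is a sketch of how a page might consume the proposed {async: true} path. Everything here follows the description above and is hypothetical, including the permanent flag on the error event:

```javascript
// Hypothetical: turn a webglcontextcreationerror event into a
// user-facing message, distinguishing permanent from transient failure.
function describeCreationError(e) {
  return e.permanent
    ? "WebGL is unavailable on this machine: " + e.statusMessage
    : "WebGL is temporarily unavailable: " + e.statusMessage;
}

// Browser wiring (illustrative only):
//   var gl = canvas.getContext("webgl", { async: true });  // never null
//   canvas.addEventListener("webglcontextrestored", startRendering);
//   canvas.addEventListener("webglcontextcreationerror", function (e) {
//     showMessage(describeCreationError(e));
//   });

console.log(describeCreationError({ permanent: true,
                                    statusMessage: "GPU blacklisted" }));
```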

-- 
Glenn Maynard


Re: [whatwg] Canvas.getContext error handling

2011-04-13 Thread Kenneth Russell
On Wed, Apr 13, 2011 at 4:43 PM, Glenn Maynard gl...@zewt.org wrote:
 On Wed, Apr 13, 2011 at 4:21 PM, Kenneth Russell k...@google.com wrote:

 It's essential to be able to report more detail about why context
 creation failed. We have already received a lot of feedback from users
 and developers of popular projects like Google Body that doing so will
 reduce end user frustration and provide them a path toward getting the
 content to work.

 Hixie says this is a bad idea, for security reasons, and that the UA should
 just tell the user directly:
 http://krijnhoetmer.nl/irc-logs/whatwg/20110413#l-1056

 That said, the discussion led to another approach:

 Calling canvas.getContext("webgl", {async: true}) will cause it to *always*
 return an object immediately, without attempting to initialize the
 underlying drawing context.  This context starts out in the lost state.
 As long as WebGL is supported by the browser, getContext will never return
 null, even for blacklisted GPUs.  The context is initialized
 asynchronously.  On success, webglcontextrestored is fired, as if the
 context had just come back from a normal context loss.  On failure,
 webglcontextcreationerror is fired with a statusMessage, and possibly a flag
 indicating whether it's a permanent failure (GPU blacklisted) or a
 recoverable one (insufficient resources).

 If {async: true} isn't specified, then an initial context failure returns
 null (using the unsupported contextId approach), and there's no interface
 to get an error message--people should be strongly discouraged from using
 this API (deprecating it if possible).

 (If it's possible to make the backwards-incompatible change to remove sync
 initialization entirely, that would be good to do, but I'm assuming it's
 not.)

 There are other fine details (such as feature detection, and possibly
 distinguishing initializing from lost), but I'll wait for people to give
 their thoughts before delving in deeper.  Aside from giving a consistent way
 to report errors, this allows browsers to initialize WebGL contexts in the
 background.

Providing a programmatic status message about why WebGL initialization
failed (for example, that the user's card or driver is blacklisted) is
not a security issue. First, there would be no way to issue work to
the GPU to exploit any vulnerabilities that might exist, since the app
couldn't get a WebGLRenderingContext. Second, there wouldn't be
detailed enough information in the error message to find out what
graphics card is in use and attempt any other kind of targeted attacks
using other web rendering mechanisms.

Adding support for asynchronous initialization of WebGL is a good
idea, and should be proposed on public_webgl, but this discussion
should focus solely on improving the specification of the existing
synchronous initialization path, and its error conditions.

Given that the proposed asynchronous initialization path above uses
webglcontextcreationerror and provides a status message, I think that
should continue to be the error reporting mechanism for the current
initialization path. Then the introduction of any asynchronous
initialization path would be very simple: the application should
anticipate that it will receive a context lost event immediately,
rather than assuming it can immediately do its initialization. Error
reporting would be identical in the two scenarios.

-Ken


Re: [whatwg] Canvas.getContext error handling

2011-04-13 Thread Cedric Vivier
On Thu, Apr 14, 2011 at 09:01, Kenneth Russell k...@google.com wrote:
 Adding support for asynchronous initialization of WebGL is a good
 idea, and should be proposed on public_webgl, but this discussion
 should focus solely on improving the specification of the existing
 synchronous initialization path, and its error conditions.

I don't think the added complexity/verbosity provides any advantage
over my proposal above (for the applications that even desire to show
additional failure information).
Is there a scenario I overlooked?

Regards,


Re: [whatwg] Canvas.getContext error handling

2011-04-13 Thread Glenn Maynard
On Wed, Apr 13, 2011 at 9:01 PM, Kenneth Russell k...@google.com wrote:
 Adding support for asynchronous initialization of WebGL is a good
 idea, and should be proposed on public_webgl, but this discussion
 should focus solely on improving the specification of the existing
 synchronous initialization path, and its error conditions.

I only brought it up here because they're related.  If an async path exists,
it can affect the design of the sync path as well.

 Given that the proposed asynchronous initialization path above uses
 webglcontextcreationerror and provides a status message, I think that
 should continue to be the error reporting mechanism for the current
 initialization path.

So, the main difference from how it is now would be that getContext would
return an object, even on fatal errors, since WebGL can't return null from
context creation.  That seems to work, and it does minimize the number of
things that would need to change for async initialization.  It doesn't
distinguish between permanent and recoverable errors as we discussed
earlier, but that might just be overcomplicating things.  (If that's wanted
too, it could be supported by treating preventDefault on the error event the
same as on the lost event, saying "if it becomes possible to create a
context later, I'm prepared for it".)

User code for this is very simple:

var gl = canvas.getContext("webgl");
if (!gl) {
  // WebGL is not supported
} else if (gl.isContextLost()) {
  // WebGL could not be initialized; the error message can be received
  // from webglcontextcreationerror (or webglcontextlost)
}

On Wed, Apr 13, 2011 at 10:53 PM, Cedric Vivier cedr...@neonux.com wrote:
 I don't think the added complexity/verbosity provides any advantage
 over my proposal above (for the applications that even desire to show
 additional failure information).
 Is there a scenario I overlooked?

Another advantage of using webglcontextlost is that, if the context
restoration proposal in the other thread is accepted, you could
preventDefault during it, just as with any other time the event is
received.  It would tell the browser "if it ever becomes possible to create
the context in the future, give it to me (via webglcontextrestored)".  You
could do that with *creationerror as well, but it would be duplicate logic.

-- 
Glenn Maynard