Re: [whatwg] Java language bindings for HTML5

2010-05-19 Thread Shiki Okasaka
On Tue, May 18, 2010 at 3:27 PM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 18 May 2010 04:38:21 +0200, Shiki Okasaka sh...@google.com wrote:

 On Mon, May 17, 2010 at 6:27 PM, Kühn Wolfgang wo.ku...@enbw.com wrote:

 Hi,
 As for the html5 elements, will there be a new package org.w3c.dom.html5?

 This is our concern, too. Historically each W3C specification
 introduced its own module name. However, the recent specifications
 tend to omit the module specification in the IDL definition.

    cf.
 http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1380.html

 In the IDL files we used above, we chose module names that seem to be
 practical, but those are not part of the standard. Hopefully more
 people will revisit this issue sometime soon.

 Can't they all just use org.w3c.dom? We cannot make the interface names
 overlap anyway.

I think one module name for all of the Web platform would work fine
for programming languages that have joined the Web platform only recently.
But for languages like Java, I guess it would be nice to have a rule
for obtaining module names.

I'm curious how the directory name (cssom, workers, postmsg, etc.) is
assigned for each specification today. Can we use the same name as a
module name in most cases? It wouldn't work for cssom and
cssom-view, though.
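The rule Shiki asks about could be as simple as a mechanical mapping from the W3C spec shorthand to a package name. A purely illustrative Java sketch; the org.w3c.dom prefix and the hyphen-stripping rule are assumptions, not anything agreed in this thread:

```java
// Illustrative only: derive a Java package name from a W3C spec
// shorthand such as "workers" or "cssom-view".
public class ModuleNames {
    static String toPackage(String shorthand) {
        // Hyphens are not legal in Java package names, so drop them.
        return "org.w3c.dom." + shorthand.replace("-", "");
    }

    public static void main(String[] args) {
        System.out.println(toPackage("workers"));    // org.w3c.dom.workers
        System.out.println(toPackage("cssom-view")); // org.w3c.dom.cssomview
    }
}
```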

 - Shiki




 --
 Anne van Kesteren
 http://annevankesteren.nl/



Re: [whatwg] Java language bindings for HTML5

2010-05-19 Thread Anne van Kesteren

On Wed, 19 May 2010 08:40:16 +0200, Shiki Okasaka sh...@google.com wrote:
On Tue, May 18, 2010 at 3:27 PM, Anne van Kesteren ann...@opera.com  
wrote:

Can't they all just use org.w3c.dom? We cannot make the interface names
overlap anyway.


I think one module name for all of the Web platform would work fine
for programming languages that have joined the Web platform only recently.
But for languages like Java, I guess it would be nice to have a rule
for obtaining module names.

I'm curious how the directory name (cssom, workers, postmsg, etc.) is
assigned for each specification today. Can we use the same name as a
module name in most cases? It wouldn't work for cssom and
cssom-view, though.


Usually the WG working on the specifications comes up with a shorthand and  
proposes this to the W3C Team. Very ad-hoc. But they are unique in a way  
so I suppose they could be used. The thing I would like to avoid is having  
to put module {} around most interface definitions because it is not  
useful for most of the target audience.



--
Anne van Kesteren
http://annevankesteren.nl/


Re: [whatwg] Speech input element

2010-05-19 Thread Anne van Kesteren
On Tue, 18 May 2010 10:52:53 +0200, Bjorn Bringert bring...@google.com  
wrote:
On Tue, May 18, 2010 at 8:02 AM, Anne van Kesteren ann...@opera.com  
wrote:
I wonder how it relates to the device proposal already in the draft.  
In theory that supports microphone input too.


It would be possible to implement speech recognition on top of a
microphone input API. The most obvious approach would be to use
device to get an audio stream, and send that audio stream to a
server (e.g. using WebSockets). The server runs a speech recognizer
and returns the results.
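A rough sketch of the client-side half of that flow: capture a PCM buffer and split it into frames suitable for streaming to a recognition server. The transport itself is deliberately stubbed out (a real client would write each frame to a WebSocket connection); all names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: chunk a captured audio buffer into fixed-size
// frames, as a client streaming audio to a recognition server would.
public class AudioChunker {
    static List<byte[]> chunk(byte[] pcm, int frameSize) {
        List<byte[]> frames = new ArrayList<>();
        for (int off = 0; off < pcm.length; off += frameSize) {
            int len = Math.min(frameSize, pcm.length - off);
            byte[] frame = new byte[len];
            System.arraycopy(pcm, off, frame, 0, len);
            frames.add(frame); // a real client would send this frame now
        }
        return frames;
    }

    public static void main(String[] args) {
        byte[] capture = new byte[44100]; // 1 s of 8-bit 44.1 kHz mono
        List<byte[]> frames = chunk(capture, 4096);
        System.out.println(frames.size()); // 10 full frames + remainder = 11
    }
}
```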

Advantages of the speech input element:

- Web app developers do not need to build and maintain a speech
recognition service.

- Implementations can choose to use client-side speech recognition.
This could give reduced network traffic and latency (but probably also
reduced recognition accuracy and language support). Implementations
could also use server-side recognition by default, switching to local
recognition in offline or low bandwidth situations.

- Using a general audio capture API would require APIs for things like
audio encoding and audio streaming. Judging from the past results of
specifying media features, this may be non-trivial. The speech input
element turns all audio processing concerns into implementation
details.

- Implementations can have special UI treatment for speech input,
which may be different from that for general audio capture.


I guess I don't really see why this cannot be added on top of the <device>  
element. Maybe it is indeed better though to separate the two. The reason  
I'm mostly asking is that one reason we went with <device> rather than  
<input> is that the result of the user operation is not something that  
will partake in form submission. Now obviously a lot of use cases today  
for form controls do not partake in form submission but are handled by  
script, but all the controls that are there can be used as part of form  
submission. <input type=speech> does not seem like it can.




Advantages of using a microphone API:

- Web app developers get complete control over the quality and
features of the speech recognizer. This is a moot point for most
developers though, since they do not have the resources to run their
own speech recognition service.

- Fewer features to implement in browsers (assuming that a microphone
API would be added anyway).


Right, and I am pretty positive we will add a microphone API. What could  
be done, e.g., is that you have a speech recognition object of some sort  
to which you can feed the audio stream that comes out of <device>. (Or  
indeed you feed the stream to a server via WebSocket.)



--
Anne van Kesteren
http://annevankesteren.nl/


Re: [whatwg] Speech input element

2010-05-19 Thread Anne van Kesteren
On Tue, 18 May 2010 11:30:01 +0200, Bjorn Bringert bring...@google.com  
wrote:

Yes, I agree with that. The tricky issue, as Olli points out, is
whether and when the 'error' event should fire when recognition is
aborted because the user moves away or gets an alert. What does
XMLHttpRequest do?


I don't really see how the problem is the same as with synchronous  
XMLHttpRequest. When you do a synchronous request nothing happens to the  
event loop so an alert() dialog could never happen. I think you want  
recording to continue though. Having a simple dialog stop video  
conferencing for instance would be annoying. It's only script execution  
that needs to be paused. I'm also not sure if I'd really want recording to  
stop while looking at a page in a different tab. Again, if I'm in a  
conference call I'm almost always doing tasks on the side. E.g. looking up  
past discussions, scrolling through a document we're discussing, etc.



--
Anne van Kesteren
http://annevankesteren.nl/


Re: [whatwg] Speech input element

2010-05-19 Thread Satish Sampath

 I don't really see how the problem is the same as with synchronous
 XMLHttpRequest. When you do a synchronous request nothing happens to the
 event loop so an alert() dialog could never happen. I think you want
 recording to continue though. Having a simple dialog stop video conferencing
 for instance would be annoying. It's only script execution that needs to be
 paused. I'm also not sure if I'd really want recording to stop while looking
 at a page in a different tab. Again, if I'm in a conference call I'm almost
 always doing tasks on the side. E.g. looking up past discussions, scrolling
 through a document we're discussing, etc.


Can you clarify how the speech input element (as described in the current
API sketch) is related to video conferencing or a conference call, since it
doesn't really stream audio to any place other than potentially a speech
recognition server and feeds the result back to the element?

--
Cheers
Satish


Re: [whatwg] Speech input element

2010-05-19 Thread Anne van Kesteren
On Wed, 19 May 2010 10:22:54 +0200, Satish Sampath sat...@google.com  
wrote:

I don't really see how the problem is the same as with synchronous
XMLHttpRequest. When you do a synchronous request nothing happens to the
event loop so an alert() dialog could never happen. I think you want
recording to continue though. Having a simple dialog stop video  
conferencing
for instance would be annoying. It's only script execution that needs  
to be paused. I'm also not sure if I'd really want recording to stop  
while looking at a page in a different tab. Again, if I'm in a  
conference call I'm almost always doing tasks on the side. E.g. looking  
up past discussions, scrolling through a document we're discussing, etc.


Can you clarify how the speech input element (as described in the current
API sketch) is related to video conferencing or a conference call, since
it doesn't really stream audio to any place other than potentially a
speech recognition server and feeds the result back to the element?


Well, as indicated in the other thread I'm not sure whether this is the  
best way to do it. Usually we start with a lower-level API (i.e.  
microphone input) and build up from there. But maybe I'm wrong and speech  
input is a case that needs to be considered separately. It would still not  
be like synchronous XMLHttpRequest though.



--
Anne van Kesteren
http://annevankesteren.nl/


Re: [whatwg] Speech input element

2010-05-19 Thread James Salsman
On Wed, May 19, 2010 at 12:50 AM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 18 May 2010 10:52:53 +0200, Bjorn Bringert bring...@google.com 
 wrote:
...
 Advantages of the speech input element:

 - Web app developers do not need to build and maintain a speech
 recognition service.

But browser authors would, and it's not clear they will do so in a
cross-platform, compatible way.  Client devices with limited cache
memory sizes and battery power aren't very good at the Viterbi beam
search algorithm, which isn't helped much by small caches because it's
mostly random reads across wide memory spans.

 - Implementations can have special UI treatment for speech input,
 which may be different from that for general audio capture.

 I guess I don't really see why this cannot be added on top of the device
 element. Maybe it is indeed better though to separate the two. The reason
 I'm mostly asking is that one reason we went with device rather than
 input is that the result of the user operation is not something that will
 partake in form submission

That's not a good reason.  Audio files are uploaded with <input
type=file> all the time, but it wasn't until Flash made it possible
that browser authors started considering the possibilities of
microphone upload, even though they were urged to address the issue a
decade ago:

 From: Tim Berners-Lee ti...@w3.org
 Date: Fri, 31 Mar 2000 16:37:02 -0500
...
 This is a question of getting browser manufacturers to
 implement what is already in HTML. HTML 4 does already
 include a way of requesting audio input.  For instance,
 you can write:

 <INPUT name="audiofile1" type="file" accept="audio/*">

 and be prompted for various means of audio input (a recorder,
 a mixing desk, a file icon drag and drop receptor, etc).
 Here "file" does not mean "from a disk" but "large body of
 data with a MIME type".

 As someone who used the NeXT machine's lip service many
 years ago I see no reason why browsers should not implement
 both audio and video and still capture in this way.   There
 are many occasions that voice input is valuable. We have speech
 recognition systems in the lab, for example, and of course this
 is very much needed. So you don't need to convince me of
 the usefulness.

 However, browser writers have not implemented this!

 One needs to encourage this feature to be implemented, and
 implemented well.

 I hope this helps.

 Tim Berners-Lee

Further back in January, 2000, that same basic feature request had
been endorsed by more than 150 people, including:

* Michael Swaine - in his article, Sounds like... -
webreview.com/pub/98/08/21/frames  - mswa...@swaine.com - well-known
magazine columnist for and long-time editor-in-chief of Dr. Dobb's
Journal
* David Turner and Keith Ross of Institut Eurecom - in their
paper, Asynchronous Audio Conferencing on the Web -
www.eurecom.fr/~turner/papers/aconf/abstract.html -
{turner,ro...}@eurecom.fr
* Integrating Speech Technology in Language Learning SIG -
dbs.tay.ac.uk/instil - and InSTIL's ICARE committee, both chaired by
Lt. Col. Stephen LaRocca - gs0...@exmail.usma.army.mil - a language
instructor at the U.S. Military Academy
* Dr. Goh Kawai - g...@kawai.com - a researcher in the fields of
computer aided language instruction and speech recognition, and
InSTIL/ICARE founding member - www.kawai.com/goh
* Ruth Ross - r...@earthlab.com - IEEE Learning Technologies
Standards Committee - www.earthlab.com/RCR
* Phil Siviter - phil.sivi...@brighton.ac.uk - IEEE LTSC -
www.it.bton.ac.uk/staff/pfs/research.htm
* Safia Barikzai - s.barik...@sbu.ac.uk - IEEE LTSC - www.sbu.ac.uk/barikzai
* Gene Haldeman - g...@gene-haldeman.com - Computer Professionals
for Social Responsibility, Ethics Working Group
* Steve Teicher - steve-teic...@att.net - University of Central
Florida; CPSR Education Working Group
* Dr. Melissa Holland - mholl...@arl.mil - team leader for the
U.S. Army Research Laboratory's Language Technology Group
* Tull Jenkins - jenki...@atsc.army.mil - U.S. Army Training
Support Centers

However, W3C decided not to move forward with the implementation
details at http://www.w3.org/TR/device-upload because they were said
to be device dependent, which was completely meaningless, really.

Regards,
James Salsman


Re: [whatwg] Speech input element

2010-05-19 Thread Jeremy Orlow
Has anyone spent any time imagining what a microphone/video-camera API that
supports the video conference use case might look like?  If so, it'd be
great to see a link.

My guess is that it's going to be much more complicated and much more
invasive security-wise. Looking at Bjorn's proposal, it seems as though it
fairly elegantly supports the use cases while avoiding the need for explicit
permission requests (i.e. infobars, modal dialogs, etc), since permission is
implicitly granted every time it's used and permission is revoked when, for
example, the window loses focus.

I'd be very excited if a WG took a serious look at
microphone/video-camera/etc, but I suspect that speech to text is enough of
a special case (in terms of how it's often implemented in hardware and in
terms of security) that it won't be possible to fold it into a more general
microphone/video-camera/etc API without losing ease of use, which is pretty
central to the use cases listed in Bjorn's doc.

J

On Wed, May 19, 2010 at 9:30 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 19 May 2010 10:22:54 +0200, Satish Sampath sat...@google.com
 wrote:

 I don't really see how the problem is the same as with synchronous
 XMLHttpRequest. When you do a synchronous request nothing happens to the
 event loop so an alert() dialog could never happen. I think you want
 recording to continue though. Having a simple dialog stop video
 conferencing
 for instance would be annoying. It's only script execution that needs to
 be paused. I'm also not sure if I'd really want recording to stop while
 looking at a page in a different tab. Again, if I'm in a conference call I'm
 almost always doing tasks on the side. E.g. looking up past discussions,
 scrolling through a document we're discussing, etc.


 Can you clarify how the speech input element (as described in the current
 API sketch) is related to video conferencing or a conference call, since
 it doesn't really stream audio to any place other than potentially a speech
 recognition server and feeds the result back to the element?


 Well, as indicated in the other thread I'm not sure whether this is the
 best way to do it. Usually we start with a lower-level API (i.e. microphone
 input) and build up from there. But maybe I'm wrong and speech input is a
 case that needs to be considered separately. It would still not be like
 synchronous XMLHttpRequest though.



 --
 Anne van Kesteren
 http://annevankesteren.nl/



Re: [whatwg] Java language bindings for HTML5

2010-05-19 Thread Kühn Wolfgang
Hi,

In the future, I see a lot of libraries soft-implementing WebIDL interfaces
without binding against a standard interface, be it in Java, C# or C++.

This is not good, for many reasons. The most obvious are that consumers cannot
exchange implementations, and that implementors have no tool support to check
the conformance of their implementation.

A quick search reveals the following implementations for the
HTML5 Canvas Element alone:

Java
com.googlecode.gwt.graphics2d.client.canvas.HTMLCanvasElement
com.google.gwt.gears.client.canvas.Canvas
com.google.gwt.corp.gfx.client.canvas.CanvasElement
gwt.g2d.client.graphics.canvas.CanvasElement
gwt.ns.graphics.canvas.client.Canvas

C++
WebCore.html.HTMLCanvasElement (WebKit)
dom.nsIDOMHTMLCanvasElement (Firefox)

Other statically typed languages
org.milescript.canvas.HTMLCanvas
dom.HTMLCanvasElement (esidl)
com.w3canvas.ascanvas.HTMLCanvasElement
com.googlecode.flashcanvas.Canvas


Agreeing on a namespace does have far-reaching consequences, as the example of
org.w3c.dom.html.HTMLImageElement
in DOM Level 1 shows.

Because of a subtle change in the API, the W3C chose to rename the package to
org.w3c.dom.html2.HTMLImageElement
in DOM Level 2.

However, some 8 years later, the JRE only ships with org.w3c.dom.html,
and the Xerces DOM implementation and HTML parser only support Level 1.

Web-centric use cases for implementing in statically typed languages are:
* UA implementations such as WebKit or Gecko
* Cross-compiling to JavaScript (for example GWT)
* Automating browsers for testing and debugging

Greetings, Wolfgang

-Original Message-
From: Shiki Okasaka [mailto:sh...@google.com] 
Sent: Wednesday, 19 May 2010 05:22
To: Kühn Wolfgang
Cc: Anne van Kesteren
Subject: Re: [whatwg] Java language bindings for HTML5

Hi Kühn,

I think this is a very good point. Would you mind sending this to
wha...@lists.whatwg.org?

I wonder, if we apply this rule to HTML5, what would be the likely
module name for HTML today: html5, html101, or html2010? Any guesses?

Interface versioning is a very important topic for statically typed
languages like Java and C++. But I guess this is mainly a problem on
the programming language side; since HTML is growing very rapidly
these days, and browsers often implement draft specifications, we
cannot simply wait for the drafts to become recommendations. I'm very
interested in what would be the best way to deal with that in
statically typed languages.

Best,

 - Shiki


On Tue, May 18, 2010 at 7:25 PM, Kühn Wolfgang wo.ku...@enbw.com wrote:
 Hi,

 addition is possible. Modification is a problem. For example, there
 was a change in the semantics of HTMLImageElement from DOM Level 1 to
 Level 2:

 org.w3c.dom.html.HTMLImageElement
        String getHeight()

 org.w3c.dom.html2.HTMLImageElement
        int getHeight()

 These two definitions are not compatible and must be in 
different namespaces.
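The clash can be shown directly in Java. In this minimal sketch (the nested interfaces are stand-ins for the real org.w3c.dom bindings, not the actual ones), no single class can implement both levels, because Java forbids two methods that differ only in return type:

```java
// Sketch: why the DOM Level 1 and Level 2 image bindings need
// separate packages / namespaces.
public class HeightBindingDemo {
    // DOM Level 1 flavor: height was a DOMString (e.g. "100" or "100%").
    interface HTMLImageElementL1 { String getHeight(); }

    // DOM Level 2 flavor: height is a pixel count.
    interface HTMLImageElementL2 { int getHeight(); }

    // A class can satisfy only one of the two. Adding
    // "implements HTMLImageElementL1" here would not compile:
    // getHeight() cannot return both String and int.
    static class Image implements HTMLImageElementL2 {
        public int getHeight() { return 100; }
    }

    public static void main(String[] args) {
        System.out.println(new Image().getHeight());
    }
}
```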


 Greetings, Wolfgang

 -Original Message-
 From: Anne van Kesteren [mailto:ann...@opera.com]
 Sent: Tuesday, 18 May 2010 08:28
 To: Kühn Wolfgang; Shiki Okasaka
 Cc: whatwg@lists.whatwg.org
 Subject: Re: [whatwg] Java language bindings for HTML5

 On Tue, 18 May 2010 04:38:21 +0200, Shiki Okasaka 
sh...@google.com wrote:
 On Mon, May 17, 2010 at 6:27 PM, Kühn Wolfgang 
wo.ku...@enbw.com wrote:
 Hi,
 As for the html5 elements, will there be a new package
 org.w3c.dom.html5?

 This is our concern, too. Historically each W3C specification
 introduced its own module name. However, the recent specifications
 tend to omit the module specification in the IDL definition.

     cf.
 
http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1380.html

 In the IDL files we used above, we chose module names that 
seem to be
 practical, but those are not part of the standard. Hopefully more
 people will revisit this issue sometime soon.

 Can't they all just use org.w3c.dom? We cannot make the 
interface names
 overlap anyway.


 --
 Anne van Kesteren
 http://annevankesteren.nl/



Re: [whatwg] Window events that bubble?

2010-05-19 Thread Randy Drielinger

Of course, in a theoretical future where we'd add an object above
the Window object, these events would bubble to that object. But
that's not the case today as no such object exists.


This is actually already used nowadays. Whenever you implement a browser 
object in another application (like the IE object in Visual Studio), you can 
capture these bubbled events in your application. Good thing that it 
actually stayed in the specs :)


We use that in our call scripter tool for contact center agents.

Regards


- Original Message - 
From: Jonas Sicking jo...@sicking.cc

To: David Flanagan da...@davidflanagan.com
Cc: whatwg@lists.whatwg.org
Sent: Tuesday, May 18, 2010 12:12 AM
Subject: Re: [whatwg] Window events that bubble?


On Mon, May 17, 2010 at 3:07 PM, David Flanagan da...@davidflanagan.com 
wrote:

Section 6.5.9 History Traversal defines popstate and hashchange events
that are fired on the Window object. It specifies that these events *must*
bubble. Where should they bubble to? What does it mean to bubble up from a
Window? These events aren't going across frames, are they?

Is the specification that they must bubble a formality because existing
implementations set the bubbles property of the Event object to true? Or
does it actually have some impact on event propagation?


My understanding is that the only noticeable effect of defining that
these events bubble is that their bubbles property is set to true.
Same thing happens to bubbling events fired at an XMLHttpRequest
object.

Of course, in a theoretical future where we'd add an object above
the Window object, these events would bubble to that object. But
that's not the case today as no such object exists.

/ Jonas



[whatwg] Fwd: INCLUDE and links with @rel=embed

2010-05-19 Thread Bjartur Thorlacius
Forwarding a message 'cause I forgot to CC WHATWG so it got stuck in moderation.

-- Forwarded message --
From: bjartur svartma...@gmail.com
Date: Tue, 18 May 2010 21:20:30 +
Subject: Re: [whatwg] INCLUDE and links with @rel=embed
To: Tab Atkins Jr. jackalm...@gmail.com



Is the existing syntax backwards compatible? When using <a>, you get
a nice link as fallback content automagically, not requiring any
special workarounds by the content author. AFAICT you don't even get
that when using a browser that doesn't support <audio> and <video>.
What I'm trying to write is that not all browsers support JavaScript,
not all pages must be able to control playback more than play, pause,
seek etc and that a mechanism for linking to files and alternative
encodings thereof semantically. Currently, that seems to be only
possible with video and audio. But if you create a media element that
adds no extra interface, you get this for all other types as well
(albeit with a lesser scripting interface). Although the include
element won't be as good integration point between one media and
scripts, it will have a standard interface somewhat applicable to many
medias/mediums and at least provide something to all medias, versus
(close to) nothing.

I also propose using normal, backwards compatible <a> elements with
another relation (rel=embed) rather than a new <source> element, as the
use case of <a> (linking to resources) seems to cover multimedia
resources as well, and because it's more backwards compatible than a
brand new element -- in fact it can remove the need to add a fallback
link that would otherwise have to be added for unsupporting browsers,
or, even worse, be forgotten (though authors will still be able to add
other fallback content if they so desire).

At last, I propose not forbidding usage of @rel=embed outside of
media elements.

---

<a> is more widely implemented than <source>, and links to multimedia
are also links, so it seems to me that <a> should be used. Otherwise you
have to re-add attributes like hreflang, charset, type, etc. to an
element with almost exactly the same meaning but slightly different
behaviour. Using <source> is like adding a new element for paragraphs
that should be colored, rather than using <p>.

"The source element allows authors to specify multiple alternative media
resources for media elements. It does not represent anything on its own."
I want to change this to allow authors to specify alternative
resources for any content. And possibly to make it represent the linked
media itself if it appears by itself (not inside a media element) and/or
doesn't have the alternate keyword in @rel.

<source> vs <a rel=embed> aside, I don't see the cost of introducing a
new media element that just exposes the interface HTMLMediaElement, so
that resources which don't need audio- or video-specific scripting
interfaces, and/or media for which such interfaces haven't been
standardized by the WHATWG/W3C, can still embed multiple alternative
<a rel=embed>s, tracks, etc.

Silvia,
I don't understand what you mean when you say that <audio> and
<video> need to be separate elements so they can be formatted
properly (the reason possibly being that I've never coded in Flash).
But it's not totally unclear what the content will be if the author
uses proper metadata (which can be mandated by the spec).
Embedding metadata about external resources in tag names
is abusing their purpose. It's especially bad when you don't even
provide one element per MIME type (like <model> and <text>).

 Bjartur, I do wonder what exactly your motivation is in suggesting
 audio and video be degraded back to undetermined embedded outside
 content. What would be won by going back to that way? Lots will be
 lost though - if you have ever developed a Website with Flash video,
 you will know how much work is involved in creating a specific
 JavaScript API to the Flash video player so that you can have some
 interaction between the rest of the Webpage and the video or audio
 resource. We honestly don't ever want to go back to that way.
I'm not writing that you must throw <audio> and <video> out of the
window. You should add a new element: <include>. That element
will be a media element, and might expose HTMLMediaElement (if it's
not deemed too audio- and video-specific) or some subset/superset
thereof. It will not put as many restrictions on what type of media
may be linked to with it (and in particular: it will not require that
media to be of a certain MIME type). If <audio> and <video> can't be
generalized to expose some interfaces based on what the underlying
media is capable of, or to throw exceptions when scripts try to do
impossible things (like playing static/non-linear media or media that
hasn't been loaded yet), then keep <audio> and <video>. Just don't make
the features of those elements that don't require media-specific
scripting interfaces, like multiple alternative resources, exclusive to
those first-class media types.
Allowing scripting second-class 

Re: [whatwg] Fwd: INCLUDE and links with @rel=embed

2010-05-19 Thread Silvia Pfeiffer
On Wed, May 19, 2010 at 9:56 PM, Bjartur Thorlacius
svartma...@gmail.com wrote:
 [...]

Re: [whatwg] Fwd: INCLUDE and links with @rel=embed

2010-05-19 Thread bjartur

 This all seems way too abstract - I think you are arguing for the
 wrong case with the right reasons. But in any case, you should try and
 make an example markup with your ideas and check if it really gives
 you what you think it will. I have sincere doubts.
Yeah, maybe my crazy idealism and tendency to reuse existing things don't mix 
well in this case.
The main purpose of video and audio is to create a scripting interface to 
online video.
But they also add new linking capabilities which should be available to any 
content whatsoever.
OK, cutting myself loose and using the current draft only as a vague guideline.

<include href="./gpl" title="GNU General Public License"> <!-- URI of general resource -->
    <include rel="alternate" href="./gpl.en" hreflang="en_US" title="GPL">
        <!-- Some other way of marking alternatives would be better -->
        <!-- may add hreflang to all links herein for maximum compatibility -->
        <a rel="embed" href="./gpl.en.text" type="text/plain">...</a>
        <a rel="embed" href="./gpl.en.html" type="text/html" charset="...">...</a>
    </include>
    <include rel="alternate" href="./gpl.fr" hreflang="fr" title="GPL">
        <!-- similar to above -->
    </include>
    <include href="./gpl-notes" title="Some secondary resource that should be bundled with the GPL">
        <a rel="alternate embed" href="./gpl-notes.en" hreflang="en_US">Notes on GPL</a>
        <a rel="alternate embed" href="./gpl-notes.fr" hreflang="is" lang="is">Um GPL</a>
    </include>
</include>

A more sensible rewrite:

<choose href="./gpl" title="GNU General Public License" id="gpl">
    <choose href="./gpl.en" hreflang="en_US">
        <a rel="embed" href="./gpl.en.text" type="text/plain">...</a>
        <a rel="embed" href="./gpl.en.html" type="text/html" charset="...">...</a>
    </choose>
    <choose href="./gpl.fr" hreflang="fr">
        <!-- ... -->
    </choose>
</choose>
<choose href="./gpl-notes" title="Notes on GPL" id="notes">
    <!-- ... -->
</choose>

Note: choose states a relationship between alternative resources; it does not mean
the UA must choose to render only one of them. The element might be named alt.

Note: Requires more metadata to state relationship between #gpl and #notes.

I think that making one of those elements a media element might make some sense.
Also, I think that a rel=embed should be substituted for source, and that @src
of video and audio should be removed, with a rel=embed used instead.
If that breaks scripting, ignore my suggestion.

Radically changing link might also fulfill these use cases.
That's not compatible with existing implementations though,
and that might require allowing link inside of body.

Do you think that <link rdf:about="./gpl" rel="alternate" hreflang="fr" href="./gpl.fr">
is better? Or

<div rdf:about="./gpl">
    <div rdf:about="./gpl.en" about-/href-lang="en_US">
        <link rel="alternate" type="text/plain" href="./gpl.en.text">
        <link rel="alternate" type="text/html"  href="./gpl.en.html">
    </div>
    <link rel="alternate" hreflang="fr" href="./gpl.fr">
</div>

Although it breaks compatibility with browsers, it expresses the desired info.
This feels like something that might be in an XHTML+XLink document.


[whatwg] Need more diagnostic information for ApplicationCache events

2010-05-19 Thread Patrick Mueller

I've been playing with application cache for a while now, and
found the diagnostic information available to be sorely
lacking.

For example, to diagnose user-land errors that occur when using
appcache, this is the only practical tool I have at my disposal:

   tail -f /var/log/apache2/access_log /var/log/apache2/error_log

I'd like to be able to get the following information:

- during progress events (as identified in step 17 of the application
cache download process steps in 6.6.4, "Downloading or updating an
application cache"), I'd like to have the URL of the resource that
is about to be downloaded.  The progress event from step 18
(indicating all resources have been downloaded) doesn't need this.

- for all error conditions, some indication of WHAT error occurred.
Presumably an error code.  If the error involved a particular resource,
I'd like the URL of the resource as well.

I'm not sure what the best mechanisms might be to provide this info:

- extend the events used to add this information

- provide this information in the ApplicationCache interface -
lastErrorCode, lastResourceDownloaded, etc

- define a new object as the target for these events (currently
undefined, or at least not clear to me), and add that info to the target
- something else

In terms of cleanliness, I'd prefer the third, I think, which would imply
creating a new interface, perhaps called ApplicationCacheStatus or 
something.


Actually, creating a new interface to hold this sort of information
might be the best thing to do for all three of the mechanisms I
suggested - extend the events used with a new attribute of type
ApplicationCacheStatus, and add a new attribute to ApplicationCache
of type ApplicationCacheStatus, etc.
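
A rough sketch of what that shared status object could look like. All names
below are hypothetical, invented purely for illustration; nothing here exists
in any spec or browser:

```typescript
// Hypothetical shapes only -- ApplicationCacheStatus and the event shape
// are illustrative inventions, not part of HTML5 or any implementation.
interface ApplicationCacheStatus {
  lastErrorCode: number | null; // WHAT error occurred, if any
  resourceURL: string | null;   // URL of the resource involved, if any
}

interface ApplicationCacheEventSketch {
  type: "progress" | "error";
  status: ApplicationCacheStatus;
}

// The kind of diagnostic a page could then log itself, instead of
// having to tail the server's access and error logs:
function describeCacheEvent(e: ApplicationCacheEventSketch): string {
  if (e.type === "error") {
    return "appcache error " + e.status.lastErrorCode +
           " on " + e.status.resourceURL;
  }
  return "downloading " + e.status.resourceURL;
}
```

The same status object could equally serve all three mechanisms above: as a
new attribute on the events, as an attribute on ApplicationCache, or as the
event target itself.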

--
Patrick Mueller - http://muellerware.org



Re: [whatwg] Java language bindings for HTML5

2010-05-19 Thread Benjamin Smedberg

On 5/19/10 5:41 AM, Kühn Wolfgang wrote:


C++
WebCore.html.HTMLCanvasElement (WebKit)
dom.nsIDOMHTMLCanvasElement (Firefox)


Mozilla nsI* interfaces, if they continue to exist, should be treated as 
internal. We have little interest in binding to a frozen interface 
definition. The interfaces may change or be extended in the future.


--BDFS


Re: [whatwg] Speech input element

2010-05-19 Thread David Singer
I am a little concerned that we are increasingly breaking down a metaphor, a 
'virtual interface' without realizing what that abstraction buys us.  At the 
moment, we have the concept of a hypothetical pointer and hypothetical 
keyboard, (with some abstract states, such as focus) that you can actually 
drive using a whole bunch of physical modalities.  If we develop UIs that are 
specific to people actually speaking, we have 'torn the veil' of that abstract 
interface.  What happens to people who cannot speak, for example? Or who cannot 
say the language needed well enough to be recognized?


David Singer
Multimedia and Software Standards, Apple Inc.



Re: [whatwg] Speech input element

2010-05-19 Thread timeless
On Thu, May 20, 2010 at 12:38 AM, David Singer sin...@apple.com wrote:
 I am a little concerned that we are increasingly breaking down a metaphor,
 a 'virtual interface' without realizing what that abstraction buys us.

I'm more than a little concerned about this and hope that we tread
much more carefully than it seems some parties are willing to do. I'm
glad I'm not alone.

 At the moment, we have the concept of a hypothetical pointer and hypothetical
 keyboard, (with some abstract states, such as focus) that you can actually 
 drive
 using a whole bunch of physical modalities.

 If we develop UIs that are specific to people actually speaking, we have
 'torn the veil' of that abstract interface.  What happens to people who cannot
 speak, for example? Or who cannot say the language needed well enough
 to be recognized?


[whatwg] forwarded: Google opens VP8 video codec

2010-05-19 Thread James Salsman
 From: David Gerard dger...@gmail.com
 Subject: [Wikitech-l] VP8 freed!
 To: Wikimedia developers, Wikimedia Commons Discussion List

 http://www.webmproject.org/

 http://openvideoalliance.org/2010/05/google-frees-vp8-codec-for-html5-the-webm-project/?l=en

 http://www.h-online.com/open/news/item/Google-open-source-VP8-as-part-of-the-WebM-Project-1003772.html

 Container will be .webm, a modified version of Matroska. Audio is Ogg Vorbis.

 YouTube is serving up .webm *right now*. Flash will also include .webm.


Re: [whatwg] Should scripts and plugins in contenteditable content be enabled or disabled?

2010-05-19 Thread Robert O'Callahan
On Wed, May 19, 2010 at 5:35 AM, Ojan Vafai o...@chromium.org wrote:

 The webkit behavior of allowing all scripts makes the most sense to me. It
 should be possible to disable scripts, but that capability shouldn't be tied
 to editability. The clean solution for the CKEditor developer is to use a
 sandboxed iframe.


Discussion led to the point that there's a fundamental conflict between
sandboxed iframes and JS-based framebusting techniques. The point of
https://bugzilla.mozilla.org/show_bug.cgi?id=519928 is that Web sites using
JS-based techniques to prevent clickjacking can be thwarted if the
containing page has a way to disable JS in the child document. Currently
'designmode' is usable that way in Gecko, but 'sandbox' would work even
better.

Maybe sites should all move to declarative techniques such as CSP or
X-Frame-Options (although there are suggestions that maybe they don't want
to for some reason --- see
https://bugzilla.mozilla.org/show_bug.cgi?id=519928#c5 ). But there are
still issues with existing sites. Should we care?

Rob
-- 
He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all. [Isaiah
53:5-6]


Re: [whatwg] forwarded: Google opens VP8 video codec

2010-05-19 Thread Nils Dagsson Moskopp
James Salsman jsals...@talknicer.com schrieb am Wed, 19 May 2010
14:58:38 -0700:

  Container will be .webm, a modified version of Matroshka. Audio is
  Ogg Vorbis.

You mean Vorbis. </pedantic> ;)

-- 
Nils Dagsson Moskopp // erlehmann
http://dieweltistgarnichtso.net




Re: [whatwg] forwarded: Google opens VP8 video codec

2010-05-19 Thread David Gerard
On 20 May 2010 00:34, Nils Dagsson Moskopp
nils-dagsson-mosk...@dieweltistgarnichtso.net wrote:
 James Salsman jsals...@talknicer.com schrieb am Wed, 19 May 2010
 14:58:38 -0700:

  Container will be .webm, a modified version of Matroshka. Audio is
  Ogg Vorbis.

 You mean Vorbis. /pedantic ;)


*cough*

The x264 developers don't think much of VP8; they think it's just not ready:

http://x264dev.multimedia.cx/?p=377

OTOH, that may not end up mattering.


- d.


Re: [whatwg] forwarded: Google opens VP8 video codec

2010-05-19 Thread ニール・ゴンパ
On Wed, May 19, 2010 at 6:38 PM, David Gerard dger...@gmail.com wrote:

 On 20 May 2010 00:34, Nils Dagsson Moskopp
 nils-dagsson-mosk...@dieweltistgarnichtso.net wrote:
  James Salsman jsals...@talknicer.com schrieb am Wed, 19 May 2010
  14:58:38 -0700:

   Container will be .webm, a modified version of Matroshka. Audio is
   Ogg Vorbis.

  You mean Vorbis. /pedantic ;)


 *cough*

 x264 don't think much of VP8, they think it's just not ready:

 http://x264dev.multimedia.cx/?p=377

 OTOH, that may not end up mattering.


 - d.


Given that the main reason against Theora was the fact that hardware devices
supported baseline profile H.264 (which looks terrible compared to the other
profiles), I think VP8 may be fine. VP8 already has hardware decoder chip
support, so that isn't an issue. Patents aren't an issue, since Google has
dealt with that.

Nevertheless, Firefox already has support for it in the trunk, Opera
released a labs build that adds a GStreamer plugin for WebM to their builds,
and Chrome trunk added support for it.

Adobe announced support for VP8 in a future version of Flash, and probably
Silverlight will have it too. Whether they'll include complete WebM support
is unknown, though.


Re: [whatwg] Should scripts and plugins in contenteditable content be enabled or disabled?

2010-05-19 Thread Adam Barth
On Wed, May 19, 2010 at 4:32 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Wed, May 19, 2010 at 5:35 AM, Ojan Vafai o...@chromium.org wrote:

 The webkit behavior of allowing all scripts makes the most sense to me. It
 should be possible to disable scripts, but that capability shouldn't be tied
 to editability. The clean solution for the CKEditor developer is to use a
 sandboxed iframe.

 Discussion led to the point that there's a fundamental conflict between
 sandboxed iframes and JS-based framebusting techniques. The point of
 https://bugzilla.mozilla.org/show_bug.cgi?id=519928 is that Web sites using
 JS-based techniques to prevent clickjacking can be thwarted if the
 containing page has a way to disable JS in the child document. Currently
 'designmode' is usable that way in Gecko, but 'sandbox' would work even
 better.

 Maybe sites should all move to declarative techniques such as CSP or
 X-Frame-Options (although there are suggestions that maybe they don't want
 to for some reason --- see
 https://bugzilla.mozilla.org/show_bug.cgi?id=519928#c5 ). But there are
 still issues with existing sites. Should we care?

Virtually none of the JavaScript framebusting scripts used by web
sites are effective.  You can build one that is effective, but you
need to build it with the idea that JavaScript might be disabled, and,
in that case, it will play nice with @sandbox.  I'd recommend that
sites use something declarative, such as X-Frame-Options or
Content-Security-Policy.
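
For reference, the "fail-closed" pattern alluded to above looks roughly like
this (a sketch, not a drop-in solution; the helper name is my own):

```typescript
// Testable core of the decision: a page is framed when its own window
// object is not the top-level window object.
function isFramed(selfWin: object, topWin: object): boolean {
  return selfWin !== topWin;
}

// In an actual page the pattern is (browser-only, shown as comments):
//
//   <style> html { display: none; } </style>
//   <script>
//     if (self === top) {
//       // not framed: reveal the page
//       document.documentElement.style.display = "block";
//     } else {
//       // framed: bust out of the enclosing frame
//       top.location = self.location;
//     }
//   </script>
//
// Because the page starts hidden, a sandboxed iframe that disables
// script leaves it hidden rather than clickjackable -- this is the
// sense in which such a script "plays nice" with @sandbox.
```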

Adam


Re: [whatwg] Should scripts and plugins in contenteditable content be enabled or disabled?

2010-05-19 Thread Collin Jackson
On Wed, May 19, 2010 at 4:57 PM, Adam Barth w...@adambarth.com wrote:

 Virtually none of the JavaScript framebusting scripts used by web
 sites are effective.


Yes. If anyone would like to see more evidence of this, here's a recent
study of the Alexa Top 500 web sites. None of them were framebusting
correctly with JavaScript.

http://w2spconf.com/2010/papers/p27.pdf

Collin


Re: [whatwg] forwarded: Google opens VP8 video codec

2010-05-19 Thread Silvia Pfeiffer
2010/5/20 Sir Gallantmon (ニール・ゴンパ) ngomp...@gmail.com:
 On Wed, May 19, 2010 at 6:38 PM, David Gerard dger...@gmail.com wrote:

 On 20 May 2010 00:34, Nils Dagsson Moskopp
 nils-dagsson-mosk...@dieweltistgarnichtso.net wrote:
  James Salsman jsals...@talknicer.com schrieb am Wed, 19 May 2010
  14:58:38 -0700:

   Container will be .webm, a modified version of Matroshka. Audio is
   Ogg Vorbis.

  You mean Vorbis. /pedantic ;)


 *cough*

 x264 don't think much of VP8, they think it's just not ready:

 http://x264dev.multimedia.cx/?p=377

 OTOH, that may not end up mattering.


 - d.

 Given that the main reason against Theora was the fact that hardware devices
 supported baseline profile H.264 (which looks terrible compared to the other
 profiles), I think VP8 may be fine. VP8 already has hardware decoder chip
 support, so that isn't an issue. Patents aren't an issue, since Google has
 dealt with that.


Apologies, but how has Google dealt with patents? They make the ones
they bought from On2 available for free - which is exactly the same
situation as for Theora. They don't indemnify anyone using WebM.

However, I do appreciate that for any commercial entity having to
choose between the patent risk on Theora and the one on WebM, it is an
easy choice, because Google would join such a court case for WebM and
their massive financial status just doesn't compare to Xiph's. ;-)


 Nevertheless, Firefox already has support for it in the trunk, Opera
 released a labs build that adds a GStreamer plugin for WebM to their builds,
 and Chrome trunk added support for it.
 Adobe announced support for VP8 in a future version of Flash, and probably
 Silverlight will have it too. Whether they'll include complete WebM support
 is unknown, though.

I think that, with the weight Google has in the market, they may well
be able to push WebM through - in particular with the help of Adobe
(ironically). We may yet see a solution to the baseline codec question,
and it will be a free codec, yay!

Cheers,
Silvia.


Re: [whatwg] forwarded: Google opens VP8 video codec

2010-05-19 Thread ニール・ゴンパ
2010/5/19 Silvia Pfeiffer silviapfeiff...@gmail.com

 2010/5/20 Sir Gallantmon (ニール・ゴンパ) ngomp...@gmail.com:
  On Wed, May 19, 2010 at 6:38 PM, David Gerard dger...@gmail.com wrote:
 
  On 20 May 2010 00:34, Nils Dagsson Moskopp
  nils-dagsson-mosk...@dieweltistgarnichtso.net wrote:
   James Salsman jsals...@talknicer.com schrieb am Wed, 19 May 2010
   14:58:38 -0700:
 
Container will be .webm, a modified version of Matroshka. Audio is
Ogg Vorbis.
 
   You mean Vorbis. /pedantic ;)
 
 
  *cough*
 
  x264 don't think much of VP8, they think it's just not ready:
 
  http://x264dev.multimedia.cx/?p=377
 
  OTOH, that may not end up mattering.
 
 
  - d.
 
  Given that the main reason against Theora was the fact that hardware
 devices
  supported baseline profile H.264 (which looks terrible compared to the
 other
  profiles), I think VP8 may be fine. VP8 already has hardware decoder chip
  support, so that isn't an issue. Patents aren't an issue, since Google
 has
  dealt with that.


 Apologies, but how has Google dealt with patents? They make the ones
 they bought from On2 available for free - which is exactly the same
 situation as for Theora. They don't indemnify anyone using WebM.

 However, I do appreciate that for any commercial entity having to
 chose between the patent risk on Theora and the one on WebM, it is an
 easy choice, because Google would join such a courtcase for WebM and
 their massive financial status just doesn't compare to Xiph's. ;-)


Google's patent license states that anyone who attempts to sue over VP8
will automatically lose their patent license. That's a huge deterrent.
AFAIR, the VC-1 codec didn't have that kind of clause, which caused the
debacle that led to the VC-1 patent pool...