Re: [whatwg] Video with MIME type application/octet-stream

2010-09-01 Thread Brian Campbell
On Aug 31, 2010, at 9:40 AM, Boris Zbarsky wrote:

 On 8/31/10 3:36 AM, Ian Hickson wrote:
 You might say "Hey, but aren't you content sniffing then to find the
 codecs?" and you'd be right. But in this case we're respecting the MIME
 type sent by the server - it tells the browser to whatever level of
 detail it wants (including codecs if needed) what type it is sending. If
 the server sends 'text/plain' or 'video/x-matroska' I wouldn't expect a
 browser to sniff it for Ogg content.
 
 The Microsoft guys responded to my suggestion that they might want to
 implement something like this with "what's the benefit of doing that?".
 
 One obvious benefit is that videos with the wrong type will not work, and 
 hence videos will be sent with the right type.

What makes you say this? Even if they are sent with the right type initially, 
the correct types are at high risk of bitrotting.

The big problem with MIME types is that they don't stick to files very well. 
So, while someone might get them working when they initially use video, if they 
move to a different web server, or upgrade their server, or someone mirrors 
their video, or any of a number of other things, they might lose the proper 
association of files and MIME types.

The real problem is that there is no standard way of storing and transmitting 
file type metadata on the majority of filesystems and majority of internet 
protocols, meaning that people need to maintain separate databases of MIME 
types, which are extremely easy to lose when moving between web servers. Until 
this problem is fixed (and this is a pretty big problem, even Apple gave up on 
tracking file type metadata years ago due to its incompatibility with how 
other systems work), it will simply be too hard to maintain working 
Content-Type headers, and sniffing will be much more likely to produce the 
effects that the authors intended.
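To make the maintenance burden concrete: the "separate database of MIME types" is typically a per-server configuration file that must be recreated on every migration. A sketch of what that looks like for Apache (standard mod_mime directives; the exact extensions served are an assumption):

```apache
# Map video file extensions to the correct MIME types.
# This mapping lives in server config or .htaccess, not in the files
# themselves -- which is exactly why it gets lost when content moves.
AddType video/ogg  .ogv
AddType video/webm .webm
AddType video/mp4  .mp4
```

If a mirror or new server lacks these lines, the same files silently fall back to a default such as application/octet-stream.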

It seems that periodically, web standards bodies decide "this time, if we're 
strict, people will just get the content right or it won't work" (such as XHTML 
with XML parsing rules), and invariably, people manage to screw it up anyhow. 
Sure, when the author tests their page the first time it's fine, but a mistaken 
lack of quoting in a comments field breaks the whole page. This causes people 
to migrate to the browsers or technologies that are less strict, and actually 
show the user what they want to see, rather than just breaking due to something 
out of the user's control.

-- Brian

Re: [whatwg] Proposal for secure key-value data stores

2010-08-22 Thread Brian Campbell
On Aug 16, 2010, at 6:58 PM, Ian Hickson wrote:

 On Tue, 30 Mar 2010, Nicholas Zakas wrote:
 
 In attempting to use localStorage at work, we ran into some major 
 security issues. Primary among those are the guidelines we have in place 
 regarding personalized user data. The short story is that personalized 
 data cannot be stored on disk unless it's encrypted using a 
 company-validated encryption mechanism and key. So if we actually wanted 
 to use localStorage, we'd be forced to encrypt each value as it was 
 being written and then decrypt each value being read. Because of this 
 tediousness, we opted not to use it.
 
 Doing that wouldn't actually help, either, since anyone attacking the user 
 could simply intercept the key and then decrypt it all offline. (In this 
 scenario, I'm assuming the attack being defeated is that of an attacker 
 obtaining the data, and I'm assuming that the attacker has physical access 
 to the computer, since otherwise the Web's security model would be 
 sufficient to block the attack, and that the computer is logged in, since 
 otherwise whole-disk encryption would be sufficient to block this attack.)

Note that there are several different types of attack you might want to defend 
against. While it's true that there's no real defense against someone taking 
physical control of the machine, installing a keylogger, putting the machine 
back in the users control, and then either uploading the data somewhere or 
retrieving the data physically at a later time, that attack works against 
almost all security mechanisms in the web platform (password authentication, 
HTTPS, cookies, etc). That is a very expensive type of attack, with a high risk 
of discovery (several opportunities for physical discovery, and the keylogger 
may be discovered too).

There are several attacks that are much cheaper and easier which encryption of 
data on disk can prevent. I think one of the biggest risks is simply the 
discarded hard drives. While most places which deal in sensitive data require 
that hard drives are erased or destroyed before being disposed of, in practice, 
that's something that's very easy to overlook when discarding an old computer. 
There is considerable risk of someone just doing dumpster diving being able to 
recover sensitive data, which can be prevented (with no risk of the password 
being sniffed) by just encrypting all potentially sensitive data before writing 
it to the disk.

The stolen laptop attack is similar; encryption will prevent sensitive data 
from being leaked, and stealing a laptop is a lot easier than stealing one, 
installing a keylogger, returning it unnoticed, and then collecting the data 
from the keylogger unnoticed.

So, there are real security benefits to ensuring that sensitive data is stored 
encrypted. One way to do this is to require the platform to encrypt the data, 
either the browser itself or the browser ensuring that the operating system has 
some sort of full-disk encryption. The web app could then require the browser 
report that data will be encrypted before sending the data down. The problem 
with this is that browsers may lie, or be mistaken about full-disk encryption. 
Microsoft Exchange has a flag that notifies the server whether data is stored 
encrypted, and some companies have policies of not allowing clients that don't 
support encryption. Of course, this means that several clients just lie, 
claiming to encrypt the data while not actually doing so (I believe the initial 
version of the iPhone Exchange client did this, though my memory may be hazy).

Anyhow, I think most of the reasonable ideas have been suggested in this thread 
(allow the browser to report whether data will be stored encrypted, provide a 
JS crypto API to allow web apps to more easily encrypt and decrypt data 
piecemeal on the client side). The one thing I'd add is that if you really want 
to make sure that private data will be encrypted, it's probably best not to 
give users access to that data unless they are known (by some out-of-band 
means, such as IT department policy limiting access to the data to certain 
machines) to be on managed machines that have full-disk encryption, or unless 
they have read and agreed to a policy stating that they must use full-disk 
encryption. That way, it's the user's or the IT department's responsibility to 
ensure that the disk is encrypted securely, not the browser vendor's, which may 
or may not know.


-- Brian

Re: [whatwg] Multiple file download

2010-02-23 Thread Brian Campbell
On Feb 23, 2010, at 6:02 PM, And Clover wrote:

 Boris Zbarsky wrote:
 
 or fixing UAs to only prompt once, to inventing yet another package format 
 here.
 
 I'd go further: why not just give UAs an option to decompress a ZIP archive 
 (or potentially other recognised archive format) to multiple files (or a 
 folder containing them)?

Some UAs already have this feature. In Safari, if you have the 'Open safe 
files after downloading' option enabled (I believe it's enabled by default, 
though I usually leave it disabled because there have been a few exploits 
contained in supposedly safe files), it will unpack your ZIP or .tar.gz 
archives, and delete the archive leaving only the resulting folder. 

 This would require no standards work and would be of general utility for all 
 existing file downloads (I'd certainly be happy to shed a few clicks from the 
 ZIP download-open-extract-delete shuffle).

Yep, there's nothing stopping browsers from implementing this, and there are 
already browsers that do it. It's just a matter of encouraging other browser 
vendors to follow suit.

-- Brian

Re: [whatwg] video feedback

2010-02-10 Thread Brian Campbell
On Feb 9, 2010, at 9:03 PM, Ian Hickson wrote:

 On Sat, 31 Oct 2009, Brian Campbell wrote:
 
 As a multimedia developer, I am wondering about the purpose of the timeupdate
 event on media elements.
 
 Its primary use is keeping the UIs updated (specifically the timers and 
 the scrubber bars).
 
 
 On first glance, it would appear that this event would be useful for 
 synchronizing animations, bullets, captions, UI, and the like.
 
 Synchronising accompanying slides and animations won't work that well with 
 an event, since you can't guarantee the timing of the event or anything 
 like that. For anything where we want reliable synchronisation of multiple 
 media, I think we need a more serious solution -- either something like 
 SMIL, or the SMIL subset found in SVG, or some other solution.

Yes, but that doesn't exist at the moment, so our current choices are to use 
timeupdate and to use setInterval().

 At 4 timeupdate events per second, it isn't all that useful. I can 
 replace it with setInterval, at whatever rate I want, query the time, 
 and get the synchronization I need, but that makes the timeupdate event 
 seem to be redundant.
 
 The important thing with timeupdate is that it also fires whenever the 
 time changes in a significant way, e.g. immediately after a seek, or when 
 reaching the end of the resource, etc. Also, the user agent can start 
 lowering the rate in the face of high CPU load, which makes it more 
 user-friendly than setInterval().

I agree, it is important to be able to reduce the rate in the face of high CPU 
load, but as currently implemented in WebKit, if you use timeupdate to keep 
anything in sync with the video, it feels fairly laggy and jerky. This means 
that for higher quality synchronization, you need to use setInterval, which 
defeats the purpose of making timeupdate more user friendly.

Perhaps this is just a bug I should file to WebKit, as they are choosing an 
update interval at the extreme end of the allowed range for their default 
behavior; but I figured that it might make sense to mention a reasonable 
default value (such as 30 times per second, or once per frame displayed) in the 
spec, to give some guidance to browser vendors about what authors will be 
expecting.
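The synchronization loop under discussion, whether driven by 'timeupdate' or by setInterval, reduces to polling the media clock and firing cues as it passes them. A sketch (cue format and names are illustrative; in a page, onTick would be called with video.currentTime from either event handler):

```javascript
// Build a tick handler that fires each cue's action once, in order,
// as the media clock passes the cue's time.
function makeCueRunner(cues /* [{time, action}], sorted by time */) {
  let next = 0;
  return function onTick(currentTime) {
    // Fire every cue whose time has been reached. At 4 ticks/second
    // (WebKit's timeupdate rate) cues land up to 250ms late, which is
    // why a faster setInterval currently feels smoother.
    while (next < cues.length && cues[next].time <= currentTime) {
      cues[next].action();
      next++;
    }
  };
}
```

The same handler works at any tick rate, so lowering the rate under CPU load degrades latency gracefully rather than breaking the content.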

 On Thu, 5 Nov 2009, Brian Campbell wrote:
 
 Would something like video firing events for every frame rendered 
 help you out?  This would help also fix the canvas over/under 
 painting issue and improve synchronization.
 
 Yes, this would be considerably better than what is currently specced.
 
 There surely is a better solution than copying data from the video 
 element to a canvas on every frame for whatever the problem that that 
 solves is. What is the actual use case where you'd do that?

This was not my use case (my use case was just synchronizing bullets, slide 
transitions, and animations to video), but an example I can think of is using 
this to composite video. Most (if not all) video formats supported by <video> 
in the various browsers do not store alpha channel information. In order to 
composite video against a dynamic background, authors may copy video data to a 
canvas, then set all pixels matching a given color to transparent.
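The per-pixel step of that chroma-key technique, shown on a raw RGBA array so it is independent of the browser APIs (in a page, `data` would come from ctx.getImageData(...).data after drawing the video frame; the function name and tolerance test are illustrative):

```javascript
// Make every pixel within `tolerance` of the key color fully transparent.
// `data` is a flat RGBA array: 4 bytes per pixel.
function keyOutColor(data, keyR, keyG, keyB, tolerance) {
  for (let i = 0; i < data.length; i += 4) {
    const dr = data[i] - keyR;
    const dg = data[i + 1] - keyG;
    const db = data[i + 2] - keyB;
    if (dr * dr + dg * dg + db * db <= tolerance * tolerance) {
      data[i + 3] = 0; // zero the alpha channel
    }
  }
  return data;
}
```

Running this loop over every frame from script is the processor-intensive part Eric objects to later in the thread.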

This use case would clearly be better served by video formats that include 
alpha information, and implementations that support compositing video over 
other content, but given that we're having trouble finding any video format at 
all that the browsers can agree on, this seems to be a long way off, so 
stop-gap measures may be useful in the interim.

Compositing video over dynamic content is actually an extremely important use 
case for rich, interactive multimedia, which I would like to encourage browser 
vendors to implement, but I'm not even sure where to start, given the situation 
on formats and codecs. I believe I've seen this discussed in Theora, but never 
went anywhere, and I don't have any idea how I'd even start getting involved in 
the MPEG standardization process.

 On Thu, 5 Nov 2009, Andrew Scherkus wrote:
 
 I'll see if we can do something for WebKit based browsers, because today 
 it literally is hardcoded to 250ms for all ports. 
 http://trac.webkit.org/browser/trunk/WebCore/html/HTMLMediaElement.cpp#L1254
 
 Maybe we'll end up firing events based on frame updates for video, and 
 something arbitrary for audio (as it is today).
 
 I strongly recommend making the ontimeupdate rate be sensitive to system 
 load, and no faster than one frame per second.

I'm assuming that you mean no faster than once per frame?

 On Fri, 6 Nov 2009, Philip Jägenstedt wrote:
 
 We've considered firing it for each frame, but there is one problem. If 
 people expect that it fires once per frame they will probably write 
 scripts which do frame-based animations by moving things n pixels per 
 frame or similar. Some animations are just easier to do this way, so 
 there's no reason to think that people won't do it. This will break 
 horribly if a browser

Re: [whatwg] video feedback

2010-02-10 Thread Brian Campbell
On Feb 10, 2010, at 1:37 PM, Eric Carlson wrote:

 
 On Feb 10, 2010, at 8:01 AM, Brian Campbell wrote:
 
 On Feb 9, 2010, at 9:03 PM, Ian Hickson wrote:
 
 On Sat, 31 Oct 2009, Brian Campbell wrote:
 
 At 4 timeupdate events per second, it isn't all that useful. I can 
 replace it with setInterval, at whatever rate I want, query the time, 
 and get the synchronization I need, but that makes the timeupdate event 
 seem to be redundant.
 
 The important thing with timeupdate is that it also fires whenever the 
 time changes in a significant way, e.g. immediately after a seek, or when 
 reaching the end of the resource, etc. Also, the user agent can start 
 lowering the rate in the face of high CPU load, which makes it more 
 user-friendly than setInterval().
 
 I agree, it is important to be able to reduce the rate in the face of high 
 CPU load, but as currently implemented in WebKit, if you use timeupdate to 
 keep anything in sync with the video, it feels fairly laggy and jerky. This 
 means that for higher quality synchronization, you need to use setInterval, 
 which defeats the purpose of making timeupdate more user friendly.
 
 Perhaps this is just a bug I should file to WebKit, as they are choosing an 
 update interval at the extreme end of the allowed range for their default 
 behavior; but I figured that it might make sense to mention a reasonable 
 default value (such as 30 times per second, or once per frame displayed) in 
 the spec, to give some guidance to browser vendors about what authors will 
 be expecting.
 
 I disagree that 30 times per second is a reasonable default. I understand 
 that it would be useful for what you want to do, but your use case is not 
 typical. I think most pages won't listen for 'timeupdate' events at all, so 
 instead of making every page incur the extra overhead of waking up, 
 allocating, queueing, and firing an event 30 times per second, WebKit sticks 
 with the minimum frequency the spec mandates, figuring that people like you 
 who need something more can roll their own.

Do browsers fire events for which there are no listeners? It seems like it 
would be easiest to just not fire these events if no one is listening to them.

And as Ian pointed out, just basic video UI can be better served by having at 
least 10 updates per second, if you want to show time at a resolution of tenths 
of a second.

 On Thu, 5 Nov 2009, Brian Campbell wrote:
 
 Would something like video firing events for every frame rendered 
 help you out?  This would help also fix the canvas over/under 
 painting issue and improve synchronization.
 
 Yes, this would be considerably better than what is currently specced.
 
 There surely is a better solution than copying data from the video 
 element to a canvas on every frame for whatever the problem that that 
 solves is. What is the actual use case where you'd do that?
 
 This was not my use case (my use case was just synchronizing bullets, slide 
 transitions, and animations to video), but an example I can think of is 
 using this to composite video. Most (if not all) video formats supported by 
 <video> in the various browsers do not store alpha channel information. In 
 order to composite video against a dynamic background, authors may copy 
 video data to a canvas, then set all pixels matching a given color to 
 transparent.
 
 This use case would clearly be better served by video formats that include 
 alpha information, and implementations that support compositing video over 
 other content, but given that we're having trouble finding any video format 
 at all that the browsers can agree on, this seems to be a long way off, so 
 stop-gap measures may be useful in the interim.
 
 Compositing video over dynamic content is actually an extremely important 
 use case for rich, interactive multimedia, which I would like to encourage 
 browser vendors to implement, but I'm not even sure where to start, given 
 the situation on formats and codecs. I believe I've seen this discussed in 
 Theora, but never went anywhere, and I don't have any idea how I'd even 
 start getting involved in the MPEG standardization process.
 
 Have you actually tried this? Rendering video frames to a canvas and 
 processing every pixel from script is *extremely* processor intensive, you 
 are unlikely to get reasonable frame rate. 

Mozilla has a demo of this working, in Firefox only:

https://developer.mozilla.org/samples/video/chroma-key/index.xhtml

But no, this isn't something I would consider to be production quality. But 
perhaps if the WebGL typed arrays catch on, and start being used in more 
places, you might be able to start doing this with reasonable performance.

 H.264 does support alpha (see AVC spec 2nd edition, section 7.3.2.1.2 
 Sequence parameter set extension), but we do not support it correctly in 
 WebKit at the moment. *Please* file bugs against WebKit if you would like to 
 see this properly supported. QuickTime movies support alpha

Re: [whatwg] Lists and legal documents

2010-02-05 Thread Brian Campbell
On Feb 5, 2010, at 10:21 AM, Anne van Kesteren wrote:

 These indicators are part of the content and cannot be governed by style 
 sheets. End users having their own custom style sheets overwriting the 
 indicators with their own preference would be a problem, for instance.
 
 I have seen at least one editor used that generates markup like this:
 
  <ul>
   <li><span class=ol>a.</span> ...</li>
   ...

The obsolete and non-conforming @type, along with the @value attribute on li, 
can be used for this purpose:

<ol type=a>
  <li value=1>...
</ol>

Or, if you want to keep the type information together with the value:

<ol>
  <li type=a value=1>...
</ol>

Would it make sense to make this no longer obsolete and non-conforming, as the 
list item type really is meaningful in many documents? Also, is the behavior of 
@type currently documented anywhere in HTML5? While the values that @type 
currently accepts are fairly limited (a, A, 1, i, I, as far as I know), they 
could be extended to include all of the values defined in CSS, with the old 
values deprecated.

I'm not particularly attached to this solution, but it is implemented in 
browsers already, and it is fairly widely used; for instance, see the Mozilla 
Public License http://www.mozilla.org/MPL/MPL-1.1.html , the IBM public license 
http://www.opensource.org/licenses/ibmpl.php , and various other usages you can 
find with a Google code search: 
http://www.google.com/codesearch?q=lang:html+ol+type&hl=en&btnG=Search+Code . 
Many of those uses may be better handled with a stylesheet, but @type is used 
in many places in which the format of the number is meaningful for referring to 
clauses in legal documents.

-- Brian




Re: [whatwg] Canvas size and double buffering.

2010-02-04 Thread Brian Campbell
On Feb 4, 2010, at 1:55 AM, Robert O'Callahan wrote:

 On Thu, Feb 4, 2010 at 6:50 PM, Brian Campbell lam...@continuation.org 
 wrote:
 I think the most reasonable approach would be to say that the 
 getBoundingClientRect().width or height is rounded to the nearest pixel. 
 Boxes are displayed rounded to the nearest pixel, with no fractional pixels 
 being drawn, right?
 
 No.

In what cases are fractional pixels drawn? I see fractional widths being 
returned here, but it seems to be drawing rounded to the nearest pixel:

http://ephemera.continuation.org/percentage-width.html

 Why would they report a width or height that is different than how they are 
 displayed? All browsers that I've tested (the ones listed above, so not IE) 
 report integral values for getBoundingClientRect().width and height (or for 
 left and right in the case of Opera, which doesn't support width and height).
  
 Firefox often returns non-integral values for getBoundingClientRect().width 
 or height.

Sorry, I meant to write "but Firefox", but somehow missed that. Safari, Chrome, 
and Opera all return values rounded to the nearest pixel.

-- Brian

Re: [whatwg] api for fullscreen()

2010-02-03 Thread Brian Campbell
On Feb 3, 2010, at 5:04 AM, Smylers wrote:

 Brian Campbell writes:
 
 I'm a bit concerned about when the fullscreen events and styles apply,
 though. If the page can tell whether or not the user has actually
 allowed it to enter fullscreen mode, it can refuse to display content
 until the user gives it permission to enter fullscreen mode.
 
 Why is that a problem?
 
 Or even if it's not refusing to display content, it may simply not
 scale the content up to the full window if the user neglects to give
 permission for full screen.
 
 If the user wants the content to be large, why would he withhold
 permission?

A user may want to view the content scaled up to the full size of the window, 
without it being full-screen.

 As I understand it, the risk with full-screen view is that a malicous
 site may spoof browser chrome, such as the URL bar, thereby tricking a
 user who isn't aware the site is full-screen.

This is addressing a different scenario; not malicious sites per se, but sites 
that insist on being displayed full screen.

 So these scenarios seem relevant:
 
 1  A malicious site wishes to switch to full-screen view and spoof
chrome.  The user hadn't asked for full-screen, so withholds
permission.  The site may at this point refuse to display content
as you put it, but since that content's only purpose is to trick the
user, its non-display is a good thing.
 
 2  A user wishes to display some content full-screen, so grants
permission and views it.
 
 3  A user doesn't wish to display some content full-screen, so ignores
any attempt by the site to become full-screen, and continues to view
it normal size.
 
 I'm struggling to come up with a scenario in which your concerns apply.
 Please could you elaborate.  Thanks.

Sure. At my previous job, I wrote immersive interactive educational multimedia. 
My boss was very insistent about content being displayed full screen, to make 
the experience more immersive and reduce distractions (given the content, this 
wasn't unreasonable; there were parts that were time-critical simulations in 
which you wouldn't want to be distracted part way through by a chat window 
popping up). Had we been developing for the web, I could imagine him asking us 
to start with something that said "Please press the button to enter full-screen 
mode and start the program", and the program would not start until full-screen 
mode was entered. I could imagine games and other content doing the same as 
well.

I think that this behavior is fairly user hostile, however. There are some 
times when a user really doesn't want his entire screen filled, for a good 
reason. If there is content that won't start until the fullscreen event has 
fired, or fullscreen pseudo-class has been applied, then that user has no 
choice but to skip that content or allow it to enter fullscreen mode.

Another scenario applies to most video player sites. Almost all video player 
sites using Flash have a full screen button. Many of them do not have a full 
window button, however. If a user wishes to view content scaled up to fill the 
window, without the distractions of navigational links, comments, descriptions, 
and so on, they don't usually have a way to do this. If it were possible to use 
the full-screen button, but deny permission to actually go full screen, and 
have that simply display the content in the full window exactly as if it were 
full screen, it would give the users more control over how they view the 
content.

In short, there are several scenarios in which certain functionality in web 
content is not available unless you enter fullscreen mode. Content authors 
should not be able to force fullscreen mode on users, however, so I think it 
would be best if the spec allows UAs to send the fullscreen event and set the 
fullscreen pseudoclass even if the content is not actually filling the entire 
screen. How exactly the UAs implement this is up to them, though I would 
recommend scaling the content up to the full window and sending the fullscreen 
events immediately, if they are waiting for permission to scale to fill the 
full screen. 

All the spec would have to say to cover all of the possible implementations is 
that the fullscreen events may be sent even if the content isn't actually 
filling an entire screen, and that the screen size may be changed even if you 
are already in fullscreen mode (which would need to be the case anyhow, since 
you may change the resolution of the screen when attaching a projector, or for 
devices in which the screen can rotate).

Does this make it any clearer?

-- Brian

Re: [whatwg] Canvas size and double buffering.

2010-02-03 Thread Brian Campbell
On Feb 3, 2010, at 3:14 PM, Boris Zbarsky wrote:

 On 2/3/10 2:54 PM, Tim Hutt wrote:
 
 Well, yes it would be good to have onresize for all elements.
 
 Which is why it's being worked on anyway.

I'm curious; where is this being worked on? Discussed here on this list? On 
another list? Or is it just being implemented by some browser vendors 
independently? I don't see discussion of it recently on this list, so I'm 
curious as to where the discussion would be; there seem to be a lot of lists in 
a lot of places these days for discussing various bits of browser standards, 
and it's hard to figure out what goes where.

For core HTML & the DOM, there's this group, public-html at the W3C, and 
public-webapps at the W3C; am I missing any? There's also the Device APIs 
group, which seems to overlap the domain of the webapps group quite a bit. There are 
of course the more clearly delineated groups, such as es-discuss, www-style, 
SVG, MathML, and the like, but I sometimes find it a bit hard to figure out 
where I should look for discussion about HTML and the HTML DOM because it's 
spread out over several lists which don't seem like they have clear boundaries. 
Has anyone written up an explanation of where to look for what?

-- Brian

Re: [whatwg] api for fullscreen()

2010-02-03 Thread Brian Campbell
On Feb 3, 2010, at 7:19 PM, Smylers wrote:

 Brian Campbell writes:
 
 As I understand it, the risk with full-screen view is that a
 malicous site may spoof browser chrome, such as the URL bar, thereby
 tricking a user who isn't aware the site is full-screen.
 
 This is addressing a different scenario; not malicious sites per-se,
 but sites that insist on being displayed full screen.
 
 OK.  That's 'merely' an annoyance, not a security threat.  There are
 lots of ways authors can be obnoxious if they choose; I'm not sure it's
 desirable, or even possible, to outlaw them.

Annoyances can turn into security threats if they train the user to click 
through security questions without thinking.

And actually, this can be a security issue as well. A malicious site could 
entice the user by offering some sort of desirable content (a movie or TV show, 
porn, a game), but only if the user allows it to go fullscreen with keyboard 
access. It could then wait until the user has been idle for a bit, and display 
something that looks like a standard login screen for the operating system in 
question. The user may then be enticed to enter their password without 
realizing it's still the full-screen site they were using and not a real login 
screen.

As ROC points out, it may be impossible to completely avoid a malicious site 
detecting whether you are fullscreen, as it can detect when the content is at 
one of the standard screen sizes; but we can make it a bit more difficult, and 
less likely that the user will be used to sites that require full screen, by 
providing the user an easy way to display full screen content in a window.

 My boss was very insistent about content being displayed full screen,
 to make the experience more immersive and reduce distractions ...
 Please press the button to enter full-screen mode and start the
 program, and the program would not start until full-screen mode was
 entered. I could imagine games, and other content doing the same as
 well.
 
 I think that this behavior is fairly user hostile, however.
 
 In general user-agents are allowed to display content in anyway that a
 user has configured them to do, regardless of what the spec gives as the
 normal behaviour.

Yes. As I said, most of this is a UA issue, not a spec issue. I just want to 
encourage a usage that does not wind up with content locked behind requiring 
you to enter an actual full screen mode.

 If a user wishes to view content scaled up to fill the window, without
 the distractions of navigational links, comments, descriptions, and so
 on, they don't usually have a way to do this. If it were possible to
 use the full-screen button, but deny permission to actually go full
 screen, and have that simply display the content in the full window
 exactly as if it were full screen, it would give the users more
 control over how they view the content.
 
 I've seen Firefox options (possibly in an extension) which allow users
 to tweak which toolbars and the like are still displayed when in
 full-screen view.
 
 If a browser (or an extension) wished to implement full-screen view as
 still having borders, the title bar, status bar, and so on then it
 could.  And there's nothing an author could do about it.

Of course. Obviously, the UA gets the final say on how the content is 
displayed. I just want to encourage the UAs to have a mode that tells the 
application it's in fullscreen mode even if it really is not, and have that 
mode be easy to get into for the majority of users (thus, not depend on an 
extension or obscure configuration somewhere).

 Content authors should not be able to force fullscreen mode on users,
 however, so I think it would be best if the spec allows UAs to send
 the fullscreen event and set the fullscreen pseudoclass even if the
 content is not actually filling the entire screen.
 
 To say that slightly differently: authors can dictate that certain
 output is only displayed when in full-screen view; but they have no
 control how full-screen view looks for a particular user and user-agent.

Yes, exactly.

 All the spec would have to say to cover all of the possible
 implementations is that the fullscreen events may be sent even if the
 content isn't actually filling an entire screen,
 
 Allowing that behaviour is entirely reasonable.  Though I think it
 should be covered by a more general statement that user-agents may
 display things however they want if so-configured, rather than just
 stating it for this particular narrow case.

Well, user agents may display things however they want, but I think it's worth 
pointing out that the events may not correspond to the time that the user 
actually gives permission to enter full screen mode, in those cases in which 
permission is required. Instead, content may be scaled to the full window and 
the events sent as soon as the button is pushed, and then the content resized 
to the actual full screen once permission has been granted.

-- Brian

Re: [whatwg] Canvas size and double buffering.

2010-02-03 Thread Brian Campbell
On Feb 3, 2010, at 7:00 PM, Tim Hutt wrote:

 On 3 February 2010 23:16, Boris Zbarsky bzbar...@mit.edu wrote:
 On 2/3/10 6:12 PM, Tim Hutt wrote:
 
 Ah yes that works nicely
 
 Hmm maybe I spoke too soon. The interaction of the CSS size and the
 canvas.width/height is confounding! It seems if you set a CSS width
 of, say 80% then that is that and you are stuck with it. Unfortunately
 it doesn't round to the nearest pixel!
 
 I have created a test case here:
 
 http://concentriclivers.com/canvas.html (the source is nicely
 formatted and very short)
 
 If anyone can get it to work as described on that page, then thank
 you! Otherwise I think things need to be changed to make it possible.

That example:
  * Works for me in Chrome (5.0.307.1 dev, all browsers listed are on Mac OS X 
10.6.2).
  * Works in Safari (WebKit nightly 6531.21.10, r54122) except for when you 
cause a scroll bar to appear or disappear by resizing; then you get a moiré 
pattern in the vertical stripes until you resize again. It appears that you get 
the resize event before the scrollbar appears, so the scrollbar appearing 
causes the canvas to scale down without sending a resize event allowing the 
canvas to redraw.
  * Doesn't work in Firefox (3.6). At most sizes, you get grey lines, due to 
the non-integral width; only occasionally does it look correct. Also, the 
canvas always resizes to a square; does Firefox try to preserve the aspect 
ratio when scaling a canvas in one dimension only?
  * Fails badly in Opera (10.10). It fails to redraw because width and 
height aren't defined on the BoundingClientRect; just left, right, top, and 
bottom. For some reason it also ignores the 500px height of the canvas given in 
the HTML, instead using the default of 300x150, with the width overridden to 
80%.

I think the most reasonable approach would be to say that the 
getBoundingClientRect().width or height is rounded to the nearest pixel. Boxes 
are displayed rounded to the nearest pixel, with no fractional pixels being 
drawn, right? Why would they report a width or height that is different than 
how they are displayed? All browsers that I've tested (the ones listed above, 
so not IE) report integral values for getBoundingClientRect().width and height 
(or for left and right in the case of Opera, which doesn't support width and 
height).
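To make the fix concrete: a page can defend against fractional box sizes today by rounding the rect itself before sizing the canvas backing store. This is a minimal sketch in plain JavaScript; the function name is my own, and the fallback to right/left and bottom/top mirrors the Opera 10.10 behavior noted above.

```javascript
// Round a canvas's CSS box to whole pixels before using it as the
// backing-store size. Falls back to right-left / bottom-top for
// engines (like Opera 10.10 above) that omit rect.width/height.
function roundedBackingSize(rect) {
  const w = rect.width !== undefined ? rect.width : rect.right - rect.left;
  const h = rect.height !== undefined ? rect.height : rect.bottom - rect.top;
  return { width: Math.round(w), height: Math.round(h) };
}
```

In a resize handler, you would assign the result to canvas.width and canvas.height and then redraw.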

-- Brian

Re: [whatwg] api for fullscreen()

2010-02-01 Thread Brian Campbell
On Jan 28, 2010, at 9:42 PM, Robert O'Callahan wrote:

 enterFullscreen always returns immediately. If fullscreen mode is currently 
 supported and permitted, enterFullscreen dispatches a task that a) imposes 
 the fullscreen style, b) fires the beginfullscreen event on the element and 
 c) actually initiates fullscreen display of the element. The UA may 
 asynchronously display confirmation UI and dispatch the task when the user 
 has confirmed (or never).

I like this proposal overall. I'm looking forward to being able to display 
content full screen; this is one of the last features necessary to start moving 
the kind of content that I work on to the web.

I'm a bit concerned about when the fullscreen events and styles apply, though. 
If the page can tell whether or not the user has actually allowed it to enter 
fullscreen mode, it can refuse to display content until the user gives it 
permission to enter fullscreen mode. Or even if it's not refusing to display 
content, it may simply not scale the content up to the full window if the user 
neglects to give permission for full screen. This could lead to the problem 
that Hixie mentions, of training users to click through security dialogs, even 
if this is done through a drop-down asynchronous notification instead of a 
modal dialog.

If a user clicks a fullscreen button, and declines to give permission to the 
site to actually use the whole screen, the behavior should probably be to 
simply resize the element to the full viewport. A standard interface (close 
button, or escape key, or the like) could then take the element back out of 
fullscreen (or in this case, full viewport) mode; the application would behave 
the same as if it were in fullscreen mode, it would just be constrained within 
the window without knowing it. If the user does give permission, using an 
asynchronous notification drop down or similar interface, then the browser 
would have to send a resize event or something like that to let the application 
know that it needs to resize again. The beginfullscreen/endfullscreen events, 
and pseudoclass, would apply when resizing to the full window, if full screen 
permission hasn't been granted.

This would also help applications deal more gracefully with users who don't 
give permission to go full screen; the application would likely have to do the 
resizing to the full window itself if it doesn't get permission to use the full 
screen, but it won't know how long to wait before deciding that the user hasn't 
given permission. This way, it would resize immediately and automatically to 
the viewport before permission is granted, and resize again to the full screen 
if permission has been granted.
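The flow I'm describing can be summarized as a small state machine. This is only an illustrative sketch (the state and event names are mine, not a proposed API): the element fills the window immediately on request, is promoted to the real screen only on grant, and a denial changes nothing the page can observe.

```javascript
// States: 'normal', 'fullwindow' (full viewport, permission pending or
// denied), 'fullscreen' (permission granted). Events come from the UA.
function fullscreenTransition(state, event) {
  switch (event) {
    case 'request':        // user clicks the fullscreen button; fire
      return 'fullwindow'; // beginfullscreen and scale to viewport now
    case 'grant':          // async permission arrives later; just resize
      return state === 'fullwindow' ? 'fullscreen' : state;
    case 'deny':           // denial dismisses the notification, nothing else
      return state;
    case 'exit':           // Esc or a close button leaves either mode
      return 'normal';
    default:
      return state;
  }
}
```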

-- Brian

Re: [whatwg] api for fullscreen()

2010-02-01 Thread Brian Campbell

On Feb 1, 2010, at 5:38 PM, Robert O'Callahan wrote:

 On Tue, Feb 2, 2010 at 10:41 AM, Brian Campbell lam...@continuation.org 
 wrote:
 I'm a bit concerned about when the fullscreen events and styles apply, 
 though. If the page can tell whether or not the user has actually allowed it 
 to enter fullscreen mode, it can refuse to display content until the user 
 gives it permission to enter fullscreen mode. Or even if it's not refusing to 
 display content, it may simply not scale the content up to the full window if 
 the user neglects to give permission for full screen.
 
 We could simply modify the proposal to apply the fullscreen pseudoclass (but 
 not fullscreen the window) if permission is denied.

The user may never notice the notification and explicitly deny permission; the 
whole point of asynchronous notifications is that they are less obtrusive and 
not modal, but that means that users can ignore or fail to notice them and keep 
going without ever dismissing them.

I think it would be best to immediately go as full screen as possible (so, full 
window if permission hasn't yet been given), and then resize to full screen if 
permission is granted. This will avoid content authors having to duplicate that 
same functionality themselves for their users that don't ever give or deny 
permission.

Resizing when in full screen mode will need to be implemented anyhow, to 
support devices like the iPhone or iPad which can change orientation and will 
need to reshape the screen.

 However, in general I don't think we can prevent Web content from detecting 
 that it is not fullscreen. For example it could check whether the window size 
 is one of a set of common screen sizes.

No, you can't stop someone who is truly dedicated from guessing based on the 
exact size. My concern is more with authors who feel that their content is best 
displayed in full screen, and so may simply refuse to play it until they've 
gotten the fullscreen event or have the fullscreen pseudoclass. That would be 
pretty easy to implement, if you have that functionality available to you. I 
know my previous director would have requested it; he is very particular about 
content being displayed in full screen, and while I would argue that we 
shouldn't lock people out who don't want to be in full screen mode, I may have 
been overruled if such functionality were available and so easy to use.

Here are three possible scenarios, for the user clicking the fullscreen 
button on some content and then denying permission to allow full screen access:

1) The original proposal: 
  * User clicks full screen button. 
  * Notification pops up, no events are sent or classes applied. 
  * User clicks deny, no events are sent or classes applied. 
  * The user's full screen request has been ignored, and now the page author 
needs to do something special to resize to the full window if desired.
2) Your suggestion above, to apply the pseudoclass if permission is denied: 
  * User clicks full screen button. 
  * Notification pops up, no events sent or classes applied. 
  * User clicks deny, and fullscreen class is applied. 
  * You didn't mention whether you intend for the event to also be sent, and 
content automatically resized to fit the viewport; if that doesn't happen, then 
the page author needs to add special handling for expanding the content to the 
full window, or something of the sort. 
  * At this point, you have an odd effect, in which denying permission for full 
screen causes the content to scale to the full window.
3) My suggestion: 
  * The user clicks full screen. 
  * Notification pops up, content scales up to fill the window, pseudoclass is 
applied, event is sent. 
  * The user clicks deny, which simply dismisses the notification. 
  * The content is now accessible, in pseudo fullscreen mode, giving the user 
access to the controls and content for that mode.

This may not be the most common use case, of the user clicking the fullscreen 
button and then denying permission, but I think that my proposal gives a fairly 
sensible behavior for that use case, encouraging a user friendly experience 
without requiring the author to do too much extra work, and without encouraging 
content to be unavailable outside of full screen mode.

Of course, much of this discussion is of details that could be left up to the 
UA. As far as the spec is concerned, the main points to include would be that 
the fullscreen events, class, and resizing may occur even when the content is 
not actually being displayed on the full screen, and that resizing may occur 
after entering fullscreen mode.

-- Brian

Re: [whatwg] What is the purpose of timeupdate?

2009-11-07 Thread Brian Campbell

On Nov 6, 2009, at 5:52 PM, Simon Pieters wrote:

On Fri, 06 Nov 2009 18:11:18 +0100, Brian Campbell brian.p.campb...@dartmouth.edu 
 wrote:


Brian, since Firefox is doing what you proposed -- can you think  
of any other issues with its current implementation?  What about  
for audio files?


The way Firefox works is fine for me. I haven't yet tested it with  
audio only, but something around 25 or 30 updates per second would  
work fine for all use cases that I have; 15 updates per second is  
about the minimum I'd consider useful for synchronizing one-off  
events like bullets or slide transitions (this is for stuff where  
you want good, tight sync for stuff with high production values),  
and while animations would work at that rate, they'd be pretty jerky.


What if you have a video with say one frame per second? Unless I'm  
mistaken Firefox will still fire timeupdate once per frame. (The  
spec says you have to fire at least every 250ms.)


That's a fair point, though videos with frame rates that low are  
pretty rare. I suppose if you encoded something like a slideshow as a  
video, with a very low frame rate, you might get such an effect. Of  
course, if you're playing a video at that frame rate, I'm wondering  
what you would need to be synchronized at a higher frame rate; though  
if it has an audio track as well, you may be trying to synchronize  
against the audio.


If this is something that we think is worth worrying about, then I'd  
advocate for just saying that timeupdate events should be fired  
approximately 30 times per second, perhaps with a minimum of 15 and a  
maximum of 60 or so. For videos over 15 FPS, the browser could simply  
send one update per frame; for videos with a lower frame rate, the  
browser would have to generate intermediate timeupdate events to keep  
the interval reasonable. Or, the browser could just pick a rate in  
that range and send the events at that rate, without tying them to  
when the frames are displayed. I'd be fine with just having a  
reasonably consistent rate of 30 updates per second while the video is  
playing.
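The rule above could be stated as a one-line computation; this sketch is my own formulation of it, not anything from the spec:

```javascript
// Derive the timeupdate interval from the video's frame rate, clamping
// the event rate to the proposed 15-60 Hz window.
function timeupdateIntervalMs(frameRate) {
  const hz = Math.min(60, Math.max(15, frameRate));
  return 1000 / hz;
}
```

Under this rule a 25 FPS video gets one event per frame (40 ms), while a 1 FPS video gets intermediate events roughly every 67 ms.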


-- Brian



[whatwg] What is the purpose of timeupdate?

2009-10-31 Thread Brian Campbell
As a multimedia developer, I am wondering about the purpose of the  
timeupdate event on media elements. On first glance, it would appear  
that this event would be useful for synchronizing animations, bullets,  
captions, UI, and the like. The spec specifies a rate of 4 to 66 Hz  
for these events. The high end of this (30 or more Hz) is pretty  
reasonable for displaying things in sync with the video. The low end,  
however, 4 Hz, is far too slow for most types of synchronization;  
everything feels laggy at this frequency. From my testing on a two  
year old MacBook Pro, Firefox is giving me about 25 timeupdate events  
per second, while Safari and Chrome are giving me the bare minimum of  
4 timeupdate events per second.


At 4 timeupdate events per second, it isn't all that useful. I can  
replace it with setInterval, at whatever rate I want, query the time,  
and get the synchronization I need, but that makes the timeupdate  
event seem to be redundant. At 25 timeupdate events per second, it is  
reasonably useful, and can be used to synchronize various things to  
the video.
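For the record, here is roughly what that setInterval workaround looks like. The polling loop itself is trivial; the part worth showing is the cue lookup. The cue array and callback names are assumptions for illustration, not part of any API.

```javascript
// Return the index of the last cue whose start time is <= currentTime,
// or -1 before the first cue. cueTimes must be sorted ascending.
function findActiveCue(cueTimes, currentTime) {
  let lo = 0, hi = cueTimes.length - 1, ans = -1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (cueTimes[mid] <= currentTime) { ans = mid; lo = mid + 1; }
    else hi = mid - 1;
  }
  return ans;
}

// Polling loop (browser-only, hypothetical names): check the clock
// about 30 times per second instead of waiting for timeupdate.
// setInterval(() => {
//   const i = findActiveCue(cues, video.currentTime);
//   if (i !== lastCue) { lastCue = i; showCue(i); }
// }, 33);
```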


So, I'm wondering if there's a purpose for the timeupdate event that  
I'm missing. If it is intended for any sort of synchronization with  
the video, I think it should be improved to give better guarantees on  
the interval between updates, or just dropped from the spec; it's not  
useful enough in its current form. To improve it, the maximum interval  
between updates could be reduced to about 40 ms, or perhaps the  
interval could be made settable so the author could control how often  
they want to get the event.


-- Brian


Re: [whatwg] framesets

2009-10-13 Thread Brian Campbell

On Oct 13, 2009, at 11:20 PM, Peter Brawley wrote:


Ian,

 Your requirements aren't met by framesets

Eh? Our solution meets the requirement and uses framesets.


You have specified that your requirement is to prevent people from  
linking to or bookmarking individual pages inside of frames. Framesets  
do not satisfy that requirement. They make it a little more difficult,  
but they do not prevent it.




 iframes have been demonstrated to work as well as framesets

No-one's been able to point to a working non-frameset solution that  
meets this requirement.


Ian posted one, hand-written just for you, which you dismissed without  
giving any reason.



, and, well, framesets suck.

Unargued, subjective.


Framesets suck because they combine both layout and embedding  
semantics into one syntax, which give them much less layout  
flexibility than using CSS. Anything you can do with framesets (except  
resizing), you can do with iframes and CSS. However, there are lots of  
things you can do with iframes and CSS that you cannot do with  
framesets. Framesets are a completely different language than HTML;  
you cannot use framesets and any other content elements in the same  
document. This means that you are forced to, for example, use a frame  
for the header of your page, which may cause a scrollbar to appear if  
you don't allocate enough space, rather than just putting the header  
in the page directly and using iframes to include the other pages.



I agree that there's
lots of legacy content using framesets; that's why HTML5 defines  
how they

should work (in more detail than any previous spec!).

?! According to http://www.html5code.com/index.php/html-5-tags/html-5-frameset-tag/,  
"The frameset tag is not supported in HTML 5."


That is not the draft HTML5 spec. I'm not sure what that is, but as  
far as I know, it has no official affiliation with either the W3C or  
the WHATWG. Try: http://whatwg.org/html5 or more specifically:


http://www.whatwg.org/specs/web-apps/current-work/multipage/rendering.html#frames-and-framesets

The HTML5 spec includes two types of conformance requirements;  
conformance requirements for authors (or other producers of HTML), and  
conformance requirements for user agents (browser and other consumers  
of HTML). It is non-conforming in HTML5 to produce documents with  
framesets, however a conforming user agent must parse and process them  
consistently with the spec.


The only thing that is easier with framesets than otherwise appears  
to be

resizing, which I agree is not well-handled currently.

Unsubstantiated claim absent a working example of the spec  
implemented without framesets.


Did you take a look at Ian's example?

http://damowmow.com/playground/demos/framesets-with-iframes/001.html



As noted before,
though, that's an issue for more than just frames; we need a good  
solution
for this in general, whether we have framesets or not. Furthermore,  
that's

a styling issue, not an HTML issue.

For those who have to write or generate HTML, that's a distinction  
without a difference.


PB

-

Ian Hickson wrote:

On Tue, 13 Oct 2009, Peter Brawley wrote:


I don't know if there are pages that do this (and I sure hope  
none are

using table for it!), but the lack of an existence proof is not
proof of the lack of existence.



Of course. The point is if no-one can point to a working iframes
solution, ie, to an instance of them actually being preferred, the  
claim
that iframes provide a preferable alternative is simply not  
credible, to

put it mildly.




At this point I don't really understand what you want framesets  
for. Your
requirements aren't met by framesets, iframes have been  
demonstrated to
work as well as framesets, and, well, framesets suck. I agree that  
there's
lots of legacy content using framesets; that's why HTML5 defines  
how they
should work (in more detail than any previous spec!). But that  
doesn't

mean we should encourage them.

The only thing that is easier with framesets than otherwise appears  
to be
resizing, which I agree is not well-handled currently. As noted  
before,
though, that's an issue for more than just frames; we need a good  
solution
for this in general, whether we have framesets or not. Furthermore,  
that's

a styling issue, not an HTML issue.







Re: [whatwg] HTML as a text format: Should title be optional?

2009-10-08 Thread Brian Campbell

On Oct 5, 2009, at 10:40 PM, Ian Hickson wrote:


For example, see Google Gadgets
http://www.google.com/webmasters/gadgets/, or iframe sandboxes used
for isolating untrusted content while still being inline in the page.


Yes, if we add doc= support to iframe maybe that would make this  
case

common enough that we should reconsider.


I had to look up what doc= meant, so for the edification of others,  
the proposal is here:


http://lists.w3.org/Archives/Public/public-webapi/2008May/0326.html


So, my recommendation is that title be made optional; perhaps a
validator could issue a warning if you leave it off, but there are
perfectly valid cases of wanting to produce an HTML document that
doesn't have any sort of meaningful title or for which a title will
never be seen or used, it doesn't seem likely that people will  
forget it

in cases in which it's useful,


I think this is something we should revisit in a future version. I'm  
not
convinced we're at a stage yet where there are enough non-standalone  
HTML
pages that it makes sense to not require title for any pages.  
Changing
something this fundamental can have social repercussions in the  
community
that aren't obvious (e.g. old timers saying we're ruining HTML4),  
and I
feel that we've done enough of that already with HTML5 without  
changing

this also, frankly.


Fair enough. I can definitely see the value in that argument.

-- Brian


Re: [whatwg] HTML as a text format: Should title be optional?

2009-09-23 Thread Brian Campbell

On Jun 4, 2009, at 6:42 PM, Ian Hickson wrote:


On Fri, 17 Apr 2009, Øistein E. Andersen wrote:


A title is usually a good idea, but is it really necessary to  
require

this for conformance?  After all, a title is not something which an
author is likely to forget, and leaving it out has no unexpected
consequences.


Leaving it out has a pretty important consequence, it breaks user
interfaces that need to refer to the document, e.g. bookmarks  
features

in browsers.


HTML documents sometimes occur in places in which they are not  
independently bookmarkable, such as in an HTML email, or embedded in  
an Atom or RSS feed.


They also sometimes occur in places in which it's generally not  
expected that someone will bookmark them, nor particularly easy to do  
so, such as within an iframe which is being used for a gadget or some  
sort of sandboxed content that isn't really independent of the  
containing page.



On Sat, 18 Apr 2009, Randy Drielinger wrote:


If you're converting from a textfile, title could refer to the  
filename.


If it's an automated process, it can be added by default.

If it's manual, they'll have to remember the short html5 doctype  
and the

title element.


It does indeed seem easy to include it.


Yes, but sometimes you don't have a reasonable value for the title.  
Say you are converting some word processing document format to HTML  
(or spreadsheet, or slides, or anything of the sort); what would you  
use if there isn't any sort of heading to take the title from?


There are several options (you could use the filename, or look for  
anything heading like, or use the first couple of words), but if a  
title element is missing, then the user agent could do the same.



On Sat, 18 Apr 2009, Øistein E. Andersen wrote:


It could, but chances are that the original filename would  
typically be

less useful than the URL, which is what most browsers use when the
title element is omitted, so this rather sounds like an argument
against forcing authors to include a title.


I don't see why this would be the case. In practice, however, if one  
is at

a loss as to what to use for the title, but one has an h1, then I
would recommend using the h1's contents.


This has the problem of duplicating content, which may get out of  
sync. And why can't the user agent just extract the title from the  
heading if it needs it rather than the generator?



Yes, my concern is that a validator should be useful as an authoring
tool and not overwhelm the author with spurious errors.  As I see it,
leaving out title is very much like leaving out a paragraph of text
and not something that should matter for validation.


As it affects user interfaces, and since the cost of including a  
title

is so low, I think it makes sense to continue to make it required.


As it's something that affects the user interface in a fairly visible  
way, it's not likely that many authors will neglect to use it if it's  
useful to do so. However, in some cases it is not possible to include  
a title, or there isn't any really useful title to include, so I'm not  
sure why it should be required in those cases.


This seems somewhat analogous to the img alt case; there are some  
times when you don't have any useful text to put in there, and putting  
in auto-generated placeholder text just to conform seems less than  
useful. The number of pages unhelpfully titled "Untitled" or "Untitled  
page" seems to confirm this.


There are other cases in which a title just isn't all that useful,  
like an iframe; the title will be invisible, and it's pretty uncommon  
for people to open that iframe in a new tab and then bookmark it from  
there (unless maybe they happen to be web geeks like us). For example,  
see Google Gadgets http://www.google.com/webmasters/gadgets/, or  
iframe sandboxes used for isolating untrusted content while still  
being inline in the page.


So, my recommendation is that title be made optional; perhaps a  
validator could issue a warning if you leave it off, but there are  
perfectly valid cases of wanting to produce an HTML document that  
doesn't have any sort of meaningful title or for which a title will  
never be seen or used, it doesn't seem likely that people will forget  
it in cases in which it's useful, and right now it is sometimes being  
filled with useless values like "Untitled" that actually get in the  
way of a UA computing a better value (such as the URL or the top level  
heading).


-- Brian Campbell



Re: [whatwg] notation for typographical uncertainty

2009-09-20 Thread Brian Campbell

On Sep 20, 2009, at 8:43 PM, ddailey wrote:


Ya'll probably have dealt with this already but here is the usage case

My son and I are typing my recently deceased Dad's memoirs from  
the Manhattan project.


I'm saying to son: if you can't figure out what it says, type the  
characters you are sure about. Use '?' marks for the letters that  
you aren't sure about.


 Ultimately this is ASCII with the most minimal of markup.

Question: what markup will be least cumbersome (and hence most  
recommended) within a plain text document that may ultimately be  
converted (automagically) to HTML5, assuming, in the meantime, that  
we may stoop so low as to put it in HTML4. I know folks claim HTML5  
will never break the web, but those folks and I have some beer to  
drink before we see eye to eye on that subject, having seen the web  
break so many times in the last 1.7 decades since I started playing  
with HTML at NCSA. Let us say I am a skeptic.

cheers
David


I'm rather confused about what your question is. Are you asking if you  
can use question marks as an ad-hoc markup for unknown characters?  
There's nothing in HTML5 that will break that usage, so that should be  
fine. But I don't think that there's been anything in the history of  
HTML that has gone so far as appropriating formerly legal characters  
for markup. Can you point to such an example?


Is there a particular form of breakage that you are trying to avoid?  
HTML5 does obsolete a few features, though none that I think should be  
relevant to the use case that you provided, and it documents how  
browsers must render obsolete features which have worked in the past,  
so that features being made obsolete does not break anything that  
already works. If you can find examples in the draft of this not being  
the case, you should probably point those out.


-- Brian


Re: [whatwg] Microdata

2009-08-26 Thread Brian Campbell

On Aug 22, 2009, at 5:51 PM, Ian Hickson wrote:


Based on some of the feedback on Microdata recently, e.g.:

  http://www.jenitennison.com/blog/node/124

...and a number of e-mails sent to this list and the W3C lists, I am  
going
to try some tweaks to the Microdata syntax. Google has kindly  
offered to

provide usability testing resources so that we can try a variety of
different syntaxes and see which one is easiest for authors to  
understand.


If anyone has any concrete syntax ideas that they would like me to
consider, please let me know. There's a (pretty low) limit to how many
syntaxes we can perform usability tests on, though, so I won't be  
able to

test every idea.


Here's an idea I've been mulling around. I think it would simplify the  
syntax and semantic model considerably.


Why do we need separate items and item properties? They seem to  
confuse people, when something can be both an item and an itemprop at  
the same time. They also seem to duplicate a certain amount of  
information; items can have types, while itemprops can have names,  
but they both seem to serve about the same role, which is to indicate  
how to interpret them in the context of the page or larger item.


What if we just had "item", filling both of the roles? The value of  
the item would be either an associative array of the descendant items  
(or ones associated using "about") if those exist, or the text  
content of the item (or URL, depending on the tag) if it has no items  
within it.


Here's an example used elsewhere in the thread, marked up as I suggest:

<section id="bt200x" item="com.example.product">
  <link item="about" href="http://example.com/products/bt200x">
  <h1 item="name">GPS Receiver BT 200X</h1>
  <p>Rating: &#x22C6;&#x22C6;&#x22C6;&#x2729;&#x2729; <meta item="rating" content="2"></p>
  <p>Release Date:
    <time item="reldate" datetime="2009-01-22">January 22</time></p>
  <p item="review"><a item="reviewer" href="http://ln.hixie.ch/">Ian</a>:
    <span item="text">Lots of memory, not much battery, very little
    accuracy.</span></p>
</section>
<figure item="work">
  <img item="about" src="image.jpeg">
  <legend>
    <p><cite item="title">My Pond</cite></p>
    <p><small>Licensed under the <a item="license"
      href="http://www.opensource.org/licenses/mit-license.php">MIT
      license</a>.</small></p>
  </legend>
</figure>
<p><img subject="bt200x" item="image" src="bt200x.jpeg" alt="..."></p>

This would translate into the following JSON. Note that this is a  
simpler structure than the existing one proposed for microdata; it is  
a lot closer to how people generally use JSON natively, rather than  
using an extra level of nesting to distinguish types and properties:


// JSON DESCRIPTION OF MARKED UP DATA
// document URL: http://www.example.org/sample/test.html
{
  "com.example.product": [
    {
      "about": [ "http://example.com/products/bt200x" ],
      "image": [ "http://www.example.org/sample/bt200x.jpeg" ],
      "name": [ "GPS Receiver BT 200X" ],
      "reldate": [ "2009-01-22" ],
      "review": [
        {
          "reviewer": [ "http://ln.hixie.ch/" ],
          "text": [ "Lots of memory, not much battery, very little accuracy." ]
        }
      ]
    }
  ],
  "work": [
    {
      "about": [ "http://www.example.org/sample/image.jpeg" ],
      "license": [ "http://www.opensource.org/licenses/mit-license.php" ],
      "title": [ "My Pond" ]
    }
  ]
}

This has the slightly surprising property of making something like this:

  <section item="foo">Some text. <a href="somewhere">A link</a>. Some
more text</section>


Result in:

  // http://example.org/sample/test
  { "foo": [ "Some text. A link. Some more text" ] }

While simply making the link an item:

  <section item="foo">Some text <a item="link" href="somewhere">A
link</a>. Some more text</section>


Gives you:

  // http://example.org/sample/test
  { "foo": [ { "link": [ "http://example.org/sample/somewhere" ] } ] }

However, I think that people will generally expect item to be used  
for its text/URL content only on leaf nodes or nodes without much  
nested within them, while they would expect item to return  
structured, nested data when the DOM is nested deeply with items  
inside it, so I don't think people would be surprised by this behavior  
very often.
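To make the proposed model concrete, here is a rough sketch of the extraction rule over a toy tree rather than the real DOM (the { item, text, children } node shape is an assumption for illustration): a node's value is a map of its descendant items when it contains any, and its text content otherwise.

```javascript
// Extract the proposed microdata value of a node: an associative array
// of descendant items if any exist, else the node's text content.
function extractItem(node) {
  const props = {};
  let hasItems = false;
  for (const child of node.children || []) {
    if (child.item) {
      hasItems = true;
      (props[child.item] = props[child.item] || []).push(extractItem(child));
    } else {
      // Descend through non-item wrappers; their items belong to us.
      const nested = extractItem(child);
      if (typeof nested === 'object') {
        hasItems = true;
        for (const [k, v] of Object.entries(nested)) {
          (props[k] = props[k] || []).push(...v);
        }
      }
    }
  }
  return hasItems ? props : (node.text || '');
}
```

A leaf like the h1 above would yield its text, while the section as a whole would yield an object keyed by the nested item names.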


I haven't yet looked at every use case proposed so far to see how well  
this idea works for them, nor have I worked out the API differences  
(which should be simpler than the existing API). If there seem to be  
no serious problems with this idea, I can write up a more detailed  
justification and examples.


-- Brian


Re: [whatwg] Comments on the definition of a valid e-mail address

2009-08-24 Thread Brian Campbell

On Aug 24, 2009, at 3:24 PM, Aryeh Gregor wrote:

Yup.  If it is deliverable then surely it's an alias to the same  
address
without the trailing dot, in which case a browser could choose to  
remove

it.


Yes, it's not possible for "example.com." to mean anything different
from "example.com".  (In fact they do mean something different in DNS,
but "example.com." means the same thing as what "example.com" is
normally used to mean.  Moreover, the meaning of "example.com" in DNS
is basically nonsense for web apps processing user-submitted e-mail
addresses.  At least, as far as I understand it; I don't know too much
about DNS.)


Actually, the trailing dot is meaningful. A domain without a trailing  
dot is a relative domain; for example, if you are within the  
example.com domain, then foo could resolve to  
foo.example.com (or if that doesn't exist, then it would try  
resolving that at the root level, and fail since foo is not a TLD).  
A domain with a trailing dot is an absolute domain; it will only ever  
be resolved at the root level.
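As a small illustration (my own sketch, nothing from a spec), code that compares user-entered domains usually treats the trailing dot as marking an absolute name and strips it before comparison:

```javascript
// Sketch: a trailing dot marks an absolute (fully qualified) domain,
// one that will only ever be resolved at the root level.
function isAbsoluteDomain(domain) {
  return domain.endsWith(".");
}

// Most software that compares user-entered domains strips the root dot
// and lowercases before comparing, treating "example.com." and
// "example.com" as the same name.
function normalizeDomain(domain) {
  return domain.replace(/\.$/, "").toLowerCase();
}
```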


This difference may be significant. If someone manages to register the  
top level domain mail (which may be possible if the proposed new  
gTLD rules are passed), and has an email address of f...@mail, then  
you might want to distinguish between that resolving to f...@mail.wikimedia.org 
 vs. f...@mail.


Of course, this is complicated because the trailing dot is technically  
not allowed in an email address, but it seems to work in some contexts  
that I've tried (though most just strip off the trailing dot).


About the more general subject of this thread, I have tested sending  
myself email at all of the following addresses, all of which seem to  
work just fine, though some generate warnings in my mail client (Apple  
mail):


Brian P. campb...@dartmouth.edu
...brian...p...campbell...@dartmouth.edu
brian.p.campb...@dartmouth.edu.
Brian (this is a test) P (of comments) Campbell (and whitespace)@(here  
comes the domain) dartmouth.edu

brian p campbell

Note that Dartmouth has a very permissive email system that allows  
name components to be delimited by whitespace and/or periods, and  
prefixes of name components as long as you wind up with a unique  
match. And of course the address without the domain only works when  
I'm sending within the same domain. In some cases, the addresses were  
altered slightly in the process of being sent, for example 'Brian P. campb...@dartmouth.edu 
' came through as 'Brian P. Campbell@dartmouth.edu'.


Given that there are so many technically invalid addresses that  
actually do work to deliver mail, and that I'm sure some people have  
odd addresses due to poor form validation (perhaps someone has signed  
up for an email account on a web form and it allowed spaces in the  
address), it's probably best to be relatively lenient about the  
addresses allowed. I think the best you can do is look for at least  
one character, followed by an @ sign, followed by a legal domain name  
(which seems to be more strictly checked, though given the presence of  
IDNs, may not be easy to restrict in the future as well).
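That lenient rule, at least one character, then an @ sign, then something shaped like a domain name, could be sketched like this (illustrative only; this simple pattern deliberately ignores IDNs, the comment syntax shown above, and domainless local addresses):

```javascript
// Lenient check: one or more non-@ characters, then @, then
// dot-separated labels of letters/digits/hyphens, with an optional
// trailing root dot. Deliberately permissive on the local part.
const LENIENT_EMAIL =
  /^[^@]+@[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?(\.[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?)*\.?$/;

function looksDeliverable(address) {
  return LENIENT_EMAIL.test(address);
}
```

This accepts the dotted and trailing-dot forms above while still rejecting strings with no @ sign or no local part at all.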


-- Brian


Re: [whatwg] the cite element

2009-08-17 Thread Brian Campbell
 in a bibliography:
  .bibliography cite {}

- And for general use of titles in text (which does seem to be the  
default usage of cite if not in another context):

  cite {}


What's the alternative? Just say <em>, <i>, <cite> and <dfn> mean
'italics'? That doesn't seem particularly useful either. Why not just
drop all but <i> if that's what we do?

No, it seems useful to have elements that people can use for specific
purposes, so that style sheets can be shared, so that tools can make
use of the elements, if only in limited circles.


No, I don't believe that you should remove all mention of semantics  
that aren't machine checkable from the spec; just that the tightening  
of the semantics in this case does not seem to be gaining anything  
(what is actually going to change if people use cite only for  
titles, and resort to spans to mark up authors or full bibliographic  
citations?), while simultaneously ruling out usages that are currently  
valid and don't seem to cause any harm.



Backwards compatibility (with legacy documents, which uses it to mean
title of work) is the main reason.



People who use cite seem to use it for titles



In the 15 or more years that cite has supposedly been used for
citations, I'm only aware of one actual use of that semantic, and that
use has since been discontinued. Meanwhile, lots of people use cite
for title of work.


You claim that people seem to use it for titles many times, but in  
practice, while that is the most common use, it is also used to refer  
to authors or speakers, and sometimes also used for full bibliographic  
citations. How many sites using cite for other purposes, including  
quite prominent ones, would it take to convince you that this is  
indeed a common pattern?


-- Brian Campbell



Re: [whatwg] the cite element

2009-08-17 Thread Brian Campbell

On Aug 16, 2009, at 7:21 AM, Ian Hickson wrote:

On Wed, 12 Aug 2009, Erik Vorhes wrote:

On Wed, Aug 12, 2009 at 6:21 PM, Ian Hickson <i...@hixie.ch> wrote:

On Mon, 3 Aug 2009, Erik Vorhes wrote:
It is often the most semantically appropriate element for marking up
a name


There is no need to mark up a name at all.


I don't understand.


What is the problem solved by marking up people's names?

Why is this:

  <p>I live with <name>Brett</name> and <name>Damian</name>.</p>

...better than this?:

  <p>I live with Brett and Damian.</p>


Has anyone claimed that the cite element should be used in such a  
case? The only usage I've seen offered is that the cite element may  
be used to mark up a person's name when that person is the source of a  
quotation; as in, when you are citing that person (hence, the term  
cite). In this case, you frequently do want to distinguish them from  
the quotation. It is especially common in block level quotations, such  
as a testimonial, an epigraph, or the like.



I don't think it makes sense to ignore the existing behaviour of
authors.


Existing behaviour of authors is not to mark up names with cite.


Except for the authors that do mark up names with cite


There are some, but they are not the majority.


Should only the majority usage ever be allowed? Or if there is another  
usage, that is somewhat less common, but is still logically  
consistent, usefully takes advantage of fallback styling in the  
absence of CSS, and meets the English language definition of the term,  
should that also be allowed?


-- Brian


Re: [whatwg] the cite element

2009-07-09 Thread Brian Campbell

On Jun 5, 2009, at 3:53 AM, Ian Hickson wrote:


I don't really understand what problem this is solving.

HTML4 actually defined cite more like what you describe above; we
changed it to be a title of work element rather than a citation
element because that's actually how people were using it.

I don't think it makes sense to use the cite element to refer to  
people,
because typographically people aren't generally marked up anyway. I  
don't

really see how you'd use it to refer to untitled works.

Thus, I don't really think it makes sense to make the change you  
propose.


There are plenty of times when you want to mark up someone's name. For  
instance, if you're quoting someone in a testimonial, you may want the  
quote to appear in normal roman text, but the person's name who you  
are quoting to be in italic and right aligned:


Best value for the money!
  -- J. Random User

I might format this as:

<aside class="testimonial">
  <q>Best value for the money!</q>
  <cite>J. Random User</cite>
</aside>

aside.testimonial cite:before { content: "— " }
aside.testimonial cite { display: block; font-style: italic;
text-align: right }


Here's an example of someone asking about this specific use case, of  
how to mark up a testimonial and its source:


http://stackoverflow.com/questions/758785/what-is-the-best-way-to-markup-a-testimonial-in-xhtml

(note that I don't believe the uses of blockquote mentioned here,  
including by me, are correct, as the citation actually refers to the  
quote rather than being part of it, but I think the use of cite is  
perfectly reasonable)


The Mozilla Style Guide also uses formatting for cite that I believe  
would be appropriate for citing either a work or a person:


http://www.mozilla.org/contribute/writing/markup#quotations

Of course, it's generally preferable to cite a work, rather than a  
person, as then the citation can be verified; if you just include a  
person's name, you have to assume that they mean personal  
correspondence which is unverifiable, or simply that the work is left  
unspecified and someone else will have to track it down. But people do  
write quotes and attribute the quotation to the person rather than the  
work, and as HTML is about marking up content and not about enforcing  
academic standards, I don't see why HTML5 should be adding this  
unenforceable restriction that doesn't seem to add much value.


I wonder if there is value in specifying the semantics of elements  
like cite in much detail, in cases where there is no way to  
automatically verify those semantics and there is no use case for  
machine processing of those semantics. It seems that whatever the  
definition of cite is, you're going to need to use a microformat or  
microdata or RDFa to actually provide semantics that are machine- 
readable, so the spec should be relatively loose and leave the precise  
semantics up to one of the more flexible systems for specifying  
semantics.


-- Brian Campbell

Re: [whatwg] Canvas context.drawImage clarification

2009-07-09 Thread Brian Campbell

On Jul 9, 2009, at 9:25 PM, Oliver Hunt wrote:

I disagree. When I scale a rectangular opaque image I expect  
rectangular opaque results.  The Firefox implementation does not do  
this.


If you believe that to be the case then you can always file a bug at  
bugs.webkit.org .


Why would he file a bug to WebKit for a Firefox rendering issue? I  
would think that https://bugzilla.mozilla.org/ would get better results.


-- Brian Campbell


Re: [whatwg] accessibility management for timed media elements, proposal

2007-06-09 Thread Brian Campbell

On Jun 9, 2007, at 5:26 PM, Dave Singer wrote:

I have to confess I saw the BBC story about sign-language soon  
after sending this round internally.  But I need to do some study  
on the naming of sign languages and whether they have ISO codes.   
Is it true that if I say that the human language is ISO 639-2 code  
XXX, and that it's signed, there is only one choice for what the  
sign language is (I don't think so -- isn't american sign language  
different from british)?  Alternatively, are there ISO or IETF  
codes for sign languages themselves?


Almost no sign language is related to the spoken language of the same
region any more closely than any two unrelated spoken languages are to
each other. Sign languages are full-fledged languages in their own
right, not signed transliterations of spoken language (though they do
frequently have an alphabet system for signing words and names from
spoken languages). So, American Sign Language is not actually related
to English any more than other languages spoken in America are (like
Cherokee or Spanish).


The situation with the ISO 639-2 codes is unfortunate, because there  
is only a single code for all sign languages, sgn. It appears that  
the solution is to add extensions specifying the actual language,  
such as "sgn-US" or "sgn-GB". There's more information available here:  
http://www.evertype.com/standards/iso639/sgn.html


Re: [whatwg] Cue points in media elements

2007-05-01 Thread Brian Campbell

On Apr 30, 2007, at 7:15 PM, Ralph Giles wrote:


Thanks for adding to the discussion. We're very interested in
implementing support for presentations as well, so it's good
to hear from someone with experience.


Thanks for responding, I'm glad to hear your input.


On Sun, Apr 29, 2007 at 03:14:27AM -0400, Brian Campbell wrote:


in our language, you might see something like this:

  (movie Foo.mov :name 'movie)
  (wait @movie (tc 2 3))
  (show @bullet-1)
  (wait @movie)
  (show @bullet-2)

If the user skips to the end of the media clip, that simply causes
all WAITs on that  media clip to return instantly. If they skip
forward in the media clip, without ending it, all WAITs before that
point will return instantly.


How does this work if, for example, the user seeks forward, and then
back to an earlier position? Would some of the 'show's be undone,  
or do

they not seek backward with the media playback?


We don't expose arbitrary seeking controls to our users; just play/ 
pause, skip forward  back one card (which resets all state to a  
known value) and skip past the current video/audio (which just causes  
all waits on that media element to return instantly).



Is the essential
component of your system that all the shows be called in sequence
to build up a display state, or that the last state trigger before the
current playback point have been triggered?


The former.


Isn't this slow if a bunch
of intermediate animations are triggered by a seek?


Yes, though this is more a bug in our animation API (which could be  
taught to skip directly to the end of an animation when associated  
video/audio ends, but that just hasn't been done yet).


Actually, that brings up another point, which is a bit more  
speculative. It may be nice to have a way to register a callback that  
will be called at animation rates (at least 15 frames/second or so)  
that is called with the current play time of a media element. This  
would allow you to keep animations in sync with video, even if the  
video might stall briefly, or seek forward or backward for whatever  
reason. We haven't implemented this in our current system (as I said,  
it still has the bug that animations still take their full time to  
play even when you skip video), but it may be helpful for this sort  
of thing.



Does your system support live streaming as well? That complicates the
design some when the presentation media updates appear dynamically.


No, we only support progressive download.


Anyway I think you could implement your system with the currently
proposed interface by checking the current playback position and
clearing a separate list of waits inside your timeupdate callback.


I agree, it would be possible, but from my current reading of the  
spec it sounds like some cue points might be missed until quite a bit  
later (since timeupdate isn't guaranteed to be called every time  
anything discontinuous happens with the media). In general, having to  
do extra bookkeeping to keep track of the state of the media may be  
fragile, so stronger guarantees about when cue points are fired is  
better than trying to keep track of what's going on with timeupdate  
events.
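For the record, the bookkeeping a page would need today might look roughly like this (a sketch of my own; `onTimeUpdate` and `onEnded` would be wired to the media element's real timeupdate and ended events, whose exact timing is precisely the fragile part):

```javascript
// Sketch: WAIT-style callbacks released from a timeupdate handler.
// Every wait at or before the observed position fires, even if its
// exact time was never "reached" (stall, seek, coarse event timing).
function makeWaiter() {
  const pending = [];
  return {
    wait(time, callback) {
      pending.push({ time, callback });
    },
    // Called with media.currentTime on each timeupdate event.
    onTimeUpdate(currentTime) {
      for (let i = pending.length - 1; i >= 0; i--) {
        if (pending[i].time <= currentTime) {
          pending.splice(i, 1)[0].callback();
        }
      }
    },
    // Skipping to the end releases all remaining waits.
    onEnded() {
      while (pending.length) pending.pop().callback();
    },
  };
}
```

The fragility is that this only sees the positions that timeupdate happens to report, which is exactly why stronger guarantees from the spec would be preferable.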


I agree this should be clarified. The appropriate interpretation
should be when the current playback position reaches the frame
corresponding to the cue point, but digital media has quantized
frames, while the cue points are floating point numbers. Triggering
all cue point callbacks between the last current playback position and
the current one (including during seeks) would be one option, and do
what you want as long as you aren't seeking backward. I'd be more in
favor of triggering any cue point callbacks that lie between the
current playback position and the current playback position of the
next frame (audio frame for <audio> and video frame for <video>, I
guess). That means more bookkeeping to implement your system, but is
less surprising in other cases.


Sure, that would probably work. As I said, bookkeeping is generally a  
problem because it might get out of sync, but with stronger  
guarantees about when cue points are triggered, I think it could work.



  If video
playback freezes for a second, and so misses a cue point, is that
considered to have been reached?


As I read it, cue points are relative to the current playback
position, which does not advance if the stream buffer underruns, but
it would if playback restarts after a gap, as might happen if the
connection drops, or in an RTP stream. My proposal above would need to
be amended to handle that case, and the decoder dropping
frames...finding the right language here is hard.


Yes, it's a tricky little problem. Our current system stays out of  
trouble because it makes quite a few simplifying assumptions (video  
is played forward only, progressive download, not streaming, etc).  
Obviously, in order to support a more general API, you're

Re: [whatwg] Cue points in media elements

2007-05-01 Thread Brian Campbell

On May 1, 2007, at 1:05 PM, Kevin Calhoun wrote:

I believe that a cue point is reached if its time is traversed  
during playback.


What does "traversed" mean in terms of (a) seeking across the cue
point, (b) playing in reverse (rewinding), and (c) the media stalling
and restarting at a later point in the stream?


[whatwg] Cue points in media elements

2007-04-29 Thread Brian Campbell
I'm a developer of a custom engine for interactive multimedia, and  
I've recently noticed the work WHATWG has been doing on adding  
video and audio elements to HTML. I'm very glad to see these  
being proposed for addition to HTML, because if they (and several  
other features) are done right, it means that there may be a chance  
for us to stop using a custom engine, and use an off-the-shelf HTML  
engine, putting our development focus on our authoring tools instead.  
My hope is that eventually, if these features get enough penetration,  
to put our content up on the web directly, rather than having to  
distribute the runtime software with it.


I've taken a look at the current specification for media elements,  
and on the whole, it looks like it would meet our needs. We are  
currently using VP3, and a combination of MP3 and Vorbis audio, for  
our codecs, so having Ogg Theora (based on VP3) and Ogg Vorbis as a  
baseline would be completely fine with us, and much preferable to the  
patent issues and licensing fees we'd need to deal with if we used  
MPEG4.


For the sort of content that we produce, cue points are incredibly  
important. Most of our content consists of a video or voiceover  
playing while bullet points appear, animations play, and graphics are  
revealed, all in sync with the video. We have a very simple system  
for doing cue points, that is extremely easy for the content authors  
to write and is robust for paused media, media that is skipped to the  
end, etc. We simply have a blocking call, WAIT, that waits until a  
specific point or the end of a specified media element. For instance,  
in our language, you might see something like this:


  (movie Foo.mov :name 'movie)
  (wait @movie (tc 2 3))
  (show @bullet-1)
  (wait @movie)
  (show @bullet-2)

If the user skips to the end of the media clip, that simply causes  
all WAITs on that  media clip to return instantly. If they skip  
forward in the media clip, without ending it, all WAITs before that  
point will return instantly. If the user pauses the media clip, all  
WAITs on the media clip will block until it is playing again.


This is a nice system, but I can't see how even as simple a system as  
this could be implemented given the current specification of cue  
points. The problem is that the callbacks execute when the current  
playback position of a media element reaches the cue point. It seems  
unclear to me what reaching a particular time means. If video  
playback freezes for a second, and so misses a cue point, is that  
considered to have been reached? Is there any way that you can  
guarantee that a cue point will be executed as long as video has  
passed a particular cue point? With a lot of bookkeeping and the  
timeupdate event along with the cue points, you may be able to keep  
track of the current time in the movie well enough to deal with the  
user skipping forward, pausing, and the video stalling and restarting  
due to running out of buffer. This doesn't address, as far as I can  
tell, issues like the thread displaying the video pausing for  
whatever reason and so skipping forward after it resumes, which may  
cause cue points to be lost, and which isn't specified to send a  
timeupdate event.


Basically, what is necessary is a way to specify that a cue point  
should always be fired as long as playback has passed a certain time,  
not just if it reaches a particular time. This would prevent us  
from having to do a lot of bookkeeping to make sure that cue points  
haven't been missed, and make everything simpler and less fragile.
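In other words, the desired semantics are monotonic: keep a cursor into a sorted cue list and fire everything up to the new playback position, exactly once. A minimal sketch (my own, not spec text):

```javascript
// "Fire once playback has passed the time" semantics: advance() is
// called with each new playback position, and every unfired cue at or
// before that position fires exactly once, even if several cues were
// jumped over by a stall or a forward seek.
function makeCueList(cues) { // cues: [{ time, callback }]
  const sorted = [...cues].sort((a, b) => a.time - b.time);
  let next = 0; // index of the first cue that has not fired yet
  return {
    advance(newTime) {
      while (next < sorted.length && sorted[next].time <= newTime) {
        sorted[next].callback(sorted[next].time);
        next++;
      }
    },
  };
}
```

With this model, no cue point can be silently lost to a stall: a jump from 1.9 to 2.5 still fires the cue at 2.0.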


We're also greatly interested in making our content accessible, to  
meet Section 508 requirements. For now, we are focusing on captioning  
for the deaf. We have voiceovers on some screens with no associated  
video, video that appears in various places on the screen, and the  
occasional sound effects. Because there is not a consistent video  
location, nor is there even a frame for voiceovers to appear in, we  
don't display the captions directly over the video, but instead send  
events to the current screen, which is responsible for catching the  
events and displaying them in a location appropriate for that screen,  
usually a standard location. In the current spec, all that is  
provided for is controls to turn closed captions on or off. What  
would be much better is a way to enable the video element to send  
caption events, which include the text of the current caption, and  
can be used to display those captions in a way that fits the design  
of the content better.
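The event-based model we use could be sketched like this (entirely hypothetical names, invented for illustration; nothing like this exists in the current spec). The point is simply that caption text is delivered to script, and the current screen decides where to render it:

```javascript
// Hypothetical caption-event plumbing (all names invented here).
// The media layer emits caption events; whichever screen is current
// registers a listener and places the text wherever its layout calls
// for, rather than the UA compositing captions over the video frame.
function makeCaptionSource() {
  const listeners = [];
  return {
    onCaption(listener) { listeners.push(listener); },
    emit(text, startTime) {
      for (const listener of listeners) listener({ text, startTime });
    },
  };
}
```

A screen would register something like `source.onCaption(e => captionArea.textContent = e.text)`, where `captionArea` is whatever element that screen's design reserves for captions.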


I hope these comments make sense; let me know if you have any  
questions or suggestions.


Thanks,
Brian Campbell
Interactive Media Lab, Dartmouth College
http://iml.dartmouth.edu