Re: [whatwg] element feedback

2007-03-24 Thread Kevin Marks

On 3/23/07, Silvia Pfeiffer <[EMAIL PROTECTED]> wrote:

On 3/23/07, Nicholas Shanks <[EMAIL PROTECTED]> wrote:
> 3) And a way for users to link to timecodes that aren't marked up at
> all.
I know of only one format that provides for all this functionality at
the moment and that is Ogg Theora with the Annodex and CMML
extensions.


QuickTime has done this for at least 15 years. If you want the Ogg vs
QuickTime theoretical argument, have a look at:

http://lists.xiph.org/pipermail/vorbis-dev/2001-October/004846.html

Broadly, there are 3 approaches to the seeking problem.

1. define everything as a bitstream, and require that you can resync
within a known interval (this is the MPEG1/2 approach, with the GOP
size defining the interval)
2. define a chunk/offset table that maps media to time, and look this
up ahead of any seeking. (this is the QT approach, and that of MPEG4)
3. make the file consist of fixed-sized chunks for each time, so you
can seek arbitrarily and hit a good offset. (this is the uncompressed
audio and DV file approach).
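Approach 2 reduces to an in-memory table lookup; as an illustrative sketch (hypothetical table layout, not the actual QuickTime 'stco'/'stts' atoms):

```javascript
// Sketch of approach 2 (QuickTime/MPEG4 style): a table built at load
// time maps each chunk's start timestamp to its byte offset, so a seek
// is a binary search in memory with no disk I/O until the final read.
function buildSeekTable(chunks) {
  // chunks: array of {time, offset}; returned sorted by time
  return chunks.slice().sort((a, b) => a.time - b.time);
}

function seekOffset(table, t) {
  // Find the byte offset of the last chunk starting at or before t.
  let lo = 0, hi = table.length - 1, best = 0;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (table[mid].time <= t) { best = mid; lo = mid + 1; }
    else hi = mid - 1;
  }
  return table[best].offset;
}
```

The point is that the cost of seeking is paid once, up front, when the table is read.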

Ogg made up a bastard hybrid of 1 and 2, where there is still a
dependency on getting the codebooks from the start of the file, but
the only way to jump is by doing a binary search with a disk seek per
jump. As I said 6 years ago:

How does one seek a Vorbis file with video in and recover framing?

It looks like you skip to an arbitrary point and scan for 'OggS' then
do a 64kB CRC to make sure this isn't a fluke. Then you have some
packets that correspond to some part of a frame of video or audio.
You recover a timestamp, and thus you can pick another random point
and do a binary chop until you hit the timestamp before the one you
wanted. Then you need to read pages until the timestamp changes and
you have resynced that stream. Any other interleaved streams are
presumably being resync'd in parallel so you can then get back to the
read and skip framing. Try doing that from a CD-ROM.

Do let me know if that has since been fixed.
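The bisection described above can be sketched as follows; readTimeAt is a stand-in for "seek, scan for 'OggS', verify the CRC, read the granule position", and each call to it costs a real disk seek:

```javascript
// Sketch of the binary-chop seek described above. Unlike a chunk/offset
// table, every probe hits the disk, so a seek costs ~log2(fileSize)
// disk seeks rather than one.
function bisectSeek(readTimeAt, fileSize, target) {
  let lo = 0, hi = fileSize, probes = 0;
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    probes++;
    if (readTimeAt(mid) <= target) lo = mid; else hi = mid;
  }
  return { offset: lo, probes };
}
```

For a 1 MB file that is about 20 probes, which is exactly what makes this painful from a CD-ROM.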


Re: [whatwg] Apple Proposal for Timed Media Elements

2007-03-24 Thread Kevin Marks

On 3/21/07, Chris Double <[EMAIL PROTECTED]> wrote:


> Looping is useful for more presentational uses of video. Start and
> end time are useful in case you want to package a bunch of small bits
> of video in one file and just play different segments, similar to the
> way content authors sometimes have one big image and use different
> subregions. Or consider looping audio, or a single audio file with
> multiple sound effects. These are two examples.

Could the looping be done via javascript rather than having explicit
support for it with loopStartTime, etc? If an event is raised when the
video reaches endTime then event handler could then restart it.



For smooth looping, you need to have the next buffer ready and cued up when
the previous one finishes. Doing this consistently with a roundtrip through
javascript events is going to stutter or gap. For video at 30fps, you can
make the interval, but audio at 48kHz means you are more likely to hear a
click or gap.
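The arithmetic behind that claim, as plain numbers (no real audio API involved):

```javascript
// How many samples of silence a given scheduling delay costs: a video
// frame at 30fps gives a ~33ms budget, but even a 5ms event-loop
// roundtrip at 48kHz leaves 240 missing samples -- an audible click.
function gapSamples(delayMs, sampleRateHz) {
  return Math.round(delayMs / 1000 * sampleRateHz);
}
const videoFrameBudgetMs = Math.round(1000 / 30); // ~33 ms per frame
const clickSamples = gapSamples(5, 48000);        // 240 samples
```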


Re: [whatwg] Codecs (was Re: Apple Proposal for Timed Media Elements)

2007-03-24 Thread Kevin Marks

On 3/23/07, Christian F.K. Schaller <[EMAIL PROTECTED]> wrote:


On Fri, 2007-03-23 at 08:12 -0700, Kevin Calhoun wrote:
> On Mar 23, 2007, at 2:56 AM, Maik Merten wrote:
>
> > MPEG4 adoption to the web has been poor from my point of view. Today
> > I'd
> > guess the absolute king in marketshare is Flash video, then following
> > Windows Media, then followed by QuickTime (that may carry MPEG4... but
> > the container is not MPEG) and perhaps a bit of RealVideo in between.



Are you talking container or codecs here? AVI is a significant container
format, with some variant of MPEG4 codecs in.


> Just a quick correction here: QuickTime does support the MPEG-4
> container format.

Yes, but that is the opposite of the stated issue. The issue is that
the .mov files out there are actually not valid MPEG4 files. Which means
that with a MPEG4 compliant demuxer one would not be able to demux a
Quicktime file. So Maik's claim still stands, MPEG4 has almost no
adoption on the web. Apple could have solved this of course by making
sure .mov was MPEG4 compliant, which would have been a natural step
after pushing so hard to make the quicktime container format the basis
for the MPEG4 container format, but I guess the temptation of
proprietary lock-in was too big.



This is entirely backwards. The QT file format is not proprietary; it is
openly documented. There is a patent issue around hint tracks that Apple
could resolve, but that is a very marginal case: hint tracks are designed
only to be read by streaming servers for stored content, which is outside
the scope for user agents anyway.

MPEG4 defines a subset of codecs and support levels. QT allows arbitrary
codecs to be contained. So Apple could not make QT files MPEG4 compliant
retrospectively without a time machine.
What Apple have done is support export to compliant MPEG4 files from all
their editing products, and default to them in many cases (the .m4a files
iTunes makes, the m4v ones that iMovie makes, and the audio with chapters
and visual frames in that GarageBand makes are all mpeg4). All of these are
played by iPods as well as clients Apple freely distributes for Mac and
Windows (iTunes), and browser plugins, paying the encoding and decoding
license fees.

This is muddied by the iTunes store DRM that IS designed to be proprietary
and prevent interoperability, but as Steve Jobs said recently, this is a
very small fraction of media files.

Now, if you want a fallback standard that is genuinely widely interoperating
without patent issues, you could pick QuickTime with JPEG video frames and
uncompressed audio. Millions of digital cameras support this format already,
as do all quicktime implementations back to 1990, as well as WMP and
RealPlayer and all the open source players.


Re: [whatwg] On the use of MPEG-4 as baseline codec

2007-04-02 Thread Kevin Marks

On 3/31/07, Asbjørn Ulsberg <[EMAIL PROTECTED]> wrote:

I've investigated a bit on the use of MPEG-4 as a baseline codec in the
proposed <video> element, and my conclusion is that it can't be used with
the current licensing terms. From the AVC/H.264 Agreement[1]:

# For branded encoder and decoder products sold both to end users
# and on an OEM basis for incorporation into personal computers
# but not part of an operating system [...], royalties (beginning
# January 1, 2005) per legal entity are 0 - 100,000 units per
# year = no royalty [...] US $0.20 per unit after first 100,000
# units each year; above 5 million units per year, royalty =
# US $0.10 per unit.

I'm no lawyer, but I think this provides the necessary information to
conclude that MPEG-4 is unsuited as a baseline codec for the <video>
element, unless browser vendors (A) find the licensing terms reasonable or
(B) manage to restrict downloads of their application to 100,000 units per
year. I doubt both, but I'd love to be proven wrong, of course.

I find it quite disappointing that the MPEG Licensing Authority doesn't
distinguish between royalty and royalty-free distributions of the codec,
of which most web browsers would fit in the latter group.


Well, you missed the cap clause, which would mean that large
corporations could do this for a known cost, which is how Apple and
Microsoft can distribute this:

"The maximum annual royalty ("cap") for an enterprise (commonly controlled
legal entities) is $3.5 million per year 2005-2006, $4.25 million per
year 2007-08, $5 million per year 2009-10"
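Putting the quoted tiers together as a worked example (one reading of the agreement, treating the rates as marginal; figures are from the text above, not legal advice):

```javascript
// AVC/H.264 royalty under the quoted terms: first 100,000 units free,
// $0.20/unit up to 5 million, $0.10/unit beyond, capped per year.
// Computed in integer cents to avoid floating-point drift.
function avcRoyaltyUsd(units, capUsd) {
  let cents = 0;
  if (units > 100000) cents += (Math.min(units, 5000000) - 100000) * 20;
  if (units > 5000000) cents += (units - 5000000) * 10;
  return Math.min(cents / 100, capUsd);
}
const browserScale = avcRoyaltyUsd(100000000, 5000000); // hits the cap
```

At browser-distribution volumes the cap dominates, which is the point: the cost is large but known.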


Re: [whatwg] Sequential List Proposal

2007-04-10 Thread Kevin Marks

On 4/8/07, Elliotte Harold <[EMAIL PROTECTED]> wrote:

Michel Fortin wrote:

> So I propose a  element (sequential list) which can be used to
> replace  as well as other things. The proposal can be found here:
>

Sounds a little redundant with ol (ordered list). Also sounds needlessly
confusing and hard to explain. I'm not sure we really need dialog, but
at least it's simple and obvious to explain to people what it means. The
more abstract and generic we get the harder this becomes. Concreteness
is underrated among software developers, but widely appreciated by other
users.


I think the  example is a retrograde step. The
 pattern seems much better than redefining
<dt> and <dd>, which will confuse XOXO parsers that try to be
Postelian. Did I miss some reasoning here?


Re: [whatwg] List captions

2007-04-10 Thread Kevin Marks

On 4/6/07, Elliotte Harold <[EMAIL PROTECTED]> wrote:

Andy Mabbett wrote:
> How often do we see something like:
>
> Animals:
> 
>   Cat
>   Dog
>   Horse
>   Cow
> 
>
> This would be more meaningful as:
>
> 
>   Cat
>   Dog
>   Horse
>   Cow
> 
>

No, the caption should be displayed to all users. That means it needs to
be element content, not an attribute value. Just maybe

 
   Animals
   Cat
   Dog
   Horse
   Cow
 


Seems to me that what you want semantically is:


 Animals
 Cat
 Dog
 Horse
 Cow


or maybe


 Animals
 
 Cat
 Dog
 Horse
 Cow
 


if you're feeling XOXO-esque


Re: [whatwg] Sequential List Proposal

2007-04-11 Thread Kevin Marks

On 4/10/07, Benjamin Hawkes-Lewis <[EMAIL PROTECTED]> wrote:

Kevin Marks wrote:

> I think the  example is a retrograde step. The
>  pattern seems much better than redefining
> <dt> and <dd>, which will confuse XOXO parsers that try to be
> Postelian. Did I miss some reasoning here?

Fictional dialogs don't involve the excerpt and citation of external
sources, which is what q/blockquote and cite are properly for. Given the
HTML4 spec's own use of dt and dd, it's far from clear that any
redefinition is involved. That isn't to suggest that dt and dd are
optimal however.


My point is that this is breaking the expected containment of 
in a - if you want a new structure purely for dialog, define
 and keep .  I really fail to see why redefining a
definition list as speech is less 'proper' than expanding the context
of  slightly.


Re: [whatwg] Attribute for holding private data for scripting

2007-04-11 Thread Kevin Marks

On 4/11/07, Jon Barnett <[EMAIL PROTECTED]> wrote:

> If you want structured data in this attribute, why not just use JSON?

That's an idea that crossed my mind as well.  I dismissed it for a few
reasons:
- authors would have to entitize quotes and ampersands in their attributes,
which they're not used to doing with JSON normally.
- evaluating it would mean:
var obj = eval(myelement.getAttribute("_myjson"));


How about defining an attribute that is the name of the js variable
for use with that element? Then you can define the variable in a
<script> tag, and use pure JSON cleanly.

Re: [whatwg] Attribute for holding private data for scripting

2007-04-11 Thread Kevin Marks

On 4/11/07, Jon Barnett <[EMAIL PROTECTED]> wrote:



On 4/11/07, Kevin Marks <[EMAIL PROTECTED]> wrote:
> On 4/11/07, Jon Barnett <[EMAIL PROTECTED]> wrote:
> > > If you want structured data in this attribute, why not just use JSON?
> >
> > That's an idea that crossed my mind as well.  I dismissed it for a few
> > reasons:
> > - authors would have to entitize quotes and ampersands in their
attributes,
> > which they're not used to doing with JSON normally.
> > - evaluating it would mean:
> > var obj = eval( myelement.getAttribute("_myjson"));
>
> How about defining an attribute that is the name of the js variable
> for use with that element? Then you can define the variable in a
> <script> tag, and use pure JSON cleanly.

I don't understand what you mean there.  It was said that we don't need to
add something new to the DOM.  If I understand, you're suggesting a single
attribute hypothetically called "params" spec'ed to be a JSON format:
<div params="{foo: 'bar', bish: &quot;bash&quot;}"></div>
with the DOM attribute named params that parses that attribute as JSON into
an object so that something like this happens in JavaScript:
...
mydiv.params.foo == 'bar'; // it is!

While that would be nice, it's not something browsers currently do, and the
goal is to spec something that today's browsers already handle and HTML5
validators will be happy with.  Granted, you can use eval() in Javascript to
get what you want in today's browsers, but is it best to actually spec it
that way?

No, what I'm suggesting is that you have, say, a 'localdata' attribute
that names the associated variable:
<script>myparams={"foo":"bar","bish":"bash"};</script>
<div id="mydiv" localdata="myparams"></div>

mydiv.localdata.foo == "bar"; // it is

I think making this work in current browsers would be doable by having
a script that creates the DOM properties by looking for the 'localdata'
attributes.
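That fallback script can be sketched like this; shown against a minimal stand-in DOM so the lookup logic is visible (in a real page you would walk document.querySelectorAll('[localdata]'), and 'localdata' is a hypothetical attribute name from the proposal above):

```javascript
// For each element carrying a 'localdata' attribute, attach the named
// global object as a property, so mydiv.localdata.foo works as shown.
function attachLocalData(elements, globals) {
  for (const el of elements) {
    const name = el.attributes.localdata;
    if (name && name in globals) el.localdata = globals[name];
  }
}
```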


Re: [whatwg] Give guidance about RFC 4281 codecs parameter

2007-04-12 Thread Kevin Marks

On 4/11/07, Dave Singer <[EMAIL PROTECTED]> wrote:


We had to settle on one type that was valid for all files, to deal
with the (common) case where the server was not willing to do
introspection to find the correct type.  We decided that "audio/"
promises that there isn't video, whereas "video/" indicates that
there may be.  It's not optimal, agreed.


I agree that video/xxx and audio/xxx are useful distinctions. Another
point is that as IE ignores MIME types in favour of extensions, in
practice we end up with multiple extensions pointing to the same
filetype, to give a cue for differentiation:
.wmv vs .wma
.m4v vs .m4a (also .m4p for DRM'd and .m4b for audiobooks, no?)

That these distinctions keep being made, despite neutral formats with
extensions like .mov, .avi, .mp4 and .ogg, implies that there is some
utility there.


Re: [whatwg] Blurry lines in 2D Canvas (and SVG)

2013-07-25 Thread Kevin Marks
On chrome android I see the opposite - the left rects are sharp, the middle
ones fuzzy. Sounds like tests needed.
On Jul 23, 2013 5:18 PM, "David Dailey"  wrote:

> Hi Rik,
>
> Just affirming what you've said in SVG:
> http://cs.sru.edu/~ddailey/svg/edgeblurs.svg
>
> The middle rects are crisp, having been merely translated leftward and
> downward by half a pixel. Zooming in from the browser rectifies the problem
> (as expected) after a single tick.
>
> I remember folks discussing sub-pixel antialiasing quite a bit on the SVG
> lists circa fall/winter 2011. It seemed to cause some troubles for D3. Is
> that the same issue?
>
> Cheers
> David
>
>
> -Original Message-
> From: whatwg-boun...@lists.whatwg.org
> [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of Rik Cabanier
> Sent: Tuesday, July 23, 2013 7:19 PM
> To: wha...@whatwg.org
> Subject: [whatwg] Blurry lines in 2D Canvas (and SVG)
>
> All,
>
> we've noticed that if you draw lines in canvas or SVG, they always end up
> blurry.
> For instance see this fiddle: http://jsfiddle.net/V92Gn/128/
>
> This happens because you offset 1 pixel and then draw a half pixel stroke
> on
> each side. Since it covers only half the pixel, the color gets mapped to
> 50%
> gray.
> You can work around this by doing an extra offset of half the
> devicepixelratio, but ideally this should never happen.
>
> Is this behavior specified somewhere?
> Is there a way to turn this off?
>
>
>
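The half-pixel workaround described above can be captured in a small snapping helper (an illustrative sketch; in a page, dpr would be window.devicePixelRatio):

```javascript
// A 1px stroke centered on an integer coordinate straddles two device
// pixels and antialiases to 50% grey; centering it on x + 0.5 covers
// exactly one. Even line widths need no offset, odd widths need half
// a device pixel.
function crisp(x, lineWidth = 1, dpr = 1) {
  const device = Math.round(x * dpr);
  const offset = (Math.round(lineWidth * dpr) % 2) ? 0.5 : 0;
  return (device + offset) / dpr;
}
// e.g. ctx.moveTo(crisp(10), 0) instead of ctx.moveTo(10, 0)
```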


Re: [whatwg] Supporting more address levels in autocomplete

2014-02-21 Thread Kevin Marks
Those names come from vcard - if adding a new one, consider how to model it
in vcard too. Note that UK addresses can have this too - eg 3 high street,
Kenton, Harrow, Middlesex, UK
Would putting the 2 degrees of locality as comma separated in that field
make more sense?
Given that this schema is the most widespread addressbook format, I'm sure
someone has a dataset to discover usage (Google? Apple? Microsoft?)
On 21 Feb 2014 16:30, "Ian Hickson"  wrote:

> On Fri, 21 Feb 2014, Dan Beam wrote:
> >
> > While internationalizing Chrome’s implementation of
> > requestAutocomplete(), we found that Chinese, Korean, and Thai addresses
> > commonly ask for [at least] 3 levels of administrative region. For
> > example, in this Chinese address:
> >
> >   Humble Administrator’s Garden
> >   n°178 Dongbei Street, Gusu, Suzhou
> >   215001 Jiangsu
> >   China
> >
> > the first-level address component is “Jiangsu” (province) as it’s the
> > first level below country, “Suzhou” is a prefecture level city (below
> > province), and “Gusu” is a district of Suzhou.
> >
> > To support this address format and arbitrarily many administrative
> > levels, we propose adding new tokens to the autocomplete spec:
> > address-level-n, for arbitrary n.
>
> This would be the first open-ended field name. Do we really want to make
> this open-ended? What happens if a form has n=1..3, and another has
> n=2..4? What if one has n=1, n=2, and n=4, but not n=3? How does a site
> know how many levels to offer?
>
> What should a Chinese user interacting with a US company put in as their
> address, if they want something shipped to China?
>
>
> > The current HTML spec supports “region” and “locality”. We feel these
> > should remain in the spec, as they are still useful for typical Western
> > addresses. In a typical Western address, address-level-1 would align
> > with “region” and address-level-2 would align with “locality”.
>
> So they would be synonyms? Or separate fields?
>
> Note that in the case of US addresses, in particular, the "region" field
> is often exposed as a <select> drop-down, not a free-form field. It's
> important that we be consistent as to which field maps to which list of
> names, in cases like this. (I don't know how common this is outside the
> US; I don't recall seeing it in European contexts.)
>
>
> > Compared to the alternative of adding another one-off such as
> > “dependent-locality” or “sub-locality”, we feel this is a more
> > descriptive and general way to tackle additional administrative levels
> > without making false implications about the semantics of the value that
> > is returned.
>
> I agree that at this point, it's better to use numbers than more specific
> names.
>
> --
> Ian Hickson   U+1047E)\._.,--,'``.fL
> http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Supporting more address levels in autocomplete

2014-02-21 Thread Kevin Marks
On 21 Feb 2014 17:03, "Ian Hickson"  wrote:
>
> On Fri, 21 Feb 2014, Kevin Marks wrote:
> >

>
> > Those names come from vcard - if adding a new one, consider how to
model it
> > in vcard too. Note that UK addresses can have this too - eg 3 high
street,
> > Kenton, Harrow, Middlesex, UK
>
> That's actually a bogus UK address. I'm not sure exactly which town you
> meant that to be in, but official UK addresses never have more than two
> "region" levels, and usually only one (the "post town"). The only time
> they have two is when the post town has two streets with the same name.

The real address, where I grew up,  was:
2 Melbury Road, Kenton, Harrow, Middlesex, HA3 9RA

>
> (Note that a lot of people in the UK have no idea how to write their
> address according to current standards. For example, people often include
> the county, give the "real" town rather than the "post town", put things
> out of order, indent each line of the address, etc.)

Damn humans, not following specs. Actually UK addresses have a huge amount
of leeway, as they are routed by postcode in the main (though I did receive
a postcard addressed to "Kevin, Sidney, Cambridge" once).

UK forms tend to ask for postcode and street number these days, and barf
when given US addresses


Re: [whatwg] audio and video: volume and muted as content attributes?

2010-06-09 Thread Kevin Marks
Setting volume above 1.0 can be very useful if the original is too quiet.
For example, Quicktime allows a volume of 300% to amplify quiet tracks
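At the sample level, "a volume of 300%" is just multiplication plus clamping; a plain-array sketch (a real implementation would use the platform audio pipeline, not script):

```javascript
// Apply a gain factor to normalized samples in [-1, 1], clamping the
// result. Gain > 1.0 amplifies a quiet track; peaks that exceed full
// scale are clipped.
function applyGain(samples, gain) {
  return samples.map(s => Math.max(-1, Math.min(1, s * gain)));
}
```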

On May 31, 2010 11:30 PM, "Philip Jägenstedt"  wrote:

On Tue, 01 Jun 2010 14:17:03 +0800, Silvia Pfeiffer <
silviapfeiff...@gmail.com> wrote:

> On Tue, Ju...
This would make volume even more special, as a float that reflects as an
integer percentage. Just using the existing definition for reflecting a
float would be simpler.



>> So, I am neither in favor or against of reflecting volume and mute as
>> content attributes. Im...
I'd be fine with reflecting muted if many people think it would be useful.
I'm not the one to make that judgment though.

Volume isn't a huge problem, just not as trivial as one might suspect.
Another thing to consider is that it is currently impossible to set volume
to a value outside the range [0,1] via the DOM API. With a content
attribute, volume="-1" and volume="1.1" would need to be handled too. I'd
prefer it being ignored rather than being clamped.
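The "ignore rather than clamp" behaviour suggested above could look like this (a sketch for a hypothetical volume content attribute, not anything specced):

```javascript
// Parse a volume content attribute: out-of-range or unparseable values
// are ignored, leaving the default untouched, rather than being clamped.
function parseVolumeAttr(value, fallback = 1.0) {
  const v = parseFloat(value);
  return (Number.isFinite(v) && v >= 0 && v <= 1) ? v : fallback;
}
```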



>> [1]
>>
http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#reflect
>
>
>
> Ch...

-- 
Philip Jägenstedt
Core Developer
Opera Software


Re: [whatwg] Processing the zoom level - MS extensions to window.screen

2010-11-23 Thread Kevin Marks
Most video displays have non-square pixels. Standard definition video
processing resolutions are 720 by 480 for NTSC and 720 by 576 for PAL, though
both are 4 by 3 aspect ratio.
You can argue whether tv standards count as modern, but there are a lot out
there.
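The arithmetic behind those numbers: a 720-pixel-wide frame displayed at 4:3 means each stored pixel is shown wider or narrower than it is tall.

```javascript
// Pixel aspect ratio = display aspect / storage aspect. Values below 1
// mean tall pixels, above 1 mean wide pixels.
function pixelAspectRatio(storedW, storedH, displayAspect) {
  return displayAspect / (storedW / storedH);
}
const ntscPar = pixelAspectRatio(720, 480, 4 / 3); // 8/9, tall pixels
const palPar = pixelAspectRatio(720, 576, 4 / 3);  // 16/15, wide pixels
```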

On 21 Nov 2010 16:56, "Robert O'Callahan"  wrote:

On Mon, Nov 22, 2010 at 1:40 PM, Charles Pritchard  wrote:

>
> I would point out that the MS proposal has an independent X and Y scaling
mechanism.

Does anyone know of any modern displays which have different X and Y
resolution?



>
> I believe that dpi ratio is simply set to "2" (or .5... sorry a bit rusty)
on the iOS 4 retina ...
There will be cases where zooming doesn't change device-pixel-ratio. Mobile
browsers tend to have a "fast" zoom out which doesn't change the layout
(mostly), and that might not change device-pixel-ratio. I think that's OK
for your use cases as long as device-pixel-ratio reports the ratio as if the
page is "zoomed in".



Rob
-- 
"Now the Bereans were of more noble character than the Thessalonians, for
they received th...


Re: [whatwg] Processing the zoom level - MS extensions to window.screen

2010-11-23 Thread Kevin Marks
Well, if we care about doing video processing with Canvas, understanding
anamorphic pixels is needed.

On Tue, Nov 23, 2010 at 4:50 PM, Robert O'Callahan wrote:

> On Wed, Nov 24, 2010 at 1:24 PM, Kevin Marks  wrote:
>
>> Most video displays have non-square pixels. Standard definition video
>> processing resolutions are 720 by 480 for NTSC and 720 by 576 for PAL though
>> both are 4 by 3 aspect ratio.
>> You can argue whether tv standards count as modern, but there are a lot
>> out there.
>>
>
> Yeah, I slipped in "modern" to exclude examples that don't fit my argument
> :-).
>
> But seriously, GUI platforms and applications used to support non-square
> pixel aspect ratios (e.g. Windows in the CGA/EGA era), but no longer do.
> That time has passed.
>
> Rob
> --
> "Now the Bereans were of more noble character than the Thessalonians, for
> they received the message with great eagerness and examined the Scriptures
> every day to see if what Paul said was true." [Acts 17:11]
>


Re: [whatwg] need a way to set output format from StreamRecorder

2010-11-27 Thread Kevin Marks
For audio at least, supporting uncompressed should be possible and
uncontroversial, as there are clearly no patent issues here. Anyone serious
about recording and processing audio would not consider recording compressed
audio nowadays.

There are several widely used raw audio formats (.au, WAV, AIFF, AVI) that
can wrap into a filestream, and there are of course the issues of sample
rate, channel count and bit resolution, but compared to codec issues these
are relatively straightforward from an engineering point of view, and not
tied up with licensing issues.
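The bookkeeping mentioned above (sample rate, channel count, bit resolution) reduces to simple arithmetic for uncompressed audio:

```javascript
// Data rate of raw PCM audio. CD audio (44100 Hz, stereo, 16-bit)
// comes to 176,400 bytes per second -- easily within reach of any
// recording device, with no codec licensing attached.
function pcmBytesPerSecond(sampleRateHz, channels, bitsPerSample) {
  return sampleRateHz * channels * (bitsPerSample / 8);
}
const cdRate = pcmBytesPerSecond(44100, 2, 16); // 176400 bytes/s
```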

Raw video is more of a problem at present, given common bandwidth
constraints, but if we are interested in providing for image manipulation
APIs, having pixel formats that map to video better than RGBA may be needed.
The enumeration at
http://developer.apple.com/quicktime/icefloe/dispatch020.html
may be helpful here.

On Fri, Nov 26, 2010 at 11:10 AM, Nils Dagsson Moskopp <
nils-dagsson-mosk...@dieweltistgarnichtso.net> wrote:

> Silvia Pfeiffer  schrieb am Thu, 25 Nov 2010
> 20:01:37 +1100:
>
> > Also, implementing WebM or Ogg Theora encoding is just as royalty-free
> > as decoding them, so Mozilla, Opera and Google wouldn't need to worry
> > there.
>
> Slightly offtopic: Anyone considering the low-bandwith audio use case?
> Surely, speex might be useful here — even a throttled UMTS connection
> suffices for VoIP.
>
> > So, the browsers would implement support for those codecs for which
> > they already implement decoding support - maybe with the exception of
> > Chrome which decode MPEG-4, but may not want to encode it, since it
> > might mean extra royalties.
>
> And probably less WebM content, too boot. Decoding, but not encoding
> MPEG formats could certainly fit into a royalty-free formats agenda,
> depending on the level of aggressiveness Google is wishing to take.
>
> > It would be nice if we could all, say, encode WebM, but I don't see
> > that happening.
>
> I see what you did there.
>
>
> Greetings,
> --
> Nils Dagsson Moskopp // erlehmann
> 
>


Re: [whatwg] Html 5 video element's poster attribute

2010-12-08 Thread Kevin Marks
Apologies for top posting; I'm on my phone.

One case where posters come back after playback is complete is  when there
are multiple videos on the page, and only one has playback focus at a time,
such as a page of preview movies for longer ones to purchase.

In that case, showing the poster again on blur makes sense conceptually.

It seems that getting back into the pre-playback state, showing the poster
again would make sense in this context.

That would imply adding an unload() method that reverted to that state, and
could be used to make any cached media data purgeable in favour of another
video that is subsequently loaded.

On Dec 8, 2010 6:56 PM, "Ian Hickson"  wrote:

On Sun, 19 Sep 2010, Shiv Kumar wrote:
>
> I'd like to see the implementation of the poster attribut...
This is an implementation choice; the spec allows either the poster to be
used or the first frame. This is to allow the browser to use the poster
frame until playback begins, but to then use the first frame if the user
seeks back to the start of the video.



> The poster should not show while the player is seeking (some browser
> implementation do show t...
That's an implementation bug. The spec doesn't allow that.



> The poster should show again after the video has ended.
Why?



> The visibility of the poster should be scriptable and/or controllable
> using an attribute. Mea...
What's the use case for this?



On Mon, 20 Sep 2010, Silvia Pfeiffer wrote:
>
> | When a video element is paused and the current p...
That would be annoying in a different way -- it would mean you couldn't
seek back to the start of the video and see the first frame.


We could make the spec more precise and require that a particular
behaviour occur before playback has ever happened and another after
playback has ever happened, but in practice I think that there is only one
behaviour that is useful and desireable enough that all browsers will
implement it, and we don't gain much by making the other more esoteric
behaviours non-conforming for those few people who would prefer it the
other way. (In general it is considered bad form to require particular UI
unless there is a strong reason to do so.)



On Sun, 19 Sep 2010, Monty Montgomery wrote:
>
> If the default action is to redisplay the poster...
The default behaviour without script should be the most useful behaviour,
not the behaviour that can most easily be turned into another behaviour
with script.



On Mon, 20 Sep 2010, Zachary Ozer wrote:
>
> I'd like to weigh in quickly on this based on feedba...
>  * Webkit's original implementation (show the first frame once it's

> available) is requested by a lot of people. What they don't realize is
> that the first frame is ...
> (you have to start loading the video, then call play() and pause() on

> the first frame), but I'd say it's still a good idea to display the
> first frame if there's no p...
This seems consistent with the spec's requirements.



> * Don't show the poster when the video buffers - just pause the video
> and give some visual i...
This also.



> * We've never had anyone request different poster images for begin /
> pause / end. People gen...
> and end, and they want the same image. If someone wants to change it,

> allow them to set the poster attribute via JavaScript.
I'm not aware of people wanting to have it appear at the end -- this never
came up in the study of use cases. Can you elaborate on this? Are there
examples of sites that do this today? It seems like you could just put the
"end poster frame" in the last frame of video instead.



> * Don't clear the poster on load(). A lot of people get confused by
> this. It might make sens...
Not sure what this is referencing.



> * I'm not sure how reset() would work. Would you reset the list of
>  too?
What is reset()?



On Sun, 19 Sep 2010, Shiv Kumar wrote:
>
> First I do want to make clear that it's not about being...
The goal isn't to make HTML declarative to the extent possible, but to
make it declarative for the most common 80% of use cases.



> As regards having control over the poster's visibility using
> attributes/script, the use case ...
> producers frequently want us to show the poster after the video has
> ended.

It seems clear that they can play it again if they want to... why would
they not be able to? Do you have an example of a site I can use that does
this? I'm curious to study this kind of UI.



> Seeing that there is no way to show it again (after it has disappeared)
> I think that there sh...
> any use for the poster attribute if one wants to turn on the poster.

I don't really see why one would want to turn on the poster. What's the
use case?



> Yes, I know one can assign/un-assign the poster attribute. But really is
> that how we see func...
> even this solution will not make the poster visible when required (or
> when desired).

If you want to change the poster, changing the poster="" attribute seems
like a perfectly reasonable way to do it.




On Sun,

Re: [whatwg] Html 5 video element's poster attribute

2010-12-09 Thread Kevin Marks
I know it's not effective at the moment; it is a common use case.
QuickTime had the 'badge' ux for years that hardly anyone took advantage of:

http://www.mactech.com/articles/mactech/Vol.16/16.02/Feb00QTToolkit/index.html

What we're seeing on the web is a converged implementation of the
YouTube-like overlaid grey play button, but this is effectively
reimplemented independently by each video site that enables embedding.

As we see HTML used declaratively for long-form works like ebooks on lower
performance devices, having embedded video that doesn't cumulatively absorb
all the memory available is going to be like the old CD-ROM use cases the QT
Badge was meant for.

On Thu, Dec 9, 2010 at 9:29 AM, David Singer  wrote:

> I think if you want that effect, you flip what's visible in an area of the
> page between a playing video, and an image.  Relying on the poster is not
> effective, IMHO.
>
> On Dec 8, 2010, at 23:11 , Kevin Marks wrote:
>
> Apologies for top posting; I'm on my phone.
>
> One case where posters come back after playback is complete is  when there
> are multiple videos on the page, and only one has playback focus at a time,
> such as a page of preview movies for longer ones to purchase.
>
> In that case, showing the poster again on blur makes sense conceptually.
>
> It seems that getting back into the pre-playback state, showing the poster
> again would make sense in this context.
>
> That would imply adding an unload() method that reverted to that state, and
> could be used to make any cached media data purgeable in favour of another
> video that is subsequently loaded.
>
> On Dec 8, 2010 6:56 PM, "Ian Hickson"  wrote:
>
> On Sun, 19 Sep 2010, Shiv Kumar wrote:
> >
> > I'd like to see the implementation of the poster attribut...
> This is an implementation choice; the spec allows either the poster to be
> used or the first frame. This is to allow the browser to use the poster
> frame until playback begins, but to then use the first frame if the user
> seeks back to the start of the video.
>
>
>
> > The poster should not show while the player is seeking (some browser
> > implementation do show t...
> That's an implementation bug. The spec doesn't allow that.
>
>
>
> > The poster should show again after the video has ended.
> Why?
>
>
>
> > The visibility of the poster should be scriptable and/or controllable
> > using an attribute. Mea...
> What's the use case for this?
>
>
>
> On Mon, 20 Sep 2010, Silvia Pfeiffer wrote:
> >
> > | When a video element is paused and the current p...
> That would be annoying in a different way -- it would mean you couldn't
> seek back to the start of the video and see the first frame.
>
>
> We could make the spec more precise and require that a particular
> behaviour occur before playback has ever happened and another after
> playback has ever happened, but in practice I think that there is only one
> behaviour that is useful and desirable enough that all browsers will
> implement it, and we don't gain much by making the other more esoteric
> behaviours non-conforming for those few people who would prefer it the
> other way. (In general it is considered bad form to require particular UI
> unless there is a strong reason to do so.)
>
>
>
> On Sun, 19 Sep 2010, Monty Montgomery wrote:
> >
> > If the default action is to redisplay the poster...
> The default behaviour without script should be the most useful behaviour,
> not the behaviour that can most easily be turned into another behaviour
> with script.
>
>
>
> On Mon, 20 Sep 2010, Zachary Ozer wrote:
> >
> > I'd like to weight in quickly on this based on feedba...
> >  * Webkit's original implementation (show the first frame once it's
>
> > available) is requested by a lot of people. What they don't realize is
> > that the first frame is ...
> > (you have to start loading the video, then call play() and pause() on
>
> > the first frame), but I'd say it's still a good idea to display the
> > first frame if there's no p...
> This seems consistent with the spec's requirements.
>
>
>
> > * Don't show the poster when the video buffers - just pause the video
> > and give some visual i...
> This also.
>
>
>
> > * We've never had anyone request different poster images for begin /
> > pause / end. People gen...
> > and end, and they want the same image. If someone wants to change it,
>
> > allow them to set the poster attribute via JavaScript.
> I'm not aware of people wanting to have it appear at the end --

Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-10 Thread Kevin Marks
If you really want to test timecode, you need to get into SMPTE drop-frame
timecode too (possibly the single most annoying standards decision of all
time was choosing 30000/1001 as the frame rate of NTSC video).
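For anyone building such a test, drop-frame timecode can be computed from a frame count alone. A sketch of the standard counting rule (drop frame numbers 00 and 01 at the start of every minute, except minutes divisible by 10) follows; this is a minimal illustration, not production code:

```javascript
// Convert a zero-based frame count to SMPTE drop-frame timecode at
// 30000/1001 fps. Dropping 2 frame *numbers* per minute, 9 minutes out of
// every 10, keeps the label clock within a frame of wall-clock time.
function toDropFrame(frameNumber) {
  const framesPerMin = 60 * 30 - 2;               // 1798 labels in a dropped minute
  const framesPer10Min = 9 * framesPerMin + 1800; // 17982: minute 0 keeps all 1800
  const tenMin = Math.floor(frameNumber / framesPer10Min);
  const rem = frameNumber % framesPer10Min;
  let minInBlock, frameInMin;
  if (rem < 1800) {             // first minute of the block: nothing dropped
    minInBlock = 0;
    frameInMin = rem;
  } else {                      // later minutes: frame labels start at ;02
    minInBlock = 1 + Math.floor((rem - 1800) / framesPerMin);
    frameInMin = ((rem - 1800) % framesPerMin) + 2;
  }
  const totalMin = tenMin * 10 + minInBlock;
  const pad = n => String(n).padStart(2, '0');
  return `${pad(Math.floor(totalMin / 60))}:${pad(totalMin % 60)}:` +
         `${pad(Math.floor(frameInMin / 30))};${pad(frameInMin % 30)}`;
}
```

The semicolon before the frame field is the conventional marker that a timecode is drop-frame rather than non-drop.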

Eric, can you make a BipBop movie for this, like the ones used in this demo:

http://developer.apple.com/library/mac/#documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/UsingHTTPLiveStreaming/UsingHTTPLiveStreaming.html

http://devimages.apple.com/iphone/samples/bipbopgear3.html


On Mon, Jan 10, 2011 at 11:18 AM, Rob Coenen  wrote:

> Thanks for the update.
> I have been testing with WebKit nightly / 75294 on MacOSX 10.6.6 / 13"
> Macbook Pro, Core Duo.
>
> Here's a test movie that I created a while back. Nevermind the video
> quality- the burned-in timecodes are 100% correct, I have verified this by
> exploring each single frame by hand.
>
>
> http://www.massive-interactive.nl/html5_video/transcoded_03_30_TC_sec_ReviewTest.mp4
>
> Please let me know once you guys have downloaded the file, I like to remove
> it from my el-cheapo hosting account ASAP.
>
> thanks,
>
> Rob
>
>
> On Mon, Jan 10, 2011 at 2:54 PM, Eric Carlson  >wrote:
>
> >
> > On Jan 9, 2011, at 11:14 AM, Rob Coenen wrote:
> >
> > I have written a simple test using a H264 video with burned-in timecode
> > (every frame is visually marked with the actual SMPTE timecode)
> > Webkit is unable to seek to the correct timecode using 'currentTime',
> it's
> > always a whole bunch of frames off from the requested position. I reckon
> it
> > simply seeks to the nearest keyframe?
> >
> >   WebKit's HTMLMediaElement implementation uses different media engines
> on
> > different platforms (eg. QuickTime, QTKit, GStreamer, etc). Each media
> > engine has somewhat different playback characteristics so it is
> impossible
> > to say what you are experiencing without more information. Please file a
> bug
> > report at https://bugs.webkit.org/ with your test page and video file,
> and
> > someone will look into it.
> >
> > eric
> >
> >
>


Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-11 Thread Kevin Marks
These goals are orthogonal though - stepping between frames is valuable
whether they are regularly spaced or not.

Timecode is a representation that comes from the legacy video world, which
does assume a uniform frame rate.
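Under that uniform-rate assumption, the conversion Rob describes below is a one-liner; the real gap is that HTML exposes no `video.fps`, so the rate has to come from out-of-band knowledge. A sketch for non-drop timecode (the function name and its `fps` parameter are page-side assumptions, not spec API):

```javascript
// Convert a non-drop SMPTE timecode "HH:MM:SS:FF" to a seconds value
// suitable for assigning to video.currentTime. The frame rate must be
// known out-of-band; nothing in the HTML media API exposes it.
function smpteToSeconds(tc, fps) {
  const [hh, mm, ss, ff] = tc.split(':').map(Number);
  return (hh * 60 + mm) * 60 + ss + ff / fps;
}
```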

On Tue, Jan 11, 2011 at 2:40 PM, Rob Coenen  wrote:

> Hi David- that is b/c in an ideal world I'd want to seek to a time
> expressed as a SMPTE timecode (think web apps that let users step x frames
> back, seek y frames forward etc.). In order to convert SMPTE to the floating
> point value for video.seekTime I need to know the frame rate.
>
> -Rob
>
>
> On Tue, Jan 11, 2011 at 10:35 PM, David Singer  wrote:
>
>> why does the frame rate make any difference on the accuracy of seeking to
>> a time?  Imagine a video that runs at 1 frame every 10 seconds, and I seek
>> to 25 seconds.  I would expect to see 5 seconds of the third frame, 10
>> seconds of the 4th, and so on.
>>
>> On Jan 11, 2011, at 18:54 , Rob Coenen wrote:
>>
>> > just a follow up question in relation to SMPTE / frame accurate
>> playback: As
>> > far as I can tell there is nothing specified in the HTML5 specs that
>> will
>> > allow us to determine the actual frame rate (FPS) of a movie? In order
>> to do
>> > proper time-code calculations it's essential to know both the
>> video.duration
>> > and video.fps - and all I can find in the specs is video.duration,
>> nothing
>> > in video.fps
>> >
>> > -Rob
>> >
>> >
>> > On Mon, Jan 10, 2011 at 9:32 PM, Kevin Marks 
>> wrote:
>> >
>> >> If you really want to test timecode, you need to get into SMPTE
>> drop-frame
>> >> timecode too (possibly the single most annoying standards decision of
>> all
>> >> time was choosing 30000/1001 as the frame rate of NTSC video)
>> >>
>> >> Eric, can you make a BipBop movie for this, like the ones used in this
>> >> demo:
>> >>
>> >>
>> >>
>> http://developer.apple.com/library/mac/#documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/UsingHTTPLiveStreaming/UsingHTTPLiveStreaming.html
>> >>
>> >> http://devimages.apple.com/iphone/samples/bipbopgear3.html
>> >>
>> >>
>> >> On Mon, Jan 10, 2011 at 11:18 AM, Rob Coenen 
>> wrote:
>> >>
>> >>> Thanks for the update.
>> >>> I have been testing with WebKit nightly / 75294 on MacOSX 10.6.6 / 13"
>> >>> Macbook Pro, Core Duo.
>> >>>
>> >>> Here's a test movie that I created a while back. Nevermind the video
>> >>> quality- the burned-in timecodes are 100% correct, I have verified
>> this by
>> >>> exploring each single frame by hand.
>> >>>
>> >>>
>> >>>
>> http://www.massive-interactive.nl/html5_video/transcoded_03_30_TC_sec_ReviewTest.mp4
>> >>>
>> >>> Please let me know once you guys have downloaded the file, I like to
>> >>> remove
>> >>> it from my el-cheapo hosting account ASAP.
>> >>>
>> >>> thanks,
>> >>>
>> >>> Rob
>> >>>
>> >>>
>> >>> On Mon, Jan 10, 2011 at 2:54 PM, Eric Carlson > >>>> wrote:
>> >>>
>> >>>>
>> >>>> On Jan 9, 2011, at 11:14 AM, Rob Coenen wrote:
>> >>>>
>> >>>> I have written a simple test using a H264 video with burned-in
>> timecode
>> >>>> (every frame is visually marked with the actual SMPTE timecode)
>> >>>> Webkit is unable to seek to the correct timecode using 'currentTime',
>> >>> it's
>> >>>> always a whole bunch of frames off from the requested position. I
>> reckon
>> >>> it
>> >>>> simply seeks to the nearest keyframe?
>> >>>>
>> >>>>  WebKit's HTMLMediaElement implementation uses different media
>> engines
>> >>> on
>> >>>> different platforms (eg. QuickTime, QTKit, GStreamer, etc). Each
>> media
>> >>>> engine has somewhat different playback characteristics so it is
>> >>> impossible
>> >>>> to say what you are experiencing without more information. Please
>> file a
>> >>> bug
>> >>>> report at https://bugs.webkit.org/ with your test page and video
>> file,
>> >>> and
>> >>>> someone will look into it.
>> >>>>
>> >>>> eric
>> >>>>
>> >>>>
>> >>>
>> >>
>> >>
>>
>> David Singer
>> Multimedia and Software Standards, Apple Inc.
>>
>>
>


Re: [whatwg] need a way to set output format from StreamRecorder

2011-02-14 Thread Kevin Marks
On Mon, Feb 14, 2011 at 2:39 PM, Ian Hickson  wrote:

> On Fri, 19 Nov 2010, Per-Erik Brodin wrote:
> >
> > We are about to start implementing stream.record() and StreamRecorder.
> > The spec currently says that “the file must be in a format supported by
> > the user agent for use in audio and video elements” which is a
> > reasonable restriction. However, there is currently no way to set the
> > output format of the resulting File that you get from recorder.stop().
> > It is unlikely that specifying a default format would be sufficient if
> > you in addition to container formats and codecs consider resolution,
> > color depth, frame rate etc. for video and sample size and rate, number
> > of channels etc. for audio.
> >
> > Perhaps an argument should be added to record() that specifies the
> > output format from StreamRecorder as a MIME type with parameters? Since
> > record() should probably throw when an unsupported type is supplied, it
> > would perhaps be useful to have a canRecordType() or similar to be able
> > to test for supported formats.
>
> I haven't added anything here yet, mostly because I've no idea what to
> add. The ideal situation here is that we have one codec that everyone can
> read and write and so don't need anything, but that may be hopelessly
> optimistic.


That isn't the ideal, as it locks us into the current state of the art
forever. The ideal is to enable multiple codecs and formats that can be swapped
out over time. That said, uncompressed audio is readily codifiable, and we
could pick a common file format, sample rate, bit depth and channel count
specification.
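To make "readily codifiable" concrete: the canonical 44-byte WAV header is small enough to sketch inline. This is a hedged illustration of how little a container, sample rate, bit depth and channel count amount to (16-bit PCM assumed), not a proposal for the spec:

```javascript
// Build the canonical 44-byte header for a 16-bit PCM WAV (RIFF) file.
// Everything uncompressed audio needs to specify fits in these few fields.
function wavHeader(sampleRate, channels, numSamples) {
  const bytesPerSample = 2;                            // 16-bit PCM
  const dataSize = numSamples * channels * bytesPerSample;
  const buf = new ArrayBuffer(44);
  const v = new DataView(buf);
  const tag = (off, s) =>
    [...s].forEach((c, i) => v.setUint8(off + i, c.charCodeAt(0)));
  tag(0, 'RIFF');  v.setUint32(4, 36 + dataSize, true);  tag(8, 'WAVE');
  tag(12, 'fmt '); v.setUint32(16, 16, true);            // fmt chunk size
  v.setUint16(20, 1, true);                              // format 1 = PCM
  v.setUint16(22, channels, true);
  v.setUint32(24, sampleRate, true);
  v.setUint32(28, sampleRate * channels * bytesPerSample, true); // byte rate
  v.setUint16(32, channels * bytesPerSample, true);      // block align
  v.setUint16(34, 16, true);                             // bits per sample
  tag(36, 'data'); v.setUint32(40, dataSize, true);
  return buf;
}
```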


In the meantime I encourage implementors to experiment with
> this (with all the APIs vendor-prefixed of course) to work out what the
> API should look like. Implementation experience is the main thing that
> will drive this forward, I think.
>
>
That is a fair point.


Re: [whatwg] need a way to set output format from StreamRecorder

2011-02-15 Thread Kevin Marks
On Mon, Feb 14, 2011 at 11:52 PM, Nils Dagsson Moskopp <
n...@dieweltistgarnichtso.net> wrote:

> Kevin Marks  schrieb am Mon, 14 Feb 2011 22:33:13
> -0800:
>
> > On Mon, Feb 14, 2011 at 2:39 PM, Ian Hickson  wrote:
> >
> > > […]
> > >
> > > I haven't added anything here yet, mostly because I've no idea what
> > > to add. The ideal situation here is that we have one codec that
> > > everyone can read and write and so don't need anything, but that
> > > may be hopelessly optimistic.
> >
> >
> > That isn't the ideal, as it locks us into the current state of the art
> > forever. The ideal is to enable multiple codecs +formats that can be
> > swapped out over time.
>
> Yeah, because that really worked well with <object>, the generic
> container element. Only it didn't, and with today's media elements
> people are venting stuff like “I had to encode all my sound files in
> Ogg Vorbis and MP3, just because of you, Safari. You make my life
> unnecessarily difficult.”
>
> — <http://www.phoboslab.org/log/2010/09/biolab-disaster>
>
> As someone who sometimes produces audio (and may want to use of
> browser-provided facilities, once they become available), I may have a
> reasonable interest in interoperability between browsers.
>

To be fair to Safari it supports far more audio formats than other browsers,
as it incorporates QuickTime's engine, which was designed to cope with
multiple audio, video and file formats via well designed abstractions, and
the Component Manager, which has lasted since the late 1980s itself (first
public release of QuickTime was 1991, but the codebase goes back pre-1990).


>
> (The Enrichment Center once again reminds you that codec hell is a real
> place where your web application will be sent at the first sign of
> defiance.)
>
> On further thought, “state of the art lock-in” may not be as bad as you
> might fear: First, bandwidth and storage space are becoming cheaper
> over time; second, there is something as “good enough”, GZIP / DEFLATE
> or MP3 being such examples that serve us for over 15 years each — even
> though better specifications (Vorbis, 7z) clearly exist.
>
Yes, MP3 is the de facto standard, and somehow Mozilla and WebKit on Android
still won't play it. (Bizarrely, WebKit on Android 'supports' HTML5 <audio>
without supporting any codecs or file formats at all.)



> Even CPIO is used by my modern desktop system and that was defined
> around the mid-80ies (or so Wikipedia tells me, I certainly wasn't
> released at that point).
>

And AIFF, MOV, WAV, AVI and MP4 are all based on IFF
(http://en.wikipedia.org/wiki/Interchange_File_Format), a future-proof binary
chunk format from the mid-80s too.

Supporting playback of uncompressed audio (and uLaw, aLaw, PCM) in .au .aif
.wav and .mov should be trivial, and not encumbered by any patents. Picking
one to record in by default should be something we could agree on - which is
most widely supported at the moment? WAV?
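As an illustration of how trivial the legacy telephony codecs are, here is a sketch of G.711 µ-law encoding, the format behind .au files: each 16-bit sample compresses to one byte with a clamp, a bias and a few shifts. The bias and clip constants are the standard G.711 ones; this is an illustration, not a tested production encoder.

```javascript
// G.711 mu-law encode: map one signed 16-bit PCM sample to one byte.
// Sign bit, 3-bit exponent (segment), 4-bit mantissa, all bit-inverted.
function muLawEncode(sample) {
  const BIAS = 0x84, CLIP = 32635;
  const sign = sample < 0 ? 0x80 : 0;
  let s = Math.min(Math.abs(sample), CLIP) + BIAS;
  let exponent = 7;
  // Find the highest set bit at or below bit 14 to pick the segment.
  for (let mask = 0x4000; (s & mask) === 0 && exponent > 0; mask >>= 1) {
    exponent--;
  }
  const mantissa = (s >> (exponent + 3)) & 0x0f;
  return (~(sign | (exponent << 4) | mantissa)) & 0xff;
}
```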


Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-02-15 Thread Kevin Marks
Returning to this discussion, I think it is lacking in use cases.

Consider the controllers we are used to - they tend to have frame step,
chapter step and some kind of scrub bar.

Frame stepping is used when you want to mark an accurate in or out point, or
to catch a still frame. This needs to be accurate, and it is always local.

Chapter stepping means 'move me to the next meaningful break point in this
media'. There is a very natural structure for this in almost all professional
media, and it is definitely worth getting this right. This is a long-range
jump, but it is likely to land on a key frame or the start of a new file
segment.

Scrubbing is when you are dragging the bar back and forth to find a
particular point. It is intermediate in resolution between the previous two,
but it needs to be responsive to work: the lag between moving the bar and
seeing something must be short. In many cases decoding only key frames in
this state makes sense, as this is most responsive, and key frames are also
likely to fall on scene boundaries anyway.

The degenerate case of scrubbing is 'fast-forwarding', where the stream is
fetched faster than realtime, but again only keyframes are shown.

Are we sure all of these use cases are represented by the options mentioned
below?
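Mapping the three gestures onto the seek(t, ref, how) shape quoted below might look like the following. Note that seek() is a proposal in this thread, not a shipped API, so the sketch drives any object with that method; the frame duration and chapter times are assumed to be known to the page out-of-band:

```javascript
// Map the three controller gestures onto the proposed seek(t, ref, how).
// 'video' is any object exposing that (hypothetical) method.
function frameStep(video, frameDuration, dir) {
  // Frame stepping must be exact: an accurate relative seek.
  video.seek(dir * frameDuration, 'relative', 'accurate');
}
function chapterStep(video, chapterTimes, currentTime, dir) {
  // Chapter stepping is a long jump to a known break point.
  const next = dir > 0
    ? chapterTimes.find(t => t > currentTime)
    : [...chapterTimes].reverse().find(t => t < currentTime);
  if (next !== undefined) video.seek(next, 'absolute', 'accurate');
}
function scrubTo(video, t) {
  // Scrubbing favours responsiveness: a keyframe-only "fast" seek.
  video.seek(t, 'absolute', 'fast');
}
```

If all three gestures map cleanly, the two-flag proposal probably does cover the use cases; fast-forward is then just repeated fast absolute seeks.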

On Mon, Jan 24, 2011 at 12:49 PM, Robert O'Callahan wrote:

> On Tue, Jan 25, 2011 at 9:34 AM, Philip Jägenstedt  >wrote:
>
> > On Mon, 24 Jan 2011 21:10:21 +0100, Robert O'Callahan <
> > rob...@ocallahan.org> wrote:
> >
> >>
> >> Interesting. It doesn't in Firefox; script always sees a snapshot of a
> >> consistent state until it returns to the event loop or does something
> >> modal
> >> (although audio, and soon video, will continue to play while script
> runs).
> >> I'm not sure if the spec should require that ... overall our APIs try
> >> pretty
> >> hard not to expose races to JS.
> >>
> >
> > How does that work? Do you take a copy of all properties that could
> > possibly change during script execution, including ones that create a new
> > object, like buffered and seekable?
>
>
> All script-accessible state exists on the main thread (the thread that runs
> the event loop), and is updated via asynchronous messages from decoder and
> playback threads as necessary. 'buffered' is always in sync since data
> arrival and eviction from the media data cache happen on the main thread.
> (That cache can be read from other threads though.)
>
> If you instead only make a copy on the first read, isn't it still possible
> > to get an inconsistent state, e.g. where currentTime isn't in the
> buffered
> > ranges?
> >
>
> No, this wouldn't happen, although it might be possible for currentTime to
> be outside the buffered ranges for other reasons.
>
> How about HTMLImageElement.complete, which the spec explicitly says can
> > change during script execution?
> >
>
> Interesting, I didn't know about that.
>
> In any case, it sounds like either HTMLMediaElement is underspecified or
> one
> > of us has interpreted in incorrectly, some interop on this point would be
> > nice.
>
>
> Maybe. If the spec is clarified to allow races when accessing media element
> state, I guess it won't be the end of the world, although I predict interop
> difficulties. But that's always an easy prediction! :-)
>
> The biggest use case is clicking a seek bar and ending up somewhere close
> > enough, but yes, being able to do fast relative seeking is a nice bonus.
> > Maybe we should do what many media frameworks do and use a "reference"
> > parameter, defining what the seek is relative to. Usually you can seek
> > relative to the beginning, end and current position, but perhaps we could
> > reduce that to just "absolute" and "relative". That's a bit less magic
> than
> > inspecting currentTime when the method is called.
> >
> > So far:
> >
> > seek(t, ref, how);
> >
> > ref is "absolute" (default) or "relative"
> >
> > how is "accurate" (default) or "fast"
> >
> > (or numeric enums, if that's what DOM interfaces usually have)
>
>
> That works.
>
> Rob
> --
> "Now the Bereans were of more noble character than the Thessalonians, for
> they received the message with great eagerness and examined the Scriptures
> every day to see if what Paul said was true." [Acts 17:11]
>


Re: [whatwg] Google Feedback on the HTML5 media a11y specifications

2011-02-16 Thread Kevin Marks
On Tue, Feb 8, 2011 at 6:57 PM, Silvia Pfeiffer
wrote:

> Hi Philip, all,
>
>
> On Sun, Jan 23, 2011 at 1:23 AM, Philip Jägenstedt 
> wrote:
> > On Fri, 14 Jan 2011 10:01:38 +0100, Silvia Pfeiffer
> >  wrote:
>
> >> 5. Ability to move captions out of the way
> >>
> >> Our experience with automated caption creation and positioning on
> >> YouTube indicates that it is almost impossible to always place the
> >> captions out of the way of where a user may be interested to look at.
> >> We therefore allow users to dynamically move the caption rendering
> >> area to a different viewport position to reveal what is underneath. We
> >> recommend such drag-and-drop functionality also be made available for
> >> TimedTrack captions on the Web, especially when no specific
> >> positioning information is provided.
> >
> > This would indeed be rather nice, but wouldn't it interfere with text
> > selection? Detaching the captions into a floating, draggable window via
> the
> > context menu would be a theoretically possible solution, but that's
> getting
> > rather far ahead of ourselves before we have basic captioning support.
>
> On YouTube you can only move them within the video viewport. You
> should try it - it's really awesome actually.
>

Moving them only within the video viewport is a bug, not a feature. Classic
TV required this (especially with overscan), but on modern TVs there is
often a letterbox or pillarbox area that captions should go in. On a
decent-sized computer screen, there is no real excuse for obscuring the
video with the captions rather than putting them underneath or alongside.

I know the flash implementation of YouTube ends up treating the video
viewport as a surrogate screen, as you can't draw outside it, but the HTML5
version could do this better.

>
> When you say "interfere with text selection" are you suggesting that
> the text of captions/subtitles should be able to be cut and pasted? I
> wonder what copyright holders think about that.
>

What they think is beside the point; fair use/fair dealing applies in many
cases. Omitting a useful feature because of vague fears of what people think
is the opposite of a use case.


Re: [whatwg] The blockquote element spec vs common quoting practices

2011-07-14 Thread Kevin Marks
There is another common pattern, seen a lot in blogging, of putting the
citation at the top, e.g.:

As <a href="http://www.gyford.com/phil/" class="url" rel="acquaintance met
colleague">Phil</a> wrote about the <a
href="http://www.gyford.com/phil/writing/2009/04/28/geocities.php">ugly
and neglected fragments of Geocities</a>:

<blockquote>
  GeoCities is an awful, ugly, decrepit mess. And this is why it
will be sorely missed. It’s not only a fine example of the amateur web
vernacular but much of it is an increasingly rare example of a
period web vernacular. GeoCities sites show what normal,
non-designer, people will create if given the tools available around
the turn of the millennium.
</blockquote>

(from jeremy) or pretty much any post here:

http://www.theatlantic.com/ta-nehisi-coates/

Would a <header> pattern in the blockquote work for this?

If I was writing a detector for this pattern, <a> followed by a colon
and <blockquote> would do it pretty reliably...
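A string-level sketch of such a detector follows; a real implementation would walk the DOM, but a regular expression is enough to encode the link-then-colon-then-blockquote heuristic described above:

```javascript
// Heuristic detector for the "citation link ending in a colon, followed
// by a blockquote" pattern. Deliberately crude: it matches an <a>...</a>,
// then non-tag text containing a colon, then an opening <blockquote>.
function hasCitedBlockquote(html) {
  return /<a\b[^>]*>[\s\S]*?<\/a>[^<]*:\s*<blockquote\b/i.test(html);
}
```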

On Fri, Jul 8, 2011 at 4:20 AM, Jeremy Keith  wrote:
>
> Oli wrote:
> > I’ve outlined the problem and some potential solutions (with their
> > pros and cons) in:
> >  http://oli.jp/2011/blockquote/
>
> Excellent work, IMHO. I've added my own little +1 here: 
> http://adactio.com/journal/4675/
>
> Oli continues:
> > I think the blockquote spec should be changed to allow the inclusion
> > of notes and attribution (quote metadata), perhaps by the addition of
> > a sentence like:
> >  “Block quotes may also contain annotations or attribution, inline or
> > in an optional footer element”
> > This would change blockquote from being purely source content, to
> > being source content with possible metadata inline or in a footer.
> > However I don’t think that’s a problem, as these things increase the
> > value of the quoted content. I think a spec change is necessary to
> > accommodate common quoting practices.
>
> This sounds good to me.
>
> 1) Oli has shown the real-world use cases for attribution *within* 
> blockquotes. I know that the "Pave the cowpaths" principle gets trotted out a 
> lot, but Oli's research here is a great example of highlighting existing 
> cowpaths (albeit in printed rather than online material):
>
> http://www.w3.org/TR/html-design-principles/#pave-the-cowpaths
>
> "When a practice is already widespread among authors, consider adopting it 
> rather than forbidding it or inventing something new."
>
>
> 2) This is something that authors want, both on the semantic and styling 
> level (i.e. a way to avoid having to wrap every blockquote in a div just to 
> associate attribution information with said blockquote). I believe that the 
> problem statement that Oli has outlined fits with the HTML design principle 
> "Solve real problems."
>
> http://www.w3.org/TR/html-design-principles/#solve-real-problems
>
> "Abstract architectures that don't address an existing need are less favored 
> than pragmatic solutions to problems that web content faces today."
>
>
> 3) The solution that Oli has proposed (allowing footer within blockquote to 
> include non-quoted information) is an elegant one, in my opinion. I can think 
> of some solutions that would involve putting the attribution data outside the 
> blockquote and then explicitly associating it using something like the @for 
> attribute and an ID, but that feels messier and less intuitive to me. Simply 
> allowing a footer within a blockquote to contain non-quoted material 
> satisfies the design principle "Avoid needless complexity."
>
> http://www.w3.org/TR/html-design-principles/#avoid-needless-complexity
>
> "Simple solutions are preferred to complex ones, when possible. Simpler 
> features are easier for user agents to implement, more likely to be 
> interoperable, and easier for authors to understand."
>
>
> 4) Because the footer element is new to HTML5, I don't foresee any 
> backward-compatibility issues. The web isn't filled with blockquotes 
> containing footers that are part of the quoted material. Oli's solution would 
> match up nicely with the design principle "Support existing content."
>
> http://www.w3.org/TR/html-design-principles/#support-existing-content
>
> "The benefit of the proposed change should be weighed against the likely cost 
> of breaking content"
>
> Jeremy
>
> --
> Jeremy Keith
>
> a d a c t i o
>
> http://adactio.com/
>
>


Re: [whatwg] a rel=attachment

2011-07-15 Thread Kevin Marks
Enclosure is precisely this use case.

You can go back and grep
http://www.imc.org/atom-syntax/entire-arch.txt for enclosure for the
discussion if you like. After much debate, rel="enclosure" was used to
replace RSS's <enclosure> element, preserving the name.

This will lead you back via the RSS specs to this post by Dave Winer in 2001:

http://www.thetwowayweb.com/payloadsforrss

which makes the same analogy with email that you're using for "attachment"

The original example RSS file there:

http://static.userland.com/gems/backend/gratefulDead.xml

Is usefully rendered by Firefox and Safari, by translating the XML
file into an HTML representation that makes sense to users, and allows
subscription to it.

The same is true for any Atom or RSS feed containing podcasts.

Sadly, Chrome just shows a document dump of the XML tree, useless to anyone.

On Fri, Jul 15, 2011 at 6:30 PM, Peter Kasting  wrote:
> On Fri, Jul 15, 2011 at 6:25 PM, Tantek Çelik wrote:
>
>> ** Specs *and* publishers/consumers/implementations of rel-enclosure exist
>> (see aforementioned wiki page).
>
>
> The list on the wiki page, which I assume is non-exhaustive, is
> extraordinarily uncompelling.

Indeed, that could do with updating with newer examples and references
to other support.

>
>
>> And the name is based on re-using the existing term with the same semantic
>> from the Atom spec.
>>
>
> Don't care.  Atom feeds and HTML pages are very different things.  Basically
> I echo all of Tab's annoyances with this.
>

Atom/RSS Feeds are seen as useful HTML sources by many browser implementations.


Re: [whatwg] / not needed

2012-05-17 Thread Kevin Marks
JPEG 2000 has wavelet coding and progressive loading, so you can stop at the
desired resolution (if you decode on the read thread). Presumably it will
be patent-free by 2020...
On May 16, 2012 3:57 PM, "Glenn Maynard"  wrote:

> On Wed, May 16, 2012 at 5:44 PM, Aldrik Dunbar  wrote:
>
> > Of course if someone comes up with a progressively loaded image format
> > this could be handled much more elegantly.
> >
>
> Both PNG and JPEG have had this forever.  (PNG's approach is crude, but
> JPEG's is reasonable.)  However, there's no way to control it client-side;
> without somehow knowing how many bytes to load for a certain amount of
> detail, all you can do is load the whole thing (or make multiple requests,
> which is obviously worse).   I've thought about this in the past, but it's
> a hard thing to make practical use of.
>
> --
> Glenn Maynard
>


Re: [whatwg] apple-touch-icon

2014-07-27 Thread Kevin Marks
some data here: http://indiewebcamp.com/icon


On Sun, Jul 27, 2014 at 5:13 AM, Anne van Kesteren  wrote:

> For  we already define the /favicon.ico fallback. If a
> page lacks  we should probably also look at
> Apple's proprietary extension here given that it's quite widely
> adopted. Chrome supports it and there is some work going on in Firefox
> as well: https://bugzilla.mozilla.org/show_bug.cgi?id=921014
>
>
> --
> http://annevankesteren.nl/
>


Re: [whatwg] apple-touch-icon

2014-07-28 Thread Kevin Marks
Using a single JPEG/PNG that is also part of the home page display is one way
to mitigate the bandwidth used.
Another way to do this is to use an SVG for a logo - which browsers support
this now?
On 28 Jul 2014 07:59, "John Mellor"  wrote:

> Chrome 30 dropped support[1] for fetching apple-touch-icon-* from well
> known URLs, since the 404 pages that are usually returned were consuming
> 3-4% of all mobile bandwidth usage[2]. We're unlikely to reverse that.
>
> We still support apple-touch-icon-* via link rel under some circumstances
> (e.g. for add to homescreen), but they're deprecated[3], since we'd like
> authors to use the standard for this, i.e.:
>
> 
>
> (or even good old:
>
> 
>
> with multiple resolutions in the .ico file for compatibility with IE<11).
>
> [1]: https://code.google.com/p/chromium/issues/detail?id=259681
> [2]: https://bugs.webkit.org/show_bug.cgi?id=104585
> [3]: https://code.google.com/p/chromium/issues/detail?id=296962
>
>
> On 28 July 2014 08:35, Mathias Bynens  wrote:
>
> > On Sun, Jul 27, 2014 at 1:13 PM, Anne van Kesteren 
> > wrote:
> > > For  we already define the /favicon.ico fallback. If a
> > > page lacks  we should probably also look at
> > > Apple's proprietary extension here given that it's quite widely
> > > adopted. Chrome supports it and there is some work going on in Firefox
> > > as well: https://bugzilla.mozilla.org/show_bug.cgi?id=921014
> >
> > FWIW, Chrome’s intention was to drop support for Apple’s magic file
> > names at some point.
> >
> https://developer.chrome.com/multidevice/android/installtohomescreen#icon
> > But I agree — it seems that this won’t happen any time soon.
> >
> > In case it helps, here’s some more info on touch icon support on
> > various OS/devices: http://mathiasbynens.be/notes/touch-icons
> >
>


Re: [whatwg] Gapless playback problems with web audio standards

2014-10-25 Thread Kevin Marks
To get gapless playback, you need to be sample accurate, which is
sub-millisecond precision. A playlist element has been discussed before,
making it the browser's job to be sample accurate.

The QuickTime plugin had a working version of this a decade ago; SMIL was
supposed to be the way to do it, but a declarative media playlist seems a
natural thing to absorb into the browser as an 80:20 case, rather than the
cross-media sync that SMIL promised and didn't deliver.
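In the meantime, a page can get sample accuracy itself with the Web Audio API by scheduling each clip at an exact start time. The arithmetic half of that is sketched here; a real player would decode each clip to an AudioBuffer, wrap it in an AudioBufferSourceNode, and call start(t) with these times (that wiring is assumed, not shown):

```javascript
// Compute sample-accurate start times for a sequence of clips, so no gap
// or drift can accumulate: each clip starts exactly where the previous
// clips' total sample count says it should.
function scheduleTimes(sampleCounts, sampleRate, t0 = 0) {
  const starts = [];
  let samples = 0;
  for (const n of sampleCounts) {
    starts.push(t0 + samples / sampleRate);
    samples += n;
  }
  return starts;
}
```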
On 25 Oct 2014 06:30, "David Kendal"  wrote:

> Hi,
>
> 
>
> dpk
>
>


Re: [whatwg] A mask="" advisory flag for

2015-06-24 Thread Kevin Marks
Does this mean we can now have rel=icon with SVG instead of providing a
bitmap for every iOS device specifically (when we add to home screen)? Do
Chrome and Firefox support SVG icon images?
On 24 Jun 2015 2:40 pm, "Tab Atkins Jr."  wrote:

> On Wed, Jun 24, 2015 at 2:36 PM, Maciej Stachowiak  wrote:
> > To close the loop on this, we will change to <link rel="mask-icon" href="whatever.svg" color="#aabbcc">. We like the idea of determining the
> color from the SVG, but we won't be able to implement in time for this
> cycle, and having an explicit color override seems useful. So for now we'll
> implement explicit color and we'll consider automatic color picking based
> on the SVG as a fallback when the color is missing as a future extension.
> >
> > Please let me know if anyone disagrees with this approach.
>
> Sounds acceptable to me.  What's the grammar of color=''?  Just hex,
> or full CSS <color>?  (Either is fine with me.)
>
> ~TJ
>


Re: [whatwg] VIDEO and pitchAdjustment

2015-09-01 Thread Kevin Marks
QuickTime supports full variable-speed playback and has done for well over
a decade. With bidirectionally predicted frames you need a fair few buffers
anyway, so generalising to full variable rate is easier than posters above
claim - you need to work a GOP at a time, but memory buffering isn't the
big issue these days.
What QuickTime got right was having a ToC approach to video, so being able
to seek rapidly was possible without thrashing, whereas the stream-oriented
approaches we are stuck with now mean knowing which bit of the file to read
to get the previous GOP is the hard part.
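To illustrate the ToC idea, here is a rough sketch (Python, invented numbers, not real MP4 parsing) of how a sample-table seek works: two binary searches over in-memory tables, with no media bytes read until you know exactly where to go:

```python
import bisect

# Sketch (not real MP4 parsing): a QuickTime/MP4-style table of contents
# keeps, for every sample, its decode time and file offset, plus a list
# of which samples are keyframes (the stss atom). Seeking is then pure
# table lookup - no media bytes are scanned.

sample_times = [i * 40 for i in range(250)]             # 25 fps, times in ms
sample_offsets = [1000 + i * 6000 for i in range(250)]  # made-up file offsets
keyframes = [0, 50, 100, 150, 200]                      # one keyframe per 2 s

def seek(target_ms):
    """Return (keyframe index, its file offset, target sample index):
    jump to the keyframe, then decode forward to the target sample."""
    # Find the sample whose decode time covers target_ms.
    target = bisect.bisect_right(sample_times, target_ms) - 1
    # Find the last keyframe at or before that sample.
    k = keyframes[bisect.bisect_right(keyframes, target) - 1]
    return k, sample_offsets[k], target

print(seek(4321))  # seek to 4.321 s
```

Contrast with the Ogg-style approach in the linked post, where each probe of the binary search is a disk seek plus a scan for page framing in the media bytes themselves.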

On Fri, Aug 28, 2015 at 6:02 PM, Xidorn Quan  wrote:

> On Sat, Aug 29, 2015 at 8:27 AM, Robert O'Callahan 
> wrote:
> > On Sat, Aug 29, 2015 at 8:18 AM, James Ross 
> wrote:
> >
> >> Support is certainly poor; Internet Explorer/Trident and Edge both
> >> support negative playback rates on desktop (I haven’t tested mobile)
> >> but do so by simply showing the key frames as they are reached in
> >> reverse, in my testing.
> >
> > That's not so hard to implement, but it's also mostly useless since
> > keyframes are often several seconds apart or more.
>
> It could be useful for a few use cases like fast-backward. Windows
> Media Player does it this way.
>
> FWIW, QuickTime supports per-frame backward playback if you press and
> hold the left arrow. I guess they cannot guarantee the rate, which
> makes them require holding the key instead of providing a playback
> rate setting.
>
> - Xidorn
>


Re: [whatwg] VIDEO and pitchAdjustment

2015-09-01 Thread Kevin Marks
On Tue, Sep 1, 2015 at 10:55 AM, David Singer  wrote:

>
> > On Sep 1, 2015, at 10:47 , Yay295  wrote:
> >
> > On Tue, Sep 1, 2015 at 11:30 AM, David Singer  wrote:
> > > On Sep 1, 2015, at 4:03 , Robert O'Callahan 
> wrote:
> > >> On Tue, Sep 1, 2015 at 8:02 PM, Kevin Marks  wrote:
> > >> QuickTime supports full variable speed playback and has done for
> > >> well over a decade. With bidirectionally predicted frames you need
> > >> a fair few buffers anyway, so generalising to full variable rate is
> > >> easier than posters above claim - you need to work a GOP at a time,
> > >> but memory buffering isn't the big issue these days.
> > >
> > > “GOP”?
> >
> > Group of Pictures.  Video-speak for the run between random access points.
> >
> > > How about a hard but realistic (IMHO) case: 4K video (4096 x 2160),
> > > 25 fps, keyframe every 10s. Storing all those frames takes 250 x
> > > 4096 x 2160 x 2 bytes ≈ 4.12 GiB. Reading back those frames would
> > > kill performance so that all has to stay in VRAM. I respectfully
> > > deny that in such a case, memory buffering “isn't a big issue”.
> >
> > well, 10s is a pretty long random access interval.
> >
> > There's no way to know the distance between keyframes though. The
> > video could technically have only one keyframe and still work as a
> > video.
>
> yes, but that is rare. There are indeed videos that don’t play well
> backward, or consume lots of memory and/or CPU, but most are fine.
>
> >
> > >> What QuickTime got right was having a ToC approach to video so
> > >> being able to seek rapidly was possible without thrashing, whereas
> > >> the stream-oriented approaches we are stuck with now mean knowing
> > >> which bit of the file to read to get the previous GOP is the hard
> > >> part.
> > >
> > > I don't understand. Can you explain this in more detail?
>

I explained the essential difference a while ago here:
http://lists.xiph.org/pipermail/vorbis-dev/2001-October/004846.html

The QuickTime file format defines movies that have tracks made of media;
the tracks are an edit list on the media; the media have the frame-layout
information encoded.
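A toy sketch of that layering (Python; simplified, since real QuickTime 'elst' edit entries also carry a rate and can insert empty time):

```python
# Sketch of the track = edit-list-over-media idea. Each edit plays a
# span of the media at some point on the movie timeline, so a track can
# reorder or repeat media without touching the media itself.

edits = [
    {"movie_dur": 5000, "media_start": 20000},  # first 5 s of the movie
    {"movie_dur": 3000, "media_start": 0},      # then jump back in the media
]

def movie_to_media(movie_t):
    """Map a movie-timeline time (ms) to a media-timeline time (ms)."""
    t = movie_t
    for e in edits:
        if t < e["movie_dur"]:
            return e["media_start"] + t
        t -= e["movie_dur"]
    raise ValueError("time past end of movie")

print(movie_to_media(1000))   # inside the first edit
print(movie_to_media(6000))   # 1 s into the second edit
```

Editing is then a matter of rewriting this small table rather than rewriting the media bytes, which is what makes the format editable as well as seekable.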


> >
> > The movie file structure (and hence MP4) has a table-of-contents
> > approach to file structure; each frame has its timestamps, file
> > location, size, and keyframe-nature stored in compact tables in the
> > head of the file. This makes trick modes and so on easier; you’re not
> > reading the actual video to seek for a keyframe, and so on.
> >
> > I suppose the browser could generate this data the first time it
> > reads through the video. It would use a lot less memory. Though that
> > sounds like a problem for the browsers to solve, not the standard.
>
> There is no *generation* on the browser side; these tables are part of the
> file format.


Well, when it imports stream-oriented media it has to construct these in
memory, but they can be saved out again. I know that in theory this made
its way into the MP4 format, but I'm not sure how much of it is real.


Re: [whatwg] VIDEO and pitchAdjustment

2015-09-01 Thread Kevin Marks
On Tue, Sep 1, 2015 at 11:57 AM, David Singer  wrote:

>
> > On Sep 1, 2015, at 11:36 , Kevin Marks  wrote:
> > > I suppose the browser could generate this data the first time it reads
> through the video. It would use a lot less memory. Though that sounds like
> a problem for the browsers to solve, not the standard.
> >
> > There is no *generation* on the browser side; these tables are part of
> the file format.
> >
> > Well, when it imports stream-oriented media it has to construct these in
> memory, but they can be saved out again. I know that in theory this made
> its way into the mp4 format, but I'm not sure how much of it is real.
>
> Two different questions:
> a) do the QuickTime movie file format and the MP4 format contain these
> tables?  Yes.
> b) if I open another format, what happens?
>
> For case (a), the situation may be more nuanced if Movie Fragments are in
> use (you then get the tables for each fragment of the movie, though they
> are easily coalesced as they arrive).
>
> For case (b), classic QuickTime used to ‘convert to movie’ in memory,
> building the tables.  The situation is more nuanced on more recent engines.
>
> I think the point of the discussion is that one cannot dismiss trick modes
> such as reverse play as being unimplementable.


The other point for me is that, given http://aomedia.org/ announcing plans
to create a new video file format to fix everything, this time we should
actually learn from this history and make one that is editable and seekable
again.


Re: [whatwg] What's the element for a paragraph label?

2016-09-08 Thread Kevin Marks
On Thu, Sep 8, 2016 at 10:40 AM, David Singer  wrote:
> I am guessing that he'd like a consistent way to style the terms in a 
> ‘definition of terms’ section, the acronyms in a ‘list of acronyms’ section, 
> and so on.
>
> 
>  defined terms
>  Fruit: delicious stuff that falls 
> from trees
>  …
> 
> 
>  acronyms
>  OTT: lit. Over the Top, 
> figuratively meaning something excessive
>  …
> 
>

What is this random XML? We have long had elements for exactly this purpose:

<section>
 <h2>defined terms</h2>
 <dl>
  <dt>Fruit</dt> <dd>delicious stuff that falls from trees</dd>
  …
 </dl>
</section>

<section>
 <h2>acronyms</h2>
 <dl>
  <dt>OTT</dt> <dd>lit. Over the Top, figuratively meaning something
excessive</dd>
  …
 </dl>
</section>

(there was <acronym> too, but <abbr> is now recommended instead.)


Re: [whatwg] metadata

2017-04-23 Thread Kevin Marks
On Sun, Apr 23, 2017 at 5:58 PM, Andy Valencia
 wrote:
> === Dynamic versus static metadata
>
> Pretty much all audio formats have at least one metadata format.  While
> some apparently can embed them at time points, this is not used by any
> players I can find.  The Icecast/Streamcast "metastream" format is the
> only technique I've ever encountered.  The industry is quickly shifting
> to the so-called "Shoutcast v2" format due to:
> https://forums.developer.apple.com/thread/66586
>
> Metadata formats as applied to static information are, of course, of
> great interest.  Any dynamic technique should fit into the existing
> approach.

There are lots of models for dynamic metadata - look at SoundCloud's timed
comments, or YouTube's captions and overlays. Historically there have been
chapter-list markers in MPEG, QuickTime and MPEG-4 (m4a, m4v) files too.
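As a tiny illustration (Python, made-up chapter data): a chapter list is just timed metadata, and finding the current chapter is a binary search over start times:

```python
import bisect

# Sketch: a chapter list is (start time, label) pairs sorted by time.
# Looking up the chapter for the current playback position is a binary
# search - the same sample-table trick the container formats use.

chapters = [(0, "Intro"), (95, "First movement"), (410, "Interview")]
starts = [t for t, _ in chapters]

def chapter_at(position_s):
    """Return the label of the chapter covering position_s seconds."""
    return chapters[bisect.bisect_right(starts, position_s) - 1][1]

print(chapter_at(120))   # inside "First movement"
```

Any dynamic-metadata scheme for audio streams could expose the same shape of data, whether it comes from chapter atoms, timed comments, or a metastream.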


Re: [whatwg] rel=bookmark

2017-08-05 Thread Kevin Marks
That use case sounds more like rel="canonical"

On 6 Aug 2017 2:07 am, "Ed Summers"  wrote:

> Hi all,
>
> I was wondering if anyone can provide any information, or a pointer to
> previous discussion, about why the bookmark link relation can't be used
> with the <link> element [1].
>
> The topic has come up recently on the IETF link-relations discussion list
> [2] where a new link relation has been proposed to encourage persistent
> linking [3]. The proposed 'identifier' relation seems to closely resemble
> the idea of a permalink (a persistent link) that can be found in the
> definition of bookmark. If bookmark allowed use with the <link> element
> then I think there would be less of a demonstrated need for the new
> 'identifier' link relation.
>
> Thanks for any information you can provide. I apologize if I'm restarting
> a conversation that has already happened.
>
> //Ed
>
> [1] https://www.w3.org/TR/html5/links.html#link-type-bookmark
> [2] https://www.ietf.org/mail-archive/web/link-relations/
> current/msg00670.html
> [3] https://datatracker.ietf.org/doc/draft-vandesompel-identifier/


Re: [whatwg] rel=bookmark

2017-08-08 Thread Kevin Marks
This sounds like what we use uid for in microformats - the URL that you
want as the persistent identifier.

http://microformats.org/wiki/uid - it looks like you wrote this up a while
back, Ed.

See u-uid in h-entry http://microformats.org/wiki/h-entry



On 8 Aug 2017 5:58 pm, "Ed Summers"  wrote:

> Hi Kevin,
>
> > On Aug 5, 2017, at 9:19 PM, Kevin Marks  wrote:
> >
> > That use case sounds more like rel="canonical"
>
> You weren't the only one (myself included) who thought that. Michael
> Nelson, one of the authors of the identifier I-D, just wrote a blog post
> explaining why not canonical:
>
> http://ws-dl.blogspot.com/2017/08/2017-08-07-
> relcanonical-does-not-mean.html
>
> I think I'm convinced that canonical isn't the right fit for what they are
> talking about. But if rel=bookmark could be used in <link> elements I think
> it would work better than a slightly similar, oddly named, link relation,
> which IMHO is bound to cause confusion for web publishers.
>
> //Ed


Re: [whatwg] rel=bookmark

2017-08-08 Thread Kevin Marks
See also http://microformats.org/wiki/sharelink-formats for a (recent)
related use case

On 8 Aug 2017 7:01 pm, "Kevin Marks"  wrote:

> This sounds like what we use uid for in microformats - the url that you
> want as the persistent identifier.
>
> http://microformats.org/wiki/uid - it looks like you wrote this up a
> while back, Ed.
>
> See u-uid in h-entry http://microformats.org/wiki/h-entry
>
>
>
> On 8 Aug 2017 5:58 pm, "Ed Summers"  wrote:
>
>> Hi Kevin,
>>
>> > On Aug 5, 2017, at 9:19 PM, Kevin Marks  wrote:
>> >
>> > That use case sounds more like rel="canonical"
>>
>> You weren't the only one (myself included) who thought that. Michael
>> Nelson, one of the authors of the identifier I-D, just wrote a blog post
>> explaining why not canonical:
>>
>> http://ws-dl.blogspot.com/2017/08/2017-08-07-relcanonical-
>> does-not-mean.html
>>
>> I think I'm convinced that canonical isn't the right fit for what they
>> are talking about. But if rel=bookmark could be used in <link> elements I
>> think it would work better than a slightly similar, oddly named, link
>> relation, which IMHO is bound to cause confusion for web publishers.
>>
>> //Ed
>
>