[whatwg] summary tag to help avoid redundancy of meta description tag!?

2010-03-17 Thread Roger Hågensen
I searched the list and looked briefly at HTML5, and found nothing; nor 
can I recall ever seeing such a thing.

So this is both a question and a proposal.

On my own site currently I mostly replicate the first paragraph of an 
article in my journal as the meta description,

and write one up for other pages, usually replicating some of the content.

I'm looking for, and would very much like, a solution that avoids such redundancy.

The perfect solution would be a summary tag, if you look at the 
journal articles on my site you can imagine the first paragraph being 
done like this:


<p><summary>This is just an example, it's a replacement for the old meta 
description, and is a brief summary (description) of the page 
(content).</summary></p>


This way the first paragraph in a page would remain unchanged from how 
it is done today, and a search engine like Google, or screen readers etc., 
would use the summary tag instead
of the meta description (which is no longer needed at all in cases like 
this). If there is more than one summary tag, the first is considered the 
page summary, while the others are ignored (but still shown as content, 
obviously).


If a new tag is overkill for this, maybe doing it this way instead 
(using one of the new HTML5 tags):
<p><header summary>This is just an example, it's a replacement for the 
old meta description, and is a brief summary (description) of the page 
(content)</header></p>


I really do not care how this is implemented/specced, just as long as it's 
possible to do.


I began thinking of this recently when, after looking at my site links in 
Google, it annoyed me that I basically had to enter the same content twice,
and I thought to myself... why do I have to use a meta description to tell 
Google to show the content of the first paragraph as the default summary 
of the page link?
Why can't I simply specify that the first paragraph is the page's meta 
description? Why am I forced to bloat the page unnecessarily like this?


There is no reason why the meta description cannot be the actual content, 
as in most cases I've seen the meta description is supposed to be fully 
human readable,
unlike the meta keywords, which no search engine bothers with at all any 
more.


So if the meta description is supposed to be human readable and 
displayable as the page summary to humans in search results,

why can't it also actually be in the page content?

I can see at least two ways this will be used. The more elegant way I 
showed, where the first paragraph is a summary/the lead in of the page 
(and also happens to be the teaser content in my RSS feed as well),
or at the bottom of a page with possibly linked category tags or similar 
within it, again allowing dual purpose and reduced redundancy.


To reiterate: the idea of the summary tag (or however it is 
implemented) is to have a human readable summary (or teaser, as the case 
may be) of a page, which is itself shown in the page,
but which also replaces the old meta description for search engines, 
avoiding redundancy.


End result is (hopefully) less redundancy, and higher quality summary 
(page description) shown in search engine results, and so on.
Also allowing people to quickly understand what a page is about by just 
reading the first paragraph (or be enticed to read more).


Now, if something like this already exists or is possible, I stand 
corrected and ask: please tell me how to do that.

If not then I'd love to see something like this standardized.

BTW! The text in the first paragraph of this very email could for 
example be the summary/description of this email.
So if it was html tagged in some way, a mail indexing or search engine 
could use that as the summary or description view shown to a human user 
scrolling through archived emails.


Regards,
Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] summary tag to help avoid redundancy of meta description tag!?

2010-03-18 Thread Roger Hågensen

On 2010-03-18 03:37, Roger Hågensen wrote:

I know, replying to myself is a big no-no... *cough*

I searched the list, and looked at the HTML5 briefly and found 
nothing, nor can I ever recall such.

So this is both a question and a proposal.

On my own site currently I mostly replicate the first paragraph of an 
article in my journal as the meta description,
and write one up for other pages, usually replicating some of the 
content.


I'm both looking for and want a solution to avoid such redundancy.


I kept searching after posting that and looked more into HTML5 and 
microdata...
Besides a small aneurysm from trying to understand the darn thing, I did 
find a possible solution, but is it valid?


Example using HTML5 microdata:
(would this be appropriate, would browser devs, and Google and other 
search engines support this?)


The following...

<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Microdata replacing metadata example.</title>
</head>
<body>
<article>
<header>Section header.</header>
<p itemprop="#description">This is the first paragraph in the document 
or an aside or some other content perhaps.</p>

<p>More content here.</p>
<footer>Author: <a href="example.com/author/url/" 
itemprop="#author">Roger Hågensen</a> on <time 
datetime="2010-03-18T08:00:00" itemprop="#date">18th March 2010 at 8 
o'clock.</time><br />

<span itemprop="#copyright">© Roger Hågensen 2010</span><br />
Keywords: <span itemprop="#keywords"><a 
href="http://example.com/tag/Example/">Example</a>, <a 
href="http://example.com/tag/Microdata/">Microdata</a>, <a 
href="http://example.com/tag/HTML5/">HTML5</a></span></footer>

</article>
</body>
</html>

replaces this...

<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="description" content="This is the first paragraph in the 
document or an aside or some other content perhaps." />

<meta name="author" content="Roger Hågensen" />
<meta name="date" content="2010-03-18T08:00:00" />
<meta name="copyright" content="© Roger Hågensen 2010" />
<meta name="keywords" content="Example, Microdata, HTML5" />
<title>Microdata replacing metadata example.</title>
</head>
<body>
<article>
<header>Section header.</header>
<p>This is the first paragraph in the document or an aside or some other 
content perhaps.</p>

<p>More content here.</p>
<footer>Author: <a href="example.com/author/url/">Roger Hågensen</a> on 
<time datetime="2010-03-18T08:00:00">18th March 2010 at 8 
o'clock.</time><br />

<span>© Roger Hågensen 2010</span><br />
Keywords: <span><a href="http://example.com/tag/Example/">Example</a>, 
<a href="http://example.com/tag/Microdata/">Microdata</a>, <a 
href="http://example.com/tag/HTML5/">HTML5</a></span></footer>

</article>
</body>
</html>

itemprop="#description" would basically need to be reserved in some 
standards document; I just used the # arbitrarily to indicate "this 
document" in this example.


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



[whatwg] Prevent abuse of data-*

2010-03-18 Thread Roger Hågensen

On 2010-03-17 17:28, Jonas Sicking wrote:

> I'm wondering if data-* attributes should be renamed to priv-* to make
> it clearer that it's the page's _private_ data.
>
> data- is such a nice generic prefix that I'm afraid sooner or later
> someone will start basing microformats-like markup on that.
>
> It's not a bad idea... Unfortunately data-* is already being used quite a
> lot and has been widely advertised, so we have to be careful with this.
> Anyone else have an opinion on this?

I don't feel strongly that either name is better. Though I would note
that priv- doesn't make things much clearer, since it's totally
undefined who it's private to.


Maybe a better name would have been: doc-*
It's short, and it kind of reflects what it's related to as well, right? 
Or does that clash with something?


Regards,
Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] summary tag to help avoid redundancy of meta description tag!?

2010-03-19 Thread Roger Hågensen

On 2010-03-18 10:04, Ashley Sheridan wrote:
The main problem with that would be that parsers would then need to 
read into the body of the page to produce a description of your 
site. This might not produce much of an overhead on a one-off basis, 
but imagine a parser that is grabbing the description from hundreds or 
thousands of pages, then this could become a bit of a problem.


I do not see how that is any more or less of a problem than today with 
pages that have the meta description missing.
What do those parsers do then? Do they stop at </head>? What do they 
use as a description instead? The first paragraph?
The parsers used by all major search engines certainly do not halt; they 
break down the entire page, right?


As for delays, that is not an issue for consumers; I cannot recall any 
browser ever showing me the meta description unless I explicitly view 
page properties.
I can imagine that the visually impaired community would love something 
like this, as it would basically tell screen readers that this is the 
first paragraph/summary/description/teaser of the page,

allowing blind people to more rapidly jump from page to page.

Currently the meta description is not always good content. It would be 
interesting to see a Google analysis of how the meta description is used,
i.e. how many basically repeat page content (like I do), how many just 
dump keywords in there, and how many pages on a site have a site-wide 
identical description? And so on.


Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] <meta name="description" href="#desc" />

2010-03-19 Thread Roger Hågensen

On 2010-03-18 13:13, Julian Reschke wrote:

On 18.03.2010 03:37, Roger Hågensen wrote:

I searched the list, and looked at the HTML5 briefly and found nothing,
nor can I ever recall such.
So this is both a question and a proposal.

On my own site currently I mostly replicate the first paragraph of an
article in my journal as the meta description,
and write one up for other pages, usually replicating some of the 
content.

...


See related W3C bug: 
http://www.w3.org/Bugs/Public/show_bug.cgi?id=7577.


Best regards, Julian



Thanks Julian, looking at that found me the link to: 
http://lists.w3.org/Archives/Public/public-html/2009Aug/0990.html

It suggests <link rel="description" href="#desc" />, which is OK I guess.

But why not simply allow this instead:
<meta name="description" href="#desc" />

Existing parsers would notice that content= is missing, which is stated 
as being required;

parsers that have been updated would notice there is an href= instead,
so search engines could just look for that id in the page.
I think this would have the highest success rate.

If backwards compatibility is such a major concern, then this could be done:
<meta name="description" content="" href="#desc" />

I'm unsure what gives the best result for various parsers though;
would empty content make them behave the same as if the meta tag was not 
there at all?

Or would an empty content attribute cause them to use "" as the actual 
page description?

I'd prefer to have the content attribute missing instead myself, but...

Regards,
Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] <meta name="description" href="#desc" />

2010-03-19 Thread Roger Hågensen
I forgot to mention that in addition to <meta name="description" 
href="#desc" />
it could also be possible to implement it for <meta name="keywords" 
href="#keyw" /> etc.


Full example:

<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="description" href="#description" />
<meta name="author" href="#author" />
<meta name="date" href="#date" />
<meta name="copyright" href="#copyright" />
<meta name="keywords" href="#keywords" />
<title>href extending the meta tags.</title>
</head>
<body>
<article>
<header>Section header.</header>
<p id="description">This is the first paragraph in the document or an 
aside or some other content perhaps.</p>

<p>More content here.</p>
<footer>Author: <a href="example.com/author/url/" id="author">Roger 
Hågensen</a> on <time datetime="2010-03-18T08:00:00" id="date">18th 
March 2010 at 8 o'clock.</time><br />

<span id="copyright">© Roger Hågensen 2010</span><br />
Keywords: <span id="keywords"><a 
href="http://example.com/tag/Example/">Example</a>, <a 
href="http://example.com/tag/Meta/">Meta</a>, <a 
href="http://example.com/tag/HTML5/">HTML5</a></span></footer>

</article>
</body>
</html>

The date example is a minor issue though; I guess a parser could 
just check for a datetime attribute if the id points at a <time> element?


Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] summary tag to help avoid redundancy of meta description tag!?

2010-03-19 Thread Roger Hågensen

On 2010-03-19 15:17, Ashley Sheridan wrote:
Search engines and people are not the only content parsers. Sure, you 
would expect a parser to maybe look further into the content if the 
description meta tag was missing, but imagine if a parser had to do 
this for all the content it looked at? There are still overheads to 
consider.


Why not just use server-side code to output the first paragraph of 
content as the description for the page also?


I just feel that the head and body areas of a page have two 
distinct uses, and unnecessary crossovers shouldn't occur if it's 
avoidable.


True, but there is also such a thing as unneeded redundancy. Sure, 
repeating the same info in the meta tags that is also in the document 
may not add that many KB,
but with an increasing number of page requests that really piles up in 
the bandwidth total. Something users, hosts and ISPs should all have 
an interest in, right?
If you look at my other thread, Re: [whatwg] <meta name="description" 
href="#desc" />:
it allows notifying the parser that the content is in the page, and it 
is up to the parser's configuration whether to scan beyond the header in 
that case. Best of both worlds IMO.
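As an aside, the server-side approach Ashley suggested earlier in the thread is easy enough to sketch; the helper name and the 160-character cut-off below are my own assumptions, not anything from the thread:

```python
# Server-side sketch: reuse the article's first paragraph as the meta
# description so the text is only authored once. The 160-char limit is
# an assumed typical snippet length, not a standard.
import html
import re

def meta_description(first_paragraph, limit=160):
    # Strip any markup left in the paragraph and collapse whitespace.
    text = re.sub(r"<[^>]+>", "", first_paragraph)
    text = re.sub(r"\s+", " ", text).strip()
    if len(text) > limit:
        text = text[:limit - 1].rstrip() + "…"
    return '<meta name="description" content="%s" />' % html.escape(text, quote=True)
```

That solves the authoring redundancy, but not the bytes-on-the-wire redundancy argued about above, since the text is still sent twice.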


Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] summary tag to help avoid redundancy of meta description tag!?

2010-03-19 Thread Roger Hågensen

On 2010-03-19 15:43, Ashley Sheridan wrote:

On Fri, 2010-03-19 at 15:43 +0100, Roger Hågensen wrote:

On 2010-03-19 15:17, Ashley Sheridan wrote:
  Search engines and people are not the only content parsers. Sure, you
  would expect a parser to maybe look further into the content if the
  description meta tag was missing, but imagine if a parser had to do
  this for all the content it looked at? There are still overheads to
  consider.

  Why not just use server-side code to output the first paragraph of
  content as the description for the page also?

  I just feel that the <head> and <body> areas of a page have two
  distinct uses, and unnecessary crossovers shouldn't occur if it's
  avoidable.

True, but there is also such a thing as uneeded redundancy, sure
repeating the same info in the meta tags which is also in the document
may not add that many KB,
but with increasing number of page requesters that really pile up the
bandwidth total. Something both users and hosters and ISPs should have
an interest in right?
If you look at my other thread Re: [whatwg] <meta name="description"
href="#desc" />
It allows notifying the parser that the content is in the page, and it
is up to the parsers configuration whether to scan beyond the header in
that case. Best of both worlds IMO.

Roger.
 
I did see that, and it looks like a great idea, as it shouldn't really 
break anything, and I saw that it should be possible to use for the 
keywords too, which would fit perfectly with tag cloud systems used on 
a page.


I would presume that this would cause the content parser (browser) to 
strip any and all tags surrounding the marked content?


Thanks,
Ash
http://www.ashleysheridan.co.uk



Well, looking at the example 
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-March/025575.html
I remembered that the title element may have HTML markup in it (seen it 
in the wild), so most parsers probably apply tag stripping to that already;

so yeah, stripping tags the parser does not want shouldn't really be an issue.

Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] summary tag to help avoid redundancy of meta description tag!?

2010-03-20 Thread Roger Hågensen

On 2010-03-19 17:19, Roger Hågensen wrote:

On 2010-03-19 15:43, Ashley Sheridan wrote:

On Fri, 2010-03-19 at 15:43 +0100, Roger Hågensen wrote:

On 2010-03-19 15:17, Ashley Sheridan wrote:
  I just feel that the <head> and <body> areas of a page have two
  distinct uses, and unnecessary crossovers shouldn't occur if it's
  avoidable.

If you look at my other thread Re: [whatwg] <meta name="description"
href="#desc" />
It allows notifying the parser that the content is in the page, and it
is up to the parsers configuration whether to scan beyond the header in
that case. Best of both worlds IMO.

Roger.
 
I did see that, and it looks like a great idea, as it shouldn't 
really break anything, and I saw that it should be possible to use 
for the keywords too, which would fit perfectly with tag cloud 
systems used on a page.


I would presume that this would cause the content parser (browser) to 
strip any and all tags surrounding the marked content?


Thanks,
Ash
http://www.ashleysheridan.co.uk



Well, looking at the example 
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-March/025575.html
I remembered that the title element may have HTML markup in it (seen 
it in the wild), so most parsers probably apply tag stripping to that 
already,
so yeah, stripping tags the parser does not want shouldn't be an issue 
really.


Just made a feature request article at 
http://wiki.whatwg.org/wiki/Meta_element_href as it's just easier to 
reference that than a mailing list post.
Sorry if it looks messy; I just used the advised template, but it's a 
start at least.

If anyone feels like improving the language, feel free to go nuts.

Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Maximum length of attribute values

2010-03-24 Thread Roger Hågensen

On 2010-03-24 12:54, Henri Sivonen wrote:

I tried to test if the top 4 browser engines have a hard limit on the length of 
attribute values in their HTML parsers. If they do, it's somewhere over 6.5 
million characters.

Does any one of the top 4 browser engines have a hard limit that is higher than 
what I tested?

Does anyone happen to have data on how long attribute values must be allowed 
for Web compat?
   


Hmm! If there is a limit (in modern browsers) then it's probably close 
to the addressable memory limit, so that would be around 2 GiB on 32-bit 
(x86 and similar) systems.


I also believe a max limit is kind of silly in specifications like this; 
it would be better to specify a minimum supported limit instead.


Obviously this minimum needs to be based on the lowest common denominator 
across the major PC browsers, consoles, mobiles, set-top boxes and 
other integrated solutions.
And basically state that implementers must support this length as a 
minimum but are encouraged to leave the max as memory limited?


Obviously it would be silly with 6.5 MiB attributes, as I'd certainly 
believe that to be a bug or broken tags myself if encountered.


So if an advised minimum is stated, with a note that implementers should 
be prepared to handle arbitrarily large attributes (instead of crashing 
or eating up all memory), then that should be enough, right?


Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Maximum length of attribute values

2010-03-24 Thread Roger Hågensen

On 2010-03-24 21:28, Boris Zbarsky wrote:

On 3/24/10 4:20 PM, Roger Hågensen wrote:

Obviously it would be silly with 6.5MiB attributes as I'd certainly
believe that to be a bug or broken tags myself if encountered.


Or an SVG path...  I've certainly seen SVG files with multi-megabyte 
paths in them.


-Boris



Ouch! But that's just SVG then, right? In which case the SVG specs 
probably state a different minimum requirement on top of the HTML 
one? (haven't checked)


Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



[whatwg] Ping + Ping-prefix meta element.

2010-04-26 Thread Roger Hågensen
Idea originally posted at 
https://bugzilla.mozilla.org/show_bug.cgi?id=409508


<meta name="Ping-prefix" content="/trackout/">

If the browser sees this meta tag it will behave as if a ping attribute was
applied to all externally leading hrefs, with the prefix added to the start.
The example above is for a tracking script with a URL rewrite on it, so it
looks nice in log parsers etc.

The behavior would be similar to doing this:

<a href="http://example.com"
ping="/trackout/http://example.com">Example.com</a>
<a href="links/">Links</a>
<a href="http://mozilla.org"
ping="/trackout/http://mozilla.org">Mozilla.org</a>

If the meta tag is <meta name="Ping-prefix"
content="http://yoursite.com/trackout/">

the behavior would be similar to this:

<a href="http://example.com"
ping="http://yoursite.com/trackout/http://example.com">Example.com</a>
<a href="links/">Links</a>
<a href="http://mozilla.org"
ping="http://yoursite.com/trackout/http://mozilla.org">Mozilla.org</a>

Here is an alternative tracking script (no URL rewrite on this one):
<meta name="Ping-prefix" content="/trackout?url=">

This results in a behavior similar to doing:

<a href="http://example.com"
ping="/trackout?url=http://example.com">Example.com</a>
<a href="links/">Links</a>
<a href="http://mozilla.org"
ping="/trackout?url=http://mozilla.org">Mozilla.org</a>
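The rewrite rule in all three examples reduces to one line of logic. Here is a sketch of it (mine; treating "contains ://" as the test for an externally leading href is an assumption, and the per-link override follows the PS at the end of this message):

```python
# Sketch of the proposed Ping-prefix expansion: external links get
# ping = prefix + href, internal/relative links get no ping, and an
# explicit per-link ping attribute overrides the global prefix.
def effective_ping(href, ping_prefix, explicit_ping=None):
    if explicit_ping is not None:
        return explicit_ping          # per-link ping wins over the global one
    if "://" not in href:
        return None                   # assumed test for "externally leading"
    return ping_prefix + href
```

A browser implementing the meta tag would apply this to every anchor at click time, so no extra bytes appear in the markup itself.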


This new meta tag would allow even more rapid adoption, as web developers
would not need to add a ping attribute to hundreds of pages with maybe
dozens of links on each page, and it would be very easy to add such a meta
tag to various template scripts/frameworks with just a line of code.

To the web developer AND the end user it would also mean no size increase,
whereas using the ping attribute per link would potentially double the
number of bytes per href.

Alternatively the body tag could be used instead of a meta tag.
In which case the implementation could be:

<body ping="/trackout/">
and
<body ping="/trackout?url=">
and so on...

This may actually be more fitting (considering the <body target=""> attribute
behaves in a similar way to my idea).

Firefox 3 team has the chance to test this out and see which is more
popular: ping attributes in individual <a href> tags, or a single ping
attribute in the body tag.

With a possible saving of bytes due to a single ping attribute being used,
there would be no need for javascript hacks, nor redirect urls, nor ping
attributes per url. How can one go wrong?

I am already testing ping attributes for urls on my site,
but damn, adding that ping to all those pages is a pain.
A <body ping=""> to set a default prefix and have the url appended to it
would allow adding url pings with just a single line in the site's template.

PS! Individual ping attributes would obviously override the global one, just
like a target attribute would.
Oh, and could someone on the HTML5 list poke some of the guys over there and
see if a ping attribute for the body tag in a similar vein could be considered?


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Ping + Ping-prefix meta element.

2010-04-27 Thread Roger Hågensen

On 2010-04-27 00:41, Aryeh Gregor wrote:

On Mon, Apr 26, 2010 at 4:17 PM, Roger Hågensen <resca...@emsai.net> wrote:
   

Oh, and could someone on the HTML5 list poke some of the guys over there and
see if a ping attribute for the body tag in a similar vein could be
considered?
 

This *is* the HTML5 list -- or one of them, anyway.  The editor reads
this list as well as public-html, and responds to all points made on
this list (albeit sometimes months after the fact).
   


Ah! That was a copy-paste. (I corrected a brainfart typo; aside from that 
it's a duplicate of the text in the Bugzilla entry, I just didn't 
strip out that part of the text.)


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] CSS2 system colors in legacy color values

2010-05-31 Thread Roger Hågensen

On 2010-05-23 23:49, Simon Pieters wrote:
On Sat, 22 May 2010 21:06:53 +0200, L. David Baron dba...@dbaron.org 
wrote:

The rules for parsing a legacy color value in
http://www.whatwg.org/specs/web-apps/current-work/complete/common-microsyntaxes.html#rules-for-parsing-a-legacy-color-value 


specify that CSS2 system colors should be accepted, and that they
should be converted to a simple color.
...
What was the motivation for adding support for CSS2 system colors

IE compat.

(which I would note are deprecated in css3-color) to legacy HTML
color values?  What implementations support them,

I think WebKit and IE.

and do they respond to dynamic changes properly?

I don't know.

It appears that Opera and Gecko don't support system colors. I 
wouldn't mind not supporting them, but it could be interesting to 
research how many pages it affects.


Interesting to know!

I'm kinda surprised that there is no support for floating point colors 
though.
Although I guess that rgb(x%, x%, x%), "an RGB percentage value (e.g. 
rgb(100%,0%,0%))",
is as close as you get to that... Does percentage RGB color support 
things like 85.41% though?

I hope so, as only rgb(x%, x%, x%) is tentatively gamut independent.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] CSS2 system colors in legacy color values

2010-05-31 Thread Roger Hågensen

On 2010-05-31 09:57, Roger Hågensen wrote:

On 2010-05-23 23:49, Simon Pieters wrote:
On Sat, 22 May 2010 21:06:53 +0200, L. David Baron 
dba...@dbaron.org wrote:

The rules for parsing a legacy color value in
http://www.whatwg.org/specs/web-apps/current-work/complete/common-microsyntaxes.html#rules-for-parsing-a-legacy-color-value 


specify that CSS2 system colors should be accepted, and that they
should be converted to a simple color.
...
What was the motivation for adding support for CSS2 system colors

IE compat.

(which I would note are deprecated in css3-color) to legacy HTML
color values?  What implementations support them,

I think WebKit and IE.

and do they respond to dynamic changes properly?

I don't know.

It appears that Opera and Gecko don't support system colors. I 
wouldn't mind not supporting them, but it could be interesting to 
research how many pages it affects.


Interesting to know!

I'm kinda surprised that there is no support for floating point colors 
though.
Although I guess that rgb(x%, x%, x%), "an RGB percentage value (e.g. 
rgb(100%,0%,0%))",
is as close as you get to that... Does percentage RGB color support 
things like 85.41% though?

I hope so, as only rgb(x%, x%, x%) is tentatively gamut independent.



Just did some tests! It seems that the latest Firefox, Opera, IE, and 
Chrome at least support fractional percentages.

So rgb(0%,0%,80.00%) equals 0,0,204
and rgb(0%,0%,80.99%) equals 0,0,207
and rgb(0%,0%,80.50%) equals 0,0,205

I wonder why the specs don't mention this support though.
I guess a value of 200% (what, like infrared?) would mean twice as red 
as sRGB red (100%),

and in the case of these browsers they clamp anything higher to 255.
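The conversion these tests imply can be written down directly; this is my reading of the observed browser behaviour (scale to 255, round to nearest, clamp out-of-range values), not spec text:

```python
# Map a CSS percentage component to a 0-255 byte the way the browsers
# tested above appear to: scale to 255, round to nearest, then clamp
# so out-of-gamut input (like 200%) simply saturates.
def pct_to_byte(pct):
    return min(255, max(0, round(pct * 255 / 100)))
```

That reproduces 80.00% -> 204, 80.50% -> 205 and 80.99% -> 207, and saturates 200% at 255.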

PS! To any Chrome folks here: it seems like Chrome has a slight rounding 
bug compared to the other 3 browsers.


Example code:
<html>
<head>
<title>Percentage Fraction color test</title>
<style type="text/css">
  li {
    color: white;
    background: rgb(0%,0%,80.00%);
    margin: 12px 12px 12px 12px;
    padding: 12px 0px 12px 12px;
    list-style: none
  }
  li.fraction {
    color: white;
    background: rgb(0%,0%,80.50%);
    margin: 12px 12px 12px 12px;
    padding: 12px 0px 12px 12px;
    list-style: none
  }
</style>
<script>
function getStyle(el)
{
    if (el.currentStyle)
    {
        return el.currentStyle.backgroundColor;
    }
    if (document.defaultView)
    {
        return document.defaultView.getComputedStyle(el, 
'').getPropertyValue("background-color");
    }
    return "Don't know how to get color";
}
</script>
</head>
<body>
<ul>
<li id="blue1">This should be RGB 0,0,204 (#0000cc) and it is: 
<script>document.write(getStyle(document.getElementById('blue1')));</script></li>
<li id="blue2" class="fraction">This should be RGB 0,0,205 (#0000cd) and 
it is: 
<script>document.write(getStyle(document.getElementById('blue2')))</script></li>
</ul>
</body>
</html>

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] 'Main Part of the Content' Idiom

2010-06-04 Thread Roger Hågensen

On 2010-06-04 18:39, Daniel Persson wrote:
I am not advocating ad-tags. The idea of globally structuring content 
on the web is very appealing, it would make it easier for a lot of 
things and a lot of people. Let's do it!
...but I can't see it happening where body would be main content + 
ads + anything there is not a sensible tag for + anything a 
lazy/stressed/unconscious author didn't tag otherwise. Let's just have 
a main content tag or a strong main content strategy.




Hmm! It is a valid point actually.
Oh, and here is some food for thought. This works in all the latest 
browsers. Opera and Firefox have the same behavior, while Chrome is a tad 
different, and IE sadly is unable to style unknown tags.

<!doctype html>
<html>
<head>
<title>Test</title>
<style>
  aside {border:1px solid #bf;white-space:nowrap;}
</style>
</head>
<aside>
 Just testing aside outside body!
</aside>
<body>
<article>
  Main part of article.
</article>
</body>
</html>

As you can see the aside is outside the body; all the latest browsers seem 
to handle this pretty fine.
http://validator.w3.org/ on the other hand gives the error "Line 12, 
Column 6: body start tag found but the body element is already open."


Now, either that is a bug in the validator, or the body is automatic.
And sure enough, removing the <body> and </body> tags, the document 
validates, and none of the browsers behave differently at all.

Is the body tag optional, or could it even be redundant, in HTML5?

I don't mind really, as currently I only use body to put all the other 
tags inside, so not having to use the body tag at all would be welcome,

though I suspect a lot of legacy things rely on the body tag.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] 'Main Part of the Content' Idiom

2010-06-04 Thread Roger Hågensen

On 2010-06-04 22:03, Tab Atkins Jr. wrote:

On Fri, Jun 4, 2010 at 12:58 PM, Roger Hågensen <resca...@emsai.net> wrote:

...
As you can see the aside is outside the body, all latest browsers seem to
handle this pretty fine.
http://validator.w3.org/ on the other hand gives the error "Line 12, Column
6: body start tag found but the body element is already open."

Now, either that is a bug in the validator, or the body is automatic.
And sure enough, removing the <body> and </body> tags the document
validates, and none of the browsers behave differently at all.
Is the body tag optional or could even be redundant in HTML5 ?

<body> is optional.  It automatically gets added as soon as the parser
sees an element that doesn't belong in the <head>.  (The <head> is
optional too, as is the <html>.)  So the <aside> triggers a <body>
element to be created and opened, and then later explicit <body> tags
get dropped.

I don't mind really, as currently I only use body to put all the other
tags inside, so not having to use the body tag at all would be welcome,
though I suspect a lot of legacy things rely on the body tag.

No browser depends on you using the <body> element explicitly.  It's
perfectly fine to write your document like this:

<!doctype html>
<title>Test</title>
<style>
   aside {border:1px solid #bf;white-space:nowrap;}
</style>
<aside>
   Just testing aside outside body!
</aside>
<article>
   Main part of article.
</article>

The <title> and <style> get auto-wrapped in a <head>, the <aside> and
<article> get auto-wrapped in a <body>, and the whole thing below the
doctype gets auto-wrapped in an <html>.


Hmm! Intriguing. That is way cleaner than the container wrappers.
What browsers/engines behave like that?
Do all HTML 4.01+ compliant browsers behave like this?

Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Resolutions meta tag proposal

2010-07-04 Thread Roger Hågensen

On 2010-07-04 14:34, Marques Johansson wrote:
Another way of handling this PPI ratio business would be with HTTP 
300 Multiple Choices. 
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.1


This may not be the best answer for every image on a page, but the 
first HTML page in a server controlled session could store the PPI 
ratio setting based on the page the UA chooses and then modify the 
HTML or content-negotiation setting.  A problem with this is that the 
browsers wouldn't be likely to render a page correctly unless they 
were modified for this "image request yields 300" behavior.


I still like something like this for client content negotiation:

GET /image/dog HTTP/1.1
Accept: image/*; ppiratio=2
...
HTTP/1.1 200 OK
Content-type: image/jpeg
... d...@2x.jpg

Apache rewrite rules could even handle this by detecting ppiratio in 
the Accept header and then looking for a matching images/ratio/2/dog 
file.  If it didn't exist the rewrite would fail resulting in the 
server responding with images/dog which is suitable if not optimal.


This has me thinking "Accept: image/*; x=400; y=300" could be attached 
with any image request based on the client's intent for the image.  (The 
HTML said 'width=400 height=300' so I don't need anything better.) 
 The server can ignore this or return something better suited than the 
1200x1200 image that it would otherwise return.


I still don't have a handle on this retinal / ppi stuff, so "ppiratio" 
may not be the right wording.

I also like Accept: video/*; kbps=500 for a similar purpose.



Again this is negotiation related, and although I'm able to do fancy 
apache stuff on my site I'd rather not have to.


This however takes advantage of CSS 
http://www.alistapart.com/articles/hiresprinting/


Not exactly ideal, but I think it's the better approach; it just needs to 
be refined and standardized some way.


But here's an idea I have that would fit into HTML5.

Examples:
1. <img src="img/test.png" width="512" height="256" 
dpi="96;;300;image/test300.png"> would work better?
(96 dpi is the current default, hence the empty value before the ";;", 
while 300 dpi is an alternative, hence specified.)
2. <img src="img/test.png" width="512" height="256" dpi="300"> would 
also be valid, indicating that the image is 300 dpi with no alternatives.
3. <img src="img/test.png" width="512" height="256" 
dpi="300;image/test300.png"> same as example 1, with the 96 dpi default assumed.
4. <img src="img/test.png" width="512" height="256" 
dpi="*;image/test.svg"> 96 dpi default assumed; the * indicates any DPI, 
and in this case it's a vector format.
If dpi="" or the attribute is not specified, then the image should be 
assumed to be 96 dpi, unless the image format has its own dpi info 
(JPEG supports this, but does PNG?).
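
The alternative-list syntax above can be illustrated with a small parser sketch (Python; purely hypothetical, since the proposal never specifies parsing rules — the pair-wise dpi;url interpretation and the function name are my assumptions):

```python
def parse_dpi_attribute(dpi_value, src):
    """Parse the proposed dpi="" attribute into {dpi: url}.

    Hypothetical interpretation of the examples: the value is a
    semicolon-separated list of dpi;url pairs, an empty url means
    the element's own src, and "*" means resolution-independent.
    """
    if not dpi_value:
        return {96: src}  # default: assume 96 dpi, as proposed
    parts = dpi_value.split(";")
    if len(parts) % 2:          # a bare trailing dpi maps to src
        parts.append("")
    alternatives = {}
    for dpi, url in zip(parts[0::2], parts[1::2]):
        key = dpi if dpi == "*" else int(dpi)
        alternatives[key] = url or src
    return alternatives

# Example 1 from above: 96 dpi is the src itself, 300 dpi is an alternative.
print(parse_dpi_attribute("96;;300;image/test300.png", "img/test.png"))
```

A browser could then pick the entry closest to the display's DPI and fall back to the src when no better match exists.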


This would make printouts look better, and it allows the author to specify 
the displayed size (width and height being logical pixels for non-96 dpi 
images, obviously).
High DPI displays would make the browser fetch the high dpi image instead 
of the default.


The only parts of a site that have issues currently are fixed pixel 
graphics (and fixed pixel based layouts as well, I guess);

it is only pixel based (bitmapped) images that have issues with scaling;
embedded video already tends to offer multiple resolutions.

So a dpi param for <img> might just be a nice way to fix the issue.
The CSS folks might have to add some support for this too, as well as 
scripting support.


This is just something I came up with just now, but it's at least 
simple to use, which is the important thing I guess.


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Content-Disposition property for a tags

2010-07-30 Thread Roger Hågensen

On 2010-07-30 20:54, Eduard Pascual wrote:

On Fri, Jul 30, 2010 at 12:36 PM, Dennis Joachimsthalerden...@efjot.de  wrote:
   

Having a Content-Disposition property on <a> tags which does the same as
the HTTP header.
For example changing the file name of the file to be downloaded, or rather
having an image
file download rather than it being shown in the browser directly.

This would avoid constructs such as <a href="hi">Download</a> (Right click
and click "Save
target as..." to download).
 

To top things up, note that saving a file to disk is always equally
or less dangerous than letting the UA perform the default action for
a resource: on the most evil scenario, there would be at least one
further step of confirmation or user action before the saved data has
a chance to do anything (assuming it's in some executable form;
otherwise it will never do anything).
   


I really like this idea too as it has annoyed me when designing webpages 
as well.


I'd propose the following though:

<a href="stuff.zip" download>This defaults to application/octet-stream 
and clicking the link will behave as if the user selected Save As from 
the UI context menu!</a>


<a href="stuff.zip" download="audio/vorbis">This tells the browser this 
is audio and of type vorbis; clicking the link will behave as if the 
user selected Save As from the UI context menu, and the UI may default to the 
user's Music download folder!</a>


The reason I suggest download="" is that it's flexible; it's for example 
possible to add to this in a compatible way by doing 
download="audio/vorbis;something else here",
where "something else here" could be filename= or modification-date= 
as per http://tools.ietf.org/html/rfc2183.
This is somewhat important as it's possible that a file on a server 
could have a last-modified date that does not match reality (a backup 
restoration failed to touch the modified date, the server is 
configured wrongly etc., or the file is served using a script).


Example: <a href="stuff.zip" download="application/zip;filename=Stuff - 
Text Adventure Installer.zip">This specifies the type and a suggested 
filename, and clicking the link will behave as if the user selected Save As from 
the UI context menu!</a>
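
As a sketch of how a browser might split such a value (assumptions: the value is a MIME type followed by optional RFC 2183-style name=value parameters, semicolon-separated; the function name is mine):

```python
def parse_download_attribute(value):
    """Split the proposed download="" value into (mime_type, params).

    A bare `download` (empty value) falls back to
    application/octet-stream, as suggested in the first example.
    """
    if not value:
        return "application/octet-stream", {}
    fields = value.split(";")
    mime_type = fields[0].strip() or "application/octet-stream"
    params = {}
    for field in fields[1:]:
        name, _, val = field.partition("=")
        params[name.strip()] = val.strip().strip('"')
    return mime_type, params

mime, params = parse_download_attribute(
    "application/zip;filename=Stuff - Text Adventure Installer.zip")
# mime is "application/zip"; params holds the suggested filename.
```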


A little remark though: if download is used on something that is not an 
<a href> then I'm not really sure; should this be a validation warning?
Or should it be valid as long as some form of media is referenced, like 
for example:
<img src="cool.png" download="image/png;Brand X's very cool 
logo.png"> This tells the browser this is an image of type PNG; if the 
user triggers Save As then the UI may default to the user's Images 
download folder, using the filename and modification date suggested by 
the download attribute!


I'm also sure that web indexers like Google and the rest will love this 
as well, as they can make some early assumptions when scanning the HTML.
There may be no point for Google to bother with the .zip file for example, 
but the .png and possibly the Ogg Vorbis might be of interest, and the 
modified date would also be of interest.


There are other benefits to this as well; as it's possible to serve any 
content from a script, it helps reduce ambiguity in cases where today 
you have got: <a href="stuff.php?id=15">Huh? I don't wanna guess what the 
filename will be or the modified date</a>
or <a href="blah/stuff/">Good luck making any assumptions on this one; 
what would even the Save As browser UI look like for this?</a>


So something like this certainly makes sense to me at least.

How many here can recall at least once when you tried to download, and 
the filename matched the .php script rather than the actual file that was 
downloaded? *raises hand and grins*


PS! Obviously this does not excuse not having a proper 
Content-Disposition in the HTTP header, as a server should provide that 
as well (preferably matching the download attribute exactly, but if not, 
then the download attribute takes priority in the Save As UI).


Regards,
Roger.

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Content-Disposition property for a tags

2010-07-30 Thread Roger Hågensen

On 2010-07-31 03:57, Roger Hågensen wrote:

Another example:

<a href="cool.png" download><img src="cool_sm.jpg"></a>

How many here have had that wishful thinking work exactly like you wanted?
That is the minimal use case, old browsers would behave as currently,
those supporting this on the other hand would always present an expected 
Save As.
I'm sure half the websites out there that display a thumbnail image but 
links to a larger or original image would jump at this.


Another minimal example would be:

<img src="cool_sm.jpg"> <a href="cool.png" download>Download the full 
image!</a>


Oh yeah, I almost forgot. Screen readers (you know, usually for those 
blind folks) would also benefit from this, as there would be no ambiguity 
as to what action should be taken.
Obviously the two examples above do not make that much sense, but 
imagine if it was the audio example from my other post instead: instead 
of potentially having the browser try and play the audio,
it would instead download it to where the user would prefer (if at all); 
that way one could have a low quality audio and an <a href download> 
next to it that lets you download the higher quality audio.


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Content-Disposition property for a tags

2010-07-30 Thread Roger Hågensen

On 2010-07-31 04:17, Boris Zbarsky wrote:

On 7/30/10 9:57 PM, Roger Hågensen wrote:

<a href="stuff.zip" download>This defaults to application/octet-stream
and clicking the link will behave as if the user selected Save As from
the UI context menu!</a>


I would object to implementing this.  I have no problem putting up a 
dialog asking the user whether to save or open in a helper app, but I 
see no reason why I should force saving on the user, as a browser 
developer.  If the user wants to open your zip in a zip reader, why 
shouldn't he?


-Boris



When I say the Save As UI I mean the one you get currently, which 
varies: some browsers only provide Save As and Cancel, while others 
provide Save As with Open and Cancel.
So based on your remarks, maybe the spec could state that if a browser 
believes it can handle the file type then it should present an Open + Save 
As UI,
but if it can not handle the filetype (because handling is un-configured or the 
user set the browser's settings this way) then it should just present 
Save As without Open.


So thanks for pointing that out, as I actually have been annoyed in the 
past with how differently browsers do download UIs, and this time we can 
actually establish expected default UI behavior from the ground up 
(however, the user should still be able to alter settings to change the 
default behavior, and there is nothing preventing a browser maker from 
enhancing the UI even further).



--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Content-Disposition property for a tags

2010-07-30 Thread Roger Hågensen

On 2010-07-30 20:54, Eduard Pascual wrote:

Let me complement the proposal with a use case:
http://stackoverflow.com/questions/3358209/triggering-a-file-download-without-any-server-request
   


Now something like that is a bit more tricky, but can't JavaScript 
actually trigger a proper Save As?
Something like <a href="#" onclick="javascript: 
document.execCommand('Save As','1','saveMe.csv');"><img 
src="./images/save.png"></a>


It should in theory work, but I think browser support for that is spotty, 
so the solution to that stackoverflow post either lies in deep 
JavaScript voodoo,
or a suggestion to the JavaScript folks would be better. In any 
case a download attribute would only improve such functionality (if 
JavaScript supports it) even further by displaying the proper UI.


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Content-Disposition property for a tags

2010-07-30 Thread Roger Hågensen

On 2010-07-31 04:52, Boris Zbarsky wrote:



When I say the Save As UI I mean the one you get currently, which
varies, some browsers only provide a Save As and Cancel, while others
provide Save As with Open and Cancel.


I can't name a single browser that provides an Open option if you 
select Save As from the context menu.  Can you?


Not explicitly, but you get one if you click the link, for example. This would 
also allow enhancing the explicit Save As (by letting the browser know 
the filetype more accurately and present a My Documents or My Images etc. 
folder destination, if supported by the browser, obviously).





So based on your remarks, maybe the spec could state that if a browser
believe it can handle the file type then it should present a Open + Save
As UI,
but if it can not handle the filetype (aka handling un-configured or the
user set the browser's settings this way) then it should just present
Save As without Open.


So you mean the browser should act as if 
content-disposition:attachment were specified?  Why not just say that?


Um. Because this (the topic by the original poster) is about exactly 
that? *smiles*



--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Content-Disposition property for a tags

2010-08-25 Thread Roger Hågensen

 On 2010-08-25 21:09, Ian Hickson wrote:

On Fri, 30 Jul 2010, Dennis Joachimsthaler wrote:

Having a Content-Disposition property on <a> tags which does the same as
the HTTP header. For example changing the file name of the file to be
downloaded, or rather having an image file download rather than it being
shown in the browser directly.

This would avoid constructs such as <a href="hi">Download</a> (Right
click and click "save target as..." to download).

It would also eliminate the need to handle such requests with a server
side scripting engine to change the headers dynamically to enforce
downloading of the content.

On Fri, 30 Jul 2010, Eduard Pascual wrote:

Let me complement the proposal with a use case:
http://stackoverflow.com/questions/3358209/triggering-a-file-download-without-any-server-request

This seems like a fine idea. I would recommend registering a new rel
type (marked as a Hyperlink Annotation) and trying to get browsers to
implement it:

http://wiki.whatwg.org/wiki/FAQ#Is_there_a_process_for_adding_new_features_to_a_specification.3F
http://www.whatwg.org/specs/web-apps/current-work/complete/links.html#other-link-types



This is kinda ironic: I've asked for the same earlier here, and upon 
looking at the WHATWG current work link there, Ian,

I navigated to http://wiki.whatwg.org/wiki/RelExtensions where I found:
"enclosure", described as "the destination of the hyperlink is intended 
to be downloaded and cached", and it's currently marked as proposed.
And it links further to http://microformats.org/wiki/rel-enclosure which 
was drafted in the summer of 2005.
And it's already in use in the wild (mostly Feed related, but by the 
looks of it elsewhere too).

May I suggest adding it to the specs? (After 5 years of proposed status 
it's kinda time to decide, right?)


Personally I'm gonna start using it myself now, as it seems some sites have 
started using it for download URLs, and I'll cross my fingers that 
browsers will catch up.
Most browsers already support rel="enclosure" for RSS feeds, so there 
shouldn't be much work to support it for HTML(5) as well.


From the microformats site I quote the following:
"The value enclosure signifies that the IRI in the value of the href 
attribute identifies a related resource which is potentially large in 
size and might require special handling. For atom:link elements with 
rel="enclosure", the length attribute SHOULD be provided."


Most content that a web designer wants to see the browser download fits 
this criteria, and now that, with HTML(5), large content that is 
intended to be streamed/played will be using the video and audio tags, 
rel="enclosure" will be relegated to mostly downloading (I'm sure 
that RSS feeds will start to adopt the new video and audio tags in some 
form later too).
So I only see it as logical to re-purpose rel="enclosure" as a download 
indicator.


The HTML specs could simply state that rel="enclosure" signifies that 
the IRI in the value of the href attribute identifies a related resource 
which is potentially large in size and might require special handling, 
and should default to a save UI or alternatively a save-or-open UI, and 
that content intended to be streamed or played directly should use the 
video and audio tags instead.


Heh, I can't believe I totally forgot that rel="enclosure" existed (for 
over 5 years). *laughs*


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] default audio upload format (was Fwd: The Media Capture API Working Draft)

2010-09-01 Thread Roger Hågensen

 On 2010-08-31 22:11, James Salsman wrote:

Does anyone object to <form><input type="file"
accept="audio/*;capture=microphone"> using Speex as a default, as if it
were specified as
accept="audio/x-speex;quality=7;bitrate=16000;capture=microphone", or
to allowing the requesting of different Speex qualities and bitrates
using those MIME type parameters?

Speex at quality=7 is a reasonable open source default audio vocoder.
Runner-up alternatives would include audio/ogg, which is a higher
bandwidth format appropriate for multiple speakers or polyphonic
music; audio/mpeg, a popular but encumbered format; audio/wav, a union
of several different sound file formats, some of which are encumbered;
etc.



Actually, wouldn't accept="audio/*;capture=microphone"
basically indicate that the server wishes to accept anything as audio?
Which means it's up to the browser to decide what is best (among a list 
of the most common audio formats, most likely).


The proper way however would be to do:
accept="audio/flac, audio/wav, audio/ogg, audio/aac, 
audio/mp3;capture=microphone"

indicating all the audio formats the server can handle.

audio/* basically says to the browser that it can send anything, even raw 
data or an image as if it was audio,
and the website would have to respond to the user that this was 
unsupported or that they should submit one of these formats instead.


Although I guess that audio/* could be taken as a sign that the browser 
should negotiate directly with the server about the preferred format to 
use. (Is a POST HEADER request even supported?)


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] default audio upload format (was Fwd: The Media Capture API Working Draft)

2010-09-03 Thread Roger Hågensen

 On 2010-09-01 21:34, David Singer wrote:

seems like a comma-separated list is the right way to go, and that audio/* 
should mean what it says -- any kind of audio (whether that is useful or not 
remains to be seen).

I would suggest that this is likely to be used for short captures, and that 
uncompressed (such as a WAV file or AVI with PCM or u-law audio) should be the 
recommended format.

If your usage is for longer captures or more specific situations, then indicate 
a suitable codec.

Shouldn't there be statements about channels (mono, stereo, more), sampling 
rate (8 kHz speech, 16 kHz wideband speech, 44.1 CD-quality, 96 kHz 
bat-quality) and so on?



Hmm! Channels, bits, and frequency should be optional in my opinion (and 
with a recommendation for a default: stereo 16-bit 44.1 kHz, which is the 
legacy standard for audio in most formats I guess, or maybe 48 kHz as 
most soundcards seem to be these days?).
In most cases a service will either A. use it as it's received (since 
most computer systems can play back pretty much anything), or B. have it 
transcoded/converted into one or more formats by the service (like 
Youtube does etc.).
In other words, I am assuming that if the server accepts for example the 
WAV format then it actually fully supports the WAV format (at least the 
PCM audio part). Ditto with MP3, AAC, Ogg, FLAC, Speex etc.


So any quality, channels, bits, or frequency specified in the accept would 
just be what the server prefers (a suggested default, or for a best 
quality/best effort scenario), but the full format should be supported 
and accepted if asked for.
Now whether the service takes advantage of surround rear recording is up 
to the service; if it simply discards that, takes the front channels and 
mixes them to mono, then that is up to the service and the user to 
decide/negotiate about rather than the browser.


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] default audio upload format (was Fwd: The Media Capture API Working Draft)

2010-09-03 Thread Roger Hågensen

 On 2010-09-04 01:55, James Salsman wrote:

Most of the MIME types that support multiple channels and sample rates
have registered parameters for selecting those.  Using a PCM format
such as audio/L16 (CD/Red Book audio) as a default would waste a huge
amount of network bandwidth, which translates directly into money for
some users.

On Fri, Sep 3, 2010 at 2:19 PM, David Singersin...@apple.com  wrote:

I agree that if the server says it accepts something, then it should cover at 
least the obvious bases, and transcoding at the server side is not very hard.  
However, I do think that there needs to be some way to protect the server (and 
user, in fact) from mistakes etc.  If the server was hoping for up to 10 
seconds of 8kHz mono voice to use as a security voice-print, and the UA doesn't 
cut off at 10 seconds, records at 48 Khz stereo, and the user forgets to hit 
'stop', quite a few systems might be surprised (and maybe charge for) the size 
of the resulting file.

It's also a pain at the server to have to sample-rate convert, downsample to 
mono, and so on, if the terminal could do it.


Here's an idea. Almost all codecs currently use a quality system,
where quality is indicated by a range from 0.0 to 1.0 (a few might go 
-1.0 to 1.0; a tuned Ogg Vorbis has a small negative range).
Anyway, 1.0 could indicate max quality (lossless or lossy) and 0.5 
would indicate 50% quality.
This is similar to what most of the encoders support (usually with a -q 
argument).


So if the server asks for, let's say, FLAC at quality 0.0 that would mean 
compress the hell out of it, vs 1.0 which would be a fast encoding.
While for a lossy codec like, say, Ogg, a quality of 1.0 would mean retain as 
much of the original audio as possible, while 0.0 would mean toss away 
as much as possible.


Combine this with a min and max bitrate value etc. and a browser could 
be told that the server wants:
Give me audio in format zxy with medium quality (and medium CPU use as 
well I guess) between 100kbit and 200kbit in stereo at 48khz between 
10seconds and 2minutes long.


Obviously with lossless formats the bitrate and quality mean nothing, 
but a low quality value could indicate using the highest compression 
available.


I guess additionally a browser could present a UI if no max duration was 
indicated and ask the user to choose a sensible one. (Maybe the standard 
could define a max length if none was negotiated, as an extra safety net?)


Oh, and for a lossless codec like FLAC there is usually a compression level; 
the higher it is, the more CPU/resources are used to compress more tightly.
So a quality indicator only makes sense for lossy, while both lossy and 
lossless should be mappable to a compression level indicator.
But I think that having both quality and compression indicators might be 
best, as many lossy codecs allow setting quality and compression level 
(plus bitrate range).


Hmm, has anything similar been discussed on video and image capture as well?
If not, then I think it's best to make sure that audio/image/video 
capture uses the exact same indicators to avoid confusion:


Bits/s: Min/max bitrate is applicable to (mostly lossy, rarely lossless) 
audio, video, video (w/audio), images.
%: Compression level is applicable to (lossy and lossless) audio, 
video, video (w/audio), images.
Seconds: Min/max duration is applicable to (lossy and lossless) audio, 
video, video (w/audio).
Hz: Frequency is applicable to (lossy and lossless) audio, 
video (w/audio).
Bits: Depth of color is applicable to (lossy and lossless) video, video 
(w/audio), images.

Chn: Channels are applicable to (lossy and lossless) audio, video (w/audio).
WxH: Width/Height are applicable to (lossy and lossless) video, video 
(w/audio), images.


Bits/s = 0-??? where 0 indicates no minimum for the Min value and no 
maximum for the Max value; otherwise the value indicates the desired bitrate in 
bits per second.
% = 0-100 where 100% is max compression if lossless or least quality if 
lossy, and 0% is no compression if lossless or max quality if lossy.
Seconds: 0-??? where 0 indicates no minimum duration for the Min 
value and no maximum for the Max value; otherwise it's a 
number indicating the Min and Max range the server allows/expects.
Hz: 0-??? where 0 indicates that anything is acceptable; otherwise 
the frequency expected.
Bits: 0-??? where 0 indicates no preference; otherwise the desired 
bit depth for the image/video, and for audio.
Chn: 0-??? where 0 indicates no preference; otherwise the desired 
channels.
WxH: 0-??? where 0 indicates no preference; otherwise the desired 
resolution.

FPS: 0-??? where 0 indicates no preference; otherwise the desired framerate.

I believe that covers most of them?

Here's an example (of values):
Video (w/audio, and both lossy)
rate=50-100
compression=25-75
duration=0-180
hz=48000 44100
chn=2 1-2
bits=16 24 32
wxh=1920x1080 1280x720 854x480 320x240-1024x768
depth=24
fps=24 50 60 
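
A tiny parser sketch for one of these values (Python; the space-separated alternatives and min-max notation come from the example above, while everything else, including the function name, is an assumption):

```python
def parse_constraint(value):
    """Parse a capture-constraint value into a list of alternatives.

    Each space-separated token is either an exact value ("48000",
    "1920x1080") or a min-max range ("100-200", "320x240-1024x768");
    per the notation above, "0" would mean "no preference".
    """
    options = []
    for token in value.split():
        lo, sep, hi = token.partition("-")
        if sep:
            options.append(("range", lo, hi))
        else:
            options.append(("exact", token))
    return options

print(parse_constraint("1920x1080 1280x720 854x480 320x240-1024x768"))
```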

Re: [whatwg] BINID (Was: Video with MIME type application/octet-stream)

2010-09-04 Thread Roger Hågensen
 indicated by a single byte right before the string itself)

So if the filetype is "Cool Test File" (obviously a non-existing 
filetype/format):

The filetype name length is 14 bytes.
The string is 0-terminated, so the string is actually 15 bytes in this case.
And thus the entire file format header is 24 bytes in this example
(8-byte BINID header, one byte indicating the ID length, the UTF-8 filetype 
name, and the null terminator).


An MP3 file, which itself doesn't really have an easily identifiable header, 
could be just like this example,

except as filetype it would say "MPEG-1 Layer-3 Audio (MP3)"
and the text length byte would naturally be the value 26.
And after the 0 termination byte, you would have a typical MP3 file.
So the full BINID header would be 8+1+26+1 for a total of 36 bytes 
before the MP3 begins.
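
The 8+1+N+1 layout is simple enough to sketch in a few lines of Python (note: the draft quoted here never gives the actual 8 magic bytes, so the PNG-style placeholder below is purely hypothetical, as are the function names):

```python
MAGIC = b"\x89BINID\r\n"  # hypothetical 8-byte magic; the draft doesn't specify it

def make_binid_header(filetype_name):
    """Build a BINID slap-on header: 8 magic bytes, one length byte,
    the UTF-8 filetype name, and a null terminator."""
    name = filetype_name.encode("utf-8")
    assert len(name) <= 255, "the binid string must fit in one length byte"
    return MAGIC + bytes([len(name)]) + name + b"\x00"

def read_binid(data):
    """Return (filetype_name, payload_offset), or None if no BINID."""
    if not data.startswith(MAGIC) or len(data) < 10:
        return None
    length = data[8]
    return data[9:9 + length].decode("utf-8"), 9 + length + 1

header = make_binid_header("MPEG-1 Layer-3 Audio (MP3)")
# 8 + 1 + 26 + 1 = 36 bytes, matching the arithmetic above.
```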


Features/Why you should use it/cool points:
The cool thing with the binary header is that it truly is fully 
platform independent.
The header is darn short, 8 bytes and then a byte for the binid text 
string size,
so the issues with Motorola and Intel 16-bit and 32-bit byte orders etc. are 
avoided.

The text string binid allows, as you saw in the MP3 example above,
a very accurate and descriptive binid. The binid is easily displayed to 
the user
in case the filetype is unsupported. No more issues hunting down 
obscure formats.
With up to 255-char-long binid strings, we won't be running out of 
binids anytime soon (a statement that will haunt me?!).

It is named rather neutrally; it's simply BIN actually, as you saw earlier,
so your customers won't wonder why your file format is named after some 
other company etc.

The file extension is almost redundant, but it should at least be .bin
or the actual filetype extension (i.e. for an MP3 it would be .mp3) to 
remain familiar to users/compatible.
This fileformat is FREE to use, no licensing needed at all (zlib 
license), there never will be, and no patent issues or anything like that.
This format is FROZEN/LOCKED, meaning it will never change. Ever. It's 
just a top layer binid.


Why:
I was tired of having to MAKE new file formats myself all the time.
Modifying current ones was not possible, as that would break file standards.
Just about all file extensions are used up or overused, making things 
confusing sometimes

(i.e. thousands of apps using the same extensions for different things).
Unknown filetypes can be a true pain to identify. (What the heck is a 
.ecf file? etc...)

Looking inside a file using a hex editor doesn't always make you any wiser.
Usually there is either no file header/file ID, or it's just 4 or 8 bytes 
stating ECF1 or something.

Again, a pain to find out what it really is...
The BIN fileid header is inspired by PNG's header, and is easy to 
implement and support.

I'll be using it for ALL my future binary file formats/filetypes,
which includes a file compression/archiver, file encrypter, media 
player, and much more.
Because it's so simple, it's also super flexible. You can easily use 
other container formats inside this one.

An OS could easily support this, even use it for executables and much more.
Actually, this is barely passable as a container format, as it's 
basically a tiny static fileid header.
The binary data/file can easily be added/attached to the end of the 
header, with no need to edit the header in any way
(i.e. no CRC checksum or size... all that is left to the actual 
filetypes/formats themselves).


just some examples:

Core Dump (Intel Byte Order)
MPEG-1 Layer-3 (MP3)
Windows Media Video (WMV)
JPEG Image
ETC.

Yeah! The description itself becomes the actual BINID.
Hopefully web browsers will catch on quickly,
as this would allow a fast and easy way to ID files that don't have a 
proper MIME type.
So in the future a MIME type of application/octet-stream won't be as 
mysterious any longer.


Another advantage I forgot to mention is that since it's basically a 
slap-on file ID header,
it is easy to add it on the fly, i.e. a webserver could even add it to 
the start of a file/stream
that is sent to the browser in case the file doesn't originally have a 
BIN header...


(C) 2010 Roger Hågensen.

***

I really should re-write it all as it's kind of messy, an early draft as 
I said,
and was first scribbled down over half a decade ago, but the BINID 
standard is solid enough,
and I'm coding in support for it (and using it) in some upcoming 
projects as well as all future ones.


Some form of central registry would be needed though, where new strings 
could be registered, looked up/searched/listed etc.
Software would not need to contact such a registry in any way; as I 
mentioned above, the BINID itself tells you what it is,
so if your software does not support the filetype you can just show it to 
the user, or even send them to Google or Bing or Yahoo to query the format,
or an official registry lookup if such is available for direct queries 
like that.
For now, I'm that registry, but ideally it should be some

Re: [whatwg] Video with MIME type application/octet-stream

2010-09-10 Thread Roger Hågensen

 On 2010-09-09 09:24, Philip Jägenstedt wrote:
On Thu, 09 Sep 2010 02:15:27 +0200, David Singer sin...@apple.com 
wrote:



On Wed, Sep 8, 2010 at 3:13 PM, And Clover and...@doxdesk.com wrote:

Perhaps I *meant* to serve a non-video
file with something that looks a fingerprint from a video format at 
the top.


Anything's possible, but it's vastly more likely that you just made 
a mistake.


It may be possible to make one file that is valid under two formats.  
Kinda like those old competitions: write a single file that, when 
compiled and run through as many languages as possible, prints "hello, 
world!" :-).


For at least WAVE, Ogg and WebM it's not possible as they begin with 
different magic bytes.




Then why not define a new magic that is universal, so that if a proper 
content type is not stated then sniffing for a standardized universal 
magic is done?


Yep, I'm referring to my BINID proposal.
If a content type is missing, sniff the first 265 bytes to see if there is 
a BINID; if there is, check if it's a supported/expected one, and if it 
is, then play away, all is good.
If a content type is given, then just in case sniff the first 265 bytes 
to see if there is a BINID; if there is, check if it's a 
supported/expected one, and if it is, then play away, all is good.
If a content type is missing, and the sniffing of the first 265 bytes 
shows it is not a BINID or not a supported one, then it can only be 
treated as unknown binary and would fail (though in the case of an 
unsupported BINID the user would be shown what the BINID is, so they 
won't be fully stuck if they miss a particular codec or the browser 
doesn't support it).
If a content type is given, and sniffing the first 265 bytes shows it's 
not a BINID or not a supported one, then treat it as per the context 
(video or audio) and hope the video or audio codec layer is able to find 
out what it is (which is what should happen currently, right?).
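
The four cases above boil down to a small decision table; here is a sketch in Python (the magic bytes are a hypothetical placeholder, since the BINID draft doesn't specify them, and the action names and function names are mine):

```python
MAGIC = b"\x89BINID\r\n"  # hypothetical placeholder magic

def sniff_binid(first_bytes):
    """Return the BINID filetype string, or None. At most 265 bytes
    matter: 8 magic + 1 length + up to 255-byte string + null."""
    if not first_bytes.startswith(MAGIC) or len(first_bytes) < 10:
        return None
    length = first_bytes[8]
    return first_bytes[9:9 + length].decode("utf-8", "replace")

def dispatch_media(content_type, first_bytes, supported_binids):
    """Sketch of the four sniffing cases described above."""
    binid = sniff_binid(first_bytes)
    if binid is not None:
        if binid in supported_binids:
            return ("play", binid)            # supported BINID: play away
        return ("show-unknown-type", binid)   # show the user what it is
    if content_type is None:
        return ("fail-unknown-binary", None)  # no type, no BINID: give up
    return ("try-codec-layer", content_type)  # fall back to today's behavior
```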


It would be very easy to add support for something like BINID, as it can
be output at the start of a file or stream as the server sends it; a
script could output it, or it could sit at the start of the actual file
itself. In the case of live streaming, a server could easily add it to
the start of the stream even mid-stream, and even a wrongly configured
webserver couldn't mess up the handling of this.
The benefit is that the browser would see: Oh, this is a BINID and it's
WebM, I'll pass this on to the video codec then. Or, for audio, seeing a
BINID marked MP3, it would pass the data to the MP3 audio codec.
In time something like BINID might even propagate beyond just video and
audio.


I'm not saying that BINID must be used, but at least something very
close to it (as unknown formats can then be shown to a human user, make
sense, and be searchable), and maybe the first 8 bytes should be
constructed slightly differently.
Although I haven't tested this, I suspect that most current codecs would
skip the first 265 bytes when they sniff for the start of the data
anyway, so a BINID would be partially backwards compatible, and in any
case it would certainly be easy to patch in support.
And the best part is that the browser could easily strip or skip past
the BINID when passing the data to the OS or codecs (if those do not
support BINID at all), or when saving the audio or video locally at the
user's request.
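Both halves described above, a server prepending the ID to an outgoing stream and a browser stripping it before handing bytes to a legacy codec, are trivial to express. This is a sketch under the same assumption as before: an invented marker standing in for the proposal's real layout.

```python
MARKER = b"BINID\x00\x01\x00"  # stands in for a real BINID; layout is invented

def serve_with_binid(chunks, fmt):
    """Server side: emit the ID first, then the payload bytes unchanged."""
    yield MARKER + fmt + b"\x00"
    yield from chunks

def strip_binid(stream):
    """Browser side: drop the ID before handing data to a BINID-unaware codec."""
    if stream.startswith(MARKER):
        _name, _sep, payload = stream[len(MARKER):].partition(b"\x00")
        return payload
    return stream  # no ID present: pass the bytes through untouched

payload = b"\x1a\x45\xdf\xa3 ...webm data..."
tagged = b"".join(serve_with_binid([payload], b"webm"))
print(strip_binid(tagged) == payload)  # → True
```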


Something like BINID (short for Binary IDentification) is needed, and
there is nothing wrong with the HTML5 video/audio standard defining it;
it wouldn't be the first time a web standard has been adopted elsewhere
later. It would surely see adoption outside of this; I certainly would
use it elsewhere.


I invented BINID for a reason: .*** file extensions just aren't good
enough, and sniffing binary files is a real pain, the same pain the
video and audio discussion here is pointing out right now.


So if sniffing is bad but can't be avoided, then why not simply
standardize the sniffing by defining a universal, simple, and
end-user-friendly identifier (the BINID can be displayed to the user
even if unknown/unsupported)? The sniffing would be limited to the first
265 bytes (in the case of the BINID proposal), and if this limited
sniffing can't determine what something is, and the context and extra
info (like content type) do not clarify what it is or what to do with
it, then simply fail and inform the user. It doesn't have to be more
complicated than that.


As simple as possible, but no simpler. Isn't that the ideal mantra of 
all coders here?


Remember, I'm not saying you must use BINID (but hey, it's there and
fleshed out already). If you must change the name, do so; if you must
change the 8-byte sequence, do so; just make sure it has a max length
and that the ID is humanly displayable if the format is unsupported.
Just make it into an RFC or something, and spec in the HTML standard
that it must be supported, 

Re: [whatwg] Video with MIME type application/octet-stream

2010-09-11 Thread Roger Hågensen

 On 2010-09-11 03:40, Silvia Pfeiffer wrote:
[snip...]


And yeah, this kinda stretched beyond the scope of the HTML5 specs,
but you'd be swatting two flies at once: solving the sniffing
issue with video and audio, but also the sniffing issue that
every OS has had for the last couple of, um... decades?! (Poke your
OS/filesystem colleagues and ask them what they think of something
like this.)
Then again, HTML5 is kind of an OS in its own right, being an app
platform (not to mention supporting local storage of databases and
files), so maybe it's not that far outside the scope anyway
to define something like this?

-- 
Roger Rescator Hågensen.

Freelancer - http://EmSai.net/



Is there a link to your BINID proposal? From reading this I wonder: 
Would it entail having to re-write all existing files with an extra 
identifier at the start?


Silvia.



http://www.emsai.net/projects/binid/details/
(It really needs to be rewritten as it's way too wordy and repetitive
for explaining something so simple; I was planning to rewrite the
document later this fall but...)


And to answer your question: unfortunately yes, but that is the only way
to solve the issue.
Some current file formats would allow such a binary ID header to be
added without any issues (as parsers scan past ID3v2 or similar meta
information anyway).
Most existing software would have no issue adding a check for such a
binary ID, and in the long run it would save CPU cycles as well.
Certain streaming/transfer protocols could be updated too, and this is
where video and audio could leap ahead.


The thing is, as I said, that a browser could easily strip off the
binary ID before passing it on, so a codec, an OS filesystem, or local
software would be completely unaware, though in time they too would
support it (hopefully).
A server-side script (PHP or Python, for example) could easily add the
binary ID to the start of a file or stream sent to the browser, or it
could even be added to the file during transcoding.
So even if the server or .htaccess is set to serve only
application/octet-stream, proper file format identification would still
be possible by the browser checking just the binary ID header.


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] VIDEO Timeupdate event frequency.

2010-09-11 Thread Roger Hågensen

 On 2010-09-11 05:23, Eric Carlson wrote:

On Sep 10, 2010, at 8:06 PM, Biju wrote:

On Fri, Sep 10, 2010 at 7:05 AM, Silvia Pfeiffer
silviapfeiff...@gmail.com  wrote:

Incidentally: What use case did you have in mind, Biju ?

I was thinking about applications like
https://developer.mozilla.org/samples/video/chroma-key/index.xhtml
( https://developer.mozilla.org/En/Manipulating_video_using_canvas )

Now it is using setTimeout, so if the processor is fast it will be
processing the same frame more than one time, hence wasting system
resources, which may affect other running processes.

   Perhaps, but it only burns cycles on those pages instead of burning cycles on 
*every* page that uses a video element.

If we use the timeupdate event we may be missing some frames, as the
timeupdate event only happens every 200ms or 250ms, i.e. 4 or 5 frames
per second.

   Even in a browser that fires 'timeupdate' every frame, you *will* miss 
frames on a heavily loaded machine because the event is fired asynchronously.

And we know there are videos which have more than 5 frames per second.

   So use a timer if you know that you want update more frequently.


Hmm! Once you get up to around 60 FPS (1000ms/60 = 16.6...ms) you are
getting close to 15ms per frame,
and unless the OS is running at a smaller timer period, that is all the
precision you can get.
I believe Windows Media Player uses 5ms periods, and the smallest
period advisable on a modern Windows system is 2ms;
1ms is most likely not consistently achievable (there will be
fluctuations) on any typical OS that is not a real-time OS (and few
end-user OSes are these days).


This would have to be synced to the display refresh rate instead (no
point processing frames that are not displayed/skipped anyway),
but I can't recall any browser exposing vsync. (Does any?)


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Video with MIME type application/octet-stream

2010-09-13 Thread Roger Hågensen

 On 2010-09-13 15:03, Mikko Rantalainen wrote:

2010-09-11 01:51 EEST: Roger Hågensen:

  On 2010-09-09 09:24, Philip Jägenstedt wrote:

For at least WAVE, Ogg and WebM it's not possible as they begin with
different magic bytes.

Then why not define a new magic that is universal, so that if a proper
content type is not stated then a sniffing for a standardized universal
magic is done?

Yep, I'm referring to my BINID proposal.
If a content type is missing, sniff the first 265 bytes and see if it is
a BINID, if it is a BINID check if it's a supported/expected one, and it
is then play away, all is good.

 From the what could possibly go wrong department of thought:

- a web server blindly prefixes files with BINID if it knows the file
suffix and as a result, a file ends up with a double BINID (server
assumes that no files contain BINID by default)
- a file has double BINID with contradicting content ids
- some internal API assumes that caller wants BINID in the stream, the
caller assumes that the stream has no BINID - as a result, the caller
will pass content with BINIDs embedded in the middle of stream.

Basically, this sounds like all the issues of BOM for all binary files.

And why do we need this? Because web servers are not behaving correctly
and are sending incorrect Content-Type headers? What makes you believe
that BINID will not be incorrectly used?


Because if they add a binary ID then they obviously are aware of the
standard.
Old servers/software would just pass the file through unmodified, as
they are unaware, so content-type issues would still exist there;
eventually old servers/software rotate out until most are binary-ID
aware.

This is how rolling out new standards works.
A server would only add a binary ID if none exists and it is certain (by
previous sniffing) that its guess is correct,
though I guess the standard could state that if no binary ID exists in a
file then none should be added by the server at all (legacy behavior?).
And by the server adding it I meant services like Youtube (if Youtube
transcodes a video to MP4, then the server knows it is delivering just
that), and likewise with streaming radio or video or similar; a regular
webserver would have no more right (or point) in modifying a served file
than it does a .zip or .mp3 that a user downloads.
We are talking mainly about streaming here, right? (Where a short
max-length sniff would be a huge benefit.)



(If you really believe that you can force content authors to provide
correct BINIDs, why you cannot force content authors to provide correct
Content-Types? Hopefully the goal is not to sniff if BINIDs seems okay
and ignore clearly incorrect ones in the future...)


I do not see why web authors (or users at all) would need to mess with
the binary ID at all;
it's authoring software or transcoding software that would add it.

My BINID proposal is just that, a proposal for a binary ID; it does not
define how servers and browsers should handle it,
as that is a different scope altogether. Something like a binary ID
would need a proper RFC write-up or similar.



I'd like to specify that the only cases an UA is allowed to sniff the
content type are:

- Content-Type header is missing (because the server clearly does not
know the type), or
- Content-Type is literal text/plain, text/plain;
charset=iso-8859-1, text/plain; charset=ISO-8859-1 or text/plain;
charset=UTF-8 (to deal with historical mess caused by IIS and Apache), or
- Content-Type is literal application/octet-stream

(In all these cases, the server clearly has no real knowledge. If a file
is meant for downloading, the server should use Content-Disposition:
attachment header instead of hacks such as using
application/x-download for Content-Type.)

Yes! But if the UA in those cases also checked for a binary ID (and
found one) there would hardly be any ambiguity.

For any other value of Content-Type, honor the type specified in HTTP
level. And provide no overrides of any kind on any level above the HTTP.
Levels above HTTP may provide HINTS about the content that can be used
to aid or override *sniffing* but nothing should override any
*explicitly specified Content-Type*. [This is simplified version of the
logic that the Mozilla/Firefox already applies:
http://mxr.mozilla.org/mozilla-central/source/netwerk/streamconv/converters/nsUnknownDecoder.cpp#684]

And for heavens sake, do not specify any sniffing as official.
Instead, explicitly specify all sniffing as UA specific and possibly
suggest that UAs should inform the user that content is broken and the
current rendering is best effort if any sniffing is required.


Any sniffing would be a fallback only, used when the UA suspects the
content type is wrong (e.g. video served as type text) or similar,
and it would not hurt to have some standardized behavior in those cases
that sniffs for something simple like a short binary ID rather than
parsing potentially several kilobytes of the stream (which was where
this discussion took

Re: [whatwg] Video with MIME type application/octet-stream

2010-09-14 Thread Roger Hågensen

 On 2010-09-13 15:55, Nils Dagsson Moskopp wrote:

Mikko Rantalainen mikko.rantalai...@peda.net wrote on Mon, 13 Sep
2010 16:03:27 +0300:


[…]

Basically, this sounds like all the issues of BOM for all binary
files.

And why do we need this? Because web servers are not behaving
correctly and are sending incorrect Content-Type headers? What makes
you believe that BINID will not be incorrectly used?

(If you really believe that you can force content authors to provide
correct BINIDs, why you cannot force content authors to provide
correct Content-Types? Hopefully the goal is not to sniff if BINIDs
seems okay and ignore clearly incorrect ones in the future...)

This. BINID may be a well-intended idea, but would be an essentially
useless additional layer of abstraction that provides no more
safeguards against misuse than the Content-Type header.

The latter also required no changes to current binary file handling —
which for BINID would need to be universally updated in every
conceivable device that could ever get a BINID file.


Yeah! That is the one short-term drawback, while the long-term benefit
is that file extensions and content types would become redundant (as the
files themselves would state what they are, in a standard way).
Oh well! I can always dream that some form of binary ID will come about
in the next decade or so, I guess... *laughs*


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Video with MIME type application/octet-stream

2010-09-14 Thread Roger Hågensen

 On 2010-09-14 08:37, Julian Reschke wrote:

On 13.09.2010 23:51, Aryeh Gregor wrote:

...

And for heavens sake, do not specify any sniffing as official.
Instead, explicitly specify all sniffing as UA specific and possibly
suggest that UAs should inform the user that content is broken and the
current rendering is best effort if any sniffing is required.


This is totally incompatible with the compelling interoperability and
security benefits of all browsers using the exact same sniffing
algorithm.
...


Again, there's more than browsers. And even for video in browsers, 
the actual component playing the video may not be part of the browser 
at all.


So there's *much* more that would need to implement the exact same 
sniffing.


Has anybody talked to the people responsible for VLC, Windows Media 
Player, and Quicktime?


Best regards, Julian




Good question. I can only speak for myself as a developer, but I imagine
that anything that allows a media player to stop sniffing sooner in a
file is very welcome indeed,
as that saves resources (memory, disk access, faster initialization of
the codec and user-related interface feedback, etc.).
Legacy files will always be an issue obviously, but there is no reason
to let future files remain just as difficult; eventually legacy files
will vanish, be transcoded, or have their beginnings patched to take
advantage of it.
(In the case of my proposal one could easily add it by hand using a hex
editor, so it's certainly not difficult to support in that regard.)


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Video with MIME type application/octet-stream

2010-09-16 Thread Roger Hågensen

 On 2010-09-16 15:17, Mikko Rantalainen wrote:

2010-09-13 16:44 EEST: Roger Hågensen:

  On 2010-09-13 15:03, Mikko Rantalainen wrote:

And why do we need this? Because web servers are not behaving correctly
and are sending incorrect Content-Type headers? What makes you believe
that BINID will not be incorrectly used?

Because if they add a binary id then they obviously are aware of the
standard.

And because Apache developers were obviously aware of the Content-Type
header they implemented it correctly?... I
also fail to see future where server software vendors provide perfect
implementations.

We can dream, can't we? *smiles*

Old servers/software would just pass the file through as they are
unaware so content type issues still exist there,
eventually old servers/software are rotate out until most are binary id
aware.
This is how rolling out new standards work.
A server would only add a binary id if none exist and it's certain (by
previous sniffing) that it's guess is correct,

How about we add a new parameter to Content-Type header instead? For
example, the server could send a file with a header such as

Content-Type: text/plain; charset=iso-8859-1; accuracy=0.9

and a conforming user agent should assume that there's a 90% probability
that the given content type is correct. If accuracy is 1.0 (100%) then
sniffing MUST NOT be done. If the sniffing the UA is doing has a 95% hit
rate, the results from such sniffing should probably be used instead of
the HTTP-provided content type if the server-provided accuracy is less
than 0.95. I'll make it explicit that any sniffing done by a UA should have a
probability of error attached to the result. A sniffing result without
probability for error has no value because otherwise a literal
text/plain is a good heuristic for any file (see also: Apache).

This way server administrators could opt-out from any sniffing and an
incompetent server administrator could specify global accuracy of 0.1 or
something like that. Hopefully new web servers would then either provide
no default accuracy at all or specify some low enough value that allow
for sniffing.

My point is that there's no need for BINID. My suggestion above is
compatible with existing servers, content and UAs, as far as I know. In
addition, it would provide a way to declare that the given content type
should be trusted even if UA thinks that honoring the content type
causes problems for viewing the content.


Now we're getting somewhere, I really like this proposal.
Actually, a binary ID would complement your idea: a server supporting
one could send accuracy=1.0 in that case.

So I have to say that your accuracy parameter seems quick to add and
support (both in the HTTP header, and in HTML5 meta and other
appropriate places?).


And I doubt the Apache Foundation would have much trouble supporting
this either.

I guess we'll have to see what the rest of this list thinks, but a
solution this slick...
it certainly has my vote. Nice work, Mikko. *thumbs up*
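The accuracy parameter quoted above would be straightforward for a UA to act on. A sketch of the client-side decision (the parameter itself is hypothetical, nothing ships it; the 0.95 hit rate mirrors the figure in Mikko's example):

```python
from email.message import Message  # stdlib parser that handles header parameters

def should_sniff(content_type_header, sniffer_hit_rate):
    """Sniff only when the UA's sniffer beats the server's declared accuracy."""
    msg = Message()
    msg["Content-Type"] = content_type_header
    accuracy = float(msg.get_param("accuracy", failobj=0.0))
    if accuracy >= 1.0:
        return False   # per the proposal, accuracy=1.0 forbids sniffing
    return sniffer_hit_rate > accuracy

print(should_sniff("text/plain; charset=iso-8859-1; accuracy=0.9", 0.95))  # → True
print(should_sniff("video/webm; accuracy=1.0", 0.95))                      # → False
```

Treating a missing accuracy parameter as 0.0 (always sniffable) is one possible default; the proposal leaves that choice to new servers.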

--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Proposal: add attributes etags last-modified to link element.

2010-09-19 Thread Roger Hågensen

 On 2010-09-20 02:37, Aryeh Gregor wrote:

2010/9/19 Julian Reschkejulian.resc...@gmx.de:

So it's a workaround that causes a performance optimization. It wouldn't be
necessary if the linked resource would have the right caching information in
the first place.

Sure it would.  You can currently only save an HTTP request if a
future Expires header (or equivalent) can be sent.  A lot of the time,
the resource might change at any moment, so you can't send such a
header.  The client has to check every time, and get a 304, even if
the resource changes very rarely.  If you could indicate in the HTML
source that you know the resource hasn't changed, you could save a lot
of round-trips on a page that links to many resources.



Describing it that way sounds odd, as that would mean explicitly
indicating which resources are still valid;
imagine the bloat if the document links to hundreds of others.

It would be better to define this as explicitly indicating which
resources are NOT valid any longer,
as with most sites/web applications this would be only a select few links.

I like the idea though, as it allows a page to tell the browser: Oh
BTW! If you happen to have this link cached, it was last updated on
... You might want to re-check it if you have an older copy, despite
what the cached copy's expiry says.
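The check the browser would make against such a hint is a simple date comparison; a sketch, where the page-provided hint is hypothetical (no such attribute exists), but the date format is the standard HTTP-date used by Last-Modified headers:

```python
from datetime import datetime, timezone

HTTP_DATE = "%a, %d %b %Y %H:%M:%S GMT"  # format used by Last-Modified headers

def needs_recheck(cached_last_modified, page_hint):
    """Revalidate only when the page claims the resource is newer than our copy."""
    cached = datetime.strptime(cached_last_modified, HTTP_DATE).replace(tzinfo=timezone.utc)
    hinted = datetime.strptime(page_hint, HTTP_DATE).replace(tzinfo=timezone.utc)
    return hinted > cached  # stale cache entry: issue a conditional GET

print(needs_recheck("Mon, 20 Sep 2010 10:00:00 GMT",
                    "Tue, 21 Sep 2010 10:00:00 GMT"))  # → True
```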


Some thought needs to be given to this though. This would only be
same-domain, right? If not, it could partly be used for a DoS. (If a
popular site is compromised and changed to link to a ridiculous number
of files on other sites, it could get nasty, right?)



--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Html 5 video element's poster attribute

2010-09-20 Thread Roger Hågensen

 On 2010-09-20 05:09, Robert O'Callahan wrote:
On Sun, Sep 19, 2010 at 10:57 PM, Shiv Kumar sku...@exposureroom.com 
mailto:sku...@exposureroom.com wrote:


I'm glad to see that people do see the need to change (or specify
in more detail) the behavior of the poster at least insofar as it
disappearing before the video is played. As far as I know, every
major browser (IE 9 beta, Firefox, Safari, Chrome and Opera) do this.


As Monty discovered, Firefox does not make the poster disappear until 
the video is played. Feel free to file bugs against any browsers that 
behave differently; just because the spec allows something doesn't 
mean it's a good idea!


Tightening up the spec to require the poster to remain until the video 
is played does sound like a good idea.


We do need a spec change to allow the poster to be shown when the 
video has ended, if that is the most commonly desired behavior.


Rob
--
Now the Bereans were of more noble character than the Thessalonians, 
for they received the message with great eagerness and examined the 
Scriptures every day to see if what Paul said was true. [Acts 17:11]


The proper behavior should be that...
If there is a start poster, then it must be displayed while any
pre-loading takes place; if there is no pre-loading or auto-play, then
it must remain displayed until the user presses play/pause/stop.
The start poster must be removed once the user presses play/pause/stop
or pre-loading has progressed far enough for autoplay to start.


I'd also like to add that...
If the user pauses the video during play, then a paused poster must not
be shown, as the user most likely intends to study the paused frame of
the video. If there is a paused poster, then the user must be able to
toggle between paused poster and frame as they please; a small symbol or
control may be shown over the paused frame to indicate that a paused
poster is available.


And I'd also like to add that...
If there is an end poster, then it must be displayed when the user stops
the video or when the last frame of the video is reached.


Finally, I'd like to add that...
There may be one or more posters; the start/pause/end posters may be the
exact same poster or different posters.
The 3 different types of posters must be scriptable to allow rotation
between multiple different posters.
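The proposed rules reduce to a small lookup; a sketch of the selection logic (the paused/end poster slots are made up for this proposal, as only the start poster attribute exists in the spec):

```python
def poster_to_show(state, posters, user_toggled=False):
    """Pick which image (if any) covers the video surface under the rules above."""
    if state == "before-play":
        return posters.get("start")      # shown until playback begins
    if state == "paused":
        # Never steal the paused frame; show only on an explicit user toggle.
        return posters.get("paused") if user_toggled else None
    if state == "ended":
        return posters.get("end")        # end poster replaces the last frame
    return None                          # playing: video frames only

p = {"start": "intro.jpg", "paused": "card.jpg", "end": "credits.jpg"}
print(poster_to_show("paused", p))                     # → None
print(poster_to_show("paused", p, user_toggled=True))  # → card.jpg
```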



Does this sound like something that would satisfy both developers and
users, without annoying either?


Personally I do not see much point in a paused poster, but I guess if
it's scriptable it could act as an info card, or maybe repeat the
chapter title or something like that!


Do posters support hotzones at all, to allow clickable
items/links/symbols (URLs)? Just curious!



--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Html 5 video element's poster attribute

2010-09-20 Thread Roger Hågensen

 On 2010-09-20 05:27, Chris Pearce wrote:
Right, so you want to be able to toggle the poster back on (when the 
media is paused or ended) but after playback has started.


I wonder if these are separate use cases, e.g. whether users would 
want to display a different image from the poster image in these 
cases, i.e. whether we need to provide an attribute to specify an 
image to display when paused and another new attribute for an image to 
display when playback has ended. I wonder if that's overkill though.


No no no! Read my previous post on why a paused poster is a bad idea
unless done exactly as I suggested there.
A paused poster should under no circumstances steal the paused frame;
the user may actually want to look closer at the paused frame, and if a
paused poster forces itself to be displayed the user will be pretty
annoyed. (I certainly would be.)
The video streaming service Voddler is an annoying example of this:
pause the movie in their player and an ad is shown. Although I
understand why they wish to show an ad, it does make it impossible to
pause and look at a still frame of the video.


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] File Upload Progress Event (Upload Progress)

2010-09-20 Thread Roger Hågensen

 On 2010-09-20 10:16, Olli Pettay wrote:



I do think browser UI for large uploads is terrible and needs to
be fixed.


I agree!


Yeah, the UI is terrible, but that is about browser implementations and
not about any specification.


Well! There is nothing preventing the specs from providing a minimum UI
guideline that should be followed by UAs.


Heck, Microsoft has huge style guides on how to do UI that developers
should follow;
I do not see why the HTML specs can't provide similar guidelines.

Now if a UA has what it believes is a better UI then that's fine, but as
a minimum it should at least implement the baseline UI as outlined in
the specs.
This way all UIs for HTML will have a common baseline, which can only
improve usability for the users, which is the key point regardless, right?


So things like upload/download and video/audio etc. really should have a
guideline to ensure a baseline UI that is consistent for the user.


If a form submission takes too long (a guideline for what too long
means should also be specced as advisory), it should provide a progress
report, either a progress bar or an ETA or a combination of the two, and
it should let the user pause/continue (that depends on server features,
I guess) or even abort.

If this is clarified then maybe we won't need a dozen different
uploaders that may or may not work in this or that browser.
I've seen web-form uploading, Flash-based uploading, Java-based
uploading, JavaScript-based uploading, and even browser plugins.

I think the browser could do the uploading better and more safely;
the other methods can then be used as fallbacks for older browsers, or
as alternatives in case the user prefers one of them over the built-in
implementation.


There's an old saying: if it's worth doing, it's worth doing right.
Downloading has gotten a lot better; heck, Opera supports .torrents,
which is brilliant if a website provides webseeds as an alternative to
just a direct download.
Uploading is really shaky: Google Chrome actually has upload progress,
while Firefox does not (or rather it does, but it's broken?).


Interesting read 
http://michaelkimsal.com/blog/why-do-browsers-still-not-have-file-upload-progress-meters/

Some of the comments might be worth looking over too.
Firefox upload progress bug 
https://bugzilla.mozilla.org/show_bug.cgi?id=249338


I think what Google Chrome does is adequate. I haven't tested multiple
uploads at the same time, but am I right in assuming that the upload
progress bar in Chrome is per page?
So that the user could upload to two different sites and flick between
tabs to check the progress?
What I haven't noticed Chrome having is an upload overview window, so
that you could see the progress of all current uploads, similar to how
you can see all current downloads in, say, Firefox.


I've also missed the ability to queue downloads (and uploads); usually
the darn browser tries to download everything at once, so being able to
set a maximum number of simultaneous downloads and uploads would help.


Using the search terms browser upload progress bar gives over a
million hits on both Google and Bing;
the terms upload progress bar give around half a million hits.

And the first few pages are all pretty much about showing progress while
uploading files with the browser, with various solutions.


I know how painful it is to upload stuff without progress feedback. I
recently released 3 albums on indieTorrent.org, and their old (current)
upload uses a basic HTML form.
Luckily they advised using Chrome since it showed upload progress (this
was the first time I discovered Chrome had this), which let me stay sane
as I uploaded 58 audio tracks in lossless FLAC format.
I can't even imagine how frustrating it would be to upload a huge video
file with a normal upload form and no progress info.


So Chrome has upload progress and Firefox has a broken one; what do the
current/upcoming IE, Opera, and Safari browsers have?


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Form element invalid message

2010-09-20 Thread Roger Hågensen

 On 2010-09-21 00:38, Shiv Kumar wrote:


Scenario 1:

We now have the option to define whether an element is required, and
the form will validate the values of such elements before submission.
That's a step in the right direction. However, it so happens that
different implementations do different things when the validation
returns false.


Some browsers, have no visual indication (probably due to lack of 
support at this time)


Some browsers will outline the field in question

Some will pop up a message under the field in question. The message is 
something cryptic like You have to specify a value.


Scenario 2:

That's one aspect I'd like to talk about. The other aspect is that 
typically, you don't want to show only one error as a time to the end 
user. You want to show them all validation errors after trying to 
submit the form one time (this is the common practice as well), rather 
than forcing them to submit a form multiple times to discover 
validation issues one by one. As you can imagine this is a nightmare 
for the end-user.


For the first scenario I'd like to propose that we have a 
validationMessage attribute (or some other name) that allows web 
developers to specify a more appropriate (based on the type of input 
data required and/or the input type such as text, url, email etc.), 
user friendly/business friendly and creative error message rather than 
some unknown message (as different vendors will likely have their own 
verbiage).


For the second scenario I guess the spec should be clear about 
validating all fields? I'm not sure what the spec for this is (I can't 
seem to find anywhere that details the validation process for forms).


What are your thoughts on the above?




Hmm! Like an error= attribute or something like that, which is shown
instead of a generic built-in browser message?


Take this example http://www.quirksmode.org/dom/error.html


I assume you wish to be able to do this instead:
<input size="20" name="email" id="email" error="A valid email must be
entered!" type="email" required>


and that if it is: <input size="20" name="email" id="email" error=""
type="email" required>

or if it is <input size="20" name="email" id="email" type="email" required>
then the built-in browser error message for type=email should be shown?

This still leaves the question of where/how to display the error.
Some might want a balloon tip when the field loses focus (this is how
the Windows OS does it, and IMO it works best); the error text could be
shown then.


If the browser does not behave like that but only validates on
submission, it could mark the fields and perhaps provide a mouse-over
tip explaining the error (the error text could be shown then).


Or the error text could be shown like in that example, but that would
require an errorfor attribute similar to how label for behaves.



I think it would be best to spec it so that a balloon tip with the
browser's error message (or the input field's custom error message) is
displayed when the field loses focus, optionally keeping the field
highlighted until the input is acceptable, thus allowing the user to
fill out the rest of the form and fix the error last, and obviously not
allowing submission until all errors are fixed. If no fields have been
filled and the user tries to submit, then all required fields would be
highlighted, and maybe a default OS error ding should be played too.


I believe this is the best UI implementation as it's instant feedback based.


--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] File Upload Progress Event (Upload Progress)

2010-09-20 Thread Roger Hågensen

 On 2010-09-21 00:59, Nils Dagsson Moskopp wrote:

21.09.10 Roger Hågensen <resca...@emsai.net>:


Well! There is nothing preventing the specs from providing a minimum
UI guideline that should be followed by UAs.

UAs compete on interface, too. As long as the standards are open and
consistent, why should implementations be constrained more than
absolutely necessary? Keep in mind that HTML is already one of the most
important file formats and may be for decades to come.

I recommend filing bugs with UAs to get these deficiencies cleared up.


Please note my wording: "should", in that UAs should follow the guideline.
I think Google Chrome's upload progress bar is a perfect example of a 
baseline guideline for how browsers should implement it.
If browsers then add a more evolved upload progress on top of that, that 
is obviously awesome.



--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] [IndexedDB] Simultaneous setVersion from multiple pages

2010-09-22 Thread Roger Hågensen

 On 2010-09-22 21:56, ben turner wrote:

Sorry folks, this went to the wrong list! Please ignore.

-Ben

On Wed, Sep 22, 2010 at 12:55 PM, ben turner <bent.mozi...@gmail.com> wrote:

Hi folks,

While implementing the latest setVersion changes I came across this problem:

Let's say that a site is open in two different windows and each
decides to do a setVersion request at the same time. Only one of them
can win, obviously, and the other must end up calling close() on
itself or the setVersion transaction will never run (and all database
activity will basically hang at that point).

Jonas and I decided that this situation should result in the losing
database's request having its error handler called with DEADLOCK_ERR.
Does that sound reasonable to everyone?

-Ben



Hehe! That's OK.
But to make a suggestion: wouldn't simply returning a "busy / please 
try again" error work just as well? Each open window is acting as a 
separate connection, so treat them as such. What do other DBs like 
MySQL do? My guess is a busy/try-again error, right?




--
Roger Rescator Hågensen.
Freelancer - http://EmSai.net/



Re: [whatwg] Proposal: add attributes etags last-modified to link element.

2010-12-09 Thread Roger Hågensen

On 2010-12-08 20:44, Ian Hickson wrote:

On Mon, 20 Sep 2010, Roger Hågensen wrote:

It would be better to define this as explicitly indicating which
resources are NOT valid any longer, with most sites/web applications
this would only be a select few links.

Doing that would require knowing what the browser's cache contains.


Actually it would help the browser display content faster and with 
less bandwidth use:

the HTML document would carry last-modified for link elements, and the 
browser then just checks whether the linked resource is cached and, if 
it is, whether the last-modified differs.


Currently the browser has to make a conditional last-modified HTTP 
request for the linked resource.
A link or script with an href is less likely to change than the HTML 
contents on the majority of sites,
so being able to hint to the browser that this CSS or that JavaScript 
has not changed saves the browser the (multiple) roundtrips to check 
the last-modified of the CSS or JS file.


So a last-modified attribute just lets the web author tell the browser 
cache whether the link is stale or not.

So it's:
1. HTTP header or HTTP GET of the HTML, if it is not cached, is stale, 
or cache heuristics think last-modified should be re-checked.

2. Do the same with all hrefs, sources etc. in the HTML document.
vs
1. Same as 1 above, but last-modified hinting of href and src allows the 
browser to skip step 2 (in well-authored or well-made template-based 
pages, obviously).


Damn. I think I skewed this whole topic away from its original subject 
towards last-modified being supported by all link/href/src etc. in HTML 
in general.
Which may not be a bad idea really, as a last-modified=timestamp_here 
(timestamp as in RFC 2822, http://www.ietf.org/rfc/rfc2822.txt )
would only be a few bytes versus an HTTP header call or a full HTTP GET 
call, plus the latency/delay on top of that.

Shortening last-modified to modified might be something to consider as well.
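
A minimal sketch of the cache decision this hint would enable, assuming the attribute carries an RFC 2822 timestamp as described; the attribute and function names are illustrative, not from any spec:

```javascript
// Sketch: with a last-modified hint in the HTML, the browser can decide
// locally whether the cached subresource is still fresh, instead of
// making a conditional HTTP request. Hypothetical logic, not a real API.
function needsRevalidation(hintedLastModified, cachedLastModified) {
  // No hint or no cached copy: fall back to normal HTTP revalidation.
  if (!hintedLastModified || !cachedLastModified) return true;
  return Date.parse(hintedLastModified) !== Date.parse(cachedLastModified);
}

console.log(needsRevalidation("Wed, 08 Dec 2010 20:44:00 GMT",
                              "Wed, 08 Dec 2010 20:44:00 GMT")); // false
console.log(needsRevalidation("Thu, 09 Dec 2010 10:00:00 GMT",
                              "Wed, 08 Dec 2010 20:44:00 GMT")); // true
```
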

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Bluetooth devices

2010-12-14 Thread Roger Hågensen

On 2010-12-14 16:12, Bjartur Thorlacius wrote:

On 12/13/10, Diogo Resende <drese...@thinkdigital.pt> wrote:

Bjartur, I think you misunderstood our point. The idea is to have a way of
accessing this kind of device (not necessarily via BT or USB). What makes
these devices different is that they're not keyboards, mics, headphones
or cameras.


I still don't grasp how that could be useful. Please provide an example.
So you've got a non-kb, mouse, headphone or camera device, say a
permanent storage drive. There's no use in directly accessing the
device. If the app is a video stream filter, it can declare that it
takes a video stream as an input. The app only cares about the stream
being of MIME type video and potentially the encoding, not whether
the stream comes from a disk, camera, ethernet or tape.

Applications should not request keyboard access. They don't have to
care about keycodes and keyboard layouts. That's what OSes are for.
They request text. I fail to see what's so different about other
devices. In fact, applications shouldn't have to account for the fact
that there's some such thing as devices at all.


I have to agree with this; applications should be device agnostic.

If a particular type of device provides some form of data not currently 
supported,
then the standards should be extended to support it, but the device 
handling should still be kept in the drivers.
If apps start talking to hardware directly (bypassing 
drivers/OS/browsers/APIs) then that is just one huge bugfest waiting to 
happen.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Bluetooth devices

2010-12-16 Thread Roger Hågensen

On 2010-12-15 18:02, Aryeh Gregor wrote:

On Tue, Dec 14, 2010 at 10:12 AM, Bjartur Thorlacius
svartma...@gmail.com  wrote:

I still don't grasp how that could be useful. Please provide an example.
So you've got a non-kb, mouse, headphone or camera device, say a
permanent storage drive.

No, not something so general-purpose.  Say it's some type of device
where the market is so small that standardization is infeasible --
maybe it's only useful in a particular specialty, and there are only
one or two low-volume vendors.  Or maybe it's some new type of device
where the market is uncertain and nothing has been standardized yet.
Given that there's no standard high-level way to interact with the
device, it might be desirable to have *some* way to interact with it,
necessarily generic and low-level.  Probably along the lines of
sending and receiving binary messages.

At least that's the general idea I get.  I can't give any specific
examples, but I don't think mass-market stuff like permanent storage
drives is what we're talking about here.  (We already have filesystem
APIs in the works anyway, right?)  Of course, more specific real-world
use-cases would be necessary before anyone would consider speccing
something like this.



Something that specific would be better implemented as a browser plugin 
that wraps an OS API or an OS driver's API.
If it becomes popular, one or more browser developers would probably 
be interested in supporting it without the need for a browser 
plugin/wrapper,
at which point one just needs to follow the guidelines that Ian posts 
here quite frequently to get it standardized.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Processing the zoom level - MS extensions to window.screen

2010-12-18 Thread Roger Hågensen

On 2010-12-18 18:58, Charles Pritchard wrote:

On 11/24/2010 10:23 AM, Boris Zbarsky wrote:

On 11/24/10 4:13 AM, Charles Pritchard wrote:

 And, these aren't great lengths. It's about 6 lines of javascript.

Uh... That depends on how your drawing path is set up. If I understand
correctly what you're doing, you have to get the DPI ration (call 
it N),

change the canvas width/height by a factor of N, and change all
coordinates in all your drawing calls by a factor of N, right?


You're correct, I grab DPI, let's call it xN and yN; I change the canvas
width/height.
Then I run .scale(xN, yN) before my drawing calls. They're completely
agnostic
to the change.


Ah, I see.  I assumed you were actually trying to draw the fonts at 
the right size for the viewer, see, as opposed to doing an image 
upscale of text rendered at a smaller size.




I frequently use scale(n,n) and scale(1/n,1/n) styles, as well as 
translate, to set the offsets and ratio of my fillStyle

when it's a pattern or gradient.

Transformations are widely used by feature-rich canvas apps.
font = (fontSize * fontScale) + 'px sans-serif';  is by no means foreign.

While  translate can be used as a short-cut, for while-loops,
its most important purpose is offsetting the fill style when painting 
on textures.


Wouldn't a global (per-canvas) flag that switches coordinates to 
pseudo-pixels which are automatically translated (and thus DPI 
aware/scaled) make things a lot easier?
Windows does this, and so do recent versions of the Mac OS and Linux 
GUIs.


In some Windows programs I make, I get the OS DPI and calculate a 
modifier which is applied to all sizes, so that 300px is scaled by, say, 
1.07 if the user/OS is set at 102.72 DPI (well, Windows would show this 
as 103 since it doesn't support fractional DPI, but my programs do).
But it would have been much easier if the scaling was automated under 
the hood and I could just use pseudo/virtual pixels; currently I have to 
wrap or apply scaling myself, whereas .NET handles it for you.
HTML (or rather CSS) has the em unit, so canvas should be able to do 
the same, right?
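
The manual scaling described above can be sketched as follows, assuming the usual 96 DPI baseline; a hypothetical pseudo-pixel canvas mode would simply apply this factor under the hood. Function names are illustrative.

```javascript
// Sketch of manual DPI scaling (assumes a 96 DPI baseline, as Windows
// uses). A pseudo-pixel canvas mode would do this automatically instead.
function dpiScale(osDpi, baseDpi = 96) {
  return osDpi / baseDpi; // fractional DPI is preserved, e.g. 102.72 DPI
}
function scalePx(px, osDpi) {
  return Math.round(px * dpiScale(osDpi));
}

console.log(scalePx(300, 102.72)); // 321, i.e. 300px scaled by 1.07
```
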



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Device Element

2010-12-28 Thread Roger Hågensen

On 2010-12-28 09:53, Silvia Pfeiffer wrote:

How about making a  concrete proposal as to what it should look like?
If Google was to implement it and turn it into a concrete proposal, I
wouldn't have a problem with it either. As it is right now the spec
for usb/RS232 is useless IMHO.

Silvia.


Yeah! And not to mention the security bomb.
I don't like the idea of a web app accessing my USB stick without my 
permission.

So that means that all browsers would need to ask the user for permission.
That is at the Browser level.

Then there is the OS and/or driver level, which everyone on this list 
wanting this has so far been forgetting:
OS privilege levels. An administrator (school, work, library, 
fire/police/hospital, or home network) might have set the OS to not 
allow a regular user account to access, say, a USB stick or other USB 
or serial device.

Windows Vista+, Mac OS X, and Linux all do this.

Unless something was blocked by an admin, anything available in 
user mode is available to anything else in user mode.


So say all browsers supported an arbitrary USB/serial device API: get 
device config, set device config, read data, write data. Those are the 
basics, right? (Any more than that and it's no longer generic.)


And they would also need to let the user explicitly choose which 
device should be exposed.
Maybe the browser could, when asked by a web app to access a device, 
simply show a prompt informing the user that this app wants device 
access,
then list all devices the OS presents (readable/writable state etc.; 
admin-disabled ones are not listed), unless the web app asked for a 
specific device, in which case only the matches or the exact match are 
listed.
If the user allows the web app access, then and only then does the web 
app get access to the device(s) the user specified.
The web app should also be marked as secure (HTTPS) or not, and the 
user should be able to set the browser to ignore non-secure (HTTP) web 
apps, etc. (Can web apps be signed with a certificate at all?)
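
The two-stage filtering described above could be sketched like this; all field names and the helper itself are illustrative assumptions, not any real API:

```javascript
// Sketch: of the devices the OS reports, hide admin-disabled ones, and
// expose to the web app only those the user has explicitly approved.
function exposableDevices(osDevices, userApprovedIds) {
  return osDevices
    .filter(d => !d.adminDisabled)                 // OS/admin layer
    .filter(d => userApprovedIds.includes(d.id));  // user consent layer
}

const devices = [
  { id: "usb-stick", adminDisabled: true },
  { id: "weather-sensor", adminDisabled: false },
  { id: "webcam", adminDisabled: false },
];
console.log(exposableDevices(devices, ["weather-sensor"]).map(d => d.id));
// [ 'weather-sensor' ]
```
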


Don't get me wrong, I understand those of you advocating so hard for 
this on the list, but the issue is that weather data and test stats, 
although similar, are different enough that a generic API is needed,
and a generic API needs a lot of security precautions. I'm sure many 
here have a USB harddrive hooked up to the system; the last thing 
you want is for some web app that seems like a harmless microphone 
voice FX toy to suddenly barge through your harddrive, right?

Or worse, that weather app starts poking around your microphone or webcam.
A lot of people have various devices hooked up, and since USB is so 
versatile there is anything from:
recording devices (mic, cam, etc.) to output devices (speakers, 
headphones, mini displays/embedded keyboard screens, picture frames etc.), 
to networking (network cards/routers, controllers for household electrics),

and who knows what else.

So, to the remark someone made that the security trade-off is worth it? 
Nuh-uh. Nothing on the net should ever have direct access to any 
input/output/storage device or similar at all.
Any web app (I say "web app" as I consider HTML, Java, and Flash to be 
in the same boat on this particular issue) should go through three 
layers of security:
the browser layer (the listing/prompt I described above), the user layer 
(if the OS supports it, let the user dictate which software can use 
which devices), and then the OS layer (admin settings, intranet, driver 
config etc.).


I know some people here are drooling at the idea of driverless USB 
devices that a web app talks to directly, but it's never going to happen.
The OS (or admin configuration) still controls which devices are 
available, even if they are HID.
And no browser would allow blind access to the OS's devices; a few major 
scandals and people would flee from that browser like crazy. (I think 
almost every major browser dev here has been through such a crappy event, 
and it ain't fun.)


Now, I'm no USB expert, but isn't it possible for a USB device to 
provide a user-level driver when it is plugged in?

If so, then do that for the device (a user-level USB HID device driver?).
Then provide a URL to the web app. The browser will/should ask the user 
if it's OK for web app Z to access the "blah" device; the user clicks yes 
and off ya go.

As a dev I know that an admin can (and should be able to) disable 
user-level drivers (or the ability to use them) for regular users in 
the OS.
Likewise a school, library, or other public service system might want to 
configure the browser to not allow web apps to access hardware directly,
in which case the browser would show a box saying "No Devices Found" or 
"This device is not allowed on this system" or something similar.


If we allow webapps to tunnel straight through the Browser, the User, 
the OS, the Drivers, and access the Device directly you are 

Re: [whatwg] Device Element

2010-12-28 Thread Roger Hågensen

On 2010-12-28 03:16, Seth Brown wrote:

I also believe that the working group should make the device element
spec a high priority. If you don't google will probably implement
their own version for chrome OS(it will be necessary in a browser
based OS model).

Thanks,
Seth Brown


Not really! Chrome OS will do web apps the same way Firefox, Safari, 
Opera, and IE will do them.
What Chrome OS will probably do that the browsers don't is "Apps", 
kind of like Android Apps and iPhone Apps.
Web apps still work on Android and iPhone, right? So it will hardly be 
any different on Chrome OS.
An "App" will always have more control than a "web app", and a driver 
will always have more control than an "App", especially if the driver 
lives in a different privilege level.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Low Memory Event

2011-01-01 Thread Roger Hågensen

On 2011-01-02 03:27, Kornel Lesiński wrote:
On Sun, 02 Jan 2011 00:53:48 -, Charles Pritchard 
ch...@jumis.com wrote:


ArrayBuffer and Canvas use contiguous memory segments. You don't need 
a complex GC pass to let those ones go.

For my use cases, those are the two types I'm working with.

Keeping them around helps the speed of my app, letting them go
cuts down on memory usage.


Maybe better solution would be to add purgeable flag to canvas (i.e. 
allow browser to clear canvas at any time) or some way to create/mark 
weak references? (i.e. a reference to object that can be changed to 
null if browser is in low-memory situation).


Although, I'm not convinced that handling of low memory in JS is 
necessary. Browsers already have some ways to free memory, e.g. by 
freeing all bitmaps for img or simply by unloading whole pages.


Amount of available memory, even on mobile devices, increases 
dramatically each year. It's possible that by the time this feature 
gets specified, implemented, released and installed on significant 
number of devices it will be irrelevant. On my desktop computer I 
often have 100 tabs open and memory is not an issue, and my mobile 
phone has 1/16th of that RAM already.




I think this is starting to get off topic, as we're now into OS memory 
allocation territory.
If the browser is told by the OS, or itself feels it needs to conserve 
memory, it can do whatever it pleases; OS stability trumps any web 
page/app/script, and should always do so.


Charles, you initially said you were worried about this since you use 
undo buffers.
Why not simply add undo buffers to the Canvas spec? That way the browser 
can start tossing the oldest undo buffers automatically when it starts 
running low on memory.
And depending on the browser implementation and the OS and hardware 
support, on some systems the Canvas undo buffers could even live in 
graphics memory.
It's wrong for Canvas undo state to sit in active memory; most graphics 
programs store undo data on disk, or it's at least paged out to a swapfile.
So if you have to make your own undo buffers for Canvas, then I'd say 
that Canvas is lacking and might need undo buffers as part of its spec.
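
A sketch of what such a spec'd undo buffer might look like, with eviction of the oldest snapshots under memory pressure; the class and method names are hypothetical, not part of any Canvas spec:

```javascript
// Sketch: a capped undo stack. Under memory pressure the browser could
// silently drop the oldest snapshots instead of failing. Hypothetical API.
class UndoBuffer {
  constructor(maxEntries) {
    this.max = maxEntries;
    this.stack = [];
  }
  push(snapshot) {
    this.stack.push(snapshot);
    if (this.stack.length > this.max) this.stack.shift(); // evict oldest
  }
  undo() {
    return this.stack.pop(); // most recent snapshot, or undefined
  }
  // Low-memory hook: keep only the n newest snapshots.
  trimTo(n) {
    this.stack.splice(0, Math.max(0, this.stack.length - n));
  }
}

const buf = new UndoBuffer(3);
["a", "b", "c", "d"].forEach(s => buf.push(s)); // "a" is evicted
console.log(buf.undo()); // "d"
```
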



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Pressure API?

2011-01-05 Thread Roger Hågensen

On 2011-01-05 02:39, Ian Hickson wrote:

On Wed, 20 Oct 2010, Jens Müller wrote:

now that device orientation, geolocation, camera etc. have been spec'ed:
Is there any intent to provide an API for pressure sensors?

This might well be the next hip feature in smartphones ...

Oh, and while we are at it: Humidity probably belongs to the same group.

I haven't added these features to the spec for now, since the use cases
for it aren't that compelling and so it's probably best to wait a while
longer, allowing browser vendors to implement more of the stuff we have
already added.


May I suggest that things like geolocation, temperature, pressure, 
humidity, altitude, etc. all be classified as part of an "Environment" 
API set?
It should make classification of future tech, as well as searching for 
and documenting them, a lot easier, I hope!
Audio in/out and camera/photo/video all belong under an umbrella "Media" 
API set as well, after all.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Roger Hågensen

On 2011-01-05 06:10, Boris Zbarsky wrote:

On 1/4/11 10:51 PM, Glenn Maynard wrote:

On Tue, Jan 4, 2011 at 10:53 PM, Boris Zbarskybzbar...@mit.edu  wrote:

Note that you keep comparing websites to desktop software, but desktop
software typically doesn't change out from under the user (possibly 
in ways
the original software developer didn't intend).  The desktop apps 
that do
update themselves have a lot of checks on the process precisely to 
avoid
issues like MITM injection of trojaned updates and whatnot.  So in 
practice,
they have a setup where you make a trust decision once, and then the 
code

that you already trusted verifies signatures on every change to itself.


HTTPS already prevents MITM attacks and most others


I've yet to see someone suggest restricting the asking UI to https 
sites (though I think it's something that obviously needs to happen).  
As far as I can tell, things like browser geolocation prompts are not 
thus restricted at the moment.



the major attack vector they don't prevent is a compromised server.


Or various kinds of cross-site script injection (which you may or may 
not consider as a compromised server).



I think the main difference is that the private keys needed to sign
with HTTPS are normally located on the server delivering the scripts,
whereas signed updates can keep their private keys offline.


Or fetch them over https from a server they trust sufficiently (e.g. 
because it's very locked down in terms of what it allows in the way of 
access and what it serves up), actually; I believe at least some 
update mechanisms do just that.


That's not a model web apps can mimic: all ways to execute scripts, 
in both

Javascript files and inline in HTML, would need to be signed, which is
impossible with templated HTML.


Agreed, but that seems like a problem for actual security here.


You don't really know that an installer you download from a server is
valid, either.  Most of the time--for most users and most
software--you have to take it on faith that the file on the server
hasn't been compromised.




Considering that StartCom ( https://www.startssl.com/ ) now offers 
free domain-based certificates that all major browsers support 
(IE/Microsoft was a bit slow on this initially),
there is no longer any excuse not to use HTTPS for downloading 
securely, for logging in/registering (forums etc.), or for secure web 
apps.
So leveraging HTTPS in some way would be the best solution here, and all 
the HTTPS code is already in the browser code bases anyway.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Roger Hågensen

On 2011-01-05 01:07, Seth Brown wrote:

I couldn't agree more that we should avoid turning this into vista's UAC.


The issue with UAC is not UAC itself.
UAC (especially the more diligent one in Vista) merely exposed 
programmers and software that expected raised privileges when they 
did not actually need them.
Linux has had UAC-like privilege separation pretty much from day one, so 
programmers and software there have played nice from day one.
And UAC is not really security in the sense of protecting the user; UAC 
is intended to ensure that a user session won't mess up anything else, 
like other accounts, admin sessions, or the OS/kernel.

UAC protects the system from potentially rogue user accounts.
So it's a shame that UAC's introduction in Vista brought such a stigma 
upon it, as I actually like it.


Myself, I have a fully separate normal user account (rather than the 
split-token one that most here probably use), so I actually have to 
enter the admin password each time,
but I do not find it annoying, and I actually develop under this normal 
user account.
Only system updates or admin stuff need approval, and the odd piece of 
software (but I try to avoid those).
Running or installing software shouldn't need to bring up UAC at all; if 
it does, that is simply lazy coding by the developers,

and any web app stuff should follow the same example in this case.

UAC is meant to help isolate an incident and prevent other parts of a 
system, or other users/accounts, from being affected,

so a web app should be secured under those same principles.
Considering all the issues with cross-site exploits and so on, it's 
obvious that the net is in dire need of some of those core principles,
so please do not so easily dismiss UAC based on how it's perceived, but 
rather judge it by what it actually is.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Roger Hågensen

On 2011-01-04 22:59, Seth Brown wrote:

That being said. Granting access to a particular script instead of an
entire site sounds like a reasonable security requirement to me. As
does using a hash to verify that the script you granted permission to
hasn't changed.

-Seth


A hash (any hash in fact, even cryptographically secure ones) can only 
guarantee that two pieces of data are different!
A hash can NEVER guarantee that two pieces of data are the same; that is 
impossible, since a fixed-size digest has fewer possible values than 
there are possible inputs.
A hash can only support a quick assumption that the data are probably 
the same; what it can do cheaply is prove the data are different, 
avoiding an expensive byte-by-byte comparison whenever the hashes differ.
If the hashes are the same, then only a byte-by-byte comparison can 
guarantee the data are the same.

Any cryptography expert worth their salt will agree with the statements above.
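
A toy illustration of the point above (a fixed-size digest cannot uniquely represent all possible inputs), using a deliberately weak 8-bit checksum; real hashes like SHA-256 make collisions infeasible to find in practice, but not impossible in principle:

```javascript
// Toy 8-bit checksum: sums character codes mod 256. Deliberately weak,
// purely to show that distinct inputs can share a digest.
function toyHash(s) {
  let h = 0;
  for (const ch of s) h = (h + ch.charCodeAt(0)) % 256;
  return h;
}

console.log(toyHash("ab") === toyHash("ba")); // true: same hash...
console.log("ab" === "ba");                   // false: ...different data
```
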

HTTPS, which is continually evolving, is a much better solution than 
just relying on hashes and plain HTTP.
I cringe each time I see a "secure" script, delivered over HTTP, whose 
purpose is to encrypt the password you enter and send it to the 
website.
HTTP authentication isn't so bad, if only the damn plaintext Basic 
scheme were fully deprecated AND disallowed;
then again, now that you can get domain certificates for free that are 
supported by the major browsers, HTTP authentication is kind of being 
overshadowed by HTTPS, which is fine I guess.


Just please don't slap a hash on it and think it's safe; that's all 
I'm really saying.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-07 Thread Roger Hågensen

On 2011-01-06 14:09, timeless wrote:

I'm kinda surprised that servers and CAs don't have better support for
reminding admins of this stuff.

I know for mozilla.org, nagios is responsible for warning admins.

The odd thing (to me) is that CAs make money selling certs, so one
would expect them to want to sell the renewed cert and get that new
booking by selling the new cert say 3-6 months before the old one
expires. And thus they're actually being customer oriented, providing
a useful service (possibly telling the customer about expired certs
they issued which are still running...).


This is why I like StartSSL.com so much (besides the free domain and 
email certs): the paid tiers actually cover the 
authentication/certification process, while the certs themselves are 
free, and you can issue as many certs as you need for a certain period 
of time.
Besides being cheap, they also notify you a little while before the 
certs run out.


I know, I know, I'm almost sounding like an ad here, but StartCom, the 
company behind startssl.com, is leading by example here, and I wish 
other CAs followed suit.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-17 Thread Roger Hågensen

On 2011-01-17 18:36, Markus Ernst wrote:

Am 17.01.2011 17:41 schrieb Jeroen Wijering:
We are getting some questions from JW Player users that HTML5 video 
is quite wasteful on bandwidth for longer videos (think 10min+). This 
because browsers download the entire movie once playback starts, 
regardless of whether a user pauses the player. If throttling is 
used, it seems very conservative, which means a lot of unwatched 
video is in the buffer when a user unloads a video.


Could this be done at the user side, e.g. with some browser setting? 
Or even by a stop downloading control in the player? An intuitive 
user control would be separate stop and pause buttons, as we know them 
from tape and CD players. Pause would then behave as it does now, 
while stop would cancel downloading.


I think that's the right way to do it; this should be in the hands of 
the user and exposed as a preference in the browsers.
Although exposing (read-only?) the user's preferred buffer setting to 
the HTML app/plugin etc. would be a benefit, I guess, as the desired 
buffering could be communicated back to the streaming server for 
better bandwidth utilization.




--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-17 Thread Roger Hågensen

On 2011-01-18 01:30, Boris Zbarsky wrote:

On 1/17/11 6:04 PM, Boris Zbarsky wrote:

 From a user's perspective (which is what I'm speaking as here), it
doesn't matter what the technology is. The point is that there is
prevalent UI out there right now where pausing a movie will keep
buffering it up and then you can watch it later. This is just as true
for 2-hour movies as it is for 2-minute ones, last I checked.

So one question is whether this is a UI that we want to support, given
existing user familiarity with it. If so, there are separate questions
about how to support it, of course.


I checked with some other users who aren't me, as a sanity check, and 
while all of them expected pausing a movie to buffer far enough to be 
able to play through when unpaused, none of them really expected the 
whole movie to buffer.  So it might in fact make the most sense to 
stick to buffering when paused until we're in the playthrough state 
and then stop, and have some other UI for making the movie available 
offline.


-Boris



A few other things to think about are the following:

Unbuffering:
It may sound odd, but in low storage space situations it may be 
necessary to unbuffer what has already been played. Is this supported 
at all currently?


Skipping:
A lot of times I hit play, the movie buffers, fine. Then I skip to, say, 
the middle, but I can't, since it hasn't buffered to that point yet.
I'm forced to wait for it to buffer up to that point before I can skip 
there, which is a waste of time for me and, more importantly, a waste of 
bandwidth for both me and the server.



Solution?
I think the buffering should basically be a moving window (I hope 
most here are familiar with this term),
and the size of the moving window should be determined by storage 
space, bandwidth, browser preference, and server preference.
The window should support skipping anywhere without needing to buffer 
up to that point, and should avoid re-buffering from the start just 
because the user skipped back a little to catch something they missed 
(another annoyance).
This is the only logical way to do this, really. Especially since HTTP 
1.1 has byte-range support, there is nothing preventing it from being 
implemented, and I assume other popular streaming protocols support 
byte ranges as well?
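
The moving-window idea, combined with HTTP/1.1 byte ranges, could be sketched like this; the function name and the sizes are illustrative assumptions:

```javascript
// Sketch: given the playhead position (in bytes) and a window size chosen
// from storage/bandwidth/preferences, build the Range header value for the
// next chunk. Skipping just moves the playhead; no need to buffer up to it.
function rangeForWindow(playheadByte, windowBytes, totalBytes) {
  const start = playheadByte;
  const end = Math.min(start + windowBytes - 1, totalBytes - 1);
  return `bytes=${start}-${end}`;
}

// User skips to the middle of a 20 MB file with a 2 MB window:
console.log(rangeForWindow(10000000, 2000000, 20000000));
// "bytes=10000000-11999999"
```
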


And I agree on the offline UI: a way to, say, right-click and choose 
"Save for offline play" (and possibly a "Save" button, if the player GUI 
on the page decides to present it, obviously).



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-20 Thread Roger Hågensen

On 2011-01-20 19:16, Zachary Ozer wrote:

== New Proposal ==


I like this. It seems you laid out everything needed to ensure a balanced 
buffer, kind of like the moving-window buffer I pointed out earlier.
So as far as I can see, your proposal looks pretty solid, unless there 
are any implementation snafus. (Looks at the Chrome, Safari, Opera, and 
Firefox guys on the list. Hmm, where are the IE guys?)
I really like the way you described state 3, and I think that would 
be my personal preference for playback myself. I assume JW Player would 
be very quick to support/use it?





Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-21 22:15, Gregory Maxwell wrote:


I don't like keyframe seeking as the default. Keyframe seeking
assumes things about the container, codec, and encoding which may not
be constants or even applicable to all formats. For example a file
with rolling intra may have no keyframes,  and yet are perfectly
seekable.  Or if for some reason a client can do exact seeking very
cheaply for the request (e.g. seeking to the frame immediately after a
keyframe) then that ought to be permitted too.

I'd rather say that the default should be an implementation defined
accuracy, which may happen to be exact, may differ depending on the
input or user preferences, etc.


Accurate seeking also assumes things about the codec/container/encoding.
If a format does not have keyframes then it has something equivalent.
Formats without keyframes can probably (I might be wrong there) seek 
more accurately than those with keyframes.


With keyframes the logic is that if the seek goes to 14:11.500 or an 
exact frame number,
then a keyframe-based format would ideally be seeked to the exact keyframe,
or to the first keyframe before the seeked frame(s).
B-frames contain too little info and may need pixels from the keyframe 
(or an I or P frame etc.)


Any speccing on this should simply be based on the ideal or best-effort 
seeking that the 10 most popular formats
and the 10 oldest (but still in use) formats can achieve (some 
formats are probably in both categories), and just spec based on that.

But I guess there could be high- and low-resource modes.
If the system/browser is in a low-resource state then it makes sense to 
go keyframe (or equivalent)
and just do rough seeks,
but if in a high-resource mode then keyframe + microseek (just made 
that up) should be used for accurate seeking.
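The "keyframe + microseek" idea can be sketched like this (a toy illustration with made-up names; the keyframe index is assumed to already be known):

```javascript
// Jump to the nearest keyframe at or before the target frame, then decode
// forward the remaining frames to land exactly on the target.
function microseek(keyframes, targetFrame) {
  let kf = keyframes[0];
  for (const k of keyframes) {
    if (k <= targetFrame) kf = k; // nearest keyframe not past the target
  }
  return { keyframe: kf, decodeAhead: targetFrame - kf };
}

// Keyframes every 48 frames; an accurate seek to frame 130 jumps to
// keyframe 96 and decodes 34 frames forward:
microseek([0, 48, 96, 144], 130); // → { keyframe: 96, decodeAhead: 34 }
```

A low-resource mode would just stop at the keyframe; a high-resource mode would decode the `decodeAhead` frames forward as well.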






Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-21 22:04, Philip Jägenstedt wrote:
Concretely: Add seek(time, flags) where flags defaults to nothing. 
Accurate seeking would be done via seek(time, "accurate") or some 
such. Setting currentTime is left as is and doesn't set any flags.


Hmm. I think the default (nothing) should be synonymous with 
"best-effort" (or "best") and leave it to the 
browser/OS/codec/format/etc. as to what best effort actually is.
While "accurate" means as accurate as technically possible, even if it 
means increased resource use. I can see online video editing, subtitle 
syncing, closed-caption syncing, and audio syncing being key usage 
examples of that.
And maybe a "simple" flag for when keyframe or whole-second seeking and 
similar is good enough, preferring lower-resource seeking.
So "best" (default) and "accurate" and "simple", that covers most 
uses, right?





Re: [whatwg] File API Streaming Blobs

2011-01-21 Thread Roger Hågensen

On 2011-01-21 21:50, Glenn Maynard wrote:

On Fri, Jan 21, 2011 at 1:55 PM, David Flanaganda...@davidflanagan.com  wrote:

Doesn't the current XHR2 spec address this use case?
Browsers don't seem to implement it yet, but shouldn't something like this
work for the original poster?

He wants to be able to stream data out, not just in.

It's tricky in practice, because there's no way for whoever's reading
the stream to block.  For example, if you're reading a 1 GB video on a
phone with 256 MB of memory, it needs to stop buffering when it's out
of memory until some data has been played and thrown away, as it would
when streaming from the network normally.  That requires an API more
complex than simply writing to a file.


Hmm! And I guess it's very difficult to create an abstract in/out 
interface that can handle any protocol/stream.
Although an abstract in/out would be ideal, as that would let new 
protocols be supported without needing to rewrite anything at the 
higher level.
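To illustrate the backpressure problem Glenn describes, here is a minimal bounded-buffer sketch (all names invented; a real API would need async blocking rather than polling):

```javascript
// A bounded buffer the producer checks before writing, so a 1 GB stream
// never outgrows a small device's memory: writes stop when the buffer is
// full, and reads (playback) free space again.
function makeBoundedBuffer(capacity) {
  const chunks = [];
  let size = 0;
  return {
    canWrite: (n) => size + n <= capacity, // producer must check first
    write(chunk) { chunks.push(chunk); size += chunk.length; },
    read() {                               // consumer frees space as it plays
      const c = chunks.shift();
      if (c) size -= c.length;
      return c;
    },
    get size() { return size; },
  };
}

const buf = makeBoundedBuffer(8);
buf.write(new Uint8Array(5));
buf.canWrite(5); // → false, until read() has drained some data
```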







Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-21 22:57, Silvia Pfeiffer wrote:

On Sat, Jan 22, 2011 at 8:50 AM, Roger Hågensenresca...@emsai.net  wrote:

On 2011-01-21 22:04, Philip Jägenstedt wrote:

Concretely: Add seek(time, flags) where flags defaults to nothing.
Accurate seeking would be done via seek(time, "accurate") or some such.
Setting currentTime is left as is and doesn't set any flags.

Hmm. I think the default (nothing) should be synonymous with "best-effort"
(or "best") and leave it to the browser/OS/codec/format/etc. as to what
best effort actually is.
While "accurate" means as accurate as technically possible, even if it means
increased resource use. I can see online video editing, subtitle syncing,
closed caption syncing, and audio syncing being key usage examples of that.
And maybe a "simple" flag for when keyframe or second seeking and similar is
good enough, preferring lower resource seeking.
So "best" (default) and "accurate" and "simple", that covers most uses,
right?


Not really. I think "simple" needs to be more specific. If the browser
is able to do frame-accurate seeking and the author wants to do
frame-accurate seeking, then it should be possible to get the two
together, both on keyframe boundaries and actual frame boundaries
closest to a given time.

So, I think what might make sense is:
* the default is best effort
* ACCURATE is time-accurate seeking
* FRAME is frame-accurate seeking, so to the previous frame boundary start
* KEYFRAME is keyframe-accurate seeking, so to the previous keyframe

Cheers,
Silvia.




Hmm, that sounds good, though I think this would be more intuitive:
* default is best effort (if the interface for seeking isn't that 
accurate, which can happen on small-screen devices, or the author 
doesn't care about or need accuracy, best effort is what happens today anyway)

* TIME (accurate seeking, millisec fraction supported)
* FRAME (accurate seeking, previous/next depending on seek direction)
* KEYFRAME (keyframe seeking, previous/next depending on seek direction)

The default/best effort may be any of TIME or FRAME or KEYFRAME, or even 
a combo of TIME and FRAME; it all depends on the 
OS/browser/device/format/codec/stream.
An author must be able to test/check whether TIME or FRAME or KEYFRAME is 
available; if none are available then only the default best effort is 
available.
If the author just chooses the default, but the browser actually 
delivers TIME or FRAME or KEYFRAME accuracy, then that should be relayed 
in some way so the author can display the correct units to the user 
visually, or even convert them if possible.
For example, if the default best-effort seek is used but the actual 
seeking is FRAME, then the author could convert and display that as a 
TIME value instead, as time is less confusing for average users than 
frame numbers.
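A rough sketch of how that capability check and accuracy relay could look (purely hypothetical; "supportedModes" and this "seek" shape are invented names, not anything in any spec):

```javascript
// A seeker that reports which accuracy mode it actually delivered, so the
// page can pick the right units to display (or convert between them).
function makeSeeker(supportedModes) {
  return {
    supportedModes,                           // e.g. ["TIME"], ["FRAME"], []
    seek(time, requested = "default") {
      // If the requested mode is unavailable, fall back to best effort,
      // and always report which accuracy was actually delivered.
      const actual = supportedModes.includes(requested)
        ? requested
        : (supportedModes[0] ?? "default");
      return { time, requested, actual };
    },
  };
}

const video = makeSeeker(["FRAME"]);
video.seek(910.5, "TIME").actual; // → "FRAME", so the page knows to convert
                                  //   frame numbers to a TIME display
```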






Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-21 23:48, Gregory Maxwell wrote:


It seems surprising to me that we'd want to expose something so deeply
internal while the API fails to expose things like chapters and other
metadata which can actually be used to reliably map times to
meaningful high level information about the video.


Well, you would never seek to a chapter or scene change directly anyway;
the author would instead preferably get a list of index/chapter/scene 
points which point to a TIME or FRAME, and seek using that value instead.


In my other post I mentioned the flags default, TIME and FRAME.
But essentially the browser always does best effort, so the flag TIME or 
the flag FRAME should only indicate that the author wishes to use 
either TIME or FRAME when seeking;
it is entirely up to the browser etc. whether seeking actually occurs that 
way or not. It's just that TIME or FRAME is the base being used.

In which case KEYFRAME could just be dropped from my other post really.

So if the author wishes to use the flag TIME but the browser only 
presents FRAME, then the author can still use time, or they could use 
FRAME but convert it to time for the user.
If TIME can be millisecond-accurate (i.e. 00:45:10.958 is the 23rd frame of 
minute 45, second 10, at 24 fps) then TIME is basically synonymous with 
FRAME, which here would be frame 65063.
I assume we won't run into issues with this normally. (Who'd 
actually have 1000+ fps? And if that is the case then FRAME must be used 
for super highspeed/slowmo etc.)

So under normal use TIME and FRAME would be the exact same thing.

This means the flags would only be:
* default (TIME or FRAME)
* FRAME (FRAME must be used/supported where TIME is not accurate enough; 
1 ms accuracy needed.)
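The TIME/FRAME equivalence is just arithmetic; a sketch (assuming 1-based frame numbers, my own convention here):

```javascript
// Below 1000 fps a millisecond TIME always identifies a unique frame,
// so TIME and FRAME carry the same information.
function timeToFrame(h, m, s, fps) {
  return Math.floor((h * 3600 + m * 60 + s) * fps) + 1;
}
function frameToSeconds(frame, fps) {
  return (frame - 1) / fps;
}

timeToFrame(0, 45, 10.958, 24); // → 65063 (the 23rd frame of that second)
```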







Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-22 01:27, Silvia Pfeiffer wrote:



It seems surprising to me that we'd want to expose something so deeply
internal while the API fails to expose things like chapters and other
metadata which can actually be used to reliably map times to
meaningful high level information about the video.


Chapters have an API:
http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#dom-texttrack-kind-chapters
.

However, chapters don't have a seeking API - this is indeed something
to add, too. Could it fit within the seek() function? e.g.
seek(chaptername) instead of seek(time)?

Silvia.


The issue with that is that if chapter or index info does not exist, the 
seek will fail.
At least with TIME and FRAME you are guaranteed to seek (and even if 
the seek is keyframe-based you'd still end up at a nearby frame).
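A sketch of that graceful-fallback point: resolve the chapter to a TIME first, so missing chapter info degrades to a plain time seek instead of failing (the chapter-list shape here is my own assumption, not the TextTrack API):

```javascript
// Look up a chapter's start time; if the chapter index is absent or the
// name is unknown, fall back to a plain TIME so the seek never fails.
function chapterToTime(chapters, name, fallbackTime = 0) {
  const c = chapters.find((ch) => ch.name === name);
  return c ? c.start : fallbackTime; // always yields a seekable TIME
}

const chapters = [{ name: "Intro", start: 0 }, { name: "Credits", start: 5400 }];
chapterToTime(chapters, "Credits"); // → 5400
chapterToTime([], "Credits", 120);  // → 120 (no chapter info: fall back)
```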


To me only TIME makes sense right now, as HH:II:SS.MS 
(hours:minutes:seconds.milliseconds), with FRAME for the rare cases 
where video is more than 1000 fps.
The benefit of TIME is that it's framerate-agnostic, so 00:15:20.050 would 
be the same whether the FPS is 24 or 30.
Which is ideal in the case of framerate changes due to being bounced 
up/down to a higher or lower quality stream while seeking or during 
buffering.
I saw the spec mentioning doubles; I'm assuming TIME would be a 
double where you'd have seconds.fraction (which would even handle FPS 
in the thousands).
So I think that focusing on TIME and really pushing that would benefit 
all in the short and long run; an author can easily calculate FRAME from 
TIME anyway for the few users that actually need to work with that.
Myself, I've done some video editing, and more audio editing than I can 
recall, and I've never missed using frames for audio or video; I prefer 
time with millisecond fractions, and I sync audio on timestamps and not 
frames, for example.


So maybe just let the flag be default and nothing else but, as mentioned 
previously, leave it an enum just in case for the future (I'm thinking of 
possible future timing standards that might appear, though it's hard to 
beat doubles really).






Re: [whatwg] Onpopstate is Flawed

2011-02-02 Thread Roger Hågensen

On 2011-02-02 23:48, Jonas Sicking wrote:

I think my latest proposed change makes this a whole lot better since
the state is immediately available to scripts. The problem with only
sticking the state in an event is that there is really no good point
to fire the event. The later you fire it the longer it takes before
the page works properly. The sooner you fire it the bigger risk you
run that some script runs too late to get be able to catch the event.

/ Jonas



Yeah it's a shame it can't be atomic.






Re: [whatwg] Cryptographically strong random numbers

2011-02-04 Thread Roger Hågensen

On 2011-02-05 04:39, Boris Zbarsky wrote:

On 2/4/11 7:42 PM, Adam Barth wrote:

interface Crypto {
   Float32Array getRandomFloat32Array(in long length);
   Uint8Array getRandomUint8Array(in long length);
};


The Uint8Array version is good; let's do that.

For the other, what does it mean to return a random 32-bit float?  Is 
NaN allowed?  Different NaNs?  -0?  Infinity or -Infinity?  Subnormal 
values?


Looking at the webkit impl you linked to and my somewhat-old webkit 
checkout, it looks like the proposed impl returns something in the 
range [0, 1), right?  (Though if so, I'm not sure why the 0xFF bit is 
needed in integer implementation.)  It also returns something that's 
not uniformly distributed in that range, at least on Mac and sometimes 
on Windows (in the sense that there are intervals inside [0, 1) that 
have 0 probability of having a number inside that interval returned).


In general, I suspect creating a good definition for the float version 
of this API may be hard.


Not really, usually it is a number from 0.0 to 1.0, which would map to 
the same range as 0 to whatever the 64-bit max is.
Depending on the implementation, the simplest is just to do 
(pseudocode):   float = Random(0, $) / $

A Float64Array getRandomFloat64Array() would also be interesting.
In fact the 32-bit and 64-bit floats and uint8 could all be generated from 
the same random data source, just presented differently; uint8 would be the 
rawest though,
and a 32-bit float is pretty much just a truncation of a 64-bit float.
But with either float there would never be NaN, -0, Infinity or 
-Infinity; only the range 0.0 to 1.0 must be returned.
And yes, float issues of rounding and "almost correct but not quite" 
values will also be an issue here.
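For reference, one common way to derive such a float from raw random integers, along the lines of the pseudocode above (a sketch, not a vetted crypto construction):

```javascript
// Divide a random 32-bit unsigned integer by 2^32, giving a value in
// [0, 1): never NaN, never ±Infinity, never -0, never 1.0 itself.
function randomUnitFloat(bytes) { // bytes: Uint8Array of length 4
  const u32 =
    ((bytes[0] << 24) | (bytes[1] << 16) | (bytes[2] << 8) | bytes[3]) >>> 0;
  return u32 / 0x100000000; // 2^32
}

randomUnitFloat(new Uint8Array([0, 0, 0, 0]));         // → 0
randomUnitFloat(new Uint8Array([255, 255, 255, 255])); // just below 1.0
```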


Float random does not make much sense in crypto. In normal random stuff 
I do see it being useful, but not in crypto.
Then again, look at the potential use cases out there. Do any use 
float? Or do they all use uint/raw?

If they do not use float then just do not include float at all in crypto.

Right now I can only see random floats being of use in 
audio/video/graphics/games/input/output/etc., but not in crypto. (The 
only key and nonce data/values I've ever seen have been raw/uint or 
an integer or string, never a float.)






Re: [whatwg] Cryptographically strong random numbers

2011-02-05 Thread Roger Hågensen

On 2011-02-06 03:34, Boris Zbarsky wrote:
The context in which I've seen people ask for cryptographically secure 
Math.random are cases where one script can tell what random numbers 
another script got by examining the sequence of random numbers it's 
getting itself.  But I was never told what that other script was 
doing, only that it wanted its random numbers to be unguessable.


Hmm! A hostile script/cross-site exploit?
But if a script is running that close to another script, isn't the 
guessing of the other script's random numbers the least of your worries?
The bad script is already inside the house anyway, just in the 
other room, right?


It kinda reminds me of Raymond Chen at Microsoft. Just Google the 
following: site:msdn.com "It rather involved being on the other 
side of this airtight hatchway".

This reminds me of some of those stories.
I assume they are worried about two tabs, or an iframe in a page, with a 
bad script trying to figure out the random numbers another script has.


This is just my opinion but... if they need random number generation in 
their script to be cryptographically secure to be protected from another 
spying script...
then they are doing it wrong. Use HTTPS, issue solved, right? I'm kinda 
intrigued about the people you've seen asking, and what exactly it is 
they are coding if that is an issue. *laughs*
Besides, aren't there several things (by the WHATWG even) that prevent such 
spying or even make it impossible?


I have yet to hear of any actual panic regarding this; the same issue 
is theoretically known with EXEs as well.
But with multithreaded and multicore CPUs, clock variations, and so 
on, trying to exploit the pattern in say a Mersenne Twister PRNG by 
pulling lots of random numbers
would either A. not work or B. cause suspicious 100% CPU use on a core.
And don't forget that browsers like Chrome run each tab in its own 
process, which means the PRNG may not share its seed at all with another 
tab (I'm guessing pretty surely that each tab HAS its own seed).

Besides, social engineering has a much higher success rate than this so...

Would be nice if some crypto/security experts popped their heads in 
about now though, in particular about the float question in previous 
posts :)






Re: [whatwg] Cryptographically strong random numbers

2011-02-05 Thread Roger Hågensen

On 2011-02-05 11:10, Adam Barth wrote:

On Fri, Feb 4, 2011 at 9:00 PM, Cedric Viviercedr...@neonux.com  wrote:

getRandomValues(in ArrayBufferView data)
Fills a typed array with a cryptographically strong sequence of random values.
The length of the array determines how many cryptographically strong
random values are produced.


We had same discussion when defining readPixels API in WebGL.

Advantages :
1) this allows to reuse the same array over and over when necessary,
or circular buffer, instead of trashing the GC with new allocations
everytime one wants new random bytes.
2) this allows to fill any integer array directly (Float*Array might
need more specification here though as Boris pointed out - could be
disallowed initially)
3) this avoids exposing N methods for every type and makes refactoring
simpler (changing the array type does not require changing the
function call)

(and also better matches most existing crypto APIs in other languages
that are also given an array to fill rather than returning an array)

Oh, that's very cool.  Thanks.

Adam


I must say I like this as well. Having used RandomData(*buffer, length) 
in PureBasic, this makes more sense to me (then again I like procedural 
unmanaged programming with a sprinkle of ASM and API stuff, so...).

But getRandomValues(in ArrayBufferView data) seems to indicate that each 
byte (value) is random, limited to an array of 8-bit data?

Now if that is the intention then that's fine.

But wouldn't getRandomData(in ArrayBufferView data) be the ideal? As 
there could be from 8 bits of random data up to whatever the max size of an 
array is, in steps of 8 bits (and you can always mask/truncate by hand for 
exact bit counts).

But other than that little nitpick, filling an array/buffer instead of 
returning one? Good idea!






Re: [whatwg] Cryptographically strong random numbers

2011-02-05 Thread Roger Hågensen

On 2011-02-06 05:07, Cedric Vivier wrote:

On Sun, Feb 6, 2011 at 11:34, Roger Hågensenresca...@emsai.net  wrote:

But getRandomValues(in ArrayBufferView data) seem to indicate that each byte
(value) is random, limited to an array of 8bit data?.

In the context of typed arrays, a value depends of the type of the
ArrayBufferView. ArrayBufferView are interchangable using the same
ArrayBuffer (the actual underlying bytes).
Passing an Uint8Array will give you random Uint8 values at each index
of the array, passing an Int32Array will give you random Int32 values
at each index of the array as well.


Ah ok, so it just fills the buffer/destination with random data. That sounds 
as good and as flexible as one can possibly get.






Re: [whatwg] Cryptographically strong random numbers

2011-02-06 Thread Roger Hågensen

On 2011-02-06 04:54, Boris Zbarsky wrote:

On 2/5/11 10:22 PM, Roger Hågensen wrote:


This is just my opinion but... If they need random number generation in
their script to be cryptographically secure to be protected from another
spying script...
then they are doing it wrong. Use HTTPS, issue solved right?


No.  Why would it be?


Oh right! The flaw might even exist then as well, despite HTTPS and HTTP 
not being mixable without a warning.




I'm kinda intrigued about the people you've seen asking, and what 
exactly it is

they are coding if that is an issue. *laughs*


You may want to read these:

https://bugzilla.mozilla.org/show_bug.cgi?id=464071
https://bugzilla.mozilla.org/show_bug.cgi?id=475585
https://bugzilla.mozilla.org/show_bug.cgi?id=577512
https://bugzilla.mozilla.org/show_bug.cgi?id=322529


 [snip]



And don't forget that browsers like Chrome runs each tab in it's own
process, which means the PRNG may not share the seed at all with another
tab


Well, yes, that's another approach to the Math.random problems.  Do 
read the above bug reports.


-Boris



Ouch yeah, a nice mess there.

Math.random should be fixed (if implementations are bugged) so that 
cross-site tracking is not possible; besides that, Math.random should 
just be a quick PRNG for generic use.
The easiest fix (maybe this should be speced?) is that Math.random must 
have a separate seed per tab/page; this means that even an iframe would 
have a different seed than the parent page.
If this was done, then those bugs could all be fixed (apparently). And 
it wouldn't hurt to advise the Mother-of-All or Mersenne Twister or 
similar as a minimum PRNG.
Maybe seeding should be speced in regard to tabs/pages etc.; would this 
fall under the WHATWG or the JS group?


But anyway, those bugs do not need an actual crypto-quality PRNG, so it's 
a shame their fixing is hampered by a fix-vs-new-feature discussion.

I can't help but see these as two completely separate issues.
1. Fix the seeding of Math.random for tabs/pages so cross-site tracking 
is not possible.
2. Add Math.srandom or Crypto.random or Window.random, a cryptographic 
PRNG data generator (which could map to an OS API or even RNG hardware).
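Issue 1 can be illustrated with a toy per-context generator (the LCG constants here are the well-known Numerical Recipes ones; this is generic-use quality, deliberately NOT the crypto generator of issue 2):

```javascript
// Give each tab/page its own private PRNG state at creation time, so one
// page cannot reconstruct another page's sequence from its own draws.
function makeTabRandom(seed) {
  let state = seed >>> 0; // per-tab private state
  return function random() {
    state = (Math.imul(state, 1664525) + 1013904223) >>> 0; // LCG mod 2^32
    return state / 0x100000000; // [0, 1), like Math.random
  };
}

const tabA = makeTabRandom(1);
const tabB = makeTabRandom(2);
// tabA() and tabB() now evolve independently from separate seeds
```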



Hmm. What of the name of this thing?
I think it would be better to ensure it is not named "random" but 
"srandom" or "s_random" or "c_random", to avoid any confusion with 
Math.random.

How about "cryptrnd", anyone?

I'd hate to see a bunch of apps using cryptographically secure random 
numbers/data just because it was called "random",
while in all likelihood they'd be fine with Math.random instead.





Re: [whatwg] Session Management

2011-03-02 Thread Roger Hågensen

On 2011-03-02 18:42, Bjartur Thorlacius wrote:



Just see what happens when users login to a site, then navigate to
another and authenticate to the latter, and then logout from the
latter. In that case, they're still authenticated to the former site.
In theory, this shouldn't be a problem, as users should clear all UA
data before granting anyone else access to the UA data store, but in
ill-managed public terminals, that may not be the case.

Yes but do they? Theory is nice but can't a site aid a user in this?


If neither the sysadmin, nor the user, clear the credentials - who will?
This specifically is probably the main use case for expiring auth tokens.



Three Ways...

Method #1:
Browser timeout. For legacy reasons the browser could default to a 
sensible timeout within some min/max range.
Once the timeout is triggered, the HTTP Authentication is ended, and the 
user has to log in again.
Say maybe 30 to 60 minutes.
This can easily be done right now in all current browsers. No UI 
changes or any real code changes at all.


Note:
Ideally the user should be able to adjust the default timeout within 
some sensible min/max range,

but this would require a UI change/addition.

Method #2:
A second way to log out from an HTTP Authentication would be to end the 
HTTP Authentication when the LAST tab or window using the 
authentication to that site/directory is closed.


Note:
It's a shame one cannot use JavaScript to let the web designer provide a 
button or URL with javascript:window.close() or similar.
Perhaps a javascript:crypto.httpauth_closesession() or similar could 
be added in the future.


Method #3:
The server (or serverside script, like PHP or similar) sends the 
following to the browser:

header('HTTP/1.0 401 Unauthorized');
header('WWW-Authenticate: Close realm="My Realm"');
*PS! The auth stuff is much longer here obviously; this was just to 
show the use of "Close".*

Note:
If Method 1 or 2 is used the browser should probably send the following 
to the server:

GET /private/index.html HTTP/1.1
Authorization: Close username="something"
*PS! The auth stuff is much longer here obviously; this was just to 
show the use of "Close".*



I think Method 3 is the real key piece here; on its own it allows 
the server to time out the client/user AND notify the client that this 
has happened.
Combined with Methods 1 and 2 it becomes possible for either the browser or 
the server to end the HTTP authentication session and notify the other, 
and let the user know as well.
Method 3 alone would not need a UI change; it would simply instruct the 
browser to clear its auth session, and the page content itself could hold a 
message from the server to the user that they are now logged out.


Explained as simply as possible, the closing is exactly the same as 
the serverside "WWW-Authenticate: Digest" and the clientside "Authorization: 
Digest", but
with the word "Digest" replaced by "Close"; the rest of the 
auth should otherwise be just like a normal Digest auth, to ensure it's 
not a fake close.
Just doing "WWW-Authenticate: Close" might be an issue for future 
improvements beyond the Digest method, so maybe "WWW-Authenticate: Close 
Digest" would make more sense.
Just avoid calling it "Digest Close", as that could be confused with a 
normal Digest.
"Close" is just an example; "End" or "Quit" or "Clear" could just as 
well be used. The word doesn't matter; the hint the server sends 
to the browser is the vital key though.


It is basically the server saying to the browser that those session 
credentials are no longer valid, please stop spamming me with them 
*laughs*, at which point the browser clears the auth session
and starts talking to the site with a clean slate again. If something 
like Method 3 was implemented then I'm pretty sure the devs of 
phpBB, vBulletin, and who knows how many CMSes out there would be 
happy to support this.
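The whole proposed exchange can be sketched as simple string-building (these "Close Digest" headers are hypothetical, part of this proposal only; nothing like them was ever standardized):

```javascript
// The server ends the session with a "Close Digest" challenge, and the
// browser acknowledges with a matching Authorization line. The real
// exchange would carry the full Digest fields (nonce, response, etc.).
function serverCloseHeaders(realm) {
  return [
    "HTTP/1.0 401 Unauthorized",
    `WWW-Authenticate: Close Digest realm="${realm}"`, // proposed, not real
  ];
}
function clientCloseHeader(username) {
  return `Authorization: Close Digest username="${username}"`; // proposed
}

serverCloseHeaders("My Realm");
// → ["HTTP/1.0 401 Unauthorized",
//    'WWW-Authenticate: Close Digest realm="My Realm"']
```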


Sidesubject:
Hopefully the old WWW-Authenticate: Basic is fully deprecated soon, as it 
is no different from plaintext HTML login forms (almost all forums and 
websites out there that do not use SSL/certificates).
WWW-Authenticate: Digest should be the minimum requirement. I'm not sure, 
but I believe Opera did fix some of the issue with falling back to Basic; 
I have no idea where all browsers stand on this currently.
It would be tempting to fix the Basic issue and security hole by 
instead changing things so that it's called WWW-Authenticate2: Digest 
and WWW-Authenticate2: Close Digest, where Basic is not allowed at all;
this would prevent exploits that try to sneak Basic into the header and 
make the browser use plain text instead.





Re: [whatwg] Intent of the FileSystem API

2011-03-02 Thread Roger Hågensen

On 2011-03-02 02:31, Tatham Oddie wrote:

Glenn,

That's an XP path you've provided.

On Vista or 7 it'd be:

C:\Users\tatham.oddie\AppData\Local\Google\Chrome\User 
Data\Default\Storage

Microsoft explicitly did work in Vista to reduce the lengths of those base 
paths.

Now, the Google component of the path is actually the longer part.


In this case couldn't it just be made to be 
C:\Users\tatham.oddie\AppData\Local\Google\Chrome\Storage ?





Re: [whatwg] Session Management

2011-03-03 Thread Roger Hågensen

On 2011-03-03 10:44, Dave Kok wrote:

Op 02-03-11 22:11:48 schreef Roger Hågensen:

Method #3:
The server (or serverside script, like PHP or similar) sends the
following to the browser:
 header('HTTP/1.0 401 Unauthorized');
 header('WWW-Authenticate: Close realm=My Realm');
 *PS! the auth stuff is much longer here obviously, this was just
 to show the use of Close*

Note:
If Method 1 or 2 is used the browser should probably send the
following

to the server:
 GET /private/index.html HTTP/1.1
 Authorization: Close username=something
 *PS! the auth stuff is much longer here obviously, this was just
 to show the use of Close*

May I point out that the HTTP is outside the scope of the HTML5 spec.
Also the HTTP is stateless. This requires both parties keep state which
breaks the statelessness property of the HTTP. I, for one, prefer to
preserve the statelessness property of HTTP.


Please appreciate the notion that HTML5 is broader then just browsing
the internet. - Dave Kok

And indeed it is. HTTP Authentication (especially Digest) is far from 
stateless;
its state changes with every single nonce change.
It's basically a constant (but very cheap CPU-wise) hash-based 
challenge/response agreement.
Also if you are thinking about the HTTP status codes, those are beyond 
stateless,
but if you insist, then simply re-use the 403 with some minor tweaks so 
it acts as a logoff,
because re-using 401 would break the statelessness as you say.

I'm surprised you advocate ajax/XMLHttpRequest and allowing a close from a 
form;
that would open up some security issues.
The beauty of HTTP Digest Authentication is that the password is never 
sent as plaintext or in any form that can compromise the user's password.
Only the user themselves (and thus indirectly the browser) or the server 
should be able to initiate a session close of Auth Digest;
allowing it to be closed from a script is just bad, and... dare I say it, 
counter to the statelessness of HTTP *laughs*


At least we agree on one thing: where HTTPS is not available, or 
where the site owners have either not discovered StartSSL.com (which is 
free) or are too lazy or unable to take advantage of it,
then HTTP Digest Authentication should almost be a requirement for any 
site that needs login credentials (like forums, shops etc.).
Funny how many stores only pull out the HTTPS stuff when you pay for 
what you buy (or use a specialist service), but not at all when you 
log in to your account with them otherwise. *sigh*

Heck, I even have HTTPS on my own little site; my hoster provided the IP 
for free and set up the certificate etc. for free, excellent service. 
(I only pay the hoster a small yearly fee, domeneshop.no for you 
Norwegians out there.)
Combine that with startssl.com, and my total cost of securing the 
communication with my site, should I or others ever need 
it??? PRICELESS, since it was absolutely free, not a single cent paid.
But... a lot of web hotels or hosters out there do not allow you to do 
SSL, or it costs extra, or they cannot give you an IP, or it costs extra, 
and, and, and.
So I have sympathy for those unable to. But hey, with a CA that 
provides free domain/server certs there is no excuse if you ARE able to,
and programming-wise it's less work too. Auth Digest needs some extra 
massaging from PHP to work nicely in an integrated way, but even then 
the logout issue still exists (and even after you log out, the site is still 
spammed by the browser with login credentials all the time).
I've never really worked with the Apache auth_digest stuff, but it's 
probably even more restricted than doing it yourself via PHP.


And don't forget that you complain that my suggestions messed with HTTP, 
which you say HTML5 has no business messing with,
yet you yourself suggested XMLHttpRequest and some ajax stuff to 
close/end an HTTP authentication.
This already proves that HTML5 isn't just HTML + CSS + JavaScript + lots 
of other stuff; we can also add + HTTP.
Now if this Auth Digest is so important for web apps, then shouldn't 
the WHATWG work together with (um, what is the HTTP group called?)






[whatwg] A moment of thank you to the browser devs.

2011-03-24 Thread Roger Hågensen

http://html5test.com/

Now that Firefox 4 is out, as well as IE9, those tests are starting to 
look really good.
With the WebM plugin for IE9, WebM is now possible in all of the big 4 
browsers (Opera, IE, FF, Chrome); I haven't tested Safari, but like Chrome 
it uses WebKit, so it should be similar.
Opera does lag a little on the Elements tests (IE9 now supports the section 
and article elements, as do Chrome and FF), but there is an Opera beta 
I haven't tested; I assume that support will be added in the next final release.


So I'd just like to thank all the browser devs for the awesome work 
being done to fast track HTML5.
Now if only the users would be as quick at updating their old browsers; 
then we could get rid of a lot of the old junk on the net and go for 
HTML5 design exclusively.


So who knows, maybe 2011 will become the HTML5 year?

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Timing API proposal for measuring intervals

2011-07-08 Thread Roger Hågensen

On 2011-07-08 12:32, Mark Callow wrote:


On 08/07/2011 11:54, James Robinson wrote:

True.  On OS X, however, the CoreVideo and CoreAudio APIs are specified to
use a unified time base (see
http://developer.apple.com/library/ios/#documentation/QuartzCore/Reference/CVTimeRef/Reference/reference.html)
so if we do end up with APIs saying "play this sound at time X", like Chris
Rogers' proposed Web Audio API provides, it'll be really handy if we have a
unified timescale for everyone to refer to.

If you are to have any hope of synchronizing a set of media streams you
need a common timebase. In TV studios it is called house sync. In the
first computers capable of properly synchronizing media streams and in
the OpenML specification it was called UST (Unadjusted System Time).
This is the monotonic uniformly increasing hardware timestamp referred
to in the Web Audio API proposal. Plus ça change. Plus ça même. For
synchronization purposes, animation is just another media stream and it
must use the same timebase as audio and video.

Regards

 -Mark


Agreed, and the burden of providing monotonic time lies on the OS (and 
indirectly the motherboard, HPET, audio card, GPU clock, or whatever the 
clock source is).
So browsers should only need to convert to/from (if needed) a double and 
the OS high-resolution time format (which should be available via an OS 
API on a modern OS).



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



[whatwg] NRK (Norwegian Broadcasting) focuses on HTML5

2011-12-14 Thread Roger Hågensen

Just some trivia/news of some interest to HTML5 supporters:

NRK (similar to what BBC is in England) has decided to focus on HTML5.


http://www.digi.no/885011/nrk-gaar-for-html5

http://translate.google.com/translate?hl=en&sl=no&u=http://www.digi.no/885011/nrk-gaar-for-html5

http://nrk.no/ (website)
http://nrkbeta.no/ (testbed for new techs)

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Endianness of typed arrays

2012-03-28 Thread Roger Hågensen

On 2012-03-28 12:01, Mark Callow wrote:


On 28/03/2012 18:45, Boris Zbarsky wrote:

On 3/28/12 2:40 AM, Mark Callow wrote:

Because you said JS-visible state (will) always be little-endian.

So?  I don't see the problem, but maybe I'm missing something...

The proposal is that if you take an array buffer, treat it as a
Uint32Array, and write an integer of the form W | (X << 8) | (Y << 16)
| (Z << 24) into it (where W, X, Y, Z are numbers in the range
[0,255]), then the byte pattern in the buffer ends up being WXYZ, no
matter what the native endianness is.

Reading the first integer from the Uint32Array view of this data would
then return exactly the integer you started with...

So now you are saying that only the JS-visible state of ArrayBuffer is
little-endian. The JS-visible state of int32Array, etc. is in
platform-endiannesss. I took your original statement to mean that all
JS-visible state from TypedArrays is little-endian.

Regards

 -Mark



Getting rather messy, isn't it?
ArrayBuffer should be native endianness (native to the JS engine); 
anything else does not make logical sense to me as a programmer.


xhr.responseType = 'arraybuffer', on the other hand, is a bigger issue, as 
a client program (browser) could be little-endian while the server is 
big-endian.
So in that case it would make sense if xhr.responseType = 'arraybuffer' 
and xhr.responseType = 'arraybuffer/le' were the same, and 
xhr.responseType = 'arraybuffer/be' were for big-endian/network byte order.


Personally I think that ArrayBuffer should be native, and that 
xhr.responseType should be ambiguous; in other words, let the 
implementers make sure of the endianness.
A client can easily ask the server for a desired endianness using 
normal arguments in the query, or possibly an xhr.responseEndian='' 
property, if that makes sense at all.
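For what it's worth, the standard typed-array API already contains both halves of this split: multi-byte views like Uint32Array use the platform's byte order, while DataView takes an explicit endianness flag on every access. A small sketch (plain standard API, nothing proposed here):

```javascript
// Uint32Array reads/writes in the platform's byte order; DataView lets the
// script choose the byte order explicitly per read/write.
const buf = new ArrayBuffer(4);
const view = new DataView(buf);

view.setUint32(0, 0x44332211, true); // true = little-endian: bytes 11 22 33 44
const bytes = Array.from(new Uint8Array(buf));
console.log(bytes.map(b => b.toString(16))); // ["11", "22", "33", "44"] on every platform

// Round-trips with the matching flag; reinterprets with the opposite one:
console.log(view.getUint32(0, true) === 0x44332211);  // true
console.log(view.getUint32(0, false).toString(16));   // "11223344"
```

So script that receives an arraybuffer in a known wire order can already decode it portably, whatever the native order of the machine is.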



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] A plea to Hixie to adopt main

2012-11-09 Thread Roger Hågensen

On 2012-11-07 23:41, Ian Hickson wrote:

On Thu, 8 Nov 2012, Ben Schwarz wrote:

What does concern me, as a web builder, *every day*, is how I markup the
content in-between a header and a footer.

If you just want it for styling purposes, <div> is perfect.


<article>
  <header><h1>, <h2>, <p></header>
  <div class="content"></div>
  <footer><time>, a.permalink</footer>
</article>

Exactly like that (or even without the class; if you just have one per
article you can just do "article > div" to select it).



I've begun to do this a lot now; the less I have to use class= or id= 
for styling, the better.
In one of my current projects I'm basically only using id= for actual 
anchor/index use, and no class= at all.
In fact, except for the few id= index shortcuts, the styling is all done 
in the .css, and the only CSS reference in the HTML document is the 
inclusion of the CSS link URL.
I guess you could call it stealth CSS, as looking at the HTML document 
does not reveal that CSS is used at all (except for the CSS link in the 
document head).

I wish more web authors would do this; it makes for very clean HTML indeed.
Now back to the topic (sorry for getting sidetracked).
Now back to the topic (sorry for getting sidetracked).

As to the <main> thing, the only time I'd ever be for adding that to 
HTML markup is if it were specced as follows.


<main> and </main> enclose the content of a document; it can be used in 
place of a <div> or <article>, but can only occur once in a document.
If more than one <main>...</main> block is encountered, 
parsers should only accept the first and ignore any others.
If there is no <main>, then the content of the document itself is 
considered the main content.


Maybe it's just me, but wouldn't a <main> almost be a synonym for 
<body>? *scratches head*



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] A plea to Hixie to adopt main, and main element parsing behaviour

2012-11-09 Thread Roger Hågensen

On 2012-11-08 10:51, Steve Faulkner wrote:

What the relevant new data clearly indicates is that in approx 80% of cases
when authors identify the main area of content it is the part of the
content that does not include header, footer or navigation content.


It also indicates that where skip links are present or role=main is used
their position correlates highly with the use of id values designating the
main content area of a page.



I'm wondering if maybe the following might satisfy both camps?

Example 1:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<div>div before body</div>
<body>body text</body>
<div>div after body</div>
</html>

Example 2:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<header>header before body</header>
<body>body text</body>
<footer>footer after body</footer>
</html>


An HTML document ALWAYS has a body. So why not adjust the spec and free 
the placement of <body>,
thus allowing <div>, <header> and <footer> blocks before/after it?
Currently http://validator.w3.org/check gives a warning, but that is easily 
fixed by allowing it.
The other issue is how older browsers will handle this (backwards 
compatibility), and how much or how little work it is to allow this in 
current browsers.


I'd rather see <body> unchained a little than have a <main> added that 
would be almost the same thing.
And if you really need to lay out/place something inside <body>, then 
use an <article> or <div> instead of a <main>.


<body> has had a semantic meaning since way back when, so why not 
unchain it?
As long as <body> and </body> are within <html> and </html>, it 
shouldn't matter if anything is before or after it.


The only issue that might be confusing would be:
Example 3:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<header>header before body</header>
<body>body text</body>
<article>article outside body</article>
<footer>footer after body</footer>
</html>

In my mind this does not make sense at all.
So maybe Example 2 should be used to unchain <body> a little.

Example 2:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<header>header before body</header>
<body>body text</body>
<footer>footer after body</footer>
</html>

Example 4:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<body>
<header>header before body</header>
<div>body text</div>
<footer>footer after body</footer>
</body>
</html>

Example 4 is how I do it on some projects, while what I actually wish I 
could do is Example 2 above.
Maybe simply unchaining <body> enough to allow one <header> and one 
<footer> outside it (but inside <html>) would be enough to satisfy 
people's needs?
I have wondered since the start why <header> and <footer> could not be 
outside <body>; it seems so logical after all!


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



[whatwg] header body footer

2012-11-09 Thread Roger Hågensen

Starting a new subject on this to keep the email threads more clear:

The suggestion is that the following should be possible;
this would allow <body> to act as if it were a <main>.

<!doctype html>
<html>
<head>
<title>header and footer outside body</title>
<style>
body {border:1em solid #7f2424;}
header {border:1em solid #247f24;}
footer {border:1em solid #24247f;}
</style>
</head> <!-- I wish the <head></head> tags were called <meta></meta> instead. -->

<header>
This is the header of the document!
</header>
<body>
This is the main content of the document!
</body>
<footer>
This is the footer of the document!
</footer>
</html>


As can be seen in most modern browsers, the content appears semantically 
correct.
The only issue is the CSS styling of <body>, as <body> is treated as the 
parent of <header> and <footer>.
I'm not sure how much work it would be to allow one 
<header></header> and one <footer></footer> outside <body> and let those 
have <html> as their parent instead,
but if it's not too much work then this could fix the lack of a 
<main>, and it would avoid the need for an extra <div> or similar 
inside <body> just for styling.


Any other pros (or cons)?


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Priority between a download and content-disposition

2013-03-19 Thread Roger Hågensen

On 2013-03-18 13:50, Bjoern Hoehrmann wrote:

* Jonas Sicking wrote:

It's currently unclear what to do if a page contains markup like <a
href="page.txt" download="A.txt"> if the resource at audio.wav
responds with either

1) Content-Disposition: inline
2) Content-Disposition: inline; filename="B.txt"
3) Content-Disposition: attachment; filename="B.txt"

People generally seem to have a harder time with getting header data
right, than getting markup right, and so I think that in all cases we
should display the save as dialog (or display equivalent download
UI) and suggest the filename A.txt.

You mention `audio.wav` but that is not part of your example. Also note
that there are all manners of other things web browsers need to take in-
to account when deciding on download file names, you might not want to
e.g. suggest using desktop.ini, autorun.inf or prn to the user.

That aside, it seems clear to me that when the linking context says to
download, then that is what a browser should do, much as it would when
the user manually selects a download context menu option. In contrast,
when the server says filename=example.xpi then the browser should pick
that name instead of allowing overrides like

   <a href='example.xpi' download='example.zip' ...

which would cause a lot of headache, especially from third parties. And
allowing such overrides in same-origin scenarios seems useless and is
asking for trouble (download filenames broken after moving to CDN).


The expected behavior of <a href='example.xpi' download='example.zip'> 
... is that it is a download hint.
A UI of some sort should appear where the user has the option to 
download (for example a requester with Run Now, Save As, Print, 
Share, Email, and similar).
The download= attribute is just a browser hint; a user (and thus the 
browser) can, and should be able to, override this behavior if desired 
(in options somewhere, maybe under an Applications tab?).


If the server-provided file type matches that of the href (i.e. they are 
both .xpi), or they are identical, then the download attribute's filename 
hint should be the default.


If the server provides a file type that conflicts with the href, then the 
browser needs to use some logic to figure out which of the three to display.
If the server-provided filename differs from the href, then the browser 
likewise needs logic to decide which of the three to display.
If the download attribute holds a full or relative URL, then the href (or 
server) should be used instead.


What is the best logic to use?
Both href and download are put there by either the author of the page or 
some automated system (forum/blog software/CDN/who knows...),
so href and download in that respect should be equally trusted (or is it 
distrusted?).
What the server says always trumps href and download, and href (or 
server) always trumps download if href and server match in file type.
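One possible reading of that precedence order, as a hypothetical sketch (the function, its inputs, and the mapping from Content-Type to an extension are invented for illustration; no browser implements exactly this):

```javascript
// Hypothetical decision logic for a download's suggested filename.
// serverType: extension implied by the server's Content-Type header;
// hrefName:   a filename derived from the href;
// downloadHint: the value of the download= attribute.
function extensionOf(name) {
  const dot = name.lastIndexOf('.');
  return dot === -1 ? '' : name.slice(dot + 1).toLowerCase();
}

function pickFilename(serverType, hrefName, downloadHint) {
  // A hint that looks like a URL rather than a filename is ignored.
  if (downloadHint && downloadHint.includes('/')) downloadHint = '';
  // Server and href agree on the file type: the author's hint is trusted.
  if (downloadHint && serverType === extensionOf(hrefName)) return downloadHint;
  // Otherwise fall back to the href-derived name.
  return hrefName;
}

console.log(pickFilename('xpi', 'example.xpi', 'example.zip'));  // "example.zip"
console.log(pickFilename('html', 'example.xpi', 'example.zip')); // "example.xpi"
```

This is only one way to resolve the conflicts described above; the point is that the decision can be made deterministic once the trust order is agreed on.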

The only exception is situations where the content is generated in some way:
<a href="example.php?type=csv" download="report1.csv">Download Report 1 
as CSV</a>
<a href="example.php?type=xml" download="report1.xml">Download Report 1 
as XML</a>


Now the server might categorize it as text/html; I've seen this by 
mistake on misconfigured servers before,
or the script did not set the proper content type when creating the headers.
So in this case the download hint is very helpful.

How many web script extensions are out there? .php .asp .cgi .py 
.???

What about this then?
<a href="example.com/reports/1/?type=xml" 
download="report1.xml">Download Report 1 as XML</a>
And with the server type text/html by mistake, how to handle that 
then? Whom to trust?
The server may (or may not) redirect to a URL like 
example.com/reports/1/index.php?type=xml or 
example.com/reports.php?id=1&type=xml,
or it may simply remain example.com/reports/1/?type=xml.
Or what if it is <a href="example.com/reports/1/xml/" 
download="report1.xml">Download Report 1 as XML</a>?


A URL is simply a way to point to some content; what to do with it is up 
to the browser and the user.
One would hope the server serves it as the right type, but this is not 
always true.
The page author may not even have control over, or the ability to add, 
file types to the server configuration (web hotels, for example).
The download attribute indicates the author's desired behavior for 
clicking the link.


So let's break it down (from a more or less browser's point of view):

1. The user clicks the link; there is a download attribute, so we will 
show a dialog with a Save As (and possibly other alternatives, depending 
on browser and OS features and user options).
2. If there is no download attribute/no filename hint, then use the href 
and try to make a user-friendly filename out of that.
3. Listen to what the server says (in the HTTP header): does it say it 
is a .xml? If yes, then that is good; if not, then treat it as if it 
were binary for the moment.
4. Make sure the text displayed is along the lines of: Download 

Re: [whatwg] Priority between a download and content-disposition

2013-03-19 Thread Roger Hågensen

On 2013-03-19 15:31, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net wrote on Tue, 19 Mar 2013
14:31:15 +0100:


[…]
What should be shown if there is an issue/conflict?

Maybe:
Download "https://example.com/reports/1/xml/" as "report1.xml"?
WARNING! File identified as actually being an executable! (*.exe)

At least here on Debian GNU/Linux, executables have no file extension.
Besides that, what would be the MIME type of windows executables?


application/octet-stream, as far as I know, for most exes and misc 
binaries on most platforms; and Windows exes start with "MZ" as the 
very first two bytes.
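As a sketch of the kind of header sniff being described (the two-byte "MZ" signature is real; the function name and sample buffers are invented for illustration):

```javascript
// A Windows executable (MZ/PE format) begins with the two bytes "MZ"
// (0x4D 0x5A), regardless of what its file extension claims.
function looksLikeWindowsExe(bytes) {
  return bytes.length >= 2 && bytes[0] === 0x4d && bytes[1] === 0x5a;
}

const exeHeader = new Uint8Array([0x4d, 0x5a, 0x90, 0x00]); // typical EXE start
const xmlHeader = new Uint8Array([0x3c, 0x3f, 0x78, 0x6d]); // "<?xm"

console.log(looksLikeWindowsExe(exeHeader)); // true
console.log(looksLikeWindowsExe(xmlHeader)); // false
```

A browser already fetches the start of a file for content sniffing, so a check like this costs essentially nothing extra.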





Or:
Download "https://example.com/reports/1/xml/" as "report1.xml"?
NOTE! File identified as not being an xml; appears to be text. (*.txt)

So, what about polyglots?
http://linux-hacks.blogspot.de/2009/02/theory-behind-hiding-zipped-file-under.html


Data hiding? (Well, close to it anyway.) That is way beyond the scope of 
this; also, I doubt you could do that with an executable on most platforms.
And if a .jpg turns out to have a zip attached, then it's just a .jpg with 
a zip attached; it's as simple as that.



The key though is showing: Download url as file.ext?
And in cases where a quick file header scan reveals a possible issue
(or simply a wrong file format extension), either a notice or warning
text in addition.
But this is only if the user actually chose Save As in the download
dialog; they might have chosen Share on Facebook or Print or
Email to... or even Open;
a similar but different dialog would obviously be needed in that case.

I find all of this approach insanely complex for a negligible benefit.



How so? All the information is mostly there. (The HTTP header from the 
server is always fetched, be it via HEAD, GET, or POST calls, and a 
browser usually fetches the beginning of a file to sniff anyway.)
The suggested name and extension would be in the download attribute, and 
the href is as it's always been.


Today, if you download/run a link to an exe, you do get asked if you 
really want to run it (and this is a browser UI, not an OS UI).

What is so complex about simply adding "as file.ext?" 
to that UI which is already there?

In cases where the download attribute, the href, the server header, and 
browser sniffing all agree, it looks no different (nor behaves any 
differently) than it does today when you right-click and choose Save As.
What is so complex about just suggesting some consistency in behavior, 
with an improvement to boot?


And if you refer to the Share on/at..., Email to..., Print, or 
Open options, those are dialog options that exist today (or will), and 
were just used as examples; they are not otherwise part of this in any 
other way.


Maybe there is a language barrier here and I'm not explaining this 
correctly, in which case I apologize for that. Let me know if anything 
in particular needs clarification.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Hide placeholder on input controls on focus

2013-03-20 Thread Roger Hågensen

On 2013-03-20 10:18, Markus Ernst wrote:
The problem is that some users do not even start to type when they see 
text in the field they focused. Thus I strongly believe that some 
visible hint at the _focusing_ moment would be helpful for these 
users. If the Opera and IE behaviour of totally hiding the placeholder 
is considered as suboptimal, the placeholder could be blurred, made 
semi-transparent or whatever; but I am sure that something should 
happen when the control gets focus, and not only when the user starts 
typing.


Have it dim/vanish not just on focus but on mouseover as well? (And on 
TAB, but that should usually be the same as focus.)

I agree that this would be beneficial.

Here is an example (go to http://htmledit.squarefree.com/ or someplace 
similar, or save it locally as .html and test it that way):


<style type="text/css">
/** css start */

input::-webkit-input-placeholder
{ /* WebKit browsers */
color: red;
}

input:hover::-webkit-input-placeholder
{ /* WebKit browsers */
opacity: 0.5;
text-align: right;
}

input:focus::-webkit-input-placeholder
{ /* WebKit browsers */
opacity: 0.0;
}

input:placeholder
{ /* future standard!? */
color: red;
}

input:placeholder:hover
{ /* future standard!? */
opacity: 0.5;
text-align: right;
}

input:focus:placeholder
{ /* future standard!? */
opacity: 0.0;
}

/** css end */
</style>

<!-- * html start * -->
<input name="first_name" id="first_name" placeholder="Your first 
name..." type="text">

<!-- * html end * -->


I only did WebKit! (And what I assume will be the standard?)
The reason I did not add any CSS for IE 10 or Firefox 19 is that they 
fail (at least I could not easily get this to work in those browsers); 
Chrome 25 handles this just fine.
Other than me playing around a little with the right-align to visually 
move the placeholder text out of the way, I assume this is how you 
would like it to look/behave, Markus?


So maybe a placeholder opacity of 0.5 on hover, and opacity of 0.0 on 
focus would be a suitable browser default. (web authors should still be 
able to style the behavior like I just did)


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Hide placeholder on input controls on focus

2013-03-21 Thread Roger Hågensen

On 2013-03-21 14:02, Markus Ernst wrote:

Am 21.03.2013 12:10 schrieb James Ross:
Just as an added data-point (that I only noticed today) - Windows 7's 
placeholder implementation in the Start menu and Explorer's search box:
  - Focusing the input box with tab/Control-E or autofocus when 
opening the Start menu does *not* hide the placeholder.

  - Control-A or clicking in the textbox hides the placeholder.


I was not aware of the possibility to distinguish between clicking in 
a textbox and other ways to focus it. This behaviour seems to be very 
user-friendly to me.




As far as I know there are hover, focus, and modified (are there 
others?). The events vary depending on whether it's in a browser (and 
which browser) and which OS (and which GUI API).


Ideally a browser's chrome should follow the OS style guide to provide a 
consistent OS user experience,
and with HTML5, things should behave consistently in all HTML5-supporting 
browsers.
But that's just my opinion on where the line should be drawn. There 
are, after all, things like the context menu that cross the GUI 
boundaries.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Forcing orientation in content

2013-05-03 Thread Roger Hågensen

On 2013-05-03 08:29, Gray Zhang wrote:
 Not sure if WHATWG is doing anything, but in the W3C there is 
https://dvcs.w3.org/hg/screen-orientation/raw-file/tip/Overview.html 
in the Web Apps group

 ...

 How would it behave if my web app requests orientation locking but is 
placed in an `iframe` element and the parent browsing context is 
locked in another orientation?


The logical behavior would be that the parent element takes precedence, 
and the child (the iframe in this case) retains its aspect ratio if 
possible.



R.

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Forcing orientation in content

2013-07-14 Thread Roger Hågensen

On 2013-07-13 06:17, Glenn Maynard wrote:


  Changing orientation is disruptive.

I can hardly imagine how obnoxious Web browsing would be on a mobile
device, if every second page I navigated to decided to flip my device to a
different orientation.  This feels like the same sort of misfeature as
allowing pages to resize the browser window: best viewed in 800x600 (so
we'll force it), best viewed in portrait (so we'll force it).



I have a tablet here that does that with a few apps,
and one of them actually does it within itself during certain parts of 
the program.


And I can testify that it's annoying as hell. For those curious, it was 
a banking app: the main menu is forced/locked, but the rest, like 
account activity etc., is not.
And you can imagine how stupid it is when you have to rotate the tablet 
each time you go back to the main menu.


I find responsive and scalable design (so it looks good at multiple 
aspect ratios and multiple PPIs) a must for modern coding.


Please note I have not said orientation at all above; instead I said 
aspect ratio, as that is the key here. Any device (unless it's square) 
has only two aspects.
There really is no up or down. Again, this is from experience with my 
tablet: it is rectangular, and when I pick it up, I pick it up, and 
whichever edge faces me becomes down.
And I prefer a wide aspect ratio normally, but for parts with listings I 
prefer a narrow aspect ratio.


My suggestion is that a web page or web app signal to the browser its 
preferred aspect ratio (and resolution) by using existing CSS 
viewport features,
but the browser is under no obligation to enforce anything.

If a rotation lock is really that desired, then the browser MUST provide 
a user-toggleable option that is off by default and is named something 
along the lines of: Allow pages/apps to disable rotation.
But at the same time a similar option would also be needed, called: 
Always lock pages/apps to horizontal (or vertical) orientation.


Now I have not looked at many tablets and phones, and certainly not 
their option screens, so I have no idea if some or several of them 
already have one of these options.


My advice is that if a page or app is aspect-limited, simply keep it 
aspect-limited (use the current CSS features to help inform the browser 
about that).
Let the user rotate the screen to whatever works best for them. For all 
one might know, their device might be huge and have a very high PPI; you 
can never know.
There are people who prefer to have a monitor rotated 90 degrees, or who 
put two browser windows side by side.
And as has been said, certain devices may have orientation detection 
turned off, or the device may not even have that feature at all.


Myself, I think that ideally page rotation locking should be a user 
choice, put in the browser context menu so the user can just click 
and select whether to lock the rotation (for that page).
Also, if a page really looks better rotated 90 degrees, then the user 
will quickly figure that out anyway, by *gasp* rotating their display.
And by not allowing web pages/apps to force the orientation, we also 
encourage better design.
HTML5 + CSS + JavaScript is all about being fluid, dynamic, adaptable, 
and scalable, and failing/falling back gracefully.

It would be silly to take a step backwards from that.

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Controlling the User-Agent header from script

2014-10-13 Thread Roger Hågensen

On 2014-10-13 15:53, Anne van Kesteren wrote:

Per XMLHttpRequest User-Agent has been off limits for script. Should
we keep it that way for fetch()? Would it be harmful to allow it to be
omitted?

https://github.com/slightlyoff/ServiceWorker/issues/399

A possible attack I can think of would be a firewall situation that
uses the User-Agent header as an authentication check for certain
resources.


That's a server security issue, not a browser one; attackers would 
never use a nice browser for attacks anyway.
What is the point of background checks for security guards if the 
window is always open so anyone can get in? ;)


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Controlling the User-Agent header from script

2014-10-13 Thread Roger Hågensen

On 2014-10-13 16:16, Nils Dagsson Moskopp wrote:

Anne van Kesteren ann...@annevk.nl writes:


Per XMLHttpRequest User-Agent has been off limits for script.

Reporting UA “Mozilla/4.0 (MSIE 6.0';DROP TABLE browsers;--u{!=})”
broke hilariously many sites when I did have set it as my default UA
string, even though I think it conforms to RFC 2616, section 14.43.

Again, that's a server security issue, not a browser one; attackers 
would never use a nice browser for attacks anyway.
What is the point of background checks for security guards if the 
window is always open so anyone can get in? ;)


Also, a script being able to set a custom XMLHttpRequest User-Agent 
would be nice.
Not necessarily replacing the whole thing, but maybe concatenating to the 
end of the browser's?
That way a webmaster would be able to see that the request is from 
script Blah v0.9 when it really should be Blah v1.0, for example.
I always make sure that any software I make uses a custom User-Agent; 
the same goes for any PHP scripts and so on, ditto if I use cURL. That 
way the logs on the server will provide some insight.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Proposal: Write-only submittable form-associated controls.

2014-10-15 Thread Roger Hågensen

On 2014-10-15 18:10, Tab Atkins Jr. wrote:

On Wed, Oct 15, 2014 at 8:59 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:

For the XSS attacker, couldn't they just use 
`theInput.removeAttribute("writeonly"); alert(theInput.value);`?

Or is this some kind of new un-removable attribute?

Doesn't matter if it is or not - the attacker can still always just
remove the input and put a fresh one in.

Nothing in-band will work, because the attacker can replace arbitrary
amounts of the page if they're loaded as an in-page script.  It's
gotta be *temporally* isolated - either something out-of-band like a
response header, or something that has no effect by the time scripts
run, like a meta that is only read during initial parsing.

~TJ



There are also legitimate needs for being able to edit the password 
field from a script.
I have a custom login system (currently in public use) that 
takes the password and applies an HMAC to it (plus a salt and some 
time-limited info).
This allows logging in without having to send the password in the clear.
It's not as secure as HTTPS, but it's better than plaintext.


A writeonly password field would have to be opt-in only, or my code 
would break.
And I'm not the only one: SpiderOak.com also uses this method (they use 
bcrypt on the password to ensure that SpiderOak has zero knowledge).


Any limitations on form manipulation should be based on same-origin 
restrictions instead, such that only a script with the same origin as 
the HTML containing the form can read/write/manipulate the form.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



[whatwg] Passwords

2014-10-15 Thread Roger Hågensen
Was Re: [whatwg] Proposal: Write-only submittable form-associated 
controls.


On 2014-10-16 01:31, Eduardo' Vela Nava wrote:

If we have a password manager and are gonna ask authors to modify their
site, we should just use it to transfer real credentials, not passwords..
Passwords need to die anyway.


And use what instead? At some point a password or passphrase (i.e. a 
sentence) is needed.

Password managers need a password to lock their vault as well.

Passwords/phrases are free; all other methods require something with a 
cost.
Biometrics requires scanners, and good ones (that can't be fooled by 
breathing on a printed-out fingerprint) are expensive.

There are USB sticks, and smart cards (which require a card reader).
Audio requires a microphone (and can be heard by others): my voice is my 
passport, verify me.
You could use an app, but that means you need a smartphone (which I 
don't have and do not plan to get any time soon; no need for one).
There is SMS, but a phpBB-based forum site isn't going to shell out cash 
for SMS-based login or similar.
Biometrics has other issues: your voice may change (it changes 
throughout the day), your fingerprints change based on moisture and you 
can damage them, and there are diseases and medicines that can affect 
them. The retina may change as you get older; even your DNA may get 
damaged over time.


Also, credentials (certificates) are not free if you want your name in 
them (you can get free email/identity ones from StartSSL.com and a few 
other places, but those are tied to your email only).
Installing certificates is not always easy either, and then there are 
the yearly or so renewals, and you can't throw away old certs or you will 
be unable to decode encrypted emails you have archived.


A regular user will feel that all this is too much noise to deal with.
They could use something like Windows Live as a single sign-on and tie 
that to the Windows OS account, but only sites that support sign-on with 
Live can take advantage of that.
And a password (or a portable certificate store, or biometrics of some 
sort) is still needed for the Windows OS on that machine anyway.


I mentioned StartSSL above; the cool thing they do is hand out 
domain-only verified certificates, so any website can have free HTTPS. 
Why the heck this isn't more widespread than it is I don't understand; 
each time I see a login to a site or forum that is HTTP only, I always 
ponder why the heck they aren't using HTTPS to secure their login. But I 
digress.


Single-word passwords need to go away: if an attacker finds out/guesses 
one for one site, chances are the same password is used on multiple sites 
as-is or with minor variations. Passphrases are the solution to some of 
this problem, as they make dictionary attacks much more expensive. 
There are still sites that enforce an 8-character password limit, which 
is insane; people should be allowed to enter any password they are able 
to enter on their keyboard, be it one character or long sentences, with 
or without numbers or odd characters. The more restrictions there are on 
the password input, the easier it is to guess/crack. The only 
restriction that does no harm would be to ask for passphrases instead.


Also, HTTP logins with plaintext transmission of passwords/passphrases 
need to go away, and are a pet peeve of mine; I detest Basic 
HTTP Authentication, which is plaintext.
Hashing the password (or passphrase) in the client is the right way to 
go, but currently JavaScript is needed to make that possible.
If a password field could have a hash attribute, that would be progress 
in the right direction: <input type=password hash=bcrypt> or 
something similar, perhaps with a comma to separate method and number of 
rounds; alternatively just <input type=password hash>, using a 
browser default instead (in this case the server side needs to support 
multiple methods of hashing, and the hashed password needs a prefix to 
indicate method, salt and rounds, if any).


There is some new crypto stuff, but again that needs JavaScript to be 
utilized; having a password be hashable by the browser without the need 
for any scripts would be the best of both worlds in my opinion.
For example, if a hostile script had access to the form and the password 
field, the password would have been hashed before it was put in the 
password field anyway; sure, they might be able to snoop the hash, but 
the hash could (or rather, should) be using a unique salt and would be 
useless to re-use.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Passwords

2014-10-18 Thread Roger Hågensen

On 2014-10-17 17:09, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net writes:


Also http logins with plaintext transmission of passwords/passphrases
need to go away, and is a pet peeve of mine, I detest Basic
HTTP-Authentication which is plaintext.

Note that Basic Auth + HTTPS provides reliable transport security.


This presumes that a site has a certificate, and despite someone like 
StartSSL giving them out free, sites and forums still do not use HTTPS.

Also, Basic Auth is plaintext, so the server is not Zero Knowledge.




Hashing the password (or passphrase) in the client is the right way to
go, but currently javascript is needed to make that possible.

Do you know about HTTP digest authentication?
http://en.wikipedia.org/wiki/Digest_access_authentication

Yes, and it's why I said Basic HTTP Authentication; Digest is the 
better method of HTTP Authentication.
And I know it very well: it's very underdeveloped, there is no 
logout possible (you stay logged in until the browser session is ended 
by the user),
styling the login is not possible, and it's not as easy to implement 
with AJAX methods.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-07 Thread Roger Hågensen

On 2014-11-03 17:42, Anne van Kesteren wrote:

https://wiki.whatwg.org/wiki/Sharing/API has a sketch for what a very
minimal Sharing API could look like.



I have often pondered the same when seeing a Share button or icon on a 
webpage.
Some solutions have a single icon that pops up a menu, while other sites 
have a row of the most common social sites.


In retrospect, however, I realize that any Share API would be no different 
from how people currently share or bookmark things.
A worthy goal would be to help developers de-clutter websites from all 
those share icons we see today, so if this could be steered towards that 
it would be great.


There are two ways to do this that I'd recommend.

A link element in the header, maybe call it <link rel="share" 
href="http://example.com/article/12345/">,
or <link rel="share"> if the current url (or the canonical url link, if 
present) should be used, although I guess in a way rel=share will 
probably replace the need to use rel=canonical in the long run.


Then browser devs can simply utilize that info in their own Share UI 
(which presumably is tied into the accounts set up on the device/machine 
in some way).
A browser UI could provide a nice looking and device friendly way to 
add/edit/remove social services that have sharing capabilities (Google+, 
Facebook, Twitter, Skype, etc.)


If the share link is missing, this does not mean the page cannot be 
shared; in that case it should be treated as a page is normally treated 
today. The share link is just a browser hint as to the ideal link to use 
when sharing this page.


Also note that using the link element allows the possibility of using 
hreflang to present multiple share links (one international, aka 
English, and one in the language of the page), or using media to provide 
multiple share links for different types of devices.


There already is a <link rel="search">, so a rel=share just makes sense 
IMO.
It certainly will get rid of the clutter of share icons and buttons on 
websites (over time); those can be a pain to click on using touch 
devices (without zooming first). A browser Share UI could easily be 
hidden on the edge and make use of swipe from the left or right edge 
(or top/bottom etc.), or use gestures to open a Share UI.
Some of those share icons may fail to list the social network the user 
prefers (like email, for example), but if that is all set up in the 
browser then the user can share it at one (or multiple) social services 
just the way they like it.


Also note that a title can be applied to such a share link as well, thus 
providing a suggested title the browser can choose (or not) to use when 
sharing it.
Any icon/logo is either taken from the icon/logo of the current page or 
from the href-linked page (and whatever icon/logo that may have).


Existing services like AddThis or ShareThis (two of the more popular 
ones, I believe?) should be able to access the <link rel="share"> params 
via JavaScript (to access hreflang and media and title), so they will 
still remain competitive solutions; I also believe there are browser 
plugins for these two services, and the browser can/could provide 
the rel=share link to those types of plugins.


Also note that there can be multiple <link rel="share"> elements, and 
that if allowed when specced, rel=share could be made global; that 
way the links to be shared could be inline in the document, thus part of 
the content and usable by the user, which is always ideal.
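To make the idea concrete, here is a hedged sketch of the selection logic a 
browser or plugin might apply. The link elements are represented as plain 
objects (no DOM needed, so it runs standalone), and the rules (hreflang 
match first, otherwise the first share link, otherwise the page URL) are my 
own assumptions, not anything specced:

```javascript
// Hypothetical sketch: pick the share URL a browser Share UI might use.
// Link elements are plain objects here so the example runs standalone;
// the selection rules are assumptions, not anything specced.
function pickShareLink(links, { lang, fallbackUrl } = {}) {
  const shares = links.filter(l => l.rel === 'share');
  if (shares.length === 0) return fallbackUrl;   // no hint: share the page URL
  const match = lang ? shares.find(l => l.hreflang === lang) : null;
  return (match || shares[0]).href;              // else the first share link wins
}

const links = [
  { rel: 'canonical', href: 'http://example.com/article/12345/' },
  { rel: 'share', href: 'http://example.com/article/12345/', hreflang: 'en' },
  { rel: 'share', href: 'http://example.com/no/artikkel/12345/', hreflang: 'no' },
];
console.log(pickShareLink(links, { lang: 'no' })); // the Norwegian share link
```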



Anyway, I'll shut up now before I veer way off topic here.

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-09 Thread Roger Hågensen

On 2014-11-07 20:01, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net writes:


A link element in the header, maybe call it link rel=share
href=http://example.com/article/12345/; /
or link rel=share / if the current url (or the canonical url link if
present) should be used, although I guess in a way rel=share will
probably replace the need to use rel=canonical in the long run.

I do not understand. Why should one invent a rel value (“share”) that
conveys the same semantics as an already existing one (“canonical”) ?



Three reasons:
1. HTTP (301) redirects are advised over rel=canonical; Matt Cutts at 
Google has posted about that in the past as far as I recall. And it 
makes sense, as the bots don't need to parse the page to get the 
canonical url.
2. Bookmarking should be of the current page the user has displayed; if 
they bookmark the page and a different url is bookmarked, I'd consider 
that undesired behaviour (in the eyes of the user) unless a UI informs 
them or gives them an option.
3. rel=share has already been invented, though I'd hardly call 5 
letters an invention.


rel=share also shows clear intent.

A bookmark may be user-specific or private to that user.
A canonical (or HTTP 301) indicates to the browser or bot that the page 
is over there and not here.

A share is intended to be, well, shared.

It semantically makes sense, at least to me.
rel=bookmark, rel=canonical and rel=share are all hints.

A search engine, for example, if it sees a rel=share link that is 
different from, say, the canonical url (either via HTTP 301, the current 
page, or rel=canonical) should probably ignore it, as such a share link 
may have a share-tracking url with a reference ID in it.


Also, rel=share is in the wild; I had a url to a list of rel= 
occurrences on the web, but ironically I did not bookmark it/note it 
down. While it was low on the list, it was there.


Anyway, this is one place where the rel=share idea is mentioned. 
https://wiki.mozilla.org/Mobile/Archive/Sharing


There is also a rel=share-information floating around out there, but 
the search engines aren't making it easy for me to search for this stuff 
(I'm probably using the wrong syntax/markup). But I found it referenced 
here: https://code.google.com/p/huddle-apis/wiki/AuditTrail


There is a rel=share example use on page 5 of 
https://tools.ietf.org/id/draft-jones-appsawg-webfinger-00.txt

Used exactly as how I described it.

Here is an example of rel=share-link being used: 
https://github.com/engineyard/chat/blob/master/views/index.jade


And rel=share is used in an example here 
https://code.google.com/p/huddle-apis/wiki/Folder#Response
And stated specifically here 
https://code.google.com/p/huddle-apis/wiki/Folder#Sharing_a_folder


As I see it, share is not the same as bookmark or canonical.
There may be some overlap between rel=share and a normal link, though 
(if rel=share is used outside the html head).



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-11 Thread Roger Hågensen

On 2014-11-10 10:35, Anne van Kesteren wrote:

On Fri, Nov 7, 2014 at 7:38 PM, Roger Hågensen resca...@emsai.net wrote:

A worthy goal would be to help developers de-clutter websites from all those
share icons we see today, so if this could be steered towards that it would
be great.

That is what the proposal is doing. The share button would be part of
the browser UI. Perhaps you misunderstood it?

(However, it is unclear whether all domains are interested in such a migration.)




I must have misunderstood; I saw window.close() mentioned and I 
thought this was a JavaScript API suggestion for yet another way of 
sharing things.


I looked a bit closer now and wonder if this is related in any way to 
https://wiki.mozilla.org/Mobile/Archive/Sharing ?


Do you plan to go for an OpenShare route (modeled after OpenSearch) or 
something simpler, like I mentioned earlier?


If all a web author needs to do is slap a rel=share on an <a> tag or a 
<link> tag in the head and then have it automatically appear/be listed 
in a browser Share UI for that page, then that would be ideal in my 
opinion.
Something like an OpenShare could hopefully build further on this, but 
for wide adoption, the simpler the better.
Also, OpenSearch is for searching an entire site or parts of it, while 
an OpenShare would be just for one page or link, so that would be 
overkill, and it would cause another HTTP request to occur, which is a 
waste IMO.


I'm also curious whether any browsers actually do something if multiple 
rel=bookmark links exist in a page (head and body): are they taken into 
account in the Bookmark UI at all? I certainly cannot recall ever seeing 
this happen.


A quick test in Chrome, Firefox, Opera and IE here, with the following 
in <head>:

<link href="http://example.com/test3" rel="bookmark" title="Test 3">
<link href="http://example.com/test4" rel="bookmark" title="Test 4">

And the following in <body>:
<a href="http://example.com/test1" rel="bookmark" title="Test 1">Click 
Here1</a>
<a href="http://example.com/test2" rel="bookmark" title="Test 2">Click 
Here2</a>

<a href="http://example.com/test0" title="Test 0">Click Here0</a>

The result is the same: if I use the browser UI bookmark, then the head 
links are ignored, and if I right click the body <a> links then I'm not 
given a bookmark choice at all, just copy the url or save it.


If bookmark is so ignored, perhaps it would be best to take bookmark 
(and to some extent canonical) and roll that into a rel=share standard 
which is defined/tied to this activities/intents proposal?


Note! Firefox allows right clicking any URL and choosing to bookmark it, 
and IE does the same but it's called Favorites there instead; in either 
case I assume that rel=bookmark is ignored, and the title is also 
ignored, as the test0 link (which does not specify rel=bookmark) is 
treated identically to them. Opera and Chrome do not seem to allow right 
clicking a URL and bookmarking it. As I do not have Safari I have no 
idea what it does in these cases.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-12 Thread Roger Hågensen

On 2014-11-11 23:31, Markus Lanthaler wrote:

On 7 Nov 2014 at 20:01, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net writes:


A link element in the header, maybe call it link rel=share
href=http://example.com/article/12345/; /
or link rel=share / if the current url (or the canonical url link if
present) should be used, although I guess in a way rel=share will
probably replace the need to use rel=canonical in the long run.

I do not understand. Why should one invent a rel value (“share”) that
conveys the same semantics as an already existing one (“canonical”) ?

I also have to admit that I struggle to see what value adding a rel=share 
link to a page adds!? If you look at how people share links (they copy and paste what's 
shown in the browser's address bar) then I wonder why anything at all is needed on the 
page to be shared... The story is obviously different for Share Web APIs or share 
endpoints as they are called in https://wiki.whatwg.org/wiki/Sharing/API  (Facebook, 
Reddit, bitly etc.).


Then a rel=share could be used to provide hints for those (in the form 
of an OpenShare standard similar to OpenSearch?).
But good point nonetheless; rel=bookmark is very underused as well, 
probably because its original intent was superseded by people 
bookmarking http://example.com/somepage#section
I just checked WHATWG HTML5 and rel=bookmark isn't there at all (I 
didn't check W3C HTML5 though).



The most interesting question however is why (desktop) browsers haven't added a 
share button till now..


Wish I knew. As I mentioned in another post, just bookmarking a url is 
still not fully supported. (Right click a URL in Opera and Chrome and I 
see no Bookmark option there; Firefox and IE do, however.)



Anyway, my point was (probably muddled by me) that a Sharing API may 
just encompass the whole sharing path, which, as you said above, starts 
with people copying/dragging/right clicking an address bar or URL.
Once that URL is captured (and any possible hints), it's passed to 
the Share API, and I feel it is important that the initial user step is 
also covered (as that is not documented at all currently, right?).


Which brings another issue, how far is too far? Should the naming be 
standardized as well?


Right click a URL on a page and what do you see?
Chrome shows "Copy link address"
Firefox shows "Bookmark This Link" and "Copy Link Location"
IE shows "Add to favorites..." and "Copy shortcut"
Opera shows "Copy link address"

Right click a page and what do you see?
Chrome shows nothing
Firefox shows nothing
IE shows "Add to favorites..." and "Create shortcut"
Opera shows "Add to Speed Dial", "Add to bookmarks" and "Copy address"

Right click an address field and what do you see?
Chrome shows "Copy"
Firefox shows "Copy"
IE shows "Copy"
Opera shows "Copy"

Very confusing and inconsistent.

I'd like to see the following:
Right click a URL on a page and see "Copy Link" and "Bookmark/Share Link..."
Right click a page and see "Copy Link" and "Bookmark/Share Link..."
Right click an address field and see "Copy Link" and "Bookmark/Share Link..."
For touch screens/devices, holding the finger down for x amount of time 
would equal a right click.


"Copy Link" will simply copy to the clipboard. Drag and drop behaves 
the same as "Copy Link".

"Bookmark/Share Link..." will present a Share API.

Opera has a neat thing when you bookmark a page: you are given an option 
of either a normal bookmark or a Speed Dial bookmark (tiny icon), and it 
also lets you choose the look of your bookmark (site logo, page 
thumbnail or text); by the looks of it, it would be very easy to add 
other forms of bookmarking or sharing to that UI (Facebook and Twitter 
etc.).


To me there is no difference between bookmarking a link and sharing a 
link; a bookmark is simply you sharing with yourself.


I also wonder whether a standardized icon/symbol should exist for a 
Bookmark/Share button in the surrounding UI of a browser.
Opera has a heart symbol; Firefox has a star and a clipboard/list 
thingy; IE has a star; and Chrome has a star.


A star has been used for Favorite/Bookmark for quite a while.
So what about Bookmark/Share? Does a book with a star make sense, or 
is that too cluttered? Or is Opera on trend with their heart?



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-13 Thread Roger Hågensen

On 2014-11-13 18:11, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net writes:


I just checked WHATWG HTML5 and rel=bookmark isn't there at all (I
didn't check W3C HTML5 though).

The section on the bookmark link type in WHATWG HTML can be found here:
https://html.spec.whatwg.org/multipage/semantics.html#link-type-bookmark

The section on the bookmark link type in W3C HTML can be found here:
http://www.w3.org/TR/html5/links.html#link-type-bookmark



I have no explanation for missing its entry in the WHATWG spec; I 
could have sworn I searched for bookmark.
As for the W3C, I suspect I searched the wrong document (wouldn't be the 
first time I've done that).



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-13 Thread Roger Hågensen

On 2014-11-13 18:19, Nils Dagsson Moskopp wrote:
AFAIK, all of these interface details lie outside the scope of the 
HTML specification (and rightly so, IMHO). If you need a standard 
symbol for bookmarks I suggest to use U+1F516 BOOKMARK, which looks 
like this „🔖“. 


Then don't spec it, but advise or suggest it. Even the bookmark example 
at 
https://html.spec.whatwg.org/multipage/semantics.html#link-type-bookmark 
says "A user agent could determine which permalink applies to which part 
of the spec",
thereby acting as an advisory hint/best-practice suggestion (note the 
use of "could").


I also tested the example code (with <!DOCTYPE html>, obviously) and the 
browser behaviour is still the same: rel=bookmark is simply ignored. 
In that case, shouldn't rel=bookmark be removed from the WHATWG HTML 
spec to reflect actual use?




--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-13 20:20, Evan Stade wrote:

Currently this new behavior is available behind a flag. We will soon be
inverting the flag, so you have to opt into respecting autocomplete=off.



I don't like that browsers ignore HTML functionality hints like that.

I have one real live use case that would be affected by this: 
http://player.gridstream.org/request/
This radio song request form uses autocomplete=off for the music request 
field because a listener would probably not request the same bunch of 
songs over and over.


Some might say that a request form should use a different input type 
like... well, what? It's not a search input, is it? There is no 
type=request, is there?
In fact the request field is a generic text field that allows a short 
message if needed.


PS! Please be aware that the form is an actual live form, so if you do 
enter and submit something, be aware that there might be a live DJ 
actually seeing your request at that point.



Why not treat autocomplete=off as a default hint, so if it's off then 
it's off and if it's on then it's on, but allow a user to right-click 
(to bring up the context menu for the input field) and toggle 
autocomplete for that field?


I checked with Chrome, IE, Opera and Firefox; the context menu does not 
show a choice to toggle/change the autocomplete behaviour at all (for 
type=text).


Also, the reason the name field has autocomplete=off is simple: if 
somebody uses a public terminal, then not having the name remembered is 
nice.

Instead, HTML5's sessionStorage is used to remember the name.
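The session-only name memory described above can be sketched like this. 
sessionStorage is a browser API, so a tiny stand-in object keeps the 
example runnable standalone; in a real page you would use 
window.sessionStorage directly, and the key name is my own invention:

```javascript
// Sketch of session-only name memory. sessionStorage is a browser API, so a
// minimal stand-in is used here; in a page, use window.sessionStorage itself.
const sessionStorage = {
  store: new Map(),
  getItem(key) { return this.store.has(key) ? this.store.get(key) : null; },
  setItem(key, value) { this.store.set(key, String(value)); },
};

// Remember the name for this browsing session only; nothing is written to
// disk, so a public terminal forgets it when the session ends.
function rememberName(name) {
  sessionStorage.setItem('requestName', name);
}
function recallName() {
  return sessionStorage.getItem('requestName') || '';
}

rememberName('Roger');
console.log(recallName()); // prints: Roger
```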


Perhaps that could be a solution: if autocomplete=off is to be 
ignored by default, then at least let the text cache be per-session (and 
only permanently remember text if autocomplete=on?).



Also do note that the type of field in this case is type=text.

Also, banks generally prefer to have autocomplete=off for credit card 
numbers, names, addresses etc. for security reasons. And that is now to 
be ignored?



Also note that in Norway this month a lot of banks are rolling out 
BankID 2.0, which does not use Java; instead they use HTML5 tech.
And even today's solution (like in my bank): login is initiated by 
entering my social ID number, which goes into an input field with 
autocomplete=off.
My own computer I have full control over, but others may not (a 
workplace computer: they walk off for a coffee, and someone could walk 
by, type the first digit 0-9, and see whatever social ID numbers had 
been entered).




(or did I misread what you meant with autofill here?)


--
 Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-14 02:49, Glenn Maynard wrote:

On Thu, Nov 13, 2014 at 7:17 PM, Roger Hågensen resca...@emsai.net wrote:

I have one real live use case that would be affected by this.

http://player.gridstream.org/request/


Unfortunately, even if a couple pages have a legitimate use for a feature,
when countless thousands of pages abuse it, the feature needs to go.  The
damage to people's day-to-day experience outweighs any benefits by orders
of magnitude.


Punishing those who do it right because of the stupidity of the many; 
can't say I'm too thrilled about that.





This radio song request uses autocomplete=off for the music request
because a listener would probably not request the same bunch of songs over
and over.


(The use case doesn't really matter to me--the abuse is too widespread--but
this is wrong.  If I request a song today, requesting it again tomorrow or
the next day is perfectly natural, especially if my request was never
played.)



No, it's inherently correct for the use case, as listeners tend to enter 
things like:


Could you play Gun's'Rose?
Love you show, more rock please?
Where are you guys sending from?


  Also, banks generally prefer to have autocomplete=off for credit card
numbers, names, addresses etc. for security reasons. And that is now to be
ignored?


Yes, absolutely.  My bank's preference is irrelevant.  It's my browser, not
my bank's.  This is *exactly* the sort of misuse of this feature which
makes it need to be removed.


Then provide a way for the user (aka me and you) to toggle autocomplete 
for individual fields, then.

That way I could toggle autocomplete off for the request field.

You wouldn't take away somebody's wheelchair without at least providing 
them a chair, would you? (Yeah, stupid metaphor, I know; it sounded 
better in my head, really.)






Also the reason the name field also has autocomplete=off is simple, if
somebody uses a public terminal then not having the name remembered is nice.


This is another perfect example of the confused misuse of this feature.
You don't disable autocompletion because some people are on public
terminals--by that logic, every form everywhere would always disable
autocomplete.  This must be addressed on the terminal itself, in a
consistent way, not by every site individually.  (Public terminals need to
wipe the entire profile when a user leaves, since you also need cache,
browser history, cookies, etc.)



Point taken.


What about https://wiki.whatwg.org/wiki/Autocompletetype ?
Couldn't a type=chat be added then?

That live example above was just one.
What about web chat clients that use input type text? Do you really want 
autocomplete forced on always for that?
If a user can't toggle autocomplete on/off per field as they want, 
then a type must exist where autocomplete is off by default.

Is that too much to ask for? (As both a user and a developer.)

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-14 02:49, Glenn Maynard wrote:
Unfortunately, even if a couple pages have a legitimate use for a 
feature, when countless thousands of pages abuse it, the feature needs 
to go. The damage to people's day-to-day experience outweighs any 
benefits by orders of magnitude.

  Also, banks generally prefer to have autocomplete=off for credit card
numbers, names, addresses etc. for security reasons. And that is now to be
ignored?

Yes, absolutely.  My bank's preference is irrelevant.  It's my browser, not
my bank's.  This is *exactly* the sort of misuse of this feature which
makes it need to be removed.



By default ignoring autocomplete=off (unless the user crawls into the 
browser settings, possibly under advanced settings somewhere?)

means those who misuse it today will continue to do so.

Take the following example (tested only in Firefox and Chrome): 
http://jsfiddle.net/gejm3jn1/

Is that what you want them to start doing?
If a bank or security site wishes to have input fields without 
autocomplete, they can just use <textarea>.

Are you going to enforce autocomplete=on for <textarea> now?

Why not improve the way autocomplete works so there is an incentive to 
use it the right way? (Sorry, I don't have any clever suggestions on 
that front.)



My only suggestion now is:
Default to autocomplete=off working just as today.
Provide a setting under Privacy settings in the browser (global); there 
are also per-site privacy settings possible (site-specific).
Then add a context menu to all input fields where autocomplete can be 
enabled/disabled (spellcheck already does this, for example, in most 
browsers).
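That precedence can be sketched as a tiny resolution function (all names 
invented for illustration, nothing specced): the markup's autocomplete 
attribute is the default, a per-site privacy setting can override it, and a 
per-field user toggle from the context menu wins over both:

```javascript
// Hypothetical sketch of the suggested precedence for autocomplete:
// per-field user toggle > per-site privacy setting > the page's attribute.
// All names here are invented for illustration.
function effectiveAutocomplete({ pageAttr = 'on', sitePref = null, fieldPref = null } = {}) {
  if (fieldPref !== null) return fieldPref; // user toggled this specific field
  if (sitePref !== null) return sitePref;   // per-site privacy setting
  return pageAttr;                          // honour the markup by default
}

console.log(effectiveAutocomplete({ pageAttr: 'off' }));                  // prints: off
console.log(effectiveAutocomplete({ pageAttr: 'off', fieldPref: 'on' })); // prints: on
```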





--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-14 03:57, Ben Maurer wrote:

If the site sets autocomplete=off could you disable the saving of new 
suggestions? One of the main use cases for turning off autocomplete is to 
disable the saving of sensitive or irrelevant information. If the user is 
filling in an address or cc num it's likely they have the opportunity to save 
that on other sites.


Looking at https://wiki.whatwg.org/wiki/Autocompletetype
I see credit cards have their own subtype; this would allow some 
granularity (possibly tied to the security/privacy preferences the 
user set in the browser).


Then there is this: http://blog.alexmaccaw.com/requestautocomplete
(Hmm, that name/email in the example image there looks very familiar... :P)

Now that is a very good incentive to actually use autocomplete; it saves 
me from having to start typing into every field to trigger the 
autocomplete popup list.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/


