Re: [whatwg] Popular Background Geolocation question on StackOverflow

2018-03-31 Thread Roger Hågensen

On 2018-03-25 09:08, Richard Maher wrote:
Splitting things into a whole bunch of APIs and then trying to get permissions working for all of them 
is going to be a huge pain IMO.


There is no "going to be" Roger. You are not embarking on a green-field 
development or brain-storming "The Web". We are where we are. I don't 
care if people want to change things but it is DEFINITELY out-of-scope 
for the Background Geolocation implementation.


I was unaware you were the moderator of the Background Geolocation 
specs or similar. In that case my apologies; I had no intention of 
pushing things into the specs that do not belong there (it's bad enough 
in politics).


But I don't understand what you mean by "We are where we are" or why you 
are against brainstorming. I also had to look up the term "greenfield", 
and I am unsure if you mean an untapped market, new software projects, 
undeveloped land, or an area in Manchester (yes, I'm being an asshat on 
that last one).


I'm also unsure if you are being sarcastic or are actually trying to 
dictate what I should or should not do. If you are upset that things 
veered off topic then I'm sorry about that.


To me it looks like you want a commercial backend solution for Uber or 
Dominos or similar (not end users). Surely these can run apps on tablets 
or smartphones? Browsers are pretty good, but for operational software 
you want reliability, and a consumer browser, despite the amazing efforts 
of the developers so far, is not that stable yet. If "always on in the 
background" is key to a business, then an OS-run background service is 
exactly what is needed.


If you need certain functionality implemented and your project is 
waiting on this, then I suggest you make an OS-native app instead, before 
your competitors sail past you.


Looking through the WHATWG history I do not like what I see, so I'm going 
to stop communicating regarding this topic, and most likely future 
topics by you; you come across as excessively abrasive and I'd rather 
not have to deal with that. You've said in the past that you felt you 
had been stifled or censored by others; well, now you have succeeded in 
stifling me. I'm having difficulties communicating with you, and I can't 
even imagine trying to work on code or standards implementations with you.



--
Unless specified otherwise, anything I write publicly is considered 
Public Domain (CC0). My opinions are my own unless specified otherwise.

Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] Popular Background Geolocation question on StackOverflow

2018-03-24 Thread Roger Hågensen

On 2018-03-25 07:17, Richard Maher wrote:
  If that makes sense for your App the background-fetch already caters 
for posting location updates to a fleet manager.

...
  If delayed Batch Processing is acceptable to your site and you don't 
want geofences then good luck.

...

  TravelManager register() and unRegister()


I had to look up "fleet manager" and "travel manager"; these are business 
terms. I did not consider fleet management, nor did I consider targeted 
travel advertising/upselling (travel manager?).


I was looking at this from a non-commercial user standpoint as well as a 
scientific one. Hence my mention of "crowd sourcing" weather 
data and geolocation from smart devices via a webapp, a health and 
fitness training app (pulse, distance, altitude, etc.), or 
health/wellbeing monitoring (baby monitoring, pet monitoring).


But regardless of the intended use or purpose: splitting things into a 
whole bunch of APIs and then trying to get permissions working for all 
of them is going to be a huge pain IMO.


A general-purpose permission system and a general-purpose sensor API 
(that can gather GPS data and various other things, like ambient 
temperature or pulse) would be a better long-term goal. It would also be 
less likely to screw up security this way.
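As a sketch of the kind of unified feature detection this could allow: the W3C Generic Sensor API drafts take roughly this shape, exposing sensors as constructors. The sensor names below follow those drafts, and which of them exist varies by browser; the helper function itself is just an illustration.

```javascript
// Sketch of unified sensor feature detection, assuming constructors in the
// style of the W3C Generic Sensor API drafts (Accelerometer, Gyroscope,
// AmbientLightSensor, ...). Availability varies by browser.
function detectSensors(names, scope) {
  const available = {};
  for (const name of names) {
    // A supported sensor shows up as a constructor function on the scope.
    available[name] = typeof scope[name] === "function";
  }
  return available;
}

// In a page: detectSensors(["Accelerometer", "AmbientLightSensor"], window)
```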




Re: [whatwg] Popular Background Geolocation question on StackOverflow

2018-03-24 Thread Roger Hågensen

On 2018-03-25 07:17, Richard Maher wrote:


This would allow the browser to record locations changes with
reasonably
accuracy *without* waking up service workers.


  If you don't like how ServiceWorkers cater for the Fetch API than 
please take it offline with Jake and W3C.


Um. You misquoted; that wasn't me who said that.




Re: [whatwg] Popular Background Geolocation question on StackOverflow

2018-03-24 Thread Roger Hågensen

On 2018-03-24 22:32, Andy Valencia wrote:

There are lots of apps using long-polling which would also like
to have some explicit (standards based) answers to their needs
to run when not the current tab--messaging and telemetry apps,
for instance.

And here we are thinking about a hand crafted solution for GPS backgrounding.

We're all well aware of the behaviors which make browsers adopt such
defensive measures.  Are we looking at enough use-cases to think about some
sort of general authorization for background resource consumption, rather than
continuing the point solution approach?


Good point. And my example of an environment API was flawed, as I just 
realized that a heart-rate (pulse) sensor might be highly beneficial to 
tie into time, GPS, temperature, and other sensor data.
All this stuff could be lumped into a sensor API (or is there an existing 
one?).


Another related example would be a loudness tracker, which would use the 
microphone to record and calculate loudness.
One particular use could be a baby monitor, letting you see how 
much noise/crying the baby does, maybe adding in temperature or moisture 
sensor data if available. Or replace the baby with a dog or cat, to detect 
barks or meowing while the pet owner is out of the house.


A sensor API and a background permission API should probably be separate, 
as I'll assume there are other, non-sensor tasks a user might want to run 
in the background.
Perhaps a server uptime app; having the app reliably prod a server would 
be such a use case.


Obviously a dedicated native app could do this much better, but it's a 
lot quicker to throw something together as a web app. And there is 
little to no need to roll out updates unlike native apps.


I guess this is a chicken and an egg situation. You won't see use cases 
unless it's possible to actually implement them. It's not as much a 
"Should this be possible?" question as it is a "Why isn't this 
possible?" question.


I've kinda derailed this topic and we've got a three-headed hydra now:
- Geolocation background tracking.
- An environment/general sensor API with background logging.
- A background task permission API for sensors or other general purposes.

Regardless of which use cases or APIs are tied into this, one thing is 
clear: the user must either implicitly or explicitly initiate or agree to 
background processing. Some form of UI would be needed to give oversight 
over background browser tasks too.
While I did angle towards smartphones here, there is no reason why a 
desktop can't also run background webapps, and there battery capacity is 
(usually) a non-issue.


Sorry for muddying the waters further on this.



Re: [whatwg] Popular Background Geolocation question on StackOverflow

2018-03-24 Thread Roger Hågensen

On 2018-03-24 21:15, Philipp Serafin wrote:

If this problem is specific to the "track a route" use-case, and the
use-case is sufficiently widespread, would a dedicated "route recording"
API make sense?

E.g., a web page could ask the browser to continously record location
changes and - at some time at the browser's discretion - push a list of
recorded changes to the page.


Hmm! It might.
It certainly makes sense to cache location coords, since the device may 
not have an internet connection during the entire time.


In practice it would only need an internet connection at the time of data 
submission to the webapp's site; the rest of the time it could be 
"offline".



This would allow the browser to record locations changes with reasonably
accuracy *without* waking up service workers.


This part I'm unsure of. Should it be a webapp feature or a client 
(browser) feature with an API for webapps?



It would also provide some hooks for privacy controls: A browser could show
a status indicator whenever it's in "GPS recording" mode. It could also
notify the user when it's about to push the recorded route to the page and
possibly even show the route for confirmation.


I certainly see the charm and practicality in a webapp asking the client 
(browser) to start logging GPS coords (it must be user-initiated at some 
point though, like a button/link click), and then the same when stopping 
it.

A _start function, a _stop function, and a _get function would be all 
that is needed.


The _stop function should be self-explanatory. The _start function would 
take an argument in milliseconds (or is seconds enough granularity?), 
which specifies the interval at which the client should record the 
current GPS and other info.


The _get function of such an API would just return a JSON array of GPS 
objects, with coords, elevation, and the timestamp of the reading, with 
future expandability for including stats like pressure, moisture, and 
temperature (can't think of anything else).
For a cyclist/runner/driver/boater the coords might be useful (to get 
distance and route traveled). For a camper, woodsman, farmer, or who 
knows what else, the moisture, temperature, pressure, and elevation 
may be valuable (the GPS coords would be almost identical for each 
reading though).
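A minimal sketch of that _start/_stop/_get shape could look like the following. Everything here (the RouteRecorder name, method names, and reading format) is a hypothetical illustration of the proposal, not an existing API; a callback stands in for the real GPS/sensor hardware.

```javascript
// Hypothetical sketch of the proposed _start/_stop/_get API shape.
// Class name, method names, and reading format are illustrative
// assumptions; readSensors() stands in for real GPS/sensor hardware.
class RouteRecorder {
  constructor(readSensors) {
    this.readSensors = readSensors; // () => ({ lat, lon, elevation, ... })
    this.readings = [];
    this.timer = null;
  }
  record() {
    // One entry per interval: coords/elevation plus a timestamp.
    this.readings.push({ ...this.readSensors(), timestamp: Date.now() });
  }
  start(intervalMs) {
    // The proposed _start: interval in milliseconds.
    if (this.timer === null) {
      this.timer = setInterval(() => this.record(), intervalMs);
    }
  }
  stop() {
    clearInterval(this.timer);
    this.timer = null;
  }
  get() {
    // The proposed _get: a JSON array of the recorded readings.
    return JSON.stringify(this.readings);
  }
}
```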


I'm not sure if this would fit under a geolocation API though; perhaps 
more under an environmental API (where GPS/elevation is just one of many 
data points).


Since the user explicitly (or implicitly) presses start, there is no need 
to ask permission.
But there should be a possibility to ask for site permission and have 
the webapp autostart; this would allow running the webapp in a browser 
24/7 and having it send "live" data to a server. This could make a 
smartphone a temporary weather station (a smartphone could have an 
external "weather sensor box" connected, for example, providing the 
sensor input for this API; the browser would just see it as OS-provided 
sensor data).


Sure, a Raspberry Pi or some other IoT device with some scripting can do 
this better, but just plopping a smart device onto a large battery pack or 
mains power and leaving it overnight sending live data to a server 
could be useful. Hundreds if not thousands of these around the world 
could supplement weather research/sites with that data.



I'd suggest this as a way to detect if such an API is available:

if ("environment" in navigator) {
  /* environment API is available */
} else {
  /* environment API is NOT available */
}

It would really need to be its own thing instead of an addition to the 
geolocation API; there should be no issues with both coexisting.





Re: [whatwg] rendering for case min == max

2018-03-24 Thread Roger Hågensen

On 2018-03-19 12:49, Anne van Kesteren wrote:

On Mon, Mar 19, 2018 at 11:13 AM, Mikko Rantalainen
<mikko.rantalai...@peda.net> wrote:

The spec should specify one way or the other for this corner case.


Agreed, we're tracking this in
https://github.com/whatwg/html/issues/3520. If anyone would like to
help clarify the prose in the form of a pull request or wants to make
a strong case for Firefox's behavior, that'd be much appreciated.
Is it possible the Firefox devs assume that 0,0,0 implies an 
"unknown" progress state?
On Windows, the UI tends to show this as a full but "pulsing" progress 
bar, in a looping animation.
But I've also seen UI designs that fill up the progress bar and then 
clear it, in a looping animation.


Though one could easily "animate" this by just setting the values, so 
even if 0,0,0 were spec'd to always show nothing, one could still do the 
"unknown" behaviour.


Personally I think with 0,0,0 not only should the bar be empty, but the 
progress bar itself should not be drawn, as it's in a non-state if you 
know what I mean.

Which will probably be changed by some JavaScript moments later.

Treat 0,0,0 as the bar being there but just non-visible, maybe?

Although I'm almost tempted to say that 0,0,0 should log a warning in 
the dev console that a valid range must be set and that the value 
must be within (inclusive of) that range.
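For comparison, HTML already defines a trigger for the "unknown" state: a progress element whose value attribute is absent is indeterminate, so a page can toggle between determinate and indeterminate rendering. A small sketch (the id is an illustrative choice):

```html
<progress id="bar" max="100"></progress>
<script>
  const bar = document.getElementById("bar");
  bar.value = 50;               // determinate: half full
  bar.removeAttribute("value"); // indeterminate ("pulsing") again
</script>
```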





Re: [whatwg] Popular Background Geolocation question on StackOverflow

2018-03-24 Thread Roger Hågensen

On 2018-03-19 00:25, Richard Maher wrote:

FYI This question on StackOverflow has now had over 1000 views: -
https://stackoverflow.com/questions/44233409/background-geolocation-serviceworker-onmessage-event-order-when-web-app-regain

Please explain why nothing is happening.

It has an accepted solution.

But one key issue is that browsers throttle down or even pause inactive 
windows/background tabs.
Partly blame digital currency mining for this; blame the rest on bad 
programmers running full tilt when they don't need to, and on DDoS 
trojans.


I haven't looked this up, but is there a way to ask the user for 
permission to run as a background app without performance restrictions?

That is the only way I foresee this working across all browsers.







Re: [whatwg] META and bookmarking

2018-02-17 Thread Roger Hågensen
Add a link element in the header and use rel=canonical, which tells the 
browser that that link is the correct URL.

Whether all modern browsers actually use the canonical URL when you 
bookmark a page, I have no idea; ideally they should. File a bug report 
with the browsers if they don't.


Do note that search engines give rel=canonical a slightly different 
meaning, though.
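For reference, the hint looks like this in the page's head (the URL is a placeholder):

```html
<link rel="canonical" href="https://example.com/portal/">
```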


I'm curious though why you do not want the users to bookmark the subpage 
but the portal page instead.


Maybe rethink the design so that the portal (aka landing page) actually 
does not auto redirect. Present a clickable image (href + image) that 
the user can click or tap on. That way they can bookmark the portal page 
before clicking onwards.


Also make sure the users can click a back button on the subpage to go 
back to the portal page (call it Index or Home or Portal or something).


If you do not control these subpages yourself (they are somebody else's 
sites) then no browser devs will ever let you hijack the bookmarking of 
those. In this case you have a directory/indexing/portal site, and the 
only solution is to wait for the user to click before you send them to 
the foreign site. That way they can bookmark the portal site.


If the destination url contains a referral id of sorts then it's the 
destination site's responsibility to remove it if it's single use.



I dare say that in the "majority" of cases if you are unable to do 
something on the web today it's because you are doing it the wrong way.
I say "majority" because there are bound to be cases where this is not 
true. But in your case you might need to re-evaluate the way you do the 
redirect.


Now, I haven't seen your portal site so I'm just making assumptions here 
so I apologise beforehand in case my assumptions are way off.


RH

On 2018-02-17 20:55, Andy Valencia wrote:

The problem is if you like the site and decide to bookmark it--
including a home screen bookmark on mobile.  You're off on
a transient URL, which is not the right one to bookmark.  On
a desktop browser you can go into the extended dialog and
hand-modify the URL (some users could, others not so much).
On mobile, it can be difficult--on some devices even impossible.
...
Thanks,
Andy Valencia




Re: [whatwg] Further working mode changes

2017-12-18 Thread Roger Hågensen

On 2017-12-18 11:47, Anne van Kesteren wrote:
> Last week we made some further refinements to the way the WHATWG
> operates and are pleased that as a result Microsoft now feels
> comfortable to participate:
>
>https://blog.whatwg.org/working-mode-changes
>https://blog.whatwg.org/copyright-license-change

I'd like to express my thanks to you and everyone else involved for the 
work you do on this. It's appreciated.





[whatwg] Max-bandwidth (was Re: HTML : FEATURE SUGGESTION)

2017-10-16 Thread Roger Hågensen

On 2017-10-14 17:03, Uday Kandpal wrote:

may suggest you to kindly bring a new feature like unique resolution to be
set for all the video being loaded on demand to lowest possible resolution
so that unnecessary advertisement and videos do not consume the space
irrespective of any adware remover or ad blocker.


I'm not sure if having a web specification dictate artificial resolution 
or bandwidth limitations is a good idea.

Also, the browser will/can negotiate with the server, either through the 
server starting "low" and then increasing bandwidth/resolution until 
frames start to drop.

Or through the use of alternative image resolutions in web pages


Also kindly bring in a button for the user to change the multimedia
resolution (image/video/applet/flash and other plugins) dynamically the
same way Youtube provides for videos. It may require a change in the
protocol for servers to convert the image to low resolutions before
sending, but it can be reduced to the scope of HTML processiong or xml
processing engine.


This can be done today through cookies or localStorage.
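A minimal sketch of the localStorage route (the key name and quality values are illustrative assumptions; the storage object is passed in so the same logic could also back onto a cookie-based shim):

```javascript
// Sketch: remembering a user's preferred media quality client-side.
// The key name and quality values are illustrative assumptions.
function makeQualityStore(storage) {
  return {
    save(quality) {
      storage.setItem("preferredQuality", quality);
    },
    load(fallback = "480p") {
      // getItem returns null when unset, so fall back to a default.
      return storage.getItem("preferredQuality") || fallback;
    },
  };
}

// In a page: const store = makeQualityStore(window.localStorage);
```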



I have heard that the android developers were saving their multimedia
resource details in the XML Resource files.


The Android browser or video app developers?


One part of your suggestion does have merit though, which I'll elaborate 
on: upon connection a browser could pass along a bandwidth hint to the 
server.

Max-bandwidth: 800

This would indicate that the browser desires the server to not send 
more than 8 Mbit per second to the browser.
Such a max may or may not be the max of the user's line; in some 
cases a user may want to ensure that they have 2 Mbit free on a 10 Mbit 
line and thus limit video/data transfer to 8 Mbit.


It could then be up to the browser UI/user settings whether this limit is 
per server or global for the browser; if global, the browser could halve 
that 8 Mbit into 4 Mbit and 4 Mbit for two sites, or perhaps 6 Mbit for 
video and 2 Mbit for non-video.
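In a request, the hypothetical hint might look like this (Max-bandwidth is not a real HTTP header, the host is a placeholder, and the value's unit follows the proposal above, with 800 standing for 8 Mbit/s):

```
GET /video/stream HTTP/1.1
Host: example.com
Max-bandwidth: 800
```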


I'm not aware of any desktop browsers that have such features, I'm 
uncertain about mobile browsers though.





Re: [whatwg] JavaScript function for closing tags

2017-10-16 Thread Roger Hågensen

On 2017-10-14 10:13, Michael A. Peters wrote:
I use the TextTrack API, but its documentation does not specify that it 
closes open tags within a cue; in fact I'm fairly certain it doesn't, 
because some people use it for JSON and other non-tag-related content.

Looking at https://www.html5rocks.com/en/tutorials/track/basics/
it seems JSON can be used, no idea if content type is different or not 
for that.


Some errors using the tracks in XML were solved by the innerHTML trick 
where I create a separate html document, append the cue, and then grab 
the innerHTML but that doesn't always work to close tags when html 
entities are part of the cue string.


Mixing XML and HTML is not a good idea. Would it not be easier to have 
the server send out proper XML instead of HTML? Valid XML is also valid 
HTML (the reverse is not always true).

And if XML and HTML are giving you issues, then use JSON instead.
I did not see JSON mentioned in the W3C spec though.


There does not seem to be a JavaScript API for closing open tags.

This is problematic when dealing with WebVTT which does not require 
tags be

closed.

Where it is the biggest problem is when the document is being served as
XML+XHTML


If an XML document is being served with unclosed tags then it's not valid 
XML, so it's no wonder if that causes issues.





Re: [whatwg] HTML inputs directly toggling CSS classes on elements?

2017-09-10 Thread Roger Hågensen

On 2017-09-09 18:41, Alex Vincent wrote:

A few days ago, I dipped my toes into web design again for the first time
in a while.  One of the results is the CSSClassToggleHandler constructor
from [1].  Basically, it takes an radio button or checkbox, and turns that
input into a toggle for a CSS class on another element.


You do have :checked.

While I haven't used it with radio buttons or checkboxes much myself, it 
seems to at least partially do what you are describing.


https://developer.mozilla.org/en-US/docs/Web/CSS/:checked
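A minimal sketch of the :checked approach (the id, class name, and assumed markup are illustrative, not from the original post):

```css
/* Assumes markup like:
   <input type="checkbox" id="toggle">
   <div class="panel">...</div>
   with the input and the panel sharing a parent, input first. */
.panel {
  display: none;
}
#toggle:checked ~ .panel {
  display: block; /* the checkbox now toggles the panel, no JavaScript */
}
```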






Re: [whatwg] header for JSON-LD ???

2017-07-26 Thread Roger Hågensen

On 2017-07-26 16:52, Philipp Serafin wrote:

That sounds like a very expensive solution for a technology that was
supposed to enable bots to consume web pages *without* needing to cut
through all the bloat.



Yeah. As far as I know, content is still king at Google.
So extra weight will be given to whatever is shown to the visitors 
(seeing or non-seeing).


This article by one of the guys behind JSON-LD is also interesting:
http://manu.sporny.org/2014/json-ld-origins-2/

The semantic web was at the bottom of his list when making the 
specification, so JSON-LD is not really designed for semantics.



To me, JSON-LD looks like a standardized way to store common data using 
JSON.

I would not use JSON-LD in my own apps, as they would use an internal API 
or a custom API I'd define using JSON.

JSON-LD would be overkill for the play history of a web player, for 
example, as you'd have to support all of the JSON-LD specs if you want 
to remain compliant. No browser supports JSON-LD natively, so you'd need 
a larger library to handle it. I rely on native browser functionality as 
much as possible to avoid 3rd-party libraries.


Something like JSON-LD would make more sense with something like The 
Internet Archive and so on, as part of a official public API etc.





Re: [whatwg] header for JSON-LD ???

2017-07-26 Thread Roger Hågensen

On 2017-07-26 07:49, Ian Hickson wrote:

Disrespect of fellow members of the list is unacceptable.

...

Please peruse our code of conduct if the reasoning behind this action is
unclear to you: https://whatwg.org/code-of-conduct

Thanks.


Thank you.





Re: [whatwg] metadata (and royalty reporting for some weird reason)

2017-04-23 Thread Roger Hågensen
ng media, it'll have
that value.  It's legal to omit fields which have no value.


There is no legal requirement for any player to show certain fields. 
There are physical internet radio boxes (like Roku, if I recall 
correctly) that do not even show Shoutcast metadata. If this were a legal 
issue then every single player/radio box out there, as well as all 
streams in the world, would be forced to adhere to it.


Remember, SoundExchange is not looking at the info the player is 
displaying; they just want logs and money. (They don't really care much 
about the logs or about getting the money to the artists; as an unsigned 
artist, they have pocketed my money on my behalf without my permission, 
even when my music is under a Creative Commons license, but that is 
another discussion best served elsewhere.)


getMetadata, however, would be nice if adopted by all browsers, as it 
would allow a web player to show info a listener might find interesting 
without having to use XHR (or, in the future, Fetch) to poll a web URL 
script that grabs the info as XML or JSON from the streaming server. 
(That is how I do it for our radio station and our web player currently; 
the royalty reporting is currently via StreamLicensing, which has admin 
access to the stream server to get song and listener info.)



Then, metadatachange is added.  While an event handler is active,
then on each detected change of metadata a callback occurs.


Maybe somebody else at Mozilla could test that, but I'm hoping that the 
info is updated when it changes, so simple polling of getMetadata could 
make it work right now. But this assumes that the MP3 decoder supports 
Shoutcast metadata (which it most likely doesn't, and probably not v2); 
the Ogg Vorbis decoder (and hopefully the Ogg Opus decoder) should 
support the metadata, but whether it actually presents that to 
getMetadata I have no idea.


If that is the case, then polling could be used temporarily until an 
event is added for such a metadata change.
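A hedged sketch of such temporary polling. The makeMetadataPoller helper and its change-detection approach are illustrative assumptions, not an API; the metadata getter is passed in because the underlying call is browser-specific (Firefox exposes mozGetMetadata() on media elements for Ogg streams).

```javascript
// Sketch: poll a metadata getter and fire a callback only on change.
// The helper and its shape are illustrative assumptions, not an API.
function makeMetadataPoller(getMetadata, onChange) {
  let last = "";
  return function check() {
    // Serialize to compare snapshots cheaply between polls.
    const meta = JSON.stringify(getMetadata());
    if (meta !== last) {
      last = meta;
      onChange(JSON.parse(meta));
      return true; // changed
    }
    return false; // unchanged
  };
}

// In Firefox, roughly:
//   setInterval(makeMetadataPoller(() => audio.mozGetMetadata(), show), 5000);
```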



For the special case of Icecast/Shoutcast where the initial HTTP GET
requires a special header, the change handler must be in place before
the stream is opened.  Thus "

Icecast does not require a special header; the Icecast stream is HTTP 
standard compliant. It is no different from a static Vorbis file with 
the proper MIME type (but without a file length, obviously, as it's a 
live stream).


I do not see why a different event would be needed; just make the 
loadedmetadata event fire for both. With a stream, the metadata may be at 
the start or several KB into the stream.



Also note that you should not wish for too much metadata in such streams, 
as it bloats the stream and counts towards the bitrate, so that the 
stream is, for example, 96 kbit of audio AND metadata as opposed to 
just 96 kbit of audio.
Best is to provide minimal info and then do an asynchronous lookup to get 
more details and cover artwork, and then display that.



BTW! If I still can't make you understand that royalty reporting has 
nothing to do with playing the stream in a web browser, then I really do 
not know what to do. Such royalty reporting is done at the radio level, 
the DJ level, or the server level, way before any stream or metadata ever 
reaches a browser. I'd like this to be my last post on this; I'm tired 
of repeating myself, and I can only guess how annoying the others might 
find this weird topic by now. So I'm not going to respond to any future 
posts on this semi-off-topic.





Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-19 Thread Roger Hågensen
): and not just file:





Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-18 Thread Roger Hågensen

On 2017-04-18 10:08, Anne van Kesteren wrote:

There is https://www.w3.org/TR/offline-webapps/


Right, those are about making applications distributed over HTTPS work
when the user is not connected. That idea doesn't necessitate file
URLs and we're still working towards that ideal with Fetch, HTML, and
Service Workers. All browsers seem on board with that general idea
too, which is great.


But being able to access files added to a "subfolder" of said offline 
app won't be possible, I assume?


Maybe just adding the ability to ask the user for access to this or that 
file, or to this and that folder for indexing (and accessing the files 
within), would be better.


A different open-file requester would be needed, as would a requester for 
opening a folder + accessing the contents of that folder. That way the 
file paths can be retrieved and used with , , Fetch and so on.



...they're more independent than that. (And we don't really
appreciate any copying that takes place. It's a lot less as of late,
but it still happens, as documented in e.g.,
https://annevankesteren.nl/2016/01/film-at-11 and
https://wiki.whatwg.org/wiki/Fork_tracking.)


Ok, that is a bit of an asshat move. I've got nothing against forking, 
but there is obviously a right and a wrong way to do it.
Do the WHATWG and W3C meet or have a common group at all (for the 
editors), so that cross-group messes can be handled/avoided?




Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-18 Thread Roger Hågensen

On 2017-04-17 15:22, duanyao wrote:

This can handle multipage fine as well.
Anything in the folder test.html_files is considered sandboxed under
test.html

The problem is, what if users open `test_files\page2.html`or
`test_files\page3.html`directly? Can they access `test_files\config.json`?
This is to be solve by the "muli-page application" convention. By the
way, the name of the directory is usually `foo_files`, not
`foo.html_files`.


Good point. But why would a user do that when the entry point is the 
test.html?


In this case the browser could just fall back to the default behavior for 
local HTML files.


Alternatively the browser could have some logic that knows that this is 
a page under the test folder which is the sandbox for test.html


Also, regarding your example of "test_files\page3.html" and 
"test_files\config.json": of course page3.html could access it, just like 
it could access config.js if not for CORS on XHR and local files.


Actually, a lot of the issue here is XHR (and Fetch) not being possible 
for local web pages.


The only reason I suggested using the same naming convention for the 
sandbox folder is that (at least on Windows) Explorer deletes both the 
html file and the folder together, something users are familiar with. 
Though I'm sure Microsoft could add support for the same with another 
folder naming convention, I can't see that being backported to Windows 
8.1/8/7.



I just checked what naming Chrome does and it uses the page title. I
can't recall what the other browsers do. And adds _files to it.

Chrome can be configured to ask for location when saving a page, then
you can name it as you will.
The "xxx_files" convention was introduced by IE or Netscape long ago,
and other browsers just follow it.
...

I have not tested how editing/adding to this folder affect things,
deleting the html file also deletes the folder (at least on Windows
10, and I seem to recall on Windows 7 as well).

There is no magic link between `foo.html` and `foo_files/`, this is just
a trick of Windows Explorer. You can change things by hand in that
directory as you will.


I just confirmed that. Just creating an empty .html file and a same-named 
folder with _files at the end does "link" them in Explorer.
Is this unique to Windows or do other platforms do the same/something 
similar?


--
Unless specified otherwise, anything I write publicly is considered 
Public Domain (CC0).

Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-18 Thread Roger Hågensen

On 2017-04-17 19:19, duanyao wrote:

There are always incompatibilities
between browsers, and even once standardized feature can be
deprecated/removed in future, e.g. `window.showModalDialog()`,
`` and ``.

This happens rarely and when it happens it's a very considered
decision involving lots of people. It's usually related to complexity,
lack of use, and security.

Sure. Proprietary OSes don't change their core API in incompatible ways
for no good reason, either.

I don't expect a local web app tested on major OSes today would stop
working tomorrow due to a filesystem API change.


It's probably more likely that an online web app will stop functioning 
than a local/offline web app. When it's local there are only the browser 
and OS involved. Online you have the 
OS+Browser+Router+ISP+Proxies+Webserver(+cache)+possibly serverside 
scripting.



Arguing about the manifest/statement of WHATWG and what is within the 
scope of WHATWG may be irrelevant.


Think of the end user first. If an end user "saves" an online webapp they 
expect it to work offline too. And in my eyes there is no reason why it 
should not.


Now I have not tested this yet but if a html page has links to other 
html pages or files one would assume those files are also saved.


Likewise, if a user drags a file from a folder to, say, a soundbank app, 
then closes it and opens it the next day only to find it empty again 
because paths can't be stored, they'd think the app is broken (or that 
html apps suck).



This can be partially fixed by making the user type in file paths 
manually, but this is very user-unfriendly.


That an html "app" can work online, offline, and locally is one of the 
biggest benefits it has over other languages/programming environments.


Microsoft already does something similar with its UWP apps, which can be 
html and javascript based.


Personally I like the idea of an app that has its source open: issues 
could technically be fixed without having to get the source code (as the 
app is the source code) or needing to recompile it with the exact same 
developer setup/compiler/IDE. It's also relatively easy to inspect.


Searching Google for "offline webapp discussion group" turns up
https://www.w3.org/wiki/Offline_web_applications_workshop
and that's sadly from 2011.

There is https://www.w3.org/TR/offline-webapps/

Now I know that WHATWG and the W3C Working Groups are not the same thing,
but if W3C thinks that offline apps are part of the web but WHATWG does 
not, then that creates a huge chasm as WHATWG would then ignore all 
offline stuff.


I always assumed that WHATWG was a fast-track variant of W3C: 
brainstorming stuff, getting it tested/used in browsers, seeing what 
sticks to the wall, and once things become stable the W3C will hammer it 
in stone. Is that assumption wrong?



--
Unless specified otherwise, anything I write publicly is considered 
Public Domain (CC0).

Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-17 Thread Roger Hågensen

On 2017-04-17 13:53, duanyao wrote:

For single page application, browsers restrict `foo.html`'s permission
to `foo_files/` in the same parent directory. Note that it is already a common 
practice for browsers
to save a page's resource to a `xxx_files/` directory; browsers just need to 
grant the permission
of `xxx_files/`.


I like that idea. But there is no need to treat single and multipage 
differently, is there?



d:\documents\test.html
d:\documents\test.html_files\page2.html
d:\documents\test.html_files\page3.html

This can handle multipage fine as well.
Anything in the folder test.html_files is considered sandboxed under 
test.html


This would allow a user (for a soundboard) to drop audio files into
d:\documents\test.html_files\sounds\jingle\
d:\documents\test.html_files\sounds\loops\
and so on.

And if writing ability is added to javascript then write permission could 
be given to those folders (so audio files could be created and stored 
without "downloading" them each time).


I just checked what naming Chrome uses: it takes the page title and adds
_files to it. I can't recall what the other browsers do.


So granting read/write/listing permissions for the html file to that 
folder and its subfolders would certainly make single page offline apps 
possible.


I have not tested how editing/adding to this folder affects things; 
deleting the html file also deletes the folder (at least on Windows 10, 
and I seem to recall on Windows 7 as well).
I'm not sure if an offline app needs the folder linked to the html file 
or not.
A web developer might create the folder manually in which case there 
will be no link. And if zipped and moved to a different 
system/downloaded by users then any such html and folder linking will be 
lost as well.


Maybe instead of d:\documents\test.html_files\
d:\documents\test.html_data\ could be used?
This would also distinguish it from the current user saved webpages.



--
Unless specified otherwise, anything I write publicly is considered 
Public Domain (CC0).

Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] metadata (re-mapping)

2017-04-17 Thread Roger Hågensen

On 2017-04-16 15:36, Delfi Ramirez wrote:

* Sound.load(new URLRequest("07 - audio.mp3"));
* Some old tricks on the issue were done in the past. here the link of
an ECMAScript derivative from the past, if it serves you as a model ID3
tags Get/Receive [6].


That is not a trick, that is Flash ActionScript by the looks of it. 
Flash (and by proxy its scripting) is pretty much deprecated now.
And if you are suggesting presenting id3 metadata then I'd rather not 
see that; the web developer does not need to know if it's an mp3 ID3 tag or 
Ogg metadata, they only need key/value pairs. And the mp3 stream parsing 
code can re-map the most common id3 tags to more sensible (UTF-8/UTF-16) 
lowercase key names; it's possible even some of the Ogg tags may need 
re-mapping.


People at Hydrogen audio have tried to remap these into something common
http://wiki.hydrogenaud.io/index.php?title=Tag_Mapping
That is probably the most comprehensive re-mapping guide on the web 
right now.


Using Vorbis Comments as the basis seems sensible (only lowercase 
instead, as lowercase compresses better with gzip since there is more 
lowercase text than uppercase text in webpages and javascript).
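As a sketch, the re-mapping could be as simple as a lookup table plus lowercasing. The lowercase key names and the choice of ID3 frames here are just my illustration, not any standard:

```javascript
// Illustrative only: collapse ID3v2 frame IDs and Vorbis comment names
// into simple lowercase keys, so web code never needs to know whether
// the container was mp3/ID3 or Ogg.
const ID3_MAP = { TIT2: "title", TPE1: "artist", TALB: "album", TYER: "year" };

function remapTag(format, key, value) {
  if (format === "id3") {
    const mapped = ID3_MAP[key];
    return mapped ? { [mapped]: value } : {}; // drop unmapped frames
  }
  // Vorbis comment keys are case-insensitive, so lowercasing suffices.
  return { [key.toLowerCase()]: value };
}
```

So both `remapTag("id3", "TPE1", x)` and `remapTag("vorbis", "ARTIST", x)` end up as an `artist` key.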



* Sounds: Rings and bleeps of a mobile device or a computer device.
Audio meta tags should be applicable to the webplayer ( one file vs.
multiple files ).


I doubt anyone streams their ringtone from the web.



*  Streaming on the web: As I noticed to this group in my last email,
in our present days there is a growing demand for streaming by
scientific communities to publish audio conferences or talks.


I do not see why this is an issue. As long as metadata can be handled, 
like for example metadata in Ogg (files and live streams), then any 
metadata can be added.


Are you suggesting that the default audio and video player presents this 
info via the UI?
I can see artist (creator), title, year, and copyright/license (for 
example "CC BY") as the 4 minimum pieces of info that would be useful, 
and they would work for audio and video.


But for custom UIs or enhanced UIs the web developer would (or at least 
ideally should) be able to pull any metadata from the file/stream and be 
notified if a stream changes/sends new metadata.




* The only important issue to consider would be the five seconds
minimum length.


I fail to see what this has to do with metadata. And if you are 
suggesting any playback length limiting for audio that is not "licensed" 
then that is not a discussion I wish to be part of, as that would be akin 
to censorship. As an independent artist myself I'm not registered with 
any PROs nor do I ever have the intention to do so, which would mean my 
own music would be limited to 5 sec playback.



I suspect that you are addressing the wrong group here for most of the 
stuff you are talking about. Any playback length limitations due to 
potential legal issues are the responsibility of the party that 
actually shares the audio or video, not of the browser developers 
nor the web standards.


If you want certain metadata tags standardized then please know that no 
real standard exists for id3. Not even Ogg (Vorbis comments) is "standard".
But a few semi-official ones are found here 
https://xiph.org/vorbis/doc/v-comment.html

(these are also listed on the Hydrogen Audio wiki page)
Also note that Vorbis comment keys should be treated as case-insensitive, 
so Vorbis comment keys could easily be used as-is with no re-mapping, 
just a case conversion.
And I'm sure Xiph and Hydrogen Audio would be happy to help 
"standardize" various key names for tags and the tag formatting as well.


This could be as simple as WHATWG creating a table of the most 
important/common keys used. And then the Hydrogen Audio wiki would 
replicate that table and add the mapping advisory between WHATWG and 
Vorbis/id3/MP4 etc.
Xiph could update their page on Vorbis comments to match/include the 
WHATWG key names if they are not already listed.


I have no idea who maintains http://id3.org/ but I'm sure they would 
want to participate too.


--
Unless specified otherwise, anything I write publicly is considered 
Public Domain (CC0).

Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] metadata

2017-04-16 Thread Roger Hågensen

On 2017-04-16 03:01, Delfi Ramirez wrote:

* WIPO stands for _World Intellectual Property Organization_, and the
...
*  Meta-Data: Following the indications of the WIPO ( focusing on a
World Wide Web service ) that now, services like Pandora are not allowed
to stream ( are de facto banned ) in earth places like Africa, Europe,
or The East. May it be due to not meet the legal requirements.


The reason for services like Pandora geoblocking is that the PROs are 
basically trying to carve out regions (like region blocking for DVDs and 
Blu-ray); it's greedy and stupid. The EU is working on legislation to 
limit or do away with this for online stuff.


Also, metadata sent to the user/listener has very little to do with 
royalty reporting. The reporting must be done by the webmaster at the 
webserver level or by the DJ at the encoding level.

These logs are then passed on independently of what the listener sees/hears.

One thing that could be useful to show to the listener is a copyright 
hint like indicating if the stream is CC BY (Creative Commons 
Attribution) for example.



May I also point out that this has gone very offtopic (I should probably 
be the last person to point this out though).


WHATWG has very little to do with PROs/WIPO/Royalties/rights; a 
different forum should be used for that.


I'd like to get back on topic and to the discussion of passing metadata 
in a stream so that a HTML webplayer can show artist and title (and 
maybe year/album if present) to the listener and have this be 
changed/updated when this info changes in the stream (usually at song 
change but can occur more often to provide special non-song messages as 
well).


Firefox seems to support it (though I have not had the time to test it 
yet) but it is uncertain what formats it works on and if it works for 
streams at all.




--
Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] metadata

2017-04-15 Thread Roger Hågensen

On 2017-04-15 14:00, Delfi Ramirez wrote:

Some information that may be of use, concerning to the WPI rules for
royalties et al in  files.


I have no idea what/who WPI is.

But StreamLicensing.com (which has a deal with 
ASCAP/BMI/SESAC/SoundExchange)
only requires artist and title, and that artist and title be viewable by 
the listener.
One of the PROs (Performing Rights Organizations) did want album but 
waived that requirement.



Meta elements required

* Title : 100%
* Artist ( Interpreter): 12%
* Time: lenght of the  piece. Royalties are assigned by time
sequences.
* Year: (_Objective Reason: It use to happen that some__  files
have the same name, thus causing a mistake in the attribution to the
artist as it happen in the past_)
* Composer: 20%
* Arrangements: 20%
* Producer: 40%


Artist and title are always required. But I assume that by title you mean 
the field itself, as in it being "Some Artist - Some Song" where 
space-dash-space (" - ") is the separator for artist and title.
As to length, any listened time longer than 30 seconds is counted, and I 
forget the max time.
You also forgot to mention ISRC, which is a globally unique 
identifier for tracks; radio stations may use ISRC when sending in 
performance logs.
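In code that splitting is trivial, though it should only split on the first separator, since artists and titles can themselves contain " - ". A sketch:

```javascript
// Split a combined "Artist - Title" string on the FIRST " - " only,
// since the remainder (the title) may contain the same separator.
function splitStreamTitle(s) {
  const i = s.indexOf(" - ");
  if (i === -1) return { artist: "", title: s }; // e.g. a station jingle
  return { artist: s.slice(0, i), title: s.slice(i + 3) };
}
```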


I'm not sure an end listener would need all this metadata though; such 
info should be logged separately by the radio station or by the 
streaming server itself.
The listener would only be interested in (minimum) artist and title, 
with album, year and artwork being a bonus. And lyrics being a nice surprise.
Although I'd argue that artist and title (+ album and year) could be 
used to fetch artwork and lyrics using XHR upon user interaction instead.


I'm not going to comment further on the royalty stuff as this is veering 
quite off-topic now.


--
Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-15 Thread Roger Hågensen

On 2017-04-15 11:58, David Kendal wrote:

On 15 Apr 2017, at 01:09, Patrick Dark <whatwg.at.whatwg@patrick.dark.name> 
wrote:


So if you put this file in the Windows Downloads directory, then it
has read access to all download files even though they aren't related?

Ah, well, that's why you have to ask the user. The prompt should make
clear that this is a possibility -- something like:


Patrick makes a good point.

For example, asking a user if it's ok for the HTML document to access 
stuff in "C:\Users\Username\AppData\Local\Temp\", what do you think most 
users will do?
Just click OK; after all "they" have nothing important in that folder, 
their stuff is in "Documents" instead.



Maybe a html document could have an offline mode parameter of some sort: 
if the document is in the temp folder then it is put in a virtual 
subfolder and can only access folders/files under that.


If it is not in the temp folder (or other such similar folder)
then a list of folders need to be provided.

For example
d:\Myhtmlapp\index.html (automatic as the document can access itself)
d:\Myhtmlapp\js\ (the javascript linked in the document is stored here)
d:\Myhtmlapp\css\ (the css linked in the document is stored here)
d:\Myhtmlapp\sounds\ (sounds to be indexed/used by the document, i.e a 
soundboard)


This way a html app will work as a single-file document on its own (as 
it does today) or with specified subfolders. It would not have access to 
anything outside of the specified subfolders or files.
Open File and Save File requesters on the other hand could be allowed 
outside those folders, as those are directly controlled by the user.
Indexing/parsing of files in non-app subfolders is another issue that 
will require a different take (listing filenames/sizes/dates).



How to specify subfolders I'm not sure; document header? Or maybe 
leverage the current work on Offline Web Apps, which uses a separate file?


Browsers also need to make sure that a file is not added to the temp 
folder that enables access to subfolders. (The root of the temp folder 
should always be treated as special regardless.)



--
Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] metadata

2017-04-15 Thread Roger Hågensen

On 2017-04-14 22:23, Andy Valencia wrote:

Ok.  Note that this data structure suffices to encode the baseline
information from Shoutcast/Icecast.  It does not, for instance,
encode "Label", needed to do licensing reporting in the USA.
"Year" is another datum often of interest.


Only "artist" and "title" are required for royalties reporting for 
internet radio.

But "album" and "year" provide additional information that helps.
Commercial radio and TV use at minimum the artist and title, and if 
lucky the listener (digital radio) and viewer also get to see album and 
year.
Also, royalty reporting is done at an earlier stage; what a listener sees 
is not what is logged/given for royalties reporting.



Ogg (Vorbis or Opus) should in theory be easily supported as metadata is 
given in a side stream, right? It is therefore independent of the audio 
stream.


Mozilla has audio.mozGetMetadata()
https://developer.mozilla.org/en/docs/Web/API/HTMLMediaElement

I have no idea if that fires once or each time more metadata is passed 
in the stream.


https://developer.mozilla.org/en-US/docs/Web/Events/loadedmetadata
Only says that it is fired when the metadata is loaded.
I'm assuming it's only at stream start though.

So with a few "tweaks" Firefox could support Icecast Ogg metadata, if 
the browser is compliant with the Ogg standard then support is very easy 
to add.
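A sketch of how that could be used today, Firefox-only. mozGetMetadata() is non-standard and may be absent in other browsers, and the field-picking helper is my own invention, not part of any API:

```javascript
// Pull the four display fields suggested above out of whatever key/value
// object the browser hands back. Pure helper, so it can be tested alone.
function pickDisplayInfo(meta) {
  const get = (k) => meta[k] || meta[k.toUpperCase()] || "";
  return { artist: get("artist"), title: get("title"),
           album: get("album"), year: get("year") };
}

// Firefox-only wiring; guard the call since mozGetMetadata may not exist.
function watchMetadata(audio, onInfo) {
  audio.addEventListener("loadedmetadata", () => {
    if (typeof audio.mozGetMetadata === "function") {
      onInfo(pickDisplayInfo(audio.mozGetMetadata()));
    }
  });
}
```

Whether loadedmetadata re-fires on mid-stream metadata changes is exactly the open question above.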


Shoutcast v1 would require parsing of the audio stream, Shoutcast v2 is 
a little different and can pass info like album and year and artwork.
The only Shoutcast v2 compatible player I'm aware of is the aging 
Winamp, the majority of Shoutcast streams are v1 streams.


So while Firefox is almost able to provide stream meta updates, all the 
other browsers are not and would require a polyfill, which as you 
point out has its own issues with having to reset the stream as the 
buffer fills up.


Maybe support for a large cyclic buffer could be added, 
triggered by a "stream" parameter for html audio.
There would still be an issue with metadata possibly being partly in the 
current buffer and partly in the next buffer, so any javascript would 
need to splice that together.
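For reference, a sketch of how such splicing could work for Shoutcast v1's in-band metadata, as I understand the protocol: the server's icy-metaint header declares the audio block size, then after every block comes one length byte (value times 16) followed by that many bytes of metadata text. The parser keeps state across chunks, so metadata split over two network buffers is spliced back together:

```javascript
// Stateful Shoutcast v1 metadata parser (protocol details as I
// understand them; treat as a sketch, not a reference implementation).
function createIcyParser(metaint, onMeta) {
  let audioLeft = metaint; // audio bytes until the next length byte
  let metaLeft = 0;        // metadata bytes still expected
  let metaBuf = [];
  return function push(bytes) {
    for (const b of bytes) {
      if (metaLeft > 0) {
        metaBuf.push(b);
        if (--metaLeft === 0) {
          // Strip the null padding the protocol allows, then report.
          const text = String.fromCharCode(...metaBuf).replace(/\0+$/, "");
          if (text) onMeta(text);
          metaBuf = [];
        }
      } else if (audioLeft === 0) {
        metaLeft = b * 16;   // length byte; 0 means no metadata this block
        audioLeft = metaint;
      } else {
        audioLeft--;         // plain audio byte, would be passed through
      }
    }
  };
}
```

Feeding the stream in arbitrary chunk sizes gives the same result, which is the whole point.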



Ogg seems simple enough
https://www.xiph.org/ogg/doc/oggstream.html
And parsing of this metadata should be in the ogg source (libogg?) so 
any browser that supports Ogg should be able to get that metadata.



--
Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-13 Thread Roger Hågensen

On 2017-04-11 14:50, Philipp Serafin wrote:

Patrick Dark schrieb am Di., 11.
Apr. 2017 um 13:55 Uhr:


[...] The only good reason to distribute an application this way is
because you want it to be confidential [...]


Another use-case would be to develop a HTML app that does not require
internet access.


If you really want a private HTML-based application, you might consider
a password-protected webpage. If the application isn't a throwaway app,
you'll want to do that anyway, so there isn't anything lost from the
upkeep required of maintaining an online server.


Why would I even want to run a server?


These are my concern as well.

Making a Soundboard using HTML(5) is very difficult.
Via file:// you can't add (drag'n'drop or file requester) files, as 
the file paths/names are not made available.
So storing filenames in localstorage and reading them back the next time 
the app is started won't work.
Storing the audio itself in localstorage is just wasteful, and localstorage is 
limited to a total size. A handful of loops/backgrounds/sfx will quickly 
eat that up.


Trying to use the audio processing features of modern browsers is also 
an issue, as you trigger CORS.


There is also no way to get the filenames of a subfolder relative to 
the html file; that way .wav/.ogg/.flac/.mp3 files could have been copied into 
a subfolder and automatically show up in the soundboard when started.


Having end users run a server (even a mini/lightweight one) is 
just silly. In that case a native (and much more powerful) Windows 
application could be created instead, be it NW.js, Electron, or C++.


Having end users poke around in browser advanced options, or worse the 
browser flags or command line switches, is not something an end user 
should have to do either.



--
Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] Firebase Cloud Messaging (FCM) blows the W3C/IETF Success Prevention Depts out of the water!

2017-03-27 Thread Roger Hågensen

On 2017-03-27 05:50, Richard Maher wrote:

Broadcast Messaging and Topic Based subscription is now available to your
WebApp just like native Apps thanks to FCM.

https://firebase.google.com/docs/cloud-messaging/js/send-multiple

I am absolutely ecstatic about this, as we all should be, and equally
grateful to FCM for having managed to bypass the recalcitrance and sheer
bloody-mindedness of spec-authors to provide functionality that everyone
outside the ivory-towers was begging for.

I thought WhatWG was set up to challenge the delusional elite a la mode de
HTML5? Why the silence?


Maybe because this is a Google API and cloud service rather than a web 
standard added to Chrome, Firefox, Edge, Safari, Opera, Vivaldi etc? 
Unless I'm missing some important detail here!



Anyway rejoice and be glad as Native Apps have one less stick to beat us
over the head with. And you Firefox fans are no longer stuck with Mozilla's
third-rate AutoPush!


I'm not aware of anything called autopush, is this another cloud API?
Or do you mean https://developer.mozilla.org/en/docs/Web/API/Push_API ?


Now if we can only get background geolocation with ServiceWorkers nothing
can stop WebApps: -
https://github.com/w3c/ServiceWorker/issues/745


Considering I'm coding both native and "HTML5" based "apps", there is far 
more that needs to be improved.
There is no way to reliably know how much LocalStorage or IndexedDB space 
the web app has; trying to access or list files locally in a folder is 
not possible; something as simple as an editable soundboard can't be made 
if it's run locally (via the file: protocol).
While XInput is supported, DirectInput is not, and there are a lot of 
controllers out there that are not XInput.
Trying to save a file locally is a pain; you have to simulate a 
download. Loading an audio file, manipulating it and saving it again is 
not the same as with a native app; instead you end up with a duplicate 
file in the download folder instead of the original file's folder.


There is a difference between a webapp that supports offline and an 
offline "HTML5" app.


Using NW.js and Electron turns it into a native app anyway; ideally one 
should not have to do this, at least not for "simple" apps.



PS. The cognoscente are once more assembling on April 4-5 for a Japanese
junket on ServiceWorkers to yet again wax bollocks on "offline first" :-(


What is wrong with offline first? If you have an Ohm's law calculator and 
your internet is down there is no reason why it should not still work, if 
it was saved in the cache or even locally as a .html file and opened in 
the browser while the internet is down. It's rare for the internet to be 
down for long periods of time, but usually it goes down when it's 
least convenient, and having apps not break and still work is important 
in those cases.



Please lobby the names that can be found in the hall of shame here: -
https://github.com/w3c/ServiceWorker/issues/1053


Hall of shame? It sounds like you have some form of personal agenda here.


--
Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] Subresource Integrity-based caching

2017-03-03 Thread Roger Hågensen
I'd like to apologize to Alex Jordan for mistaking him for James Roper, 
and vice versa mistaking James Roper for Alex Jordan.


In the previous email when I said "your" as in "your suggestion" I meant 
to refer to Alex, while the hash stuff was meant for James.


I got confused by an email from James with a fully quoted copy of the 
email from before where I quoted Alex, but with no text or comments from 
James, and I assumed for a moment it was the same person with different 
emails (work vs private, or an alt, which is not unusual).


I hope this confusion won't derail the topic fully.



--
Roger Hågensen,
Freelancer, Norway.


Re: [whatwg] Subresource Integrity-based caching

2017-03-03 Thread Roger Hågensen
e that to other sites pages.


I might however feel comfortable adding a UUID and letting the browser fetch 
that script from its local cache or from a trusted cloud cache.


If you are going to use the integrity attribute for authentication then 
you also need to add a method of revocation, so that if for example the 
hashing used is deemed weak/compromised (due to say a so far 
undiscovered design flaw), only the browsers that are up to date 
will be able to consider those hashes unsafe. Older browsers will be 
clueless, and all of a sudden some porn site includes a manipulated 
banking.js; whenever an older browser with a stale cache encounters 
that, it replaces the cached copy, and the next time the user goes to their bank the 
browser will happily use a trojan script instead. The end result is that 
banks etc. will not use the integrity attribute, or they will serve a 
differently versioned script for each visit/page load, which kinda nukes 
caching in general.
Remember, you did not specify an opt-in/opt-out for the shared integrity 
based caching.


You might say that this is all theoretical, but you yourself proclaimed 
sha1 is no longer safe. Imagine if the most popular version of jquery 
became a trojan, we're talking tens of thousands of very high profile 
sites possible victims of cache poisoning.



Now I'm not saying the integrity attribute is useless; for CDNs it's 
pretty nice. It ensures that when your site uses say awesomescript12.js, 
it really is awesomescript12.js and not a misnamed awesomescript10.js or 
worse notsoawesomescript4.js.
But at this point you already trust the CDN (why else would you use 
them, right?).
Another thing the integrity hash is great for is to reduce the chance of 
a damaged script being loaded (sha512 has way more bits than CRC32 for 
example).
And if I were to let a webpage fetch a script from a CDN I would probably 
use the integrity attribute, but that is because I trust that CDN.
If a browser just caches the first of whatever it encounters and then uses 
that for all subsequent requests for that script then I want no part of 
that; it's a security boundary I'm not willing to cross, hash or no 
hash. So an opt-in would be essential on this.


Now, many sites have their own CDN; I assume these are your focus. But 
many use global ones (sometimes provided directly/indirectly with the 
blessing of the developers of a script). I don't see this as a major 
caching issue. The main issue is multiple versions of a script. Many 
scripts are not always that backward compatible; I have seen cases where 
there are 3-4 versions of the same script on the same site. A shared 
browser cache may help with that if those are the unedited official 
jquery scripts, but usually they may not be. They may also have been run 
through a minifier or similar, or minified but not with the 
same settings as the official one.


This is why I stress that a UUID based idea is better on the whole, as 
the focus would be on the versions/APIs/interoperability instead. I.e. 
v1.1 and v1.2 have the exact same calls, just some bug fixes? They can 
both be given the same UUID and the CDN or trusted cache will provide 
v1.2 all the time.
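To illustrate what I mean, a page-side resolver might look like the following. Everything here is invented for the sake of the example: the registry, the trusted-cache URL, and the UUID are all hypothetical, no such service or attribute exists:

```javascript
// Hypothetical resolver for the proposed uuid="" attribute: the UUID
// names a major version's API contract, and the trusted cache maps it
// to the latest compatible file. Falls back to the script's own src.
const TRUSTED_CACHE = "https://cache.example.com/lib/"; // invented URL

function resolveScriptUrl(uuid, fallbackSrc, registry) {
  // registry: { uuid -> latest compatible version's filename }
  const entry = registry[uuid];
  return entry ? TRUSTED_CACHE + entry : fallbackSrc;
}
```

So two sites referencing the same UUID would both get v1.2, even if one of them shipped with v1.1.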




PS! Not trying to sound like an ass here, but could you trim the email 
next time? While I do enjoy hearing my own voice/reading my own text as 
much as the next person, there is no need to quote the whole thing. Also, 
why did you CC me a full quote of my email but did not write anything 
yourself? Did you hit reply by accident or is there a bug in the email 
system somewhere?
Which brings me to a nitpick of mine: if you reply to the list then 
there is no need to also CC me. If I'm posting to the list then I'm 
also reading the list; I'd rather not have multiple email copies in my 
inbox. Hit the "Reply to list" button instead of "Reply to all" next 
time (these options depend on your email client).



--
Roger Hågensen,
Freelancer, Norway.



Re: [whatwg] Subresource Integrity-based caching

2017-03-02 Thread Roger Hågensen

On 2017-03-02 02:59, Alex Jordan wrote:

Here's the basic problem: say I want to include jQuery in a page. I
have two options: host it myself, or use a CDN.
Not to be overly pedantic, but you might re-evaluate the need for jquery 
and other such frameworks. "HTML5" now does pretty much the same as these 
older frameworks with the same or less code.




The fundamental issue is that there isn't a direct correspondence to
what a resource's _address_ is and what the resource _itself_ is. In
other words, jQuery 2.0.0 on my domain and jQuery 2.0.0 on the Google
CDN are the exact same resource in terms of content, but are
considered different because they have different addresses.
Yes and no. The URI is a unique identifier for a resource. If the URI is 
different then it is not the same resource. The content may be the same 
but the resource is different. You are mixing up resource and content in 
your explanation. Address and resource are in this case the same thing.



2. This could potentially be a carrot used to encourage adoption of
Subresource Integrity, because it confers a significant performance
benefit.
This can be solved by improved webdesign. Serve a static page (not 
forgetting gzip compression), and then background-load the script and extra 
CSS etc. By the time the visitor has read/looked/scanned down the page 
the scripts are loaded. There is however some bandwidth-savings merit in 
your suggestion.



...That's okay, though, because the fact that it's based on a hash guarantees 
that the cache
matches what would've been sent over the network - if these were
different, the hash wouldn't match and the mechanism wouldn't kick in.

...
Anyway, this email is long enough already but I'd love to hear
thoughts about things I've missed, etc.
How about you misunderstanding the fact that a hash can only ever 
guarantee that two resources are different? A hash can not guarantee 
that two resources are the same. A hash infers a high probability they 
are the same but can never guarantee it; such is the nature of a 
hash. A carefully tailored jquery.js that matches the hash of the 
"original jquery.js" could be crafted and contain a hidden payload. Now 
the browser suddenly injects this script into all websites the user 
visits that use that particular version of jquery.js, which I'd call an 
extremely serious security hole. You can't rely on length either, as that 
could also be padded to match. Not to mention that this also 
crosses the CORS threshold (the first instance is from a different 
domain than the current page, for example). Accidental (natural) 
collision probabilities for sha256/sha384/sha512 are very low, but 
intentional ones are higher than accidental ones.


While I haven't checked the browser source codes I would not be 
surprised if browsers in certain situations cache a single instance of a 
script that is used on multiple pages on a website (different url but 
the same hash). This would be within the same domain and usually not a 
security issue.



It might be better to use UUIDs instead and a trusted "cache", this 
cache could be provided by a 3rd party or the Browser developer themselves.


Such a solution would require a uuid="{some-uuid-number}" attribute 
added to the script tag.  And if encountered the browser could ignore 
the script url and integrity attribute and use either a local cache 
(from earlier) or a trusted cache on the net somewhere.


The type of scripts that would benefit from this are the ones that 
follow a Major.Minor.Patch version format. A UUID would apply to the 
major version only, so if the major version changed then the script 
would require a new UUID.


Only the most popular scripts (and major versions of such) would be 
cached, but those are usually the larger and more important ones anyway: 
your jQuery, Bootstrap, Angular, Modernizr, and so on.


--
Roger Hågensen,
Freelancer, Norway.



Re: [whatwg] How can a server or serverside script identify if a request is from a page, iframe or xhr?

2016-11-01 Thread Roger Hågensen

On 2016-11-01 11:26, Michael A. Peters wrote:

Any server admin that trusts a header sent by a client for security
purposes is a fool. They lie, and any browser extension or plugin can
influence what headers are sent and what they contain.


Wait, are you saying that Content-Security-Policy can't be relied upon?
(Regarding me finding CSP, see my answer to myself in another message.)



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


Re: [whatwg] How can a server or serverside script identify if a request is from a page, iframe or xhr?

2016-11-01 Thread Roger Hågensen

On 2016-11-01 10:42, Roger Hågensen wrote:

I was wondering: how can a server or server-side script identify whether a
request is from a page, an iframe, or XHR?



I really hate answering myself (and so soon after making a post), but it 
seems I have found the answer at

https://developer.mozilla.org/en-US/docs/Web/Security/CSP/CSP_policy_directives

and the support is pretty good according to
http://caniuse.com/#feat=contentsecuritypolicy


But on MDN it says "For workers, non-compliant requests are treated as 
fatal network errors by the user agent."

But does this apply to non-workers too?

And is there any way to prevent injected hostile scripts?
I guess loading scripts only from a specific (whitelisted) URL could do 
the trick? Or maybe using strict-dynamic.


Darn it, I may just have answered my own questions here.


--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


[whatwg] How can a server or serverside script identify if a request is from a page, iframe or xhr?

2016-11-01 Thread Roger Hågensen
I was wondering: how can a server or server-side script identify whether a 
request is from a page, an iframe, or XHR?


Doing this would not prevent any XSS attacks, but it would allow a 
server/server-side script to detect a potential XSS attack.


I could not find any mention of any reliable way to do this currently.

Here is an example of this idea: when the browser fetches the page, the 
server sends this as a response header to the browser...


RRS: *

or

RRS: url

or

RRS: iframe

or

RRS: script

And when the browser does a POST, it will send one of these (if the server 
sent an RRS header) ...


RRS: url

or

RRS: iframe

or

RRS: script



RRS is short for "Report Request Source / Reported Request Source".
"url" indicates that the request source was a form on the page at the 
requested URL.
"iframe" indicates that the request source was from within an iframe on 
the page at the requested URL.
"script" indicates that the request source was from a script (via XHR) on 
the page at the requested URL.


If a server (or server script) is only expecting a POST from the page 
but gets an RRS result of iframe or script, then this could be logged and 
reported to the server security supervisor for review.


The server sending "RRS: *" indicates that the request should be allowed 
but reported (might be nice for debugging as well).
If it is "RRS: url", then any requests from an iframe or a script would be 
denied/blocked by the browser (blocking two methods of making a POST).
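A server-side sketch of the idea; the RRS header and the checkRequestSource helper are hypothetical names from this proposal, not an existing standard:

```javascript
// Hypothetical handling of the proposed RRS header on the server side.
// policy is what the server announced ('*', 'url', 'iframe' or 'script');
// reported is what the browser claims the request source was.
function checkRequestSource(policy, reported) {
  if (policy === '*') {
    // Allow everything, but log the source for review/debugging.
    return { allow: true, log: true, source: reported };
  }
  const ok = reported === policy;
  return { allow: ok, log: !ok, source: reported };
}

// e.g. a POST claiming to come from an iframe when only 'url' is expected:
console.log(checkRequestSource('url', 'iframe')); // blocked and logged
```

Note that, as pointed out elsewhere in these threads, such a header is only advisory: a client can lie, so the value is in reporting anomalies, not in enforcement.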



Now, if there exists another way to achieve the same and I just haven't 
found it, I'd appreciate it if someone pointed me in the right direction.


I'm also a bit unsure which working group (pun intended) a suggestion 
should be directed to if this does not exist yet.



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


Re: [whatwg] Some Clarification on CML Proposal

2016-10-06 Thread Roger Hågensen

On 2016-10-06 14:15, MegaZone wrote:

How is this any different from PHP, ASP, JSP, .Net, ColdFusion, etc?  You
could implement your CML on the backend and have it 'output'
XML/HTML+JavaScript+CSS for delivery to user agents with compatibility with
everything out there today.
...
There are many server-side options and it really sounds like that's where
CML would fit.  Your developers would write in CML, and the 'engine' would
render that into the appropriate content for delivery to UAs.


Yeah! For example, I'm working on an offline CMS that actually uses 
include/declaration files for all the components of a static site. The 
CMS grabs all of them, applies templates, and "renders" the finished HTML; 
PHP is actually used to power this CMS.



Personally I don't see value in this proposal.

I have to agree; I almost feel like I'm being trolled at this point.
Unless a post or a "diagram" shows up that makes me go "Ah! Now I see!", 
I'm not going to bother responding to any further posts on this subject.



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


Re: [whatwg] Some Clarification on CML Proposal

2016-10-06 Thread Roger Hågensen

On 2016-10-06 03:45, Jacob Villarreal wrote:
 > Hi,Thanks for responding. I don't think you have the right picture

I'm pretty sure I don't have the right picture (or at least your picture).

 > I was actually proposing a new markup language referred to as 
content markup language. The hypertext part of it isn't that important. 
CML would has the linking capability, it's really nothing. The batch 
infrastructure I was talking about is as simple as text batch files 
containing the source path information of multiple object source files, 
such as bitmaps, jpegs, text files, apps, etc... I think all that is 
required by the HTML team is to create a batch file for every appliance 
that is needed with respect to the two tags, multiple type attributes, 
and subattributes, and the line sequencing shouldn't be too much of a 
problem either.


"All that is required"? I think the emphasis here is on "all", as it 
sounds like a full rewrite.


 > I think this solution would completely phase out HTML all together,
 > You shouldn't be so concerned about all the technical html bullshit, 
there are no headers, and footers, only page coordinates, and source files.


You obviously have a dislike for HTML for some reason.

 > I just thought a high def bitmap solution might work well for field 
objects. Basically, taking a bitmap object, and applying a border, text 
space, etc.., for use as the actual graphical object for the input field


This sounds like a graphical background template of some sort.
Have you actually looked at what you can do these days with CSS3?

 > So I thought the html team might just set it up to work that way. I 
think it's a worthwhile option in the www world. Like I said, I don't 
have much experience doing html


Apparently you have some programming or database experience, but the 
things you are suggesting are far more than "just add a few things" to 
the existing browsers.


What you are talking about would require a new browser engine. Accessibility 
(for those who are blind or have weak sight) would go out the window. The 
overhead would be immense with everything as bitmaps (text compresses 
extremely well by comparison). Responsive web design would no longer work.


I suspect what you want already exists but you are unable to see/find 
the way to do it. I'm pretty sure the tools required to do what you want 
already exist.


It sounds like you are trying to invent include files of some sort 
(similar to .h in C).
Also, regarding your focus on bitmaps and coordinates: you do know that 
CSS allows you to define fixed X and Y positions?


 > structuring the mapping with one folder with all of it's objects, 
per every page on the site.  So the objects, whether they be text, or 
image objects are called up from the root of the page for the most part. 
 As far as I know, the header attributes are used on text for font, and 
size, etc..  CML would use the same attribute function on text anyway, 
but you have the option of using text images as content as well.


I don't think you have any idea of what HTML is/does.
HTML handles the structural and semantic part of a web page, CSS the 
graphics, styling, and look, and JavaScript the scripting.


 > it's just a more innovative URL solution than html.  Personally, I 
think html is kind of boring in comparison


How innovative this is (I find it just confusing myself) is 
questionable. As for your statement about HTML being boring: well, I 
doubt that any language, be it markup, scripting, or programming, is 
anything but boring. They tried to make programming fun once and the 
result was point-and-click programming; that never really took off.


 > ...real-time data from ticker data being sent to the form, and store 
it in real-time in the ticker_data.rec destination record as text by 
line sequentially.  The data can then be accessed in runtime 
sequentially ... real-time output to a web app.


This would make local (file://) apps impossible; you seem to describe a 
system that fetches data in real time from a database server.
If this were a stock market monitor, sports monitor, or airport monitor 
on a wall, then I might understand what you are trying to do, but even 
then the current HTML + CSS + JavaScript solution would be way more 
efficient (and when using WebSocket any latency is basically gone, 
limited only by LAN latency).


 > I'm trying to get some information on how to implement some new 
tags/attributes on the backend.


This is far more than just "adding some new tags"; you want to add tags 
that discard HTML, CSS, and JavaScript.


 > correction on the code above
I can't help but feel that your "code" is little more than a variant of 
a link tag.


I'm not trying to be mean or anything, I just can't see what you are 
envisioning.




--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


Re: [whatwg] Some Clarification on CML Proposal

2016-10-06 Thread Roger Hågensen

On 2016-10-06 04:42, Jacob Villarreal wrote:

Roger,
I thought I should send you the pdf diagram anyway.  I've attached it 
to this email.  Hope it gets through to you.

Jacob


That was little more than a PDF with bullet points.
It certainly was not a diagram; this is an example of a kind of diagram: 
http://www.conceptdraw.com/solution-park/resource/images/solutions/fishbone-diagram/Business-Productivity-Ishikawa-Diagram-Factors-Reducing-Competitiveness-Sample24.png




--
Roger Hågensen, Freelancer, http://skuldwyrm.no/




Re: [whatwg] Some Clarification on CML Proposal

2016-10-05 Thread Roger Hågensen

On 2016-10-05 08:17, Jacob Villarreal wrote:
> I was proposing a really simple markup language.  I call it content 
markup language (CML).


You do realize that HTML stands for HyperText Markup Language, right? 
Adding a markup language to a markup language is illogical; it is better 
to improve the existing markup language or create a new markup language 
instead.


> It implements a simple object-oriented content markup infrastructure 
which treats every page element as an object.


This sounds like it might be better served by JavaScript, which is object 
oriented.


> It consists of a simple batch html infrastructure with only four 
batch file types for forms, fields, menus, and submenus 
(.frm/.fld/.mnu/.smn).   All text objects would be text data.  All 
other objects would be treated in the standard manner, but would be 
applied at the corresponding page coordinate. ... the table, and field 
html elements are bitmap objects ...


This sounds overly complicated. Also, if things are purely bitmaps then 
that would cause issues with screen readers; there are enough issues 
with tables as it is, and if they become bitmaps they'll be a huge pain in 
the ass (more than currently).


By the sound of it, these file types are container formats; why would you 
put a PNG image file inside a container file? Server-side file-type 
negotiation would need to be redesigned to handle this as well.


Perhaps HTML Imports is the solution you are seeking (or 
needing); it's still a draft though:

http://w3c.github.io/webcomponents/spec/imports/
But Mozilla has decided not to support it (that was in 2014, though).

But there are also JavaScript imports:
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Statements/import
"The import statement is used to import functions, objects or primitives 
that have been exported from an external module, another script, etc."

That sounds closer to some of the stuff you mentioned.
Chrome and Firefox should support import, and Edge is in the process of 
adding modules.

https://blogs.windows.com/msedgedev/2016/05/17/es6-modules-and-beyond/#PQc593TJJwqRbpO4.97
But the specs are still not complete yet as far as I can tell.

A "modular" web page, where the header, footer, menu, and other parts 
can reside in different files without the need for server-side scripting, 
is very close.


In the meantime, have you tried an iframe with the seamless attribute and 
some JavaScript?




--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


Re: [whatwg] Case-sensitivity of CSS type selectors in HTML

2015-05-08 Thread Roger Hågensen

On 2015-05-07 15:59, Boris Zbarsky wrote:

On 5/7/15 7:16 AM, Rune Lillesveen wrote:

This adds an implementation complexity to type selector matching.
What's the rationale for matching the selector case-sensitively in the
svg case?


The idea is to allow the selector match to be done case-sensitively in
all cases so it can be done as equality comparison on interned string
representations instead of needing expensive case-insensitive matching
on hot paths in the style system.


(Note! This is veering a little off topic.)


One way to cheapen the computational cost is to do partial 
case-insensitive matching.


If (character >= $0041) And (character <= $005A)
character = (character | $0020)
EndIf


Basically, if the character is 'A' to 'Z' then the 6th bit is set, 
thereby turning 'A' to 'Z' into 'a' to 'z'. This works for ASCII-7, 
Latin-1, and Unicode (UTF-8, for example). No need for table 
lookups; it can all be done in the CPU registers.
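The same trick expressed in JavaScript terms (asciiFold is my own name for it), with the range test written out:

```javascript
// Fold 'A'..'Z' (0x41..0x5A) to 'a'..'z' by OR-ing in bit 0x20;
// every other code point is left untouched.
function asciiFold(code) {
  return (code >= 0x41 && code <= 0x5A) ? (code | 0x20) : code;
}

console.log(String.fromCharCode(asciiFold(0x41))); // "a"
console.log(String.fromCharCode(asciiFold(0x30))); // "0" (digits unaffected)
```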


Other commonly used characters like '0' to '9' or '_' have no 
lower/upper case. And more language-specific characters are not ideal for 
such use anyway (people of mixed nationalities would have issues typing 
those characters).


So there is no need to do full case-insensitive matching. Just do a 
partial to-lowercase normalization of 'A' to 'Z' and then a 
simple binary comparison.
In optimized C or ASM this should perform really well compared to 
calling a Unicode function to normalize and lowercase the text.


This would mean restricting to 'A' to 'Z', 'a' to 'z', '0' to '9', and 
'_', but all tags/elements/properties/whatever that I can recall seeing 
only ever use those characters.
I certainly won't complain if I can't use the letter 'å' in code; 
then again, I never use weird characters in code in the first place.


How does it look in the wild? If only A to Z is used in xx% of cases, 
then restricting to that character range would allow very quick 
lowercasing and thus fast binary matching.



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/




Re: [whatwg] Case-sensitivity of CSS type selectors in HTML

2015-05-07 Thread Roger Hågensen

On 2015-05-07 13:16, Rune Lillesveen wrote:

Currently, the HTML spec says that type selectors matches case
sensitively for non-html elements like svg elements in html documents
...

This adds an implementation complexity to type selector matching.
What's the rationale for matching the selector case-sensitively in the
svg case?


Isn't SVG based on XML? Which means SVG is probably case sensitive!

I found some info here, not sure if that'll help clarify anything.
http://www.w3.org/TR/SVG/styling.html#CaseSensitivity



PS!
This is why I always make it a rule to type in lowercase anything that 
will possibly be machine-read (file names, properties/attributes); it 
also compresses better (lowercase letters are more frequent).




--
Roger Hågensen, Freelancer, http://skuldwyrm.no/



Re: [whatwg] Supporting feature tests of untestable features

2015-04-09 Thread Roger Hågensen

On 2015-04-09 11:43, Nils Dagsson Moskopp wrote:

Roger Hågensen rh_wha...@skuldwyrm.no writes:


Myself, I have to confess that I tend to use caniuse a lot. I use
it to check how far back a feature goes in regards to browser versions
and try to decide where to cut the line. In other words, I'll end up
looking at a feature and thinking: OK, IE10 supports this feature,
IE9 does not, so the minimum IE target is then IE10.


Have you tried progressive enhancement instead of graceful degradation?
I usually build a simple HTML version of a page first, check that it
works using curl or elinks, then enhance it via polyfills and other


I have, but found that relying on polyfills is no different from relying 
on a workaround or a 3rd-party framework.
It easily adds code bloat, and I'm now moving more and more towards 
static pages where the JS and CSS specific to a page are embedded in the 
page, as that supports fast static delivery.


I don't mind waiting a few months or a year for a feature to be added 
and become available among all major modern browsers. Once a feature is 
available like that, I make use of it (if I have use for it) to 
support the effort of adding it.
This does mean I end up going back to old code and stripping 
out/replacing my own code that does similar things, or 
enhancing/speeding up or simplifying the way my old code works.
I don't mind this; it's progress, and it gradually improves my code as 
the browsers evolve.
If ten years from now my old pages no longer look/work as intended, then 
that is on me or whoever maintains them.
If they are important enough, then the pages will migrate to newer 
standards/features over time naturally.



I do miss HTML versioning to some extent though; being able to target a 
5.0 or 5.1 minimum (or lowest common denominator) would be nice, 
though this could possibly be misused.

I have seen another approach to versioning that may be worth 
contemplating, though...


Using JavaScript, one calls a function asking the browser whether this 
version is supported. Example:

if (version_supported('javascript',1,2))

or
if (version_supported('css',3,0))

or
if (version_supported('html',5,0))


The browser would then simply return true or false.
True would mean that all features in the spec'd version are supported;
false would mean that they are not.

I'm using something similar for a desktop programming API I'm working 
on: the application simply asks whether the library supports this version 
(a number hardcoded at the time the program was compiled) and the library 
answers true or false.
This keeps programmers from accidentally parsing a version number 
wrong; the library is coded to do it correctly instead.


There would still be a small problem where a browser may support, for 
example, all of CSS 2 but only parts of CSS 3.

In that case it would return true for CSS 2.0 but false for CSS 3.0,
so some feature detection would still be needed.
But being able to ask the browser a general question, whether it will be 
able to handle a page that targets CSS 3, HTML5, and JS 1.2, would 
simplify things a lot.


If actual version numbers are an issue, why not use years, like so:

if (version_supported('javascript',2004))

or
if (version_supported('css',2009))

or
if (version_supported('html',2012))

Essentially asking: "do you support the living spec from 2012 or later?"

I think that using years may be better than versions (and 
you'll never run out of year numbers, nor need to start from 1.0 again).

And if finer granularity is needed, then months could be added like so:
if (version_supported('html',2012,11))


The JavaScript can then simply inform the user that they need a more 
up-to-date browser to fully use the page/app.
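A sketch of how a page might guard on the proposed call today: version_supported() does not exist in any browser, so this wrapper (my own name, versionSupported) treats its absence as "unsupported" and degrades gracefully:

```javascript
// version_supported() is the hypothetical API from this thread; no browser
// implements it, so this wrapper falls back to "false" when it is missing.
function versionSupported(tech, major, minor = 0) {
  const impl = (typeof globalThis !== 'undefined') && globalThis.version_supported;
  if (typeof impl === 'function') return impl(tech, major, minor);
  return false; // unknown capability: assume unsupported, degrade gracefully
}

if (!versionSupported('html', 5, 0)) {
  console.log('Please use a more up to date browser to fully use this page.');
}
```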



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/



Re: [whatwg] Supporting feature tests of untestable features

2015-04-08 Thread Roger Hågensen

On 2015-04-08 14:59, Kyle Simpson wrote:

Consider just how huge an impact stuff like caniuse data is having right now, 
given that its data is being baked into build-process tools like CSS preprocessors, JS 
transpilers, etc. Tens of millions of sites are implicitly relying not on real feature 
tests but on (imperfect) cached test data from manual tests, and then inference matching 
purely through UA parsing voodoo.



Myself, I have to confess that I tend to use caniuse a lot. I use 
it to check how far back a feature goes in regards to browser versions 
and try to decide where to cut the line. In other words, I'll end up 
looking at a feature and thinking: OK, IE10 supports this feature, 
IE9 does not, so the minimum IE target is then IE10.


Then I use that feature and test it in the latest (general release) 
versions of Firefox, Chrome, IE, and Opera. If it looks relatively the 
same, there are no glitches, and the code works, then I'm satisfied. If I 
happen to have a VM available with an older browser, then I might try 
that too, just to confirm what CanIUse states in its tables.


This still means that I either need to provide a fallback (which means I 
need to test for feature existence) or I need to fail gracefully (which 
might require some user feedback/information so they'll know why 
something does not work).
I do not use browser-specific code, as I always try to go for feature 
parity.


Now, being able to poke the browser in a standard/official way to ask if 
certain features exist/are available would make this much easier.


As to the issue of certain versions of a browser having bugs related to 
a feature: that has absolutely nothing to do with whether the feature is 
supported or not.
Tying feature tests to only bug-free features is silly; no idea who here 
first suggested that (I certainly hope it wasn't me), but it's stupid.


Is the feature implemented/available? Yes or no. Whether there are bugs 
or not is irrelevant. A programmer should assume that APIs are bug-free 
regardless.


Just ask Raymond Chen or the people on the Windows compatibility team what 
happens when programmers try to detect bugs or rely on bugs: fixing said 
bugs in the OS then suddenly breaks those programs, and extra code is 
needed in the OS to handle those buggy programs.


Relying on user agent strings or similar is just a nest of snakes you do 
not want to rummage around in. HTML5 pages/apps should be browser neutral.



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/




Re: [whatwg] Recorder audio and save to server

2015-04-06 Thread Roger Hågensen

On 2015-04-07 03:29, Grover Blue wrote:

I would convert PCM at client.  Not sure if this fits your needs, but heck
out:
https://github.com/audiocogs/ogg.js

Definitely use websockets.  socket.io is what you want.

You don't need another else.  Something to consider: if you've going to be
playing back the recorded audio (or broadcasting it), I'd look into
compressing it even more to optimize your bandwidth usage.


I'll second the use of websockets.
You might also want to look at 
http://stackoverflow.com/questions/20548629/how-can-i-use-opus-codec-from-javascript
The answer from Rainer Rillke is rather impressive and includes FLAC and 
Opus support via JavaScript frameworks.


I'd suggest FLAC for lossless audio and Ogg Opus for lossy; there are 
also no licensing/royalty/patent issues with these codecs.
With Ogg Opus you may be able to leverage existing Opus support in the 
browser.



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


Re: [whatwg] Supporting feature tests of untestable features

2015-04-01 Thread Roger Hågensen

On 2015-04-01 06:57, Kyle Simpson wrote:

There are features being added to the DOM/web platform, or at least under 
consideration, that do not have reasonable feature tests obvious/practical in 
their design. I consider this a problem, because all features which authors 
(especially those of libraries, like me) rely on should be able to be tested if 
present, and fallback if not present.

Paul Irish did a round-up awhile back of so called undetectables here: 
https://github.com/Modernizr/Modernizr/wiki/Undetectables

I don't want to get off topic in the weeds and/or invite bikeshedding about individual 
hard to test features. So I just want to keep this discussion to a narrow 
request:

Can we add something like a feature test API (whatever it's called) where certain 
hard cases can be exposed as tests in some way?

The main motivation for starting this thread is the new `link rel=preload` 
feature as described here: https://github.com/w3c/preload

Specifically, in this issue thread: https://github.com/w3c/preload/issues/7 I 
bring up the need for that feature to be testable, and observe that as 
currently designed, no such test is feasable. I believe that must be addressed, 
and it was suggested that perhaps a more general solution could be devised if 
we bring this to a wider discussion audience.



A feature-check API? That sort of makes sense.

I see two ways to do this.

One would be to call a function like the (fictional) featureversion() 
and get back a version indicating that the browser supports the ECMA 
something standard as a bare minimum. But version checking is something 
I try to avoid even when programming on Windows (and Microsoft 
advises against doing it).


So a better way might be:
featexist('function','eval')
featexist('document','link','rel','preload')
featexist('api','websocket')

Yeah, the preload example does not look that pretty, but hopefully you 
know what I'm getting at here. Maybe featexist('html','link','preload') 
instead?


In Windows programs I try to always dynamically load a library and then 
get a function pointer to a named function; if that fails, then I know 
the function does not exist in that DLL and I can either fail gracefully 
or provide alternative code to emulate the missing function.
It's thanks to this that a streaming audio player I made actually 
works on anything from Windows 2000 up to Windows 8.1, and it dynamically 
makes use of new features in more recent Windows versions, thanks to 
being able to check whether functions actually exist.


I use the same philosophy when doing JavaScript and HTML5 coding.

With the featexist() above, true is returned if the feature is present 
and false if not.
Now, what to do if eval() does not exist as a predefined function but a 
user-defined eval() function exists instead for some reason?
My suggestion is that featexist() should return false in that case, as it 
is not a function provided by the browser/client.


Now obviously a check like if (typeof featexist == 'function') would have 
to be done before calling featexist(), and there is no way to get around that.


Another suggestion is that if a feature is disabled (by the user, the 
admin, or the browser/client for some reason) then featexist() should 
behave as if that feature does not exist/is not supported.
In other words, featexist() could be a simple way to ask the browser: 
is this available? Can I use this right now?
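A rough sketch of the featexist() idea; the function and its categories are hypothetical, and underneath it here I only approximate them with plain existence checks:

```javascript
// featexist() as proposed does not exist; this sketch approximates the
// 'function' and 'api' categories with simple existence tests on globalThis.
function featexist(kind, name) {
  if (kind === 'function') return typeof globalThis[name] === 'function';
  if (kind === 'api') return name in globalThis;
  // 'document'/'html' checks (e.g. link rel=preload) would need real probing.
  return false;
}

console.log(featexist('function', 'eval'));        // true
console.log(featexist('function', 'noSuchThing')); // false
```

Note the limitation this sketch shares with ad-hoc detection: it cannot tell a browser-provided eval from a user-defined one, which is exactly the gap a built-in featexist() would close.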



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/



Re: [whatwg] Modify the Page Visibility spec to let UA's take into account whether iframes are visible on the screen

2015-04-01 Thread Roger Hågensen

On 2015-03-31 23:17, Felix Miata wrote:

Roger Hågensen composed on 2015-03-31 21:09 (UTC+0200):


... For Mozilla browsers, you
can go to about:config and set media.autoplay.enabled to “false�. Also,
the NoScript browser extension can make media click-to-play by default.

I hardly think a lot of users want to follow directions like that.
As a programmer/content designer it would make more sense to do so using
either attributes or javascript instead of bothering the user.

Turning off autoplay is a one time thing, not a big deal like the constant
disrespect of sites disregarding instead of embracing user needs.

...

I have a hard time seeing rationality in any option other than click-to-play.
*My* PC is a tool. *I* should be the one to decide if and when to play, and
if or when to stop or pause.



What are you suggesting exactly? That all iframes should be click-to-show?
Are you talking about iframe visibility and autopause or autounload?

If you are veering off topic then, if possible, please start a new thread 
(new subject).


I'll assume you are talking about autopause for a video tag.
As a user, you open another tab and the video pauses at once (since the 
autopause attribute is set) because it's no longer visible.
Now, if you go back to it, click play, and then tab away again, you 
have manually overridden the autopause.

There might also be an autopause option on the video UI someplace.

My suggestion was for an autopause for iframes; I only noted that it 
might be of use to video and audio tags as well, but I did not really 
outline how they should behave, as this topic is about iframes rather than 
video and audio.
Starting a new thread to discuss autopause and autounload for video and 
audio is preferred, as that will require more UI discussion than iframes 
(which have no UI).


--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


Re: [whatwg] Modify the Page Visibility spec to let UA's take into account whether iframes are visible on the screen

2015-03-31 Thread Roger Hågensen

Looking at https://developer.mozilla.org/en/docs/Web/HTML/Element/iframe

Wouldn't the addition of a new attribute to the iframe maybe be the best way?


**
autopause
If present, the client can pause any processing related to the 
iframe while the iframe is not currently visible. When unpaused, a Page 
Visibility event will be sent to the iframe as if the whole page had 
changed status from invisible to visible.
For visibility events see 
https://developer.mozilla.org/en-US/docs/Web/Guide/User_experience/Using_the_Page_Visibility_API

**

This basically makes it opt-in; it changes nothing about the behavior of 
current iframes.
An example would be an iframe that can be hidden/unhidden by the 
user clicking a button; if the iframe has the autopause attribute, 
then its state is effectively paused while hidden. Once the iframe is 
unpaused, a Page Visibility event is sent, and whatever code is running 
in the frame can then react to this and resume. As it never got an event 
indicating the page was made non-visible, a programmer should be able to 
programmatically infer that the iframe was unpaused (it only got the one 
event instead of two).


What types of iframes would benefit from this? Some chats, news feeds, 
log views: anything that constantly updates or works in the 
background but does not need to be updated when not viewable (this saves 
CPU, bandwidth, and server resources).
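Inside the iframe, the resume logic could hang off the existing Page Visibility API events; a minimal sketch, where shouldPause, pauseUpdates, and resumeUpdates are assumed app-side names, not standard APIs:

```javascript
// Decide whether periodic work should run, given a visibility state
// ('visible', 'hidden', ...) as exposed by the Page Visibility API.
function shouldPause(visibilityState) {
  return visibilityState !== 'visible';
}

// Assumed app hooks: stop/restart polling, timers, feed refreshes, etc.
function pauseUpdates()  { /* e.g. clearInterval(pollTimer) */ }
function resumeUpdates() { /* e.g. refresh once, restart polling */ }

// Browser-only wiring (document does not exist outside a page):
if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => {
    shouldPause(document.visibilityState) ? pauseUpdates() : resumeUpdates();
  });
}
```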


And maybe down the road one could see whether a similar autopause could be 
added to the parent page itself (or not). Maybe autopause would make 
sense as an attribute on the body (but actually applying to the 
whole document, including any scripts declared in the head).


Adding an autopause attribute to an iframe is probably the easiest way to 
add/deal with this.
If nobody ends up using it, it can easily be dropped again too, so there 
is no immediate downside that I can currently think of, at least.



On 2015-03-31 02:17, Seth Fowler wrote:

I do want to clarify one other thing: we’re definitely not yet at the point of 
implementing this in Gecko, especially not in a release version. We think this 
functionality is important, and modifying the Page Visibility spec is one way 
to make it accessible to the web. It’s probably even the nicest way to make it 
accessible to the web, if it’s feasible. But it’s not certain that it’s web 
compatible or that everyone agrees this is the best way to go; we’re at the 
starting point of the process here.

I’d be interested to hear any comments that others may have!

Thanks,
- Seth


On Mar 30, 2015, at 3:47 PM, Seth Fowler s...@mozilla.com wrote:

I think we should modify the Page Visibility spec to let UA’s take actual 
visibility of iframes into account when deciding if an iframe is hidden.

This design doesn’t do much for iframes which may be doing significant work, 
though. The most obvious example is HTML5 ads. These ads may be performing 
significant work - computation, network IO, rendering, etc. Some or all of that 
work is often unnecessary when the ad is outside the viewport. Having an API 
that would allow those ads to throttle back their work when they’re not visible 
could have significant positive effects on performance and battery life.

We could get these benefits through a very simple modification of the Page 
Visibility spec. We should make the visibility of iframes independent of the 
top-level browsing context, and instead let UA’s take the actual visibility of 
the iframes into account. If an iframe has been scrolled out of the viewport, 
has become occluded, or has otherwise been rendered non-visible, we should 
regard the iframe as hidden and dispatch a visibilitychange event to let the 
iframe throttle itself.
...
- Seth


--
Roger Hågensen, Freelancer, http://skuldwyrm.no/



Re: [whatwg] Modify the Page Visibility spec to let UA's take into account whether iframes are visible on the screen

2015-03-31 Thread Roger Hågensen

On 2015-03-31 10:16, duanyao wrote:

autopause looks promising, but I want to ask for more: also add an
autounload attribute to allow UAs to unload specific iframes when
they are invisible.


This is also a good idea.

I also realized that video and audio could benefit from an 
autopause attribute, so that when the user tabs away or minimizes the 
browser window, the video or audio automatically pauses.
Currently this is done using JavaScript and Page Visibility, but with 
autopause a plain video or audio tag could do the same without JavaScript.
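The existing JavaScript workaround can be written today with the Page Visibility API roughly as below. This is a sketch: it pauses all media when the tab becomes hidden and leaves resuming to the user; the guard makes it inert outside a browser.

```javascript
// Pause every video/audio element when the document is hidden.
function pauseHiddenMedia(doc) {
  if (doc.hidden) {
    for (const media of doc.querySelectorAll('video, audio')) {
      media.pause();
    }
  }
}

// Only wire up the listener in a real browser environment.
if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => pauseHiddenMedia(document));
}
```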


It is also possible that autounload may be of use there as well, for 
live streams for example, where tabbing back finds the player reset to 
its starting state.
In this case autopause and autounload would be mutually 
exclusive; if both are present (by mistake?), I think autopause should 
take priority, as it is less destructive (to the user) than autounload.


--
Roger Hågensen, Freelancer, http://skuldwyrm.no/


Re: [whatwg] Modify the Page Visibility spec to let UA's take into account whether iframes are visible on the screen

2015-03-31 Thread Roger Hågensen

On 2015-03-31 16:09, Boris Zbarsky wrote:

On 3/31/15 2:18 AM, Roger Hågensen wrote:

What type of iframes would benefit from this?


Ads, from a user point of view.

Now getting them to opt in to being throttled...

-Boris



Would an ad delivery network not prefer to avoid pushing out ads that 
the user is not seeing at all?
If not, they are only wasting bandwidth/CPU/memory on the server 
and generating impressions that are wasted on nothing (important for the 
advertisers, since they pay for them).


It's not throttling, it's proper use of resources. And while an ad 
network cannot be guaranteed there are eyeballs on the ad, it can at 
least be assured that the ad is visible.



Imagine a clock or a counter or similar. Personally I'd love to use 
something like this for a server project's status monitor: since it 
shows live status, there is no point in updating it while it is not 
visible, and a moment after being made visible again it will be 
current within a second. No wasted CPU, bandwidth, or server 
resources.


And I already mentioned video and audio (if autopause is taken beyond 
just iframes).


I often open multiple tabs, and then I go through them one by one later. 
If I end up opening 3-4 videos at the same time I have to stop the other 
3 so I do not get a cacophony of 4 videos at once. There is also the 
issue of quadruple bandwidth load on the server.
I also often open anywhere from 2 to 20 tabs with pages in them. What 
point is there in doing ad rotation or similar in all of them if I'm not 
looking at any except tab #19?



But who knows, you may be right; getting people to opt in is an issue. 
Take gzip, for example: almost a halving of bandwidth on average, yet 
there are so many sites out there not making use of it. But that is 
their choice and their loss, I guess.


--
Roger Hågensen, Freelancer, http://skuldwyrm.no/



Re: [whatwg] Modify the Page Visibility spec to let UA's take into account whether iframes are visible on the screen

2015-03-31 Thread Roger Hågensen

On 2015-03-31 20:55, Nils Dagsson Moskopp wrote:

Roger Hågensen rh_wha...@skuldwyrm.no writes:


I often open multiple tabs, and then I go through them one by one later.
If I end up opening 3-4 videos at the same time I have to stop the other
3 so I do not get a cacophony of 4 videos at once.

This is something that can be fixed by the UA: For Mozilla browsers, you
can go to about:config and set media.autoplay.enabled to “false”. Also,
the NoScript browser extension can make media click-to-play by default.



I hardly think many users want to follow directions like that.
As a programmer/content designer it would make more sense to do this using 
either attributes or JavaScript instead of bothering the user.


--
Roger Hågensen, Freelancer, http://skuldwyrm.no/



Re: [whatwg] Effect of image-orientation on naturalWidth/Height

2015-03-10 Thread Roger Hågensen

On 2015-03-10 09:29, Anne van Kesteren wrote:

On Tue, Mar 10, 2015 at 12:01 AM, Seth Fowler s...@mozilla.com wrote:

I wanted to get the opinion of this list on how image-orientation and the img 
element’s naturalWidth and naturalHeight properties should interact.

I thought there was some agreement that image-orientation ought to be
a markup feature as it affects the semantics of the image (or perhaps
investigate whether rotating automatically is feasible):

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=25508




Just my opinion, but I believe the rotation of an image should be stored in 
the image itself. With JPG this is possible; I'm not sure about PNG, WebP, 
or SVG though.


Now, if an image's space in a webpage (as reserved by CSS or the 
width/height attributes on an img tag) is, say, 4:3, but the image carries 
rotation info so it ends up 3:4 instead, then ideally the browser should 
fit that rotated 3:4 image within the reserved 4:3 space.
The closest analogy I can think of is the black bars on movies, although 
in this case it would be the background behind the image showing through, 
maybe.


If attributes or CSS override the orientation of an image I'd consider 
that an image effect instead.
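The fitting idea (a rotated 3:4 image letterboxed inside a reserved 4:3 box) can be sketched as a small geometry helper. This is illustrative only; the function name and the quarter-turn parameter are my own, not from the image-orientation spec.

```javascript
// Fit an image inside a reserved box, accounting for EXIF-style rotation:
// an odd number of quarter turns swaps the image's width and height.
function fitRotated(boxW, boxH, imgW, imgH, quarterTurns) {
  const [w, h] = quarterTurns % 2 ? [imgH, imgW] : [imgW, imgH];
  const scale = Math.min(boxW / w, boxH / h); // preserve aspect ratio
  return { width: w * scale, height: h * scale };
}

// A 400x300 image rotated 90 degrees inside a 400x300 box:
console.log(fitRotated(400, 300, 400, 300, 1)); // { width: 225, height: 300 }
```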



--
Roger Hågensen, Freelancer



Re: [whatwg] HTML tags.Panorama, Photo Sphere, Surround shots

2014-12-05 Thread Roger Hågensen

On 2014-11-18 06:57, Paul Benedict wrote:

Is it really the responsibility of HTML to be told about this? I wouldn't
think so. My initial thoughts are that all such information should be
encoded in the file format of the image. I am not saying such information
exists (maybe partially though), but that's where I think it should reside.


Cheers,
Paul



Yeah! Ideally a browser (or client) should be able to request the meta 
information for an image (if available); this would probably be better 
suited as part of HTTP/2.
Instead of a HEAD request, a client could make a META request, which would 
be the same as a HEAD request but with all the meta info the server can 
provide about the file (lens/ISO info, etc.).
Server side, something like Apache could let a handler fetch that info 
from a JPG file (maybe even through a caching proxy to speed things up). 
Key:value pairs would make sense, I guess (with maybe a Meta- prefix).
But in HTML itself such data would just bloat up the page (and one could 
always use AJAX and server-side scripting or similar to fetch such meta 
info).
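A client-side sketch of that idea follows. Note that both the META request method and the Meta- header prefix are proposals from this message, not part of any HTTP specification; the helper name is mine.

```javascript
// Pull the hypothetical "Meta-"-prefixed fields out of a response's headers,
// returned as a plain key/value object with the prefix stripped.
function extractMeta(headers) {
  const meta = {};
  for (const [key, value] of Object.entries(headers)) {
    if (key.toLowerCase().startsWith('meta-')) {
      meta[key.slice(5)] = value; // strip the "Meta-" prefix
    }
  }
  return meta;
}

console.log(extractMeta({ 'Meta-ISO': '200', 'Content-Type': 'image/jpeg' }));
// { ISO: '200' }
```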



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-14 Thread Roger Hågensen

On 2014-11-14 19:10, Evan Stade wrote:


The problem is that we don't think autocomplete=off is used judiciously.


Could you make a compromise and respect autocomplete=off for only 
type=text, and ignore autocomplete=off for all other input types as 
you guys planned?
And then look at how the web reacts to autocomplete=off being ignored 
for the other types before deciding on how to handle it for type=text?


I know you risk ending up with bad web designers making all inputs 
type=text instead of using the proper types like a good web designer 
would (or should).
But for the end user that would be better than textarea hacks or other 
weird workarounds to clear or ignore things, or fully custom inputs.


If the results are positive (I assume the web will be watched to see how 
it reacts to such changes, right?) and web designers actually do it 
correctly rather than hacking or changing input types to text en masse, 
then type=text would work fine going forward as an input type for text 
that changes so frequently that any autofill or autocomplete would make no 
sense / be a waste of resources.




Regards,
Roger.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Modifying the URL inside beforeunload event

2014-11-14 Thread Roger Hågensen

On 2014-11-15 02:08, cowwoc wrote:


Personally the way I build apps these days is to just serve static 
files over HTTP, and do all the dynamic stuff over WebSocket, which 
would sidestep all these issues.


You mean you have a single-paged application and rewrite the 
underlying page asynchronously? How do you deal with needing to load 
different Javascript files per page? I guess you could simply load all 
the JS files all your pages would ever need, but I assume you do 
something a little more sophisticated.


Thanks,
Gili



The way you did it was with what I call a "one shot cookie". On a project I 
worked on, I built a solution where sessionStorage kept a token 
and a cookie was set via JavaScript, and thus sent along with the request 
to the server, which then told the browser to delete the cookie in its 
reply.
If the token needs updating, the server can send a one shot cookie 
to the browser; the JavaScript then applies the needed changes to 
generate a new token (maybe a new salt or nonce is given) and then 
deletes the cookie.
Also, this form of cookie use does not fall under the cookie law in 
Europe, AFAIK, as it's part of a login mechanism, so there is no need to 
show one of those annoying "this site/page uses cookies" warnings.


Using sessionStorage and cookies to pass info to/from the server is my 
new preferred way, as you can control how often the cookies are sent and 
to which part of the site.


The solution is not as elegant as I'd like it though. One issue is you 
can't set the timeout for a cookie (sent from the server) to 0 or 1 sec 
or similar as the browser could delete the cookie before your javascript 
can get the data from it.
In the other direction the issue is reliably deleting the cookie after 
it has been sent (sometimes one can use POST requests and avoid this 
part, but that may not always be practical).
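The server side of the one shot cookie round trip can be sketched as header builders. The Set-Cookie attributes used here are standard; the helper names are mine, and the deletion-by-Max-Age=0 approach is one common way to expire a cookie, offered as an assumption rather than the exact mechanism used in the project above.

```javascript
// Build a Set-Cookie header value that creates the one-shot cookie.
function setCookieHeader(name, value) {
  return `${name}=${encodeURIComponent(value)}; Path=/`;
}

// Build a Set-Cookie header value that deletes it in the reply:
// Max-Age=0 tells the browser to discard the cookie immediately.
function deleteCookieHeader(name) {
  return `${name}=; Max-Age=0; Path=/`;
}

console.log(setCookieHeader('token', 'abc 123')); // token=abc%20123; Path=/
console.log(deleteCookieHeader('token'));         // token=; Max-Age=0; Path=/
```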


Looking at 
http://stackoverflow.com/questions/26556749/binding-tab-specific-data-to-an-http-get-request
the solution you ended up with is very similar to what I ended up doing, 
I'm not aware of any better way to do this (yet).



Regards,
Roger.




--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Modifying the URL inside beforeunload event

2014-11-14 Thread Roger Hågensen

On 2014-11-15 03:07, cowwoc wrote:

On 14/11/2014 8:56 PM, Roger Hågensen wrote:

Did you also use a URL parameter to indicate which cookie the server 
should look in? I think my solution is problematic in that I have to 
go through GetTabId.html on page load (which looks ugly) and even 
worse I recently discovered that View Source does not work because the 
browser re-sends the request without the tabId parameter (which I 
stripped to make the URL shareable).




What I did was use a one shot cookie; a request was only ever made 
by user interaction.
The server basically gets a token cookie and an id cookie, then responds 
and sets the cookies for deletion. There is only ever one token 
and one id cookie sent, so re-using the cookie names is not an issue.


Now, if each tab automatically makes requests, that is a different 
issue; in that case using POST instead of GET is advised, and 
you only ever have to send cookies from the server, not to the 
server.



Please describe your approach in more detail on 
http://stackoverflow.com/q/26556749/14731 so we can learn for each.




Don't have an account there, sorry.



Regards,
Roger.




--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-13 Thread Roger Hågensen

On 2014-11-13 18:11, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net writes:


I just checked WHATWG HTML5 and rel=bookmark isn't there at all (I
didn't check W3C HTML5 though).

The section on the bookmark link type in WHATWG HTML can be found here:
https://html.spec.whatwg.org/multipage/semantics.html#link-type-bookmark

The section on the bookmark link type in W3C HTML can be found here:
http://www.w3.org/TR/html5/links.html#link-type-bookmark



I have no explanation for missing its entry in the WHATWG spec; I 
could have sworn I searched for "bookmark".
As for the W3C, I suspect I searched the wrong document (it wouldn't be the 
first time I've done that).



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-13 Thread Roger Hågensen

On 2014-11-13 18:19, Nils Dagsson Moskopp wrote:
AFAIK, all of these interface details lie outside the scope of the 
HTML specification (and rightly so, IMHO). If you need a standard 
symbol for bookmarks I suggest to use U+1F516 BOOKMARK, which looks 
like this: 🔖. 


Then don't spec it, but advise or suggest it. Even the bookmark example 
at 
https://html.spec.whatwg.org/multipage/semantics.html#link-type-bookmark 
says "A user agent could determine which permalink applies to which part 
of the spec", thereby acting as an advisory hint / best-practice 
suggestion (note the use of "could").


I also tested the example code (with doctype html, obviously) and the 
browser behaviour is still the same: rel=bookmark is simply ignored. 
In that case shouldn't rel=bookmark be removed from the WHATWG HTML 
spec to reflect actual use?




--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-13 20:20, Evan Stade wrote:

Currently this new behavior is available behind a flag. We will soon be
inverting the flag, so you have to opt into respecting autocomplete=off.



I don't like that browsers ignore HTML functionality hints like that.

I have one real live use case that would be affected by this: 
http://player.gridstream.org/request/
This radio song request form uses autocomplete=off for the music request 
because a listener would probably not request the same bunch of songs 
over and over.


Some might say that a request form should use a different input type 
like... well, what? It's not a search input, is it? There is no 
type=request, is there?
In fact, the request field is a generic text field that allows a short 
message if needed.


PS: Please note that the form is actually live, so if you do 
enter and submit something, there might be a DJ at 
that moment actually seeing your request.



Why not treat autocomplete=off as a default hint, so if it's off then 
it's off and if it's on then it's on, but allow the user to right-click (to 
bring up the context menu for the input field) and toggle autocomplete 
for that field?


I checked Chrome, IE, Opera, and Firefox; the context menu does not 
offer a choice to toggle/change the autocomplete behavior at all (for 
type=text).


Also, the reason the name field has autocomplete=off is simple: if 
somebody uses a public terminal, not having the name remembered is nice.

Instead HTML5's sessionStorage is used to remember the name.
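That can be sketched roughly as below. sessionStorage keeps the value only for the tab's lifetime, so nothing persists on a public terminal once the tab is closed. A Map stands in for window.sessionStorage so the sketch runs outside a browser; the key name is arbitrary.

```javascript
// Stand-in for window.sessionStorage (same get/set shape for this sketch).
const storage = new Map();

function rememberName(name) { storage.set('listenerName', name); }
function recallName() { return storage.get('listenerName') || ''; }

rememberName('Roger');
console.log(recallName()); // "Roger"
```

In a real page the two functions would call `sessionStorage.setItem` / `sessionStorage.getItem` instead of the Map.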


Perhaps that could be a solution: if autocomplete=off is to be 
ignored by default, then at least let the text cache be per-session (and 
only permanently remember text if autocomplete=on?).



Also do note that the type of field in this case is type=text.

Also, banks generally prefer to have autocomplete=off for credit card 
numbers, names, addresses etc. for security reasons. And that is now to 
be ignored?



Also note that in Norway this month a lot of banks are rolling out 
BankID 2.0, which does not use Java; instead it uses HTML5 tech.
And even in today's solution (in my bank, at least), login is initiated by 
entering my social ID number into a text input field with 
autocomplete=off.
My own computer I have full control over, but others may not (a workplace 
computer, say, while they walk off for a coffee), and someone could walk by, 
type a first digit 0-9, and see whatever social ID numbers had been entered.




(or did I misread what you meant by autofill here?)


--
 Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-14 02:49, Glenn Maynard wrote:

On Thu, Nov 13, 2014 at 7:17 PM, Roger Hågensen resca...@emsai.net wrote:

I have one real live use case that would be affected by this.

http://player.gridstream.org/request/


Unfortunately, even if a couple pages have a legitimate use for a feature,
when countless thousands of pages abuse it, the feature needs to go.  The
damage to people's day-to-day experience outweighs any benefits by orders
of magnitude.


Punishing those who do it right because of the stupidity of the many... 
can't say I'm too thrilled about that.





This radio song request uses autocomplete=off for the music request
because a listener would probably not request the same bunch of songs over
and over.


(The use case doesn't really matter to me--the abuse is too widespread--but
this is wrong.  If I request a song today, requesting it again tomorrow or
the next day is perfectly natural, especially if my request was never
played.)



No, it's inherently correct for this use case, as listeners tend to enter 
things like:


Could you play Gun's'Rose?
Love you show, more rock please?
Where are you guys sending from?


  Also, banks generally prefer to have autocomplete=off for credit card
numbers, names, addresses etc. for security reasons. And that is now to be
ignored?


Yes, absolutely.  My bank's preference is irrelevant.  It's my browser, not
my bank's.  This is *exactly* the sort of misuse of this feature which
makes it need to be removed.


Then provide a way for the user (i.e. you and me) to toggle autocomplete 
for individual fields.

That way I could toggle autocomplete off for the request field.

You wouldn't take away somebody's wheelchair without at least providing 
them a chair would you? (yeah stupid metaphor, I know, it sounded better 
in my head, really.)






Also the reason the name field also has autocomplete=off is simple, if
somebody uses a public terminal then not having the name remembered is nice.


This is another perfect example of the confused misuse of this feature.
You don't disable autocompletion because some people are on public
terminals--by that logic, every form everywhere would always disable
autocomplete.  This must be addressed on the terminal itself, in a
consistent way, not by every site individually.  (Public terminals need to
wipe the entire profile when a user leaves, since you also need cache,
browser history, cookies, etc.)



Point taken.


What about https://wiki.whatwg.org/wiki/Autocompletetype ?
Couldn't a type=chat be added then?

That live example above was just one.
What about web chat clients that use input type=text? Do you really want 
autocomplete always forced on for those?
If users can't toggle autocomplete on/off per field themselves, 
then a type must exist where autocomplete is off by default.


Is that too much to ask for? (as both a user and developer)

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-14 02:49, Glenn Maynard wrote:
Unfortunately, even if a couple pages have a legitimate use for a 
feature, when countless thousands of pages abuse it, the feature needs 
to go. The damage to people's day-to-day experience outweighs any 
benefits by orders of magnitude.

  Also, banks generally prefer to have autocomplete=off for credit card
numbers, names, addresses etc. for security reasons. And that is now to be
ignored?

Yes, absolutely.  My bank's preference is irrelevant.  It's my browser, not
my bank's.  This is *exactly* the sort of misuse of this feature which
makes it need to be removed.



If autocomplete=off is ignored by default (unless the user crawls into the 
browser settings, possibly under advanced settings somewhere?), 
then those who misuse it today will simply continue to do so.

Take the following example (tested only in Firefox and Chrome).
http://jsfiddle.net/gejm3jn1/

Is that what you want them to start doing?
If a bank or security site wishes to have input fields without 
autocomplete they can just use textarea.

Are you going to enforce autocomplete=on for textarea now?

Why not improve the way autocomplete works so there is an incentive to 
use it the right way? (Sorry, I don't have any clever suggestions on that 
front.)



My only suggestion now is:
Default to autocomplete=off working just as it does today.
Provide a global setting under the browser's Privacy settings. Per-site 
privacy settings are also possible (site-specific).
Then add a context menu entry to all input fields where autocomplete can be 
enabled/disabled. (Spellcheck already does this in most 
browsers, for example.)





--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-14 03:57, Ben Maurer wrote:

If the site sets autocomplete=off could you disable the saving of new 
suggestions? One of the main use cases for turning off autocomplete is to 
disable the saving of sensitive or irrelevant information. If the user is 
filling in an address or cc num it's likely they have the opportunity to save 
that on other sites.


Looking at https://wiki.whatwg.org/wiki/Autocompletetype
I see credit cards have their own subtype; this would allow some 
granularity (possibly tied to the security/privacy preferences the 
user has set in the browser).


Then there is this http://blog.alexmaccaw.com/requestautocomplete
(hmm, that name/email in the example image there looks very familiar... :P )

Now that is a very good incentive to actually use autocomplete: it saves 
me from having to start typing into every field to trigger the 
autocomplete popup list.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-14 04:30, Glenn Maynard wrote:
(Trimming for time and to avoid exploding the thread.  Others can 
respond to the rest if they like.)


No it's inherently correct for the use case as listeners tend to
enter things like:


Could you play Gun's'Rose?
Love you show, more rock please?
Where are you guys sending from?


(You said would probably not request the same bunch of songs over and 
over, and now you're replying as if you said something completely 
different.)




What do you mean?

Where are you guys sending from?
vs
WSP please.
vs
WASP please.

If the listener presses W, all of those are suggested.
I'm not sure if you have ever seen how listeners type, but there are a 
lot of weird things and misspellings.



This is getting more off topic, but... have you ever mistyped something, 
and now autocomplete keeps suggesting your misspelling every time? The 
only way to fix it is to nuke all your data; there is no way to 
edit or control the autosuggest data in a browser.




autocomplete they can just use textarea.
Are you going to enforce autocomplete=on for textarea now?


I'm not worried about that at all.  When autocomplete doesn't happen,
people blame the browser (most people aren't web authors and don't know
this is the web page's fault).  When text entry is glitchy because the page
used a textarea or other ugly hacks, it's the web page that looks bad.
That's its own deterrant.



And what about web chat clients? What should, for example, a WebSocket-based 
chat client use for its text input then?
input type=text autocomplete=off makes sense in that case; using a 
textarea does not, since you have to catch the Enter key 
yourself because a textarea cannot trigger an onsubmit event.
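The Enter-catching workaround a textarea would force on you can be sketched as below, factored so it runs outside a browser. In a page the returned handler would be attached with `textarea.addEventListener('keydown', handler)`; `sendMessage` and the accessor callbacks are hypothetical names.

```javascript
// Build a keydown handler that submits on Enter (Shift+Enter still inserts
// a newline, the usual chat convention).
function makeChatKeyHandler(getValue, clear, sendMessage) {
  return (e) => {
    if (e.key === 'Enter' && !e.shiftKey) {
      e.preventDefault();       // stop the newline from being inserted
      sendMessage(getValue());
      clear();
    }
  };
}

// Exercise the handler with a minimal mock event:
let text = 'hello';
let sent = null;
const handler = makeChatKeyHandler(() => text, () => { text = ''; },
                                   (m) => { sent = m; });
handler({ key: 'Enter', shiftKey: false, preventDefault() {} });
console.log(sent); // "hello"
```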


At the very least allow autocomplete=off to still work on type=text 
or simply make a new type=chat where it can be set to off.


When autocomplete doesn't happen, people blame the browser
Yes, and so do I when I have to dig into the bowels of the settings to 
force the darn thing off. I prefer to have autocomplete off for password 
fields, as it helps train my memory into remembering the myriad of 
passwords I use around the net.

And now it's going to autosuggest my passwords by default?
Over the years I've been more annoyed with autocomplete being on than 
off, so my annoyance is the reverse of yours, but I'm not 
advocating getting rid of it because of that.


It's annoying enough with websites informing me again and again that they 
use cookies.
Same thing with "Would you like to remember this password?" where I click 
"never", and then the next time I'm asked again, and again.


If autocomplete=off is going to be ignored and autocomplete will always 
be on from now on, then remove it from the specs entirely.
But at the very least, create another spec that ensures a user can choose 
which sites and fields should and should not have autocomplete. Leave it 
in the hands of the users; I'd be happy with that.


Also the autocomplete list grows pretty big eventually, how do you clear 
it then? Some stuff like my email I might want to have autocomplete for 
but not other inputs (like a Subject field in a contact form), but to 
clear that I'll have to clear all the autocomplete stuff.


If you can address my concerns or point out how I can handle all 
this, then I'll be satisfied.
But if the browser does not let me control my autocomplete, then at least 
I'd want the web form to be able to do so; there, at least, the web 
developer might have provided the option to toggle it, unlike the 
browser.


Take Chrome 38's settings. Hidden behind "Show advanced settings...", 
under Privacy, I find a "Clear browsing data" button.
Clicking that shows a window containing a checkbox labeled "Autofill form 
data".

So I have the option of clearing all form/autocomplete data, or none at all.

I'd rather have it toggleable in the input field's context menu, and 
while at it, a "clear suggestions" option for that field.
Add that and I'll happily see autocomplete=off and autocomplete=on 
vanish from the spec, but not before.


There is no granularity in the browser settings for 
autocomplete/fill/suggest.
With autocomplete=off and on there is at least some granularity (but 
obviously flawed otherwise this would be a non-issue).




Also, would a compromise be possible, temporarily at least? Like 
autocomplete=off working only for input type=text?





--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data

2014-11-13 Thread Roger Hågensen

On 2014-11-14 08:02, Evan Stade wrote:

On Thu, Nov 13, 2014 at 5:17 PM, Roger Hågensen resca...@emsai.net wrote:


On 2014-11-13 20:20, Evan Stade wrote:


Currently this new behavior is available behind a flag. We will soon be
inverting the flag, so you have to opt into respecting autocomplete=off.



I don't like that browsers ignore HTML functionality hints like that.

I have one real live use case that would be affected by this.
http://player.gridstream.org/request/
This radio song request uses autocomplete=off for the music request
because a listener would probably not request the same bunch of songs over
and over.


autocomplete=off will still be respected for autocomplete data. This
should cover your use case.



Also, banks generally prefer to have autocomplete=off for credit card
numbers, names, addresses etc. for security reasons. And that is now to be
ignored?


I'm not sure what security threat is addressed by respecting
autocomplete=off.


SSNs, PINs, and so on.
Sure, it's the user's responsibility to ensure their PC/laptop/tablet is 
secured.
But it's very quick to press 0-9 and you've got a PIN; that said, a 
bank should have two-factor authentication anyway (or better), and PINs 
can be changed. SSNs cannot, though. Also, the government in Norway is 
pretty strict about the leaking of SSNs (here it's called a Personal 
Number), and those start with 0-9, so it's quick to get the 
autocomplete to spit one out.


This is also autocomplete, not Autofill (in Chrome parlance). 


In that case, my mistake; autocomplete, autofill, autosuggest, input 
history, it all kind of blurs together, so apologies for that.


Would there be any point in having a per-form autofill on/off instead?
That way, if autofill=off is set on the form itself, the user 
could be prompted "This site wishes to not store any form data. Agree? 
Yes/No" (worded better than I did there, ideally), and the browser would 
remember the choice the user made, so the next time the form is either 
autofilled or not, based on that choice.
And if the autofill=off hint is missing (or set to on), the user 
is never prompted.


This would give even more power to the user than currently.

If it were my bank, I would probably (if shown such a prompt) prefer not 
to have anything autofilled or autocompleted.
But if it were a comment form on a blog, I'd probably want that 
(autofill and/or autocomplete, etc.).
As a user I should be able to choose that easily (digging around in 
advanced settings is not what I'd call easy).
The key, though, is that it defaults to autofill, the user prompt only 
appears if autofill=off is set as an attribute on the form, and the 
user's choice is remembered.


Geolocation data is prompted for in a similar way to what I describe 
here, right?



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-12 Thread Roger Hågensen

On 2014-11-11 23:31, Markus Lanthaler wrote:

On 7 Nov 2014 at 20:01, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net writes:


A link element in the header, maybe call it <link rel="share"
href="http://example.com/article/12345/" />
or <link rel="share" /> if the current URL (or the canonical URL link, if
present) should be used, although I guess in a way rel=share will
probably replace the need to use rel=canonical in the long run.

I do not understand. Why should one invent a rel value (“share”) that
conveys the same semantics as an already existing one (“canonical”) ?

I also have to admit that I struggle to see what value adding a rel=share 
link to a page adds!? If you look at how people share links (they copy and paste what's 
shown in the browser's address bar) then I wonder why anything at all is needed on the 
page to be shared... The story is obviously different for Share Web APIs or share 
endpoints as they are called in https://wiki.whatwg.org/wiki/Sharing/API  (Facebook, 
Reddit, bitly etc.).


Then a rel=share could be used to provide hints for those (in the form 
of an OpenShare standard similar to OpenSearch?).
But good point nonetheless; rel=bookmark is very underused as well, 
probably because its original intent was superseded by people 
bookmarking http://example.com/somepage#section
I just checked WHATWG HTML5 and rel=bookmark isn't there at all (I 
didn't check W3C HTML5 though).



The most interesting question however is why (desktop) browsers haven't added a 
share button till now..


Wish I knew. As I mentioned in another post, just bookmarking a URL is 
still not fully supported. (Right-click a URL in Opera and Chrome and I 
see no Bookmark option there; Firefox and IE do, however.)



Anyway, my point was (probably muddled by me) that a Sharing API may 
just encompass the whole sharing path, which as you said above starts 
with people copying/dragging/right-clicking an address bar or URL.
Once that URL is captured (along with any possible hints) it's passed to 
the Share API, and I feel it is important that the initial user step is 
also covered. (As that is not documented at all currently, right?)


Which brings up another issue: how far is too far? Should the naming be 
standardized as well?


Right-click a URL on a page and what do you see?
Chrome shows "Copy link address"
Firefox shows "Bookmark This Link" and "Copy Link Location"
IE shows "Add to favorites..." and "Copy shortcut"
Opera shows "Copy link address"

Right-click a page and what do you see?
Chrome shows nothing
Firefox shows nothing
IE shows "Add to favorites..." and "Create shortcut"
Opera shows "Add to Speed Dial", "Add to bookmarks" and "Copy address"

Right-click an address field and what do you see?
Chrome shows "Copy"
Firefox shows "Copy"
IE shows "Copy"
Opera shows "Copy"

Very confusing and inconsistent.

I'd like to see the following:
Right-click a URL on a page and see "Copy Link" and "Bookmark/Share Link..."
Right-click a page and see "Copy Link" and "Bookmark/Share Link..."
Right-click an address field and see "Copy Link" and "Bookmark/Share Link..."
For touch screens/devices, holding the finger down for x amount of time 
would equal a right-click.


"Copy Link" will simply copy to the clipboard. Drag and drop behaves 
the same as "Copy Link".

"Bookmark/Share Link..." will present a Share API.

Opera has a neat thing when you bookmark a page: you are given an option 
of either a normal bookmark or a Speed Dial bookmark (tiny icon), and it 
also lets you choose the look of your bookmark (site logo, page 
thumbnail or text). By the looks of it, it would be very easy to add other 
forms of bookmarks or sharing to that UI (Facebook and Twitter etc.).


To me there is no difference between bookmarking a link and sharing a 
link; a bookmark is simply you sharing with yourself.


I also wonder if a standardized icon/symbol should exist for a 
Bookmark/Share button on the surrounding UI of a browser.
Opera has a heart symbol, Firefox has a star and clipboard/list thingy, 
IE has a star, and Chrome has a star.


A star has been used for Favorite/Bookmark for quite a while.
So what about Bookmark/Share? Does a book with a star make sense, or 
is that too cluttered? Or is Opera on trend with their heart?



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-11 Thread Roger Hågensen

On 2014-11-10 10:35, Anne van Kesteren wrote:

On Fri, Nov 7, 2014 at 7:38 PM, Roger Hågensen resca...@emsai.net wrote:

A worthy goal would be to help developers de-clutter websites from all those
share icons we see today, so if this could be steered towards that it would
be great.

That is what the proposal is doing. The share button would be part of
the browser UI. Perhaps you misunderstood it?

(However, it is unclear whether all domains are interested in such a migration.)




I must have misunderstood; I saw window.close() mentioned and I 
thought this was a JavaScript API suggestion for yet another way to 
share things.


I looked a bit closer now and wonder if this is related in any way to 
https://wiki.mozilla.org/Mobile/Archive/Sharing ?


Do you plan to go the OpenShare route (modeled after OpenSearch) or 
something simpler like I mentioned earlier?


If all a web author needs to do is slap a rel=share on an <a> tag or a 
<link> tag in the head and then have it automatically appear/be listed in a 
browser Share UI for that page, then that would be ideal in my opinion.
Something like an OpenShare could hopefully build further on this, but 
for wide adoption, the simpler the better.
Also, OpenSearch is for searching an entire site or parts of it, while an 
OpenShare would be for just one page or link, so that would be overkill, 
and it would cause another HTTP request to occur, which is a waste IMO.


I'm also curious if any browsers actually do something if multiple 
rel=bookmark links exist in a page (head and body); are they taken into 
account in the Bookmark UI at all? I certainly can not recall ever seeing 
this happen.


A quick test in Chrome, Firefox, Opera, and IE here with the following in 
head:

<link href="http://example.com/test3" rel="bookmark" title="Test 3">
<link href="http://example.com/test4" rel="bookmark" title="Test 4">

And the following in body:
<a href="http://example.com/test1" rel="bookmark" title="Test 1">Click Here1</a>
<a href="http://example.com/test2" rel="bookmark" title="Test 2">Click Here2</a>

<a href="http://example.com/test0" title="Test 0">Click Here0</a>

The result is the same: if I use the browser UI bookmark then the head 
links are ignored, and if I right-click the body <a> links then I'm not 
given a bookmark choice at all, just copy the URL or save it.


If rel=bookmark is so ignored, perhaps it would be best to take bookmark (and 
to some extent canonical) and roll them into a rel=share standard 
which is defined by/tied to this activities/intents proposal?


Note! Firefox allows right-clicking any URL and choosing to bookmark it, 
and IE does the same but it's called Favorites there instead; in either 
case I assume that rel=bookmark is ignored, and the title is also ignored, 
as the test0 link, which does not specify rel=bookmark, is treated 
identically to them. Opera and Chrome do not seem to allow right-clicking 
a URL and bookmarking it. As I do not have Safari, I have no idea 
what it does in these cases.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-09 Thread Roger Hågensen

On 2014-11-07 20:01, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net writes:


A link element in the header, maybe call it <link rel="share"
href="http://example.com/article/12345/" />
or <link rel="share" /> if the current URL (or the canonical URL link, if
present) should be used, although I guess in a way rel=share will
probably replace the need to use rel=canonical in the long run.

I do not understand. Why should one invent a rel value (“share”) that
conveys the same semantics as an already existing one (“canonical”) ?



Three reasons:
1. HTTP (301) redirects are advised over rel=canonical; Matt Cutts at 
Google has posted about that in the past, as far as I recall. And it 
makes sense, as the bots don't need to parse the page to get the 
canonical URL.
2. Bookmarking should be of the current page the user has displayed; if 
they bookmark the page and a different URL is bookmarked, I'd consider 
that undesired behaviour (in the eyes of the user) unless a UI informs 
them or gives them an option.
3. rel=share has already been invented, though I'd hardly call 5 
letters an invention.


rel=share also shows clear intent.

A bookmark may be user-specific or private to that user.
A canonical (or HTTP 301) indicates to the browser or bot that the page 
is over there and not here.

A share is intended to be, well, shared.

It semantically makes sense, at least to me.
rel=bookmark, rel=canonical and rel=share are all hints.

A search engine, for example, if it sees a rel=share link that is 
different from, say, the canonical URL (established via HTTP 301, the 
current page, or rel=canonical) should probably ignore it, as such a share 
link may have a share-tracking URL with a reference ID in it.


Also, rel=share is in the wild; I had a URL to a list of rel= 
occurrences on the web, but ironically I did not bookmark it/note it down. 
While it was low on the list, it was there.


Anyway, this is one place where the rel=share idea is mentioned. 
https://wiki.mozilla.org/Mobile/Archive/Sharing


There is also a rel=share-information floating around out there, but 
the search engines aren't making it easy for me to search for this stuff 
(I'm probably using the wrong syntax/markup). But I found it referenced 
here: https://code.google.com/p/huddle-apis/wiki/AuditTrail


There is a rel=share example use on page 5 of 
https://tools.ietf.org/id/draft-jones-appsawg-webfinger-00.txt

Used exactly as I described it.

Here is an example of rel=share-link being used: 
https://github.com/engineyard/chat/blob/master/views/index.jade


And rel=share is used in an example here: 
https://code.google.com/p/huddle-apis/wiki/Folder#Response
And stated specifically here: 
https://code.google.com/p/huddle-apis/wiki/Folder#Sharing_a_folder


As I see it, share is not the same as bookmark or canonical.
There may be some overlap between rel=share and a normal link, though (if 
rel=share is used outside the HTML head).



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] New approach to activities/intents

2014-11-07 Thread Roger Hågensen

On 2014-11-03 17:42, Anne van Kesteren wrote:

https://wiki.whatwg.org/wiki/Sharing/API has a sketch for what a very
minimal Sharing API could look like.



I have often pondered the same when seeing a Share button or icon on a 
webpage.
Some solutions have a single icon that pops up a menu, while other sites 
have a row of the most common social sites.


In retrospect, however, I realize that any Share API would be no different 
from how people currently share or bookmark things.
A worthy goal would be to help developers de-clutter websites from all 
those share icons we see today, so if this could be steered towards that 
it would be great.


There are two ways to do this that I'd recommend.

A link element in the header, maybe call it <link rel="share" 
href="http://example.com/article/12345/" />
or <link rel="share" /> if the current URL (or the canonical URL link, if 
present) should be used, although I guess in a way rel=share will 
probably replace the need to use rel=canonical in the long run.


Then browser devs can simply utilize that info in their own Share UI 
(which presumably is tied into the accounts set up on the device/machine 
in some way).
A browser UI could provide a nice-looking and device-friendly way to 
add/edit/remove social services that have sharing capabilities (Google+, 
Facebook, Twitter, Skype, etc.)


If the share link is missing, this does not mean the page can not be 
shared; in that case it should be treated as a page is normally treated 
today. The share link is just a browser hint as to the ideal link to use 
when sharing the page.


Also note that using the link element allows the possibility of using 
hreflang to present multiple share links (one international, aka 
English, and one in the language of the page), or using media to provide 
multiple share links for different types of devices.
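A sketch of that multi-link idea, assuming the proposed (non-standard) rel=share value; the URLs, language codes and media value are illustrative only:

```html
<!-- Hypothetical rel=share markup: one per language, one per device class. -->
<link rel="share" href="http://example.com/article/12345/" hreflang="en"
      title="Example article">
<link rel="share" href="http://example.com/no/artikkel/12345/" hreflang="no">
<link rel="share" href="http://m.example.com/article/12345/"
      media="handheld" title="Example article (mobile)">
```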


There already is a <link rel="search">, so a rel=share just makes sense 
IMO.
It certainly will get rid of the clutter of share icons and buttons on 
websites (over time); those can be a pain to click on using touch 
devices (without zooming first). A browser Share UI could easily be 
hidden on the edge and make use of swipe-from-edge (left, right, 
top/bottom etc.) or use gestures to open a Share UI.
Some of those share icons may fail to list the social network the user 
prefers (like email, for example), but if that is all set up in the browser 
then the user can share it at one (or multiple) social services just the 
way they like it.


Also note that a title can be applied to such a share link as well, thus 
providing a suggested title the browser can choose (or not) to use when 
sharing it.
Any icon/logo is either taken from the icon/logo of the current page or 
from the href-linked page (and whatever icon/logo that may have).


Existing services like AddThis or ShareThis (two of the more popular 
ones, I believe?) should be able to access the <link rel="share"> params 
via JavaScript (to access hreflang, media and title) so they will 
still remain competitive solutions; I also believe there are browser 
plugins for these two services, and the browser can/could provide 
the rel=share link to those types of plugins.


Also note that there can be multiple <link rel="share"> elements, and 
that, if allowed when specced, rel=share could be made global; that 
way the links to be shared could be inline in the document, thus part of 
the content and usable by the user, which is always ideal.



Anyway, I'll shut up now before I veer way off topic here.

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Passwords

2014-10-18 Thread Roger Hågensen

On 2014-10-17 17:09, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net writes:


Also http logins with plaintext transmission of passwords/passphrases
need to go away, and is a pet peeve of mine, I detest Basic
HTTP-Authentication which is plaintext.

Note that Basic Auth + HTTPS provides reliable transport security.


This presumes that a site has a certificate, and despite someone like 
StartSSL giving them out free, sites and forums still do not use HTTPS.

Also, Basic Auth is plaintext, so the server is not Zero-Knowledge.




Hashing the password (or passphrase) in the client is the right way to
go, but currently javascript is needed to make that possible.

Do you know about HTTP digest authentication?
http://en.wikipedia.org/wiki/Digest_access_authentication

Yes, and it's why I said Basic HTTP Authentication; Digest is the 
better method of HTTP Authentication.
I know it very well, and it's very underdeveloped: there is no 
logout possible (you stay logged in until the browser session is ended 
by the user), styling the login is not possible, and it's not as easy to 
implement with AJAX methods.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Proposal: Write-only submittable form-associated controls.

2014-10-15 Thread Roger Hågensen

On 2014-10-15 18:10, Tab Atkins Jr. wrote:

On Wed, Oct 15, 2014 at 8:59 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:

For the XSS attacker, couldn't they just use 
`theInput.removeAttribute("writeonly"); alert(theInput.value);`?

Or is this some kind of new un-removable attribute?

Doesn't matter if it is or not - the attacker can still always just
remove the input and put a fresh one in.

Nothing in-band will work, because the attacker can replace arbitrary
amounts of the page if they're loaded as an in-page script.  It's
gotta be *temporally* isolated - either something out-of-band like a
response header, or something that has no effect by the time scripts
run, like a meta that is only read during initial parsing.

~TJ



There are also legitimate needs for being able to edit the password field 
from a script.
I have a custom login system (currently in public use) that 
takes the password and does an HMAC on it (plus a salt and some 
time-limited info).
This allows logging in without having to send the password in the clear. 
It's not as secure as HTTPS, but it's better than plaintext.


A writeonly password field would have to be strictly optional or my code 
would break.
And I'm not the only one: SpiderOak.com also uses this method (they use 
bcrypt on the password to ensure that SpiderOak has Zero-Knowledge).


Any limitations on form manipulation should be based on same-origin 
restrictions instead, such that only a JavaScript with the same origin as 
the HTML containing the form can read/write/manipulate the form.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



[whatwg] Passwords

2014-10-15 Thread Roger Hågensen
Was Re: [whatwg] Proposal: Write-only submittable form-associated 
controls.


On 2014-10-16 01:31, Eduardo' Vela Nava wrote:

If we have a password manager and are gonna ask authors to modify their
site, we should just use it to transfer real credentials, not passwords..
Passwords need to die anyway.


And use what instead? At some point a password or passphrase (i.e. a 
sentence) is needed.

Password managers need a password to lock their vault as well.

Passwords/passphrases are free; all other methods require something with a 
cost.
Biometrics require scanners, and good ones (that can't be fooled by 
breathing on a printed-out fingerprint) are expensive.

There are USB sticks and smart cards (which require a card reader).
Audio requires a microphone (and can be heard by others): "my voice is my 
passport, verify me".
You could use an app, but this means you need a smartphone (which I don't 
have and probably do not plan to get any time soon; no need for one).
There is SMS, but a phpBB-based forum site isn't going to shell out cash 
for SMS-based login or similar.
Biometrics have other issues: the voice may change (your voice changes 
throughout the day), your fingerprints change based on moisture and you 
can damage them, and there are diseases and medicines that can affect them. 
The retina may change as you get older; even your DNA may get damaged 
over time.


Also, credentials (certificates) are not free if you want your name in 
them. (You can get free email/identity ones from StartSSL.com and a few 
other places, but they are tied to your email only.)
Installing certificates is not always easy either, and then there are 
the yearly or so renewals, and you can't throw away old certs or you will 
be unable to decode encrypted emails you have archived.


A regular user will feel that all this is too much noise to deal with.
They could use something like Windows Live as a single sign-on and tie 
that to the Windows OS account, but only sites that support sign-on with 
Live can take advantage of that.
And a password (or a portable certificate store, or biometrics of sorts) 
is still needed for the Windows OS on that machine anyway.


I mentioned StartSSL above; the cool thing they do is hand out 
domain-only validated certificates, so any website can have free HTTPS. 
Why the heck this isn't more of a thing than it is, I don't understand; 
each time I see a login to a site or forum that is HTTP only, I always 
ponder why the heck they aren't using HTTPS to secure their login. But I 
digress.


Single-word passwords need to go away; if an attacker finds out/guesses 
one for one site, chances are the same pass is used on multiple sites, 
as-is or with minor variations. Passphrases are the solution to some of 
this problem, as they will make dictionary attacks much more expensive. 
There are still sites that enforce an 8-character password limit, which is 
insane; people should be allowed to enter any password they are able to 
type on their keyboard, be it one character or long sentences, with or 
without numbers or odd characters. The more restrictions there are on 
the password input, the easier it is to guess/crack. The only 
restriction that does no harm would be to ask for passphrases instead.


Also, HTTP logins with plaintext transmission of passwords/passphrases 
need to go away; this is a pet peeve of mine, and I detest Basic 
HTTP Authentication, which is plaintext.
Hashing the password (or passphrase) in the client is the right way to 
go, but currently JavaScript is needed to make that possible.
If a password field could have a hash attribute, that would be progress 
in the right direction: <input type="password" hash="bcrypt"> or 
something similar, perhaps with a comma to separate the method and number 
of rounds, or alternatively just <input type="password" hash> and use a 
browser default instead (in this case the server side needs to support 
multiple methods of hashing, and the hashed password needs a prefix to 
indicate method, salt and rounds, if any).
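Spelled out, the proposed markup might look like this. It is entirely hypothetical: no hash attribute exists on input elements in any browser or spec; it is shown only to make the idea above concrete.

```html
<!-- Hypothetical hash attribute: method plus number of rounds. -->
<input type="password" name="pw" hash="bcrypt,12">

<!-- Or browser-default hashing; the server would detect the method from
     a prefix on the submitted value (e.g. "$2b$12$..." for bcrypt). -->
<input type="password" name="pw" hash>
```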


There is some new crypto stuff, but again that needs JavaScript to be 
utilized; having a password be hashable by the browser without the need 
for any scripts would be the best of both worlds in my opinion.
For example, if a hostile script had access to the form and the password 
field, the password would have been hashed before it was put in the 
password field anyway. Sure, they might be able to snoop the hash, but 
the hash could (or rather, should) be using a unique salt and would be 
useless to re-use.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Controlling the User-Agent header from script

2014-10-13 Thread Roger Hågensen

On 2014-10-13 15:53, Anne van Kesteren wrote:

Per XMLHttpRequest User-Agent has been off limits for script. Should
we keep it that way for fetch()? Would it be harmful to allow it to be
omitted?

https://github.com/slightlyoff/ServiceWorker/issues/399

A possible attack I can think of would be an firewall situation that
uses the User-Agent header as authentication check for certain
resources.


That's a server security issue and not a browser one; attackers would 
never use a well-behaved browser for attacks anyway.
What point is there in background checks for security guards if the 
window is always open so anyone can get in? ;)


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Controlling the User-Agent header from script

2014-10-13 Thread Roger Hågensen

On 2014-10-13 16:16, Nils Dagsson Moskopp wrote:

Anne van Kesteren ann...@annevk.nl writes:


Per XMLHttpRequest User-Agent has been off limits for script.

Reporting UA “Mozilla/4.0 (MSIE 6.0';DROP TABLE browsers;--u{!=})”
broke hilariously many sites when I did have set it as my default UA
string, even though I think it conforms to RFC 2616, section 14.43.

Again, that's a server security issue and not a browser one; attackers 
would never use a well-behaved browser for attacks anyway.
What point is there in background checks for security guards if the 
window is always open so anyone can get in? ;)


Also, a script being able to set a custom XMLHttpRequest User-Agent 
would be nice.
Not necessarily replacing the whole thing, but maybe concatenating to the 
end of the browser one?
That way a webmaster would be able to see that the request is from 
script Blah v0.9 when it really should be Blah v1.0, for example.
I always make sure that any software I make uses a custom User-Agent; 
the same goes for any PHP scripts and so on, ditto if I use cURL. That way 
the logs on the server will provide some insight.
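The "concatenate, don't replace" idea could be sketched like this. The function name and token check are mine; no browser exposes such an API, so this only illustrates what the browser might do internally with a script-supplied token:

```javascript
// Hypothetical sketch: the browser keeps its own User-Agent and merely
// appends a script-supplied product token such as "Blah/1.0".
function appendProductToken(browserUA, token) {
  // Accept only a simple product/version token (roughly RFC 7231's
  // product grammar), so scripts cannot spoof a whole UA string.
  if (!/^[!#$%&'*+.^_`|~0-9A-Za-z-]+\/[!#$%&'*+.^_`|~0-9A-Za-z-]+$/.test(token)) {
    throw new Error("invalid product token: " + token);
  }
  return browserUA + " " + token;
}

console.log(appendProductToken("Mozilla/5.0 (X11; Linux x86_64)", "Blah/1.0"));
// → "Mozilla/5.0 (X11; Linux x86_64) Blah/1.0"
```

The server log then shows both the real browser and the requesting script, which is exactly the insight described above.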


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Forcing orientation in content

2013-07-14 Thread Roger Hågensen

On 2013-07-13 06:17, Glenn Maynard wrote:


  Changing orientation is disruptive.

I can hardly imagine how obnoxious Web browsing would be on a mobile
device, if every second page I navigated to decided to flip my device to a
different orientation.  This feels like the same sort of misfeature as
allowing pages to resize the browser window: best viewed in 800x600 (so
we'll force it), best viewed in portrait (so we'll force it).



I have a tablet here that does that with a few apps.
And one of them actually does it within itself during certain parts of 
the program.

And I can testify it's annoying as hell. For those curious, it was a 
banking app. The main menu's orientation is forced/locked, but the rest, 
like account activity etc., is not.
And you can imagine how stupid it is when you have to rotate the tablet 
each time you go back to the main menu.


I find that responsive and scalable design (so it looks good) on multiple 
aspect ratios and multiple PPIs is a must for modern coding.

Please note I have not said "orientation" at all above; instead I said 
"aspect ratio", as that is the key here. Any device (unless it's square) 
has only two aspect ratios.
There really is no up or down. Again, this is from experience with my 
tablet. It is rectangular, and when I pick it up, I pick it up, and 
whichever edge faces me becomes "down".
And I prefer a wide aspect ratio normally, but for parts with listings I 
prefer a narrow aspect ratio.


My suggestion is that a webpage or web app signal to the browser what 
its preferred aspect ratio (and resolution) is by using existing CSS 
viewport features.

But the browser is under no obligation to enforce anything.

If a rotation lock is really that desired, then the browser MUST provide 
a user-toggleable option that is off by default and is named something 
along the lines of: "Allow Pages/Apps to disable rotation".
But at the same time a similar option would also be needed, called: 
"Always lock Pages/Apps to Horizontal (or Vertical) orientation".
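The existing-CSS route suggested above could look something like this: declare a layout per orientation with media queries instead of locking rotation (the selector and class name are illustrative):

```css
/* Adapt the layout to whichever way the user holds the device,
   instead of forcing an orientation. */
@media (orientation: landscape) {
  .listing { columns: 2; }
}
@media (orientation: portrait) {
  .listing { columns: 1; }
}
```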


Now I have not looked at many tablets and phones, and certainly not 
their option screens, so I have no idea if some or several of them 
already have one of these options.


My advice is that if a page or app is aspect-limited, simply keep it 
aspect-limited (use the current CSS stuff to help inform the browser 
about that).
Let the user rotate the screen to whatever works best for them. For all 
one might know, their device might be huge and have a very high PPI; you 
can never know.
There are people who prefer to have a monitor rotated 90 degrees, or to 
put two browser windows side by side.
And as has been said, certain devices may have orientation detection 
turned off, or the device may not even have that feature at all.


Myself, I think ideally that page rotation locking should be a user 
choice, put in the browser context menu so the user can just click 
and select whether they wish to lock the rotation (for that page).
Also, if a page really looks better rotated 90 degrees, then the user will 
quickly figure that out anyway, by *gasp* rotating their display.
And by not allowing web pages/apps to force the orientation, we also 
encourage better design.
HTML5 + CSS + JavaScript is all about being fluid, dynamic, adaptable 
and scalable, and failing/falling back gracefully.

It would be silly to take a step backwards from that.

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Forcing orientation in content

2013-05-03 Thread Roger Hågensen

On 2013-05-03 08:29, Gray Zhang wrote:
 Not sure if WHATWG is doing anything, but in the W3C there is 
https://dvcs.w3.org/hg/screen-orientation/raw-file/tip/Overview.html 
in the Web Apps group

 ...

 How would it behave if my web app requests orientation locking but is 
placed in an `iframe` element and the parent browsing context is 
locked in another orientation?


The logical behavior would be that the parent element takes precedence, 
and the child (the iframe in this case) retains its aspect ratio if 
possible.



R.

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Hide placeholder on input controls on focus

2013-03-21 Thread Roger Hågensen

On 2013-03-21 14:02, Markus Ernst wrote:

Am 21.03.2013 12:10 schrieb James Ross:
Just as an added data-point (that I only noticed today) - Windows 7's 
placeholder implementation in the Start menu and Explorer's search box:
  - Focusing the input box with tab/Control-E or autofocus when 
opening the Start menu does *not* hide the placeholder.

  - Control-A or clicking in the textbox hides the placeholder.


I was not aware of the possibility to distinguish between clicking in 
a textbox and other ways to focus it. This behaviour seems to be very 
user-friendly to me.




As far as I know there are hover, focus, and modified (are there 
others?). The events vary depending on whether it's in a browser (and 
which browser) and which OS (and which GUI API).


Ideally, browser chrome should follow the OS style guide to provide a 
consistent OS user experience.
And with HTML5, stuff should behave consistently in all HTML5-supporting 
browsers.
But that's just my opinion on where the line should be drawn. There 
are, after all, things like the context menu stuff that cross the GUI 
boundaries.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Hide placeholder on input controls on focus

2013-03-20 Thread Roger Hågensen

On 2013-03-20 10:18, Markus Ernst wrote:
The problem is that some users do not even start to type when they see 
text in the field they focused. Thus I strongly believe that some 
visible hint at the _focusing_ moment would be helpful for these 
users. If the Opera and IE behaviour of totally hiding the placeholder 
is considered as suboptimal, the placeholder could be blurred, made 
semi-transparent or whatever; but I am sure that something should 
happen when the control gets focus, and not only when the user starts 
typing.


Have it dim/vanish not just on focus but on mouseover as well? (And on 
TAB, but that should be the same as focus, usually.)

I agree that this would be beneficial.

Here is an example (go to http://htmledit.squarefree.com/ or someplace 
similar, or save it locally as .html and test it that way):


<style type="text/css">
/** css start */

input::-webkit-input-placeholder
{ /* WebKit browsers */
color: red;
}

input:hover::-webkit-input-placeholder
{ /* WebKit browsers */
opacity:0.5;
text-align:right;
}

input:focus::-webkit-input-placeholder
{ /* WebKit browsers */
opacity:0.0;
}

input:placeholder
{ /* future standard!? */
color: red;
}

input:placeholder:hover
{ /* future standard!? */
opacity:0.5;
text-align:right;
}

input:focus:placeholder
{ /* future standard!? */
opacity:0.0;
}

/** css end */
</style>

<!-- * html start * -->
<input name="first_name" id="first_name" placeholder="Your first 
name..." type="text">

<!-- * html end * -->


I only did WebKit! (And what I assume will be the standard?)
The reason I did not add any CSS for IE 10 or Firefox 19 is that they fail 
(at least I could not easily get this to work in those browsers); Chrome 
25 handles this just fine.
Other than me playing around a little with the right-align to visually 
move the placeholder text out of the way, I assume this is how you 
would like it to look/behave, Markus?


So maybe a placeholder opacity of 0.5 on hover and an opacity of 0.0 on 
focus would be a suitable browser default. (Web authors should still be 
able to style the behavior like I just did.)


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Priority between a download and content-disposition

2013-03-19 Thread Roger Hågensen

On 2013-03-18 13:50, Bjoern Hoehrmann wrote:

* Jonas Sicking wrote:

It's currently unclear what to do if a page contains markup like <a
href="page.txt" download="A.txt"> if the resource at audio.wav
responds with either

1) Content-Disposition: inline
2) Content-Disposition: inline; filename=B.txt
3) Content-Disposition: attachment; filename=B.txt

People generally seem to have a harder time with getting header data
right, than getting markup right, and so I think that in all cases we
should display the save as dialog (or display equivalent download
UI) and suggest the filename A.txt.

You mention `audio.wav` but that is not part of your example. Also note
that there are all manner of other things web browsers need to take into
account when deciding on download file names; you might not want to
e.g. suggest using desktop.ini, autorun.inf or prn to the user.

That aside, it seems clear to me that when the linking context says to
download, then that is what a browser should do, much as it would when
the user manually selects a download context menu option. In contrast,
when the server says filename="example.xpi" then the browser should pick
that name instead of allowing overrides like

   <a href='example.xpi' download='example.zip' ...>

which would cause a lot of headache, especially from third parties. And
allowing such overrides in same-origin scenarios seems useless and is
asking for trouble (download filenames broken after moving to CDN).


The expected behavior for <a href='example.xpi' download='example.zip' 
...> is that it is a download hint:
a UI of some sort should appear where the user has the option to 
download (for example, a dialog with Run Now and Save As, or Print or 
Share or Email, and similar).
The download= attribute is just a browser hint; a user (and thus the 
browser) can, and should be able to, override this behavior if desired 
(in options somewhere, maybe under an Applications tab?).


If the server-provided file type matches that of the href (i.e. they are 
both .xpi), or the two are identical, then the filename hint in the 
download attribute should be the default.


If the server provides a file type that conflicts with the href, the 
browser needs some logic to figure out which of the three to display.
The same applies if the server-provided filename differs from the href.
If the download attribute contains a full or relative URL, the href (or 
server) should be used instead.


What is the best logic to use?
Both href and download are put there by either the author of the page or 
some automated system (forum/blog software/CDN/who knows...), so
href and download should in that respect be equally trusted (or is it 
distrusted?).
What the server says always trumps href and download, and href (or 
server) always trumps download when href and server agree on the file type.
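The trust ordering described above can be sketched as a small helper. This is an illustration only, not a spec algorithm; the function and parameter names (`suggestedFilename`, `serverFilename`) are hypothetical:

```javascript
// Sketch of the precedence discussed above:
// server-provided filename > download attribute (when its extension
// agrees with the server's) > href-derived name.
function extensionOf(name) {
  const m = /\.([A-Za-z0-9]+)$/.exec(name);
  return m ? m[1].toLowerCase() : null;
}

function suggestedFilename(href, downloadAttr, serverFilename) {
  // A download attribute containing a path or URL is ignored, per the text above.
  if (downloadAttr && /[\/\\]|^[a-z]+:/i.test(downloadAttr)) downloadAttr = null;
  if (serverFilename) {
    // Server and hint agree on type: the download hint may still name the file.
    if (downloadAttr && extensionOf(serverFilename) === extensionOf(downloadAttr)) {
      return downloadAttr;
    }
    return serverFilename; // server trumps href and download on conflict
  }
  if (downloadAttr) return downloadAttr; // helpful when the server type is wrong
  return href.split('/').pop() || 'download';
}
```

With this ordering, `download='example.zip'` cannot rename a server-declared `example.xpi`, but a `download='report1.csv'` hint still wins when the server offers no filename.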

The only exception is situations where the content is generated in some way:
<a href="example.php?type=csv" download="report1.csv">Download Report 1 
as CSV</a>
<a href="example.php?type=xml" download="report1.xml">Download Report 1 
as XML</a>


Now the server might categorize it as text/html; I've seen this happen 
by mistake on misconfigured servers before,
or the script did not set the proper content type when creating the headers.
So in this case the download hint is very helpful.

How many web script extensions are out there? .php .asp .cgi .py 
.???

What about this then?
<a href="example.com/reports/1/?type=xml" 
download="report1.xml">Download Report 1 as XML</a>
with the server saying text/html by mistake — how should that be handled, 
and whom should we trust?
The server may (or may not) redirect to a URL like 
example.com/reports/1/index.php?type=xml or 
example.com/reports.php?id=1&type=xml,

or it may simply remain example.com/reports/1/?type=xml.
Or what if it is <a href="example.com/reports/1/xml/" 
download="report1.xml">Download Report 1 as XML</a>?


A URL is simply a way to point to some content; what to do with it is up 
to the browser and the user.
One would hope the server serves it as the right type, but this is not 
always true.
The page author may not even have control of, or the ability to add 
file types to, the server configuration (shared web hosts, for example).
The download attribute indicates the author's desired behavior when 
clicking the link.


So let's break it down (from a more or less browser's point of view):

1. The user clicks the link; there is a download attribute, so we will 
show a dialog with Save As (and possibly other alternatives, depending 
on browser and OS features and user options).
2. If there is no download attribute/no filename hint, then use the href 
and try to make a user-friendly filename out of that.
3. Listen to what the server says (in the HTTP header): does it say it 
is a .xml? If yes, then that is good; if not, then treat it as 
binary for the moment.
4. Make sure the text displayed is along the lines of: Download 

Re: [whatwg] Priority between a download and content-disposition

2013-03-19 Thread Roger Hågensen

On 2013-03-19 15:31, Nils Dagsson Moskopp wrote:

Roger Hågensen resca...@emsai.net schrieb am Tue, 19 Mar 2013
14:31:15 +0100:


[…]
What should be shown if there is an issue/conflict?

Maybe:
Download "https://example.com/reports/1/xml/" as "report1.xml"?
WARNING! File identified as actually being an executable! (*.exe)

At least here on Debian GNU/Linux, executables have no file extension.
Besides that, what would be the MIME type of windows executables?


application/octet-stream, as far as I know, for most exes and 
miscellaneous binaries on most platforms; and Windows exes start with 
"MZ" as the very first two bytes.
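A minimal sketch of the sniffing this implies: check the first two bytes of an already-fetched buffer for the "MZ" signature, regardless of what extension the link suggested. The function name is hypothetical:

```javascript
// Flag a download as a DOS/Windows executable by its "MZ" signature
// (bytes 0x4D 0x5A), independent of the file extension in the link.
function looksLikeWindowsExe(bytes) {
  return bytes.length >= 2 && bytes[0] === 0x4d && bytes[1] === 0x5a;
}

const sniffed = new Uint8Array([0x4d, 0x5a, 0x90, 0x00]); // typical EXE prefix
console.log(looksLikeWindowsExe(sniffed)); // → true
```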





Or:
Download "https://example.com/reports/1/xml/" as "report1.xml"?
NOTE! File identified as not being XML; appears to be text. (*.txt)

So, what about polyglots?
http://linux-hacks.blogspot.de/2009/02/theory-behind-hiding-zipped-file-under.html


Data hiding? (Well, close to it anyway.) That is way beyond the scope of 
this; also, I doubt you could do that with an executable on most platforms.
And if a .jpg turns out to have a zip attached, then it's just a .jpg with 
a zip attached, it's as simple as that.



The key though is showing: Download url as file.ext?
And in cases where a quick file-header scan reveals a possible issue
(or simply a wrong file-format extension), either a notice or warning
text in addition.
But this is only if the user actually chose Save As in the download
dialog; they might have chosen Share on Facebook or Print or
Email to... or even Open, and
a similar but different dialog would obviously be needed in that case.

I find all of this approach insanely complex for a negligible benefit.



How so? All the information is mostly there. (The HTTP header from the 
server is always fetched, be it via HEAD, GET, or POST calls, and a 
browser usually fetches the beginning of a file to sniff anyway.)
The suggested name and extension would be in the download attribute, and 
href is as it has always been.


Today, if you click a link to an exe to download/run it, you do get asked 
if you really want to run it (and this is browser UI, not OS UI).

What is so complex about simply adding "as file.ext?"
to that UI, which is already there?

In cases where the download attribute, the href, the server header, and 
browser sniffing all agree, it looks (and behaves) no different 
than it does today when you right-click and choose Save As.
What is so complex about just suggesting some consistency in behavior, 
with an improvement to boot?


And if you refer to the Share on/at..., Email to..., Print, or 
Open options, those are dialog options that exist today or will; they were 
just used as examples and are not otherwise part of this in any other way.


Maybe there is a language barrier here and I'm not explaining this 
correctly, in which case I apologize. Let me know if anything 
in particular needs clarification.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] A plea to Hixie to adopt main

2012-11-09 Thread Roger Hågensen

On 2012-11-07 23:41, Ian Hickson wrote:

On Thu, 8 Nov 2012, Ben Schwarz wrote:

What does concern me, as a web builder, *every day*, is how I markup the
content in-between a header and a footer.

If you just want it for styling purposes, <div> is perfect.


<article>
 <header><h1>, <h2>, <p></header>
 <div class="content"></div>
 <footer><time>, a.permalink</footer>
</article>

Exactly like that (or even without the class; if you just have one per
article you can just do article > div to select it).



I've begun to do this a lot now; the less I have to use class= or id= 
for styling, the better.
In one of my current projects I'm basically only using id= for actual 
anchor/index use, and no class= at all.
In fact, except for the few id= attributes used as index shortcuts, the 
styling is all done in the .css, and the only CSS reference in the HTML 
document is the link to the stylesheet.
I guess you could call it stealth CSS, as looking at the HTML document 
does not reveal that CSS is used at all (except the CSS link in the HTML 
header).

I wish more web authors would do this; it makes for very clean HTML indeed.
Now back to the topic (sorry for getting sidetracked).

As to the main thing, the only time I'd ever be for adding that to 
HTML markup is if it were specced as follows:

<main> and </main> enclose the content of a document and can be used in 
place of a <div> or <article>, but can only occur once in a document.
If more than one <main>...</main> block is encountered, 
parsers should accept only the first and ignore any others.
If there is no <main>, then the content of the document itself is 
considered the main content.
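The "only the first main counts" rule above can be sketched as a small filter. This models the document as a flat list of tag names purely for illustration; it is not how a real parser works:

```javascript
// Sketch of the proposed rule: keep only the first 'main', ignore the rest.
function dropExtraMains(tags) {
  let seen = false;
  return tags.filter(tag => {
    if (tag !== 'main') return true; // other elements pass through untouched
    if (seen) return false;          // a second 'main' is ignored
    seen = true;
    return true;                     // the first 'main' is kept
  });
}

console.log(dropExtraMains(['header', 'main', 'main', 'footer']));
// → [ 'header', 'main', 'footer' ]
```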


Maybe it's just me, but wouldn't a <main> sort of be a synonym for 
<body>, almost? *scratches head*



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] A plea to Hixie to adopt main, and main element parsing behaviour

2012-11-09 Thread Roger Hågensen

On 2012-11-08 10:51, Steve Faulkner wrote:

What the relevant new data clearly indicates is that in approx 80% of cases
when authors identify the main area of content it is the part of the
content that does not include header, footer or navigation content.


It also indicates that where skip links are present or role=main is used
their position correlates highly with the use of id values designating the
main content area of a page.



I'm wondering if maybe the following might satisfy both camps ?

Example1:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<div>div before body</div>
<body>body text</body>
<div>div after body</div>
</html>

Example2:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<header>header before body</header>
<body>body text</body>
<footer>footer after body</footer>
</html>


An HTML document ALWAYS has a body. So why not adjust the specs and free 
the placement of <body>,

thus allowing <div>, <header>, and <footer> blocks before/after it?
Currently http://validator.w3.org/check gives a warning, but that is easily 
fixed by allowing it.
The other issues are how older browsers would handle this (backwards 
compatibility) and how much or how little work it would be to allow this 
in current browsers.


I'd rather see <body> unchained a little than have a <main> added that 
would be almost the same thing.
And if you really need to lay out/place something inside <body>, then 
use an <article> or <div> instead of a <main>.

<body> has had a semantic meaning since way back 
when, so why not unchain it?
As long as <body> and </body> are within <html> and </html>, it shouldn't 
matter if anything is before or after it.


The only case that might be confusing would be
Example3:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<header>header before body</header>
<body>body text</body>
<article>article outside body</article>
<footer>footer after body</footer>
</html>

In my mind this does not make sense at all.
So maybe Example 2 should be used to unchain <body> a little.

Example2:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<header>header before body</header>
<body>body text</body>
<footer>footer after body</footer>
</html>

Example4:
<!doctype html>
<html>
<head>
<title>test</title>
</head>
<body>
<header>header before body</header>
<div>body text</div>
<footer>footer after body</footer>
</body>
</html>

Example 4 is how I do it on some projects, while what I actually wish I 
could do is Example 2 above.
Maybe simply unchaining <body> enough to allow one <header> and one 
<footer> outside it (but inside <html>) would be enough to satisfy 
people's needs?
I have wondered since the start why <header> and <footer> could not be 
outside <body>; it seems so logical after all!


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



[whatwg] header body footer

2012-11-09 Thread Roger Hågensen

Starting a new subject on this to keep the email threads clearer:

The suggestion is that the following should be possible;
this would allow <body> to act as if it were a <main>.

<!doctype html>
<html>
<head>
<title>header and footer outside body</title>
<style>
body {border:1em solid #7f2424;}
header {border:1em solid #247f24;}
footer {border:1em solid #24247f;}
</style>
</head> <!-- I wish the head/head tags were called meta/meta 
instead. -->

<header>
This is the header of the document!
</header>
<body>
This is the main content of the document!
</body>
<footer>
This is the footer of the document!
</footer>
</html>


As can be seen in most modern browsers, the content semantically appears 
correct.
The only issue is the CSS styling of <body>, as <body> is treated as the 
parent of <header> and <footer>.
I'm not sure how much work it would be to allow one 
<header></header> and one <footer></footer> outside <body> and let those 
have <html> as their parent instead,
but if it's not too much work then this could fix the lack of a 
<main>, and it would avoid the need for an extra <div> or similar 
inside <body> just for styling.


Any other pros (or cons)?


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Endianness of typed arrays

2012-03-28 Thread Roger Hågensen

On 2012-03-28 12:01, Mark Callow wrote:


On 28/03/2012 18:45, Boris Zbarsky wrote:

On 3/28/12 2:40 AM, Mark Callow wrote:

Because you said JS-visible state (will) always be little-endian.

So?  I don't see the problem, but maybe I'm missing something...

The proposal is that if you take an array buffer, treat it as a
Uint32Array, and write an integer of the form W | (X << 8) | (Y << 16)
| (Z << 24) into it (where W, X, Y, Z are numbers in the range
[0,255]), then the byte pattern in the buffer ends up being WXYZ, no
matter what native endianness is.

Reading the first integer from the Uint32Array view of this data would
then return exactly the integer you started with...

So now you are saying that only the JS-visible state of ArrayBuffer is
little-endian. The JS-visible state of int32Array, etc. is in
platform-endiannesss. I took your original statement to mean that all
JS-visible state from TypedArrays is little-endian.

Regards

 -Mark



This is getting rather messy, isn't it?
An ArrayBuffer should be native-endian (native to the JS engine); 
anything else does not make logical sense to me as a programmer.


xhr.responseType = 'arraybuffer', on the other hand, is a bigger issue, as 
a client program (browser) could be little-endian but the server could 
be big-endian.
So in this case it would make sense if xhr.responseType = 'arraybuffer' 
and xhr.responseType = 'arraybuffer/le' were the same, and 
xhr.responseType = 'arraybuffer/be' were for big-endian/network byte order.


Personally I think that an ArrayBuffer should be native, and that 
xhr.responseType should be ambiguous; in other words, let the 
implementers make sure of the endianness.
A client can easily ask for a desired endianness from the server using 
normal arguments in the query, or possibly an xhr.responseEndian='' 
property if that makes sense at all.
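The endianness question above is easy to demonstrate: write one 32-bit value through a Uint32Array and inspect its raw bytes to see the host's byte order, then use DataView, which takes an explicit endianness flag, to read it back portably. A minimal sketch:

```javascript
// Write one 32-bit value and inspect its raw bytes: the order you observe
// is the platform's native endianness.
const buf = new ArrayBuffer(4);
new Uint32Array(buf)[0] = 0x0a0b0c0d;
const bytes = new Uint8Array(buf);
const littleEndian = bytes[0] === 0x0d; // low byte comes first on LE hosts

// DataView reads with an explicit byte order, so passing the detected
// order recovers the original value on any host:
const view = new DataView(buf);
console.log(littleEndian ? 'little-endian host' : 'big-endian host');
console.log(view.getUint32(0, littleEndian) === 0x0a0b0c0d); // → true
```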



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



[whatwg] NRK (Norwegian Broadcasting) focuses on HTML5

2011-12-14 Thread Roger Hågensen

Just some trivia/news of interest to HTML5 supporters:

NRK (similar to what the BBC is in England) has decided to focus on HTML5.


http://www.digi.no/885011/nrk-gaar-for-html5

http://translate.google.com/translate?hl=ensl=nou=http://www.digi.no/885011/nrk-gaar-for-html5ei=CRfpTtexJqaA4gTX4NXmCAsa=Xoi=translatect=resultresnum=1ved=0CCQQ7gEwAAprev=/search%3Fq%3Dhttp://www.digi.no/885011/nrk-gaar-for-html5%26hl%3Den%26safe%3Doff%26biw%3D1920%26bih%3D886%26prmd%3Dimvns

http://nrk.no/ (website)
http://nrkbeta.no/ (testbed for new techs)

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Timing API proposal for measuring intervals

2011-07-08 Thread Roger Hågensen

On 2011-07-08 12:32, Mark Callow wrote:


On 08/07/2011 11:54, James Robinson wrote:

True.  On OS X, however, the CoreVideo and CoreAudio APIs are specified to
use a unified time base (see
http://developer.apple.com/library/ios/#documentation/QuartzCore/Reference/CVTimeRef/Reference/reference.html)
so if we do end up with APIs saying play this sound at time X, like Chris
Roger's proposed Web Audio API provides, it'll be really handy if we have a
unified timescale for everyone to refer to.

If you are to have any hope of synchronizing a set of media streams you
need a common timebase. In TV studios it is called house sync. In the
first computers capable of properly synchronizing media streams and in
the OpenML specification it was called UST (Unadjusted System Time).
This is the monotonic uniformly increasing hardware timestamp referred
to in the Web Audio API proposal. Plus ça change. Plus ça même. For
synchronization purposes, animation is just another media stream and it
must use the same timebase as audio and video.

Regards

 -Mark


Agreed, and the burden of providing monotonic time lies on the OS (and 
indirectly the motherboard, HPET, audio card, GPU clock, or whatever the 
clock source is).
So browsers should only need to convert (if needed) between Double and the 
OS high-resolution time format (which should be available via an OS API in 
any modern OS).



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



[whatwg] A moment of thank you to the browser devs.

2011-03-24 Thread Roger Hågensen

http://html5test.com/

Now that Firefox 4 is out, as well as IE9, those tests are starting to 
look really good.
With the WebM plugin for IE9, WebM is now possible in all of the big four 
browsers (Opera, IE, FF, Chrome); I haven't tested Safari, but like Chrome 
it uses WebKit, so it should be similar.
Opera does lag a little on the Elements tests (IE9 now supports section 
and article elements, as do Chrome and FF), but there is an Opera beta 
I haven't tested; I assume that will be added in the next final release?


So I'd just like to thank all the browser devs for the awesome work 
being done to fast-track HTML5.
Now if only users would be as quick at updating their old browsers; 
then we could get rid of a lot of the old junk on the net and go for 
HTML5 design exclusively.


So who knows, maybe 2011 will become the HTML5 year?

--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Session Management

2011-03-03 Thread Roger Hågensen

On 2011-03-03 10:44, Dave Kok wrote:

Op 02-03-11 22:11:48 schreef Roger Hågensen:

Method #3:
The server (or serverside script, like PHP or similar) sends the
following to the browser:
 header('HTTP/1.0 401 Unauthorized');
 header('WWW-Authenticate: Close realm="My Realm"');
 *PS! the auth stuff is much longer here obviously, this was just
 to show the use of Close*

Note:
If Method 1 or 2 is used the browser should probably send the
following

to the server:
 GET /private/index.html HTTP/1.1
 Authorization: Close username="something"
 *PS! the auth stuff is much longer here obviously, this was just
 to show the use of Close*

May I point out that the HTTP is outside the scope of the HTML5 spec.
Also the HTTP is stateless. This requires both parties keep state which
breaks the statelessness property of the HTTP. I, for one, prefer to
preserve the statelessness property of HTTP.


Please appreciate the notion that HTML5 is broader than just browsing
the internet. - Dave Kok

And indeed it is. HTTP Authentication (especially Digest) is far from 
stateless;

its state changes with every single nonce change.
It's basically a constant (but very cheap, CPU-wise) 
handshake/HMAC/Diffie-Hellman-style agreement.
Also, if you are thinking about the HTTP status codes, those are beyond 
stateless,
but if you insist, then simply re-use 403 with some minor tweaks so 
it acts as a logoff,

because re-using 401 would break the statelessness, as you say.

I'm surprised you advocate Ajax/XMLHttpRequest and allowing a close from a 
form;

that would open up some security issues.
The beauty of HTTP Digest Authentication is that the password is never 
sent as plaintext or in any form that can compromise the user's password.
Only the user themselves (and thus indirectly the browser) or the server 
should be able to initiate a session close of Digest auth;
allowing it to be closed from a script is just bad, and... dare I say it, 
counter to the statelessness of HTTP *laughs*


At least we agree on one thing: where HTTPS is not available, or 
where the site owners have either not discovered StartSSL.com (which 
is free) or are too lazy or unable to take advantage of it,
HTTP Digest Authentication should almost be a requirement for any 
site that needs login credentials (like forums, shops, etc.).
Funny how many stores only pull out the HTTPS stuff when you pay for 
the things you buy (or use a specialist service), but not at all when you 
log in to your account with them otherwise. *sigh*


Heck, I even have HTTPS on my own little site; my hoster provided the IP 
for free and set up the certificate etc. for free. Excellent service. 
(I only pay the hoster a small yearly fee, domeneshop.no for you 
Norwegians out there.)
Combine that with StartSSL.com, and my total cost of securing 
communication with my site, should I or others ever need 
it? PRICELESS, since it was absolutely free, not a single cent paid.
But a lot of web hotels and hosters out there do not allow you to do 
SSL, or it costs extra, or they cannot give you an IP, or it costs extra, 
and so on.
So I have sympathy for those unable to; but hey, with a CA that 
provides free domain/server certs there is no excuse if you ARE able to,
and programming-wise it's less work too. Digest auth needs some extra 
massaging from PHP to work nicely in an integrated way, but even then 
the logout issue still exists (and even after you log out, the site is 
still spammed by the browser with login credentials all the time).
I've never really worked with the Apache auth_digest stuff, but it's 
probably even more restricted than doing it yourself via PHP.


And don't forget that you complain my suggestions messed with HTTP, 
which HTML5 has no business messing with,
yet you yourself suggested XMLHttpRequest and some Ajax stuff to 
close/end an HTTP authentication.
This already proves that HTML5 isn't just HTML + CSS + JavaScript + lots 
of other stuff; we can also add + HTTP.
Now, if this Digest auth is so important for web apps, then shouldn't the 
WHATWG work together with (um, what is the HTTP group called?)



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Session Management

2011-03-02 Thread Roger Hågensen

On 2011-03-02 18:42, Bjartur Thorlacius wrote:



Just see what happens when users login to a site, then navigate to
another and authenticate to the latter, and then logout from the
latter. In that case, they're still authenticated to the former site.
In theory, this shouldn't be a problem, as users should clear all UA
data before granting anyone else access to the UA data store, but in
ill-managed public terminals, that may not be the case.

Yes but do they? Theory is nice but can't a site aid a user in this?


If neither the sysadmin, nor the user, clear the credentials - who will?
This specifically is probably the main use case for expiring auth tokens.



Three Ways...

Method #1:
Browser timeout. For legacy reasons the browser could default to a 
sensible timeout within a min/max range.
Once the timeout triggers, the HTTP Authentication session is ended, and 
the user has to log in again.

Say, maybe 30 to 60 minutes.
This can easily be done right now in all current browsers; no UI 
changes or any real code changes at all.


Note:
Ideally the user should be able to adjust the default timeout within 
some sensible min/max range,

but this would require a UI change/addition.
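Method #1 above amounts to an idle timer that invalidates credentials. A minimal sketch of that logic; `clearCredentials` stands in for a hypothetical browser-internal hook, and the injectable clock is just for demonstration:

```javascript
// Sketch of Method #1: drop HTTP auth credentials after an idle timeout.
// `clearCredentials` is a hypothetical browser-internal hook, not a real API.
function makeAuthSession(timeoutMs, clearCredentials, now = Date.now) {
  let lastActivity = now();
  return {
    touch() { lastActivity = now(); }, // call on any authenticated request
    check() {
      if (now() - lastActivity >= timeoutMs) {
        clearCredentials(); // session ended: the user must log in again
        return false;
      }
      return true;
    },
  };
}

// Simulated clock: 45 idle minutes exceed a 30-minute timeout.
let fakeTime = 0;
let cleared = false;
const session = makeAuthSession(30 * 60 * 1000, () => { cleared = true; }, () => fakeTime);
fakeTime = 45 * 60 * 1000;
console.log(session.check(), cleared); // → false true
```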

Method #2:
A second way to log out of an HTTP Authentication session would be to end 
it when the LAST tab or window using the authentication for that 
site/directory is closed.


Note:
It's a shame one can not use javascript to let the webdesigner provide a 
button or url with javascript:window.close() or similar.
Perhaps a javascript:crypto.httpauth_closesession() or similar could 
be added in the future.


Method #3:
The server (or serverside script, like PHP or similar) sends the 
following to the browser:

header('HTTP/1.0 401 Unauthorized');
header('WWW-Authenticate: Close realm="My Realm"');
*PS! the auth stuff is much longer here obviously, this was just to 
show the use of Close*


Note:
If Method 1 or 2 is used the browser should probably send the following 
to the server:

GET /private/index.html HTTP/1.1
Authorization: Close username="something"
*PS! the auth stuff is much longer here obviously, this was just to 
show the use of Close*



I think that Method 3 is the real key piece here; on its own it allows 
the server to time out the client/user AND notify the client that this 
has happened.
Combined with Methods 1 and 2, it becomes possible for either the client or 
the server to end the HTTP authentication session, notify the other, 
and let the user know as well.
Method 3 alone would not need a UI change; it would simply instruct the 
browser to clear its auth session, and the page content itself could carry 
a message from the server telling the user they are now logged out.


Explained as simply as possible: the closing is exactly the same as the 
server-side WWW-Authenticate: Digest and client-side Authorization: 
Digest, but
with the word Digest replaced by Close; the rest of the auth should 
otherwise be just like a normal Digest auth, to ensure it's 
not a fake close.
Just doing WWW-Authenticate: Close might be an issue with future 
improvements beyond the Digest method, so maybe WWW-Authenticate: Close 
Digest would make more sense.
Just avoid calling it Digest Close, as that could be confused with a 
normal Digest.
Close is just an example; End or Quit or Clear could just as 
well be used. The word doesn't matter; the hint it brings from the server 
to the browser is the vital key though.


It is basically the server saying to the browser that those session 
credentials are no longer valid, please stop spamming me with them 
*laughs*, at which point the browser clears the auth session
and starts talking to the site with a clean slate again. If something 
like Method 3 were implemented, then I'm pretty sure the devs of 
phpBB, vBulletin, and who knows how many CMSes out there would be 
happy to support it.


Side subject:
Hopefully the old WWW-Authenticate: Basic gets fully deprecated soon, as it 
is no different from plaintext HTML login forms (almost all forums and 
websites out there that do not use SSL/certificates).
WWW-Authenticate: Digest should be the minimum requirement. I'm not sure, 
but I believe Opera did fix some of the issue with Basic being fallen 
back to; no idea where all browsers stand on this currently.
It would be tempting to fix the Basic issue and security hole by 
instead changing things so that it's called WWW-Authenticate2: Digest 
and WWW-Authenticate2: Close Digest, where Basic is not allowed at all;
this would prevent exploits that try to sneak Basic into the header and 
make the browser use plaintext instead.
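The "never fall back to Basic" policy argued for above reduces to inspecting the scheme token at the front of the challenge. A minimal sketch (illustration only, not a full challenge parser; the function names are hypothetical):

```javascript
// Parse the scheme from a WWW-Authenticate header value and refuse the
// plaintext Basic scheme, as argued above.
function authScheme(headerValue) {
  return headerValue.trim().split(/\s+/)[0].toLowerCase();
}

function acceptChallenge(headerValue) {
  const scheme = authScheme(headerValue);
  if (scheme === 'basic') return false; // never fall back to plaintext Basic
  return scheme === 'digest';           // Digest as the minimum requirement here
}

console.log(acceptChallenge('Digest realm="My Realm", nonce="abc"')); // → true
console.log(acceptChallenge('Basic realm="My Realm"'));               // → false
```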


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Intent of the FileSystem API

2011-03-02 Thread Roger Hågensen

On 2011-03-02 02:31, Tatham Oddie wrote:

Glenn,

That's an XP path you've provided.

On Vista or 7 it'd be:

C:\Users\tatham.oddie\AppData\Local\Google\Chrome\User 
Data\Default\Storage

Microsoft explicitly did work in Vista to reduce the lengths of those base 
paths.

Now, the Google component of the path is actually the longer part.


In this case couldn't it just be made to be 
C:\Users\tatham.oddie\AppData\Local\Google\Chrome\Storage ?


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Cryptographically strong random numbers

2011-02-06 Thread Roger Hågensen

On 2011-02-06 04:54, Boris Zbarsky wrote:

On 2/5/11 10:22 PM, Roger Hågensen wrote:


This is just my opinion, but... if they need random number generation in
their script to be cryptographically secure to be protected from another
spying script...
then they are doing it wrong. Use HTTPS, issue solved, right?


No.  Why would it be?


Oh right! The flaw might exist even then, despite https and http 
not being mixable without a warning.




I'm kind of intrigued about the people you've seen asking, and what 
exactly

they are coding if that is an issue. *laughs*


You may want to read these:

https://bugzilla.mozilla.org/show_bug.cgi?id=464071
https://bugzilla.mozilla.org/show_bug.cgi?id=475585
https://bugzilla.mozilla.org/show_bug.cgi?id=577512
https://bugzilla.mozilla.org/show_bug.cgi?id=322529


 [snip]



And don't forget that browsers like Chrome runs each tab in it's own
process, which means the PRNG may not share the seed at all with another
tab


Well, yes, that's another approach to the Math.random problems.  Do 
read the above bug reports.


-Boris



Ouch, yeah, a nice mess there.

Math.random should be fixed (where implementations are buggy) so that 
cross-site tracking is not possible; beyond that, Math.random should 
just be a quick PRNG for generic use.
The easiest fix (maybe this should be specced?) is that Math.random must 
have a separate seed per tab/page; this means that even an iframe would 
have a different seed than its parent page.
If this were done, then those bugs could all be fixed (apparently). And 
it wouldn't hurt to advise Mother-of-All or Mersenne Twister or similar 
as a minimum PRNG.
Maybe seeding should be specced with regard to tabs/pages etc.; would this 
fall under the WHATWG or the JS group?
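The "separate seed per tab/page" idea above is easy to picture with a small seeded generator. mulberry32 here is just a compact example PRNG for the sketch, not what any engine actually implements:

```javascript
// Illustration of per-context seeding: each generator instance carries its
// own state, so one "tab" learns nothing about another's sequence, while the
// same seed reproduces the same sequence deterministically.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), a | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // in [0, 1)
  };
}

const tabA = mulberry32(1234); // each tab/iframe would get its own seed
const tabB = mulberry32(5678);
const replay = mulberry32(1234);
console.log(tabA() === replay()); // → true (same seed, same sequence)
console.log(tabB() >= 0 && tabB() < 1); // → true
```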


But anyway, those bugs do not need an actual crypto-quality PRNG, so it's 
a shame their fixing is hampered by a fix-vs-new-feature discussion.

I can't help but see these as two completely separate issues:
1. Fix the seeding of Math.random per tab/page so cross-site tracking 
is not possible.
2. Add Math.srandom or Crypto.random or Window.random, a cryptographic 
PRNG data generator (which could map to an OS API or even RNG hardware).



Hmm, what about the name of this thing?
I think it would be better to ensure it is not named random but 
srandom or s_random or c_random, to avoid any confusion with 
Math.random.

How about cryptrnd, anyone?

I'd hate to see a bunch of apps using cryptographically secure random 
numbers/data just because it was called random,

while in all likelihood they'd be fine with Math.random instead.


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Cryptographically strong random numbers

2011-02-05 Thread Roger Hågensen

On 2011-02-06 03:34, Boris Zbarsky wrote:
The context in which I've seen people ask for cryptographically secure 
Math.random are cases where one script can tell what random numbers 
another script got by examining the sequence of random numbers it's 
getting itself.  But I was never told what that other script was 
doing, only that it wanted its random numbers to be unguessable.


Hmm! A hostile script/cross-site exploit?
But if a script is running that close to another script, isn't guessing 
the other script's random numbers the least of your worries?
The bad script is already inside the house anyway, just in the 
other room, right?


It kind of reminds me of Raymond Chen at Microsoft. Just Google the 
following: site:msdn.com "It rather involved being on the other 
side of this airtight hatchway"

This kind of reminds me of some of those stories.
I assume they are worried about two tabs, or an iframe in a page, and a 
bad script trying to figure out the random numbers another script has.


This is just my opinion, but... if they need random number generation in 
their script to be cryptographically secure to be protected from another 
spying script...
then they are doing it wrong. Use HTTPS, issue solved, right? I'm kind of 
intrigued about the people you've seen asking, and what exactly they 
are coding if that is an issue. *laughs*
Besides, aren't there several things (by the WHATWG, even) that prevent 
such spying or even make it impossible?


I have yet to hear of any actual panic regarding this; the same issue 
is theoretically known with EXEs as well.
But with multithreaded and multicore CPUs, clock variations, and so 
on, trying to exploit the pattern in, say, a Mersenne Twister PRNG by 
pulling lots of random numbers
would either A. not work or B. cause a suspicious 100% CPU use on a core.
And don't forget that browsers like Chrome run each tab in its own 
process, which means the PRNG may not share the seed at all with another 
tab (I'm guessing pretty surely that each tab HAS its own seed).

Besides, social engineering has a much higher success rate than this so...

Would be nice if some crypto/security experts popped their heads in 
about now though, in particular about the float question in previous 
posts :)



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Cryptographically strong random numbers

2011-02-05 Thread Roger Hågensen

On 2011-02-05 11:10, Adam Barth wrote:

On Fri, Feb 4, 2011 at 9:00 PM, Cedric Vivier cedr...@neonux.com  wrote:

getRandomValues(in ArrayBufferView data)
Fills a typed array with a cryptographically strong sequence of random values.
The length of the array determines how many cryptographically strong
random values are produced.


We had same discussion when defining readPixels API in WebGL.

Advantages :
1) this allows to reuse the same array over and over when necessary,
or a circular buffer, instead of trashing the GC with new allocations
every time one wants new random bytes.
2) this allows to fill any integer array directly (Float*Array might
need more specification here though as Boris pointed out - could be
disallowed initially)
3) this avoids exposing N methods for every type and makes refactoring
simpler (changing the array type does not require changing the
function call)

(and also better matches most existing crypto APIs in other languages
that are also given an array to fill rather than returning an array)

Oh, that's very cool.  Thanks.

Adam


I must say I like this as well. Having used RandomData(*buffer,length) 
in PureBasic makes more sense to me (then again I like procedural 
unmanaged programming with a sprinkle of ASM and API stuff so...)


But getRandomValues(in ArrayBufferView data) seems to indicate that each 
byte (value) is random, limited to an array of 8bit data?

Now if that is the intention then that's fine.

But wouldn't getRandomData(in ArrayBufferView data) be the ideal? As 
it could hold anywhere from 8 bits of random data up to whatever the max 
size of an array is, in steps of 8 bits (and you can always mask/truncate 
by hand for exact bit counts)


But other than that little nitpick, filling an array/buffer instead of 
returning one? Good idea!



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Cryptographically strong random numbers

2011-02-05 Thread Roger Hågensen

On 2011-02-06 05:07, Cedric Vivier wrote:

On Sun, Feb 6, 2011 at 11:34, Roger Hågensen resca...@emsai.net  wrote:

But getRandomValues(in ArrayBufferView data) seem to indicate that each byte
(value) is random, limited to an array of 8bit data?.

In the context of typed arrays, a value depends on the type of the
ArrayBufferView. ArrayBufferViews are interchangeable over the same
ArrayBuffer (the actual underlying bytes).
Passing an Uint8Array will give you random Uint8 values at each index
of the array, passing an Int32Array will give you random Int32 values
at each index of the array as well.


Ah ok, so just fill the buffer/destination with random data. That sounds 
as good and as flexible as one can possibly get.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Cryptographically strong random numbers

2011-02-04 Thread Roger Hågensen

On 2011-02-05 04:39, Boris Zbarsky wrote:

On 2/4/11 7:42 PM, Adam Barth wrote:

interface Crypto {
   Float32Array getRandomFloat32Array(in long length);
   Uint8Array getRandomUint8Array(in long length);
};


The Uint8Array version is good; let's do that.

For the other, what does it mean to return a random 32-bit float?  Is 
NaN allowed?  Different NaNs?  -0?  Infinity or -Infinity?  Subnormal 
values?


Looking at the webkit impl you linked to and my somewhat-old webkit 
checkout, it looks like the proposed impl returns something in the 
range [0, 1), right?  (Though if so, I'm not sure why the 0xFF bit is 
needed in integer implementation.)  It also returns something that's 
not uniformly distributed in that range, at least on Mac and sometimes 
on Windows (in the sense that there are intervals inside [0, 1) that 
have 0 probability of having a number inside that interval returned).


In general, I suspect creating a good definition for the float version 
of this API may be hard.


Not really; usually it is a number from 0.0 to 1.0, which would map to, 
say, the same as 0 to whatever the 64-bit max is.
Depending on the implementation, the simplest is just to do 
(pseudocode):   float = Random(0, $) / $

A Float64Array getRandomFloat64Array() would also be interesting.
In fact the 32-bit and 64-bit and uint8 could all be generated from the 
same random data source, just presented differently; uint8 would be the 
rawest though,
and a 32-bit float is pretty much just truncation of a 64-bit float.
But with either float there would never be NaN, -0, Infinity, or 
-Infinity. Only the range 0.0 to 1.0 must be returned.
And yes, float issues of rounding and "almost correct but not quite" 
values will also be an issue here.


Float random does not make much sense in crypto. In normal random stuff 
I do see it as useful, but not in crypto.
Then again, look at the potential use cases out there. Do any use 
float? Or do they all use uint/raw?

If they do not use float then just do not include float at all in crypto.

Right now I can only see random floats being of use in 
audio/video/graphics/games/input/output/etc. But not in crypto. (The 
only key and nonce data/values I've ever seen have been raw/uint, an 
integer, or a string; never a float.)



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Onpopstate is Flawed

2011-02-02 Thread Roger Hågensen

On 2011-02-02 23:48, Jonas Sicking wrote:

I think my latest proposed change makes this a whole lot better since
the state is immediately available to scripts. The problem with only
sticking the state in an event is that there is really no good point
to fire the event. The later you fire it, the longer it takes before
the page works properly. The sooner you fire it, the bigger risk you
run that some script runs too late to be able to catch the event.

/ Jonas



Yeah it's a shame it can't be atomic.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-21 22:15, Gregory Maxwell wrote:


I don't like keyframe seeking as the default. Keyframe seeking
assumes things about the container, codec, and encoding which may not
be constants or even applicable to all formats. For example a file
with rolling intra may have no keyframes,  and yet are perfectly
seekable.  Or if for some reason a client can do exact seeking very
cheaply for the request (e.g. seeking to the frame immediately after a
keyframe) then that ought to be permitted too.

I'd rather say that the default should be an implementation defined
accuracy, which may happen to be exact, may differ depending on the
input or user preferences, etc.


Accurate seeking also assumes things about the codec/container/encoding.
If a format does not have keyframes then it does have something 
equivalent.
Formats without keyframes can probably (I might be wrong there) seek 
more accurately than those with keyframes.


With keyframes the logic is that if the seek goes to 14:11.500 or an 
exact frame number,
then a keyframe-based format would ideally be seeked to the exact keyframe,
or to the first keyframe before the seeked B frame(s).
B frames contain too little info and may need pixels from the keyframe 
(or an I or P frame etc.)

Any speccing on this should simply be based on the ideal or best-effort 
seeking that the 10 most popular
and the 10 oldest (but still in use) formats are capable of (and some 
formats are probably in both categories as well).
And just spec based on that.

But I guess that there could be high- and low-resource modes.
If the system/browser is in a low-resource state then it makes sense to 
go keyframe (or equivalent)
and just do rough seeks,
but if in a high-resource mode then keyframe + microseek (just made 
that up) should be used for accurate seeking.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-21 22:04, Philip Jägenstedt wrote:
Concretely: Add seek(time, flags) where flags defaults to nothing. 
Accurate seeking would be done via seek(time, accurate) or some 
such. Setting currentTime is left as is and doesn't set any flags.


Hmm. I think the default (nothing) should be synonymous with 
best-effort (or best) and leave it to the 
browser/os/codec/format/etc. as to what best effort actually is.
While accurate means as accurate as technically possible, even if it 
means increased resource use. I can see online video editing, subtitle 
syncing, closed caption syncing, and audio syncing being key usage 
examples of that.
And maybe a simple flag for when keyframe or per-second seeking and 
similar is good enough, preferring lower-resource seeking.
So best (default) and accurate and simple; that covers most 
uses, right?
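As a purely hypothetical sketch: no seek(time, flags) API like the one proposed here ever shipped (the closest real relatives are plain currentTime assignment and HTMLMediaElement.fastSeek()), but the three proposed flags might map out roughly like this:

```javascript
// Hypothetical only -- "best"/"accurate"/"simple" mirror the flags
// proposed above; they are not part of any real media API.
function seek(video, time, mode = "best") {
  switch (mode) {
    case "accurate": // as exact as technically possible, cost no object
      video.currentTime = time;
      break;
    case "simple": // keyframe-ish is fine, prefer cheap seeks
      if (video.fastSeek) {
        video.fastSeek(time);
        break;
      }
    // fall through when fastSeek is unavailable
    case "best": // let the browser decide what best effort means
    default:
      video.currentTime = time;
  }
}
```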


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] File API Streaming Blobs

2011-01-21 Thread Roger Hågensen

On 2011-01-21 21:50, Glenn Maynard wrote:

On Fri, Jan 21, 2011 at 1:55 PM, David Flanagan da...@davidflanagan.com  wrote:

Doesn't the current XHR2 spec address this use case?
Browsers don't seem to implement it yet, but shouldn't something like this
work for the original poster?

He wants to be able to stream data out, not just in.

It's tricky in practice, because there's no way for whoever's reading
the stream to block.  For example, if you're reading a 1 GB video on a
phone with 256 MB of memory, it needs to stop buffering when it's out
of memory until some data has been played and thrown away, as it would
when streaming from the network normally.  That requires an API more
complex than simply writing to a file.


Hmm! And I guess it's very difficult to create an abstract in/out 
interface that can handle any protocol/stream.
Although an abstract in/out would be ideal, as that would let new 
protocols be supported without needing to rewrite anything at the 
higher level.
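For illustration only, a minimal abstract in/out interface with the backpressure Glenn describes might look like the sketch below; every name here is invented, though this problem space is roughly the shape the WHATWG Streams spec later took.

```javascript
// Hypothetical byte queue with backpressure: the producer is told to
// pause once the queued bytes exceed a high-water mark, and resumes
// after the consumer has read (and thrown away) some data.
function makeByteQueue(highWaterMark) {
  const chunks = [];
  let queued = 0;
  return {
    // Producer side: returns false when the consumer should pause.
    write(chunk) {
      chunks.push(chunk);
      queued += chunk.length;
      return queued < highWaterMark;
    },
    // Consumer side: frees space, letting the producer resume.
    read() {
      const chunk = chunks.shift();
      if (chunk) queued -= chunk.length;
      return chunk ?? null;
    },
  };
}
```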




--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-21 22:57, Silvia Pfeiffer wrote:

On Sat, Jan 22, 2011 at 8:50 AM, Roger Hågensen resca...@emsai.net  wrote:

On 2011-01-21 22:04, Philip Jägenstedt wrote:

Concretely: Add seek(time, flags) where flags defaults to nothing.
Accurate seeking would be done via seek(time, accurate) or some such.
Setting currentTime is left as is and doesn't set any flags.

Hmm. I think the default (nothing) should be synonymous with best-effort
(or best) and leave it to the browser/os/codec/format/etc. as to what
best effort actually is.
While accurate means as accurate as technically possible, even if it means
increased resource use. I can see online video editing, subtitle syncing,
closed caption syncing, and audio syncing being key usage examples of that.
And maybe a simple flag for when keyframe or second seeking and similar is
good enough, preferring lower resource seeking.
So best ( default) and accurate and simple, that covers most uses
right?


Not really. I think simple needs to be more specific. If the browser
is able to do frame accurate seeking and the author wants to do
frame-accurate seeking, then it should be possible to get the two
together, both on keyframe boundaries and actual frame boundaries
closest to a given time.

So, I think what might make sense is:
* the default is best effort
* ACCURATE is time-accurate seeking
* FRAME is frame-accurate seeking, so to the previous frame boundary start
* KEYFRAME is keyframe-accurate seeking, so to the previous keyframe

Cheers,
Silvia.




Hmm, that sounds good, though I think that this would be more intuitive:
* default is best effort (if the interface for seeking isn't that 
accurate, which can happen with small-screen devices, or the author 
doesn't care about or need accuracy; best effort is what happens today anyway)

* TIME (accurate seeking, millisec fraction supported)
* FRAME (accurate seeking, previous/next depending on seek direction)
* KEYFRAME (keyframe seeking, previous/next depending on seek direction)

The default/best effort may be any of TIME or FRAME or KEYFRAME, or even 
a combo of TIME and FRAME; it all depends on the 
OS/browser/device/format/codec/stream.
An author must be able to test/check whether TIME or FRAME or KEYFRAME is 
available; if none are available then only the default best effort is 
available.
If the author just chooses the default but the browser actually 
delivers TIME or FRAME or KEYFRAME accuracy, then that should be relayed 
in some way so the author can display the correct units to the user 
visually, or even convert them if possible.
For example, if default/best-effort seek is used but the actual seeking 
is FRAME, then the author could convert and display that as a 
TIME value instead, as time is less confusing for average users than 
frame numbers.
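A hypothetical sketch of that test/check-and-relay idea; note that `supportedSeekModes` and the mode names are invented here for illustration and exist on no real media element.

```javascript
// Hypothetical capability check: prefer the most accurate mode the
// browser reports, falling back to plain best-effort seeking.
function pickSeekMode(video) {
  const supported = video.supportedSeekModes ?? []; // e.g. ["TIME","FRAME"]
  for (const mode of ["TIME", "FRAME", "KEYFRAME"]) {
    if (supported.includes(mode)) return mode;
  }
  return "DEFAULT"; // only best-effort seeking is available
}

// Relay the accuracy actually delivered in units users understand:
// convert a FRAME position into a TIME display.
function displayPosition(frame, fps) {
  return (frame / fps).toFixed(3) + " s";
}
```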



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-21 23:48, Gregory Maxwell wrote:


It seems surprising to me that we'd want to expose something so deeply
internal while the API fails to expose things like chapters and other
metadata which can actually be used to reliably map times to
meaningful high level information about the video.


Well, you would never seek to a chapter or scene change directly anyway; 
the author would instead preferably get a list of index/chapter/scene 
points which point to a TIME or FRAME, and seek using that value instead.


I was thinking that in my other post I mentioned the flags default and 
TIME and FRAME.
But essentially the browser always does best effort, so the flag TIME or 
the flag FRAME should only indicate that the author wishes to use either 
TIME or FRAME when seeking;
it is entirely up to the browser etc. whether seeking actually occurs that 
way or not. It's just that TIME or FRAME is the base being used.

In which case KEYFRAME could just be dropped from my other post really.

So if the author uses/wish to use the flag TIME but the browser only 
presents FRAME then the author can still use time, or they could use 
FRAME but convert that to time for the user.
If TIME can be millisecond-accurate (i.e. 00:45:10.958 is the 23rd frame 
of minute 45 and second 10 at 24 fps), then TIME is basically synonymous 
with FRAME, which would be frame 65063 counting from the start.
I assume that we won't run into issues with this normally. (Who'd 
actually have 1000+ fps? And if that is the case then FRAME must be used 
for super high-speed/slow-mo etc.)

So under normal use TIME and FRAME would be the exact same thing.

This means the flags would only be:
* default (TIME or FRAME)
* FRAME (must be used/supported when TIME is not accurate enough, i.e. 
better than 1 ms accuracy is needed)
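A small sketch of why millisecond TIME and FRAME coincide at ordinary frame rates: below 1000 fps each frame lasts longer than 1 ms, so a frame number survives a round trip through a millisecond-resolution timestamp, while above 1000 fps the quantization starts merging frames (these helpers are illustrative, not any proposed API).

```javascript
// Frame index -> time in seconds, as a double.
function frameToTime(frame, fps) {
  return frame / fps;
}

// Time quantized to 1 ms resolution -> nearest frame index.
function timeToFrameMs(seconds, fps) {
  const ms = Math.round(seconds * 1000); // millisecond-resolution TIME
  return Math.round((ms / 1000) * fps);
}
```

At 24 fps the round trip is exact; at 2400 fps several frames collapse onto the same millisecond, which is the author's point about needing FRAME only for super high-speed material.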




--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-21 Thread Roger Hågensen

On 2011-01-22 01:27, Silvia Pfeiffer wrote:



It seems surprising to me that we'd want to expose something so deeply
internal while the API fails to expose things like chapters and other
metadata which can actually be used to reliably map times to
meaningful high level information about the video.


Chapters have an API:
http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#dom-texttrack-kind-chapters
.

However, chapters don't have a seeking API - this is indeed something
to add, too. Could it fit within the seek() function? e.g.
seek(chaptername) instead of seek(time)?

Silvia.


The issue with that is that if chapter or index info does not exist, the 
seek will fail.
At least with TIME and FRAME you are guaranteed to seek (and even if the 
frame is a keyframe you'd still end up at a nearby frame).


To me only TIME makes sense right now, as HH:MM:SS.mmm 
(hours:minutes:seconds.milliseconds), with FRAME for the rare cases 
where video is more than 1000 fps and 1 ms accuracy is not enough.
The benefit of TIME is that it's framerate agnostic, so 00:15:20.050 would 
be the same whether the FPS is 24 or 30.
Which is ideal in the case of framerate changes due to being bounced 
up/down to a higher or lower quality stream while seeking or during 
buffering.
I saw the spec mentioning doubles; I'm assuming that TIME would be a 
double where you'd have seconds.fraction (which would even handle FPS 
in the thousands).
So I think that focusing on TIME and really pushing that would benefit 
all in the short and long run; an author can easily calculate FRAME 
from TIME anyway for the few users that would actually need to work 
with that.
Myself, I've done some video editing, but I've done more audio editing 
than I can recall, and I've never missed using frames for audio or 
video; I prefer time with millisecond fractions, and I sync audio on 
timestamp and not frames for example.
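The seconds.fraction double really is framerate agnostic; a display helper (illustrative only, not any proposed API) turns it into the HH:MM:SS.mmm form used above without ever consulting the FPS:

```javascript
// Format a position given as a double (seconds.fraction) as
// HH:MM:SS.mmm -- no framerate involved anywhere.
function formatTime(seconds) {
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = seconds % 60;
  const pad = (n, w) => String(n).padStart(w, "0");
  return pad(h, 2) + ":" + pad(m, 2) + ":" + pad(s.toFixed(3), 6);
}
```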


So maybe just let the flag be default and nothing else, but as mentioned 
previously, leave it an enum just in case for the future (I'm thinking of 
possible future timing standards that might appear, though it's hard to 
beat doubles really).



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-20 Thread Roger Hågensen

On 2011-01-20 19:16, Zachary Ozer wrote:

== New Proposal ==


I like this. It seems you laid out everything to ensure a balanced 
buffer, kinda like the moving-window buffer I pointed out earlier.
So as far as I can see, your proposal looks pretty solid, unless there 
are any implementation snafus. (Looks at the Chrome, Safari, Opera, 
Firefox guys on the list. Hmm, where are the IE guys?)
I really like the way you described state 3, and I think that would 
be my personal preference for playback myself. I assume JW Player would 
be very quick to support/use it?


--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/


