RE: Push API and Service Workers

2014-10-15 Thread Shijun Sun
Thanks folks for the quick responses to the questions! 

RE: [Martin Thomson] If I sit (as I do) with a tab open to gmail for a very 
long time, then it is of some advantage to me (and my network usage) to use 
something like push rather than websockets (or even server sent events).  
Besides, server sent events might be roughly equivalent, but they are horribly 
kludgy and suffer from robustness issues.

I think Martin made a very good point.  For mobile devices, it is certainly 
undesirable to keep websockets open for a long time.

RE: [Jonas Sicking] The current design separates the "trigger" from "what to do 
when the trigger fires". Which both makes for a smaller API, and for a more 
flexible design.

That is a valid argument.  To be clear, my question right now is not whether we 
need Service Workers in the spec.  I'd like to understand how that works in 
typical scenarios and whether we need it in all scenarios.

RE: [John Mellor] For example Android devices already maintain a persistent 
connection to Google Cloud Messaging (GCM) servers, so in our current prototype 
in Chrome for Android, we set GCM as the endpoint to which the app server sends 
messages, and GCM relays the messages to the Android device, which wakes up 
Chrome, which launches a Service Worker, and then app JS handles the message.

My expectation would be that the device (i.e. the push client) will pass the 
message to the Service Worker (when it is active), and then the Service Worker 
will wake up the browser.  Anyway, my excuse is that I'm new to the area ;-)

Let's take the GCM example from another angle.  Assume we have a WebRTC app 
that has registered for push notifications with GCM.  Now there is an incoming 
video call while the browser is still inactive.  The notification from the web 
server will go to GCM, which relays it to the device push client, which 
displays a toast notification *right away*; when the user clicks it, the 
browser is launched with the webapp to take the call.

Is this a reasonable expectation for the E2E scenario?  Would there be extra 
latency if we want to wait for the Service Worker to be ready (based on its 
schedule), which then pushes a web notification for the user to take (or 
dismiss) the call?

Best, Shijun



RE: [DOM-Level-3-Events] Synthetic mouse events triggering default action

2014-10-15 Thread Bogdan Brinza
> -Original Message-
> From: Travis Leithead
> Sent: Wednesday, October 15, 2014 10:53 AM
> To: Anne van Kesteren; Bogdan Brinza
> Cc: public-webapps@w3.org
> Subject: RE: [DOM-Level-3-Events] Synthetic mouse events triggering default
> action
> 
> -Original Message-
> From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com]
> On Behalf Of Anne van Kesteren
> > On Wed, Oct 15, 2014 at 2:40 AM, Bogdan Brinza 
> wrote:
> >> http://www.w3.org/TR/DOM-Level-3-Events/#trusted-events
> >
> >That text is utterly broken as we've discussed several times now.
> 
> Yeah, I've got a bug to fix that. Will probably refactor to match basically 
> what
> Sicking wrote in 12230.
> 
> >Events do not cause action. Actions cause events (that can then prevent
> further action). There is one issue here with some implementations around a
> limited set >of events, discussed to great length in this bug:
> >https://www.w3.org/Bugs/Public/show_bug.cgi?id=12230
> 
> It looks like that bug is trending in the right direction. We have additional 
> data
> to drop into that bug now to help Rick and others identify additional
> behaviors and sites that depend on them.

Thanks for the clarifications. Based on the discussion in the bug (thanks 
Travis for adding further details!) I've filed crbug.com/423975 for this 
specific issue.


Re: Push API and Service Workers

2014-10-15 Thread Martin Thomson
On 15 October 2014 14:58, Domenic Denicola  wrote:
> So from my perspective, implementing the push API without service workers 
> would be pretty pointless, as it would give no new capabilities.

That's not strictly true.  If I sit (as I do) with a tab open to gmail
for a very long time, then it is of some advantage to me (and my
network usage) to use something like push rather than websockets (or
even server sent events).  Besides, server sent events might be
roughly equivalent, but they are horribly kludgy and suffer from
robustness issues.



Re: Push API and Service Workers

2014-10-15 Thread John Mellor
On 15 October 2014 23:07, Shijun Sun  wrote:

> My understanding here is that we want to leverage the "push client" in the
> OS.  That will provide new capabilities without dependency on a direct
> connection between the app and the app server.


The Push API doesn't use a direct connection between the app and the app
server: instead it assumes that there is a direct connection between the UA
and a push server of the UA's choice. Then the app server can send messages
to the push server, which relays them to the UA, which delivers them to the
Service Worker.

For example Android devices already maintain a persistent connection to
Google Cloud Messaging (GCM) servers, so in our current prototype in Chrome
for Android, we set GCM as the endpoint to which
the app server sends messages, and GCM relays the messages to the Android
device, which wakes up Chrome, which launches a Service Worker, and then
app JS handles the message.

--John


Re: Push API and Service Workers

2014-10-15 Thread Jonas Sicking
The hard question is: What do you do if there's an incoming push
message for a given website, but the user doesn't have the website
currently open?

Service Workers provide the primitive needed to enable launching a
website "in the background" to handle the incoming push message.

Another solution would be to always open up a browser tab with the
website in it. But that's only correct behavior for some types of push
messages. I.e. some push messages should be handled without any UI
being opened. Others should be handled by launching an "attention
window" which is rendered even though the phone is locked. Others
simply want to create a "toast" notification.

We could add lots of different "types" of push messages. But that's a
lot of extra complexity. And we'd have to add similar "types" of
geofencing registrations, and "types" of alarm clock registrations,
etc.

The current design separates the "trigger" from "what to do when the
trigger fires". Which both makes for a smaller API, and for a more
flexible design.
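
As a rough illustration of that separation: registration happens on the page, 
handling happens in the Service Worker. The exact names here 
(pushRegistrationManager, the shape of the 'push' event and its data, 
showNotification) are assumptions drawn from the current drafts, not settled 
API:

// Page: set up the "trigger" (registration manager name per the current draft; a sketch).
navigator.serviceWorker.ready.then(function (registration) {
  return registration.pushRegistrationManager.register();
}).then(function (pushRegistration) {
  // Hand the registration's endpoint/id to the app server out of band.
});

// Service Worker: decide "what to do when the trigger fires".
self.addEventListener('push', function (event) {
  var message = event.data; // payload shape is an assumption
  if (message && message.kind === 'incoming-call') {
    // Surface UI only for messages that need it.
    event.waitUntil(self.registration.showNotification('Incoming call'));
  }
  // Other message kinds can be handled silently, with no UI at all.
});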

/ Jonas



On Wed, Oct 15, 2014 at 2:42 PM, Shijun Sun  wrote:
> Hi,
>
>
>
> I'm with the IE Platform team at Microsoft.  I joined the WebApps WG very
> recently.  I am looking into the Push API spec, and got some questions.
> Hope to get help from experts in the WG.
>
>
>
> The current API definition is based on an extension of the Service Workers.
> I'd like to understand whether the Service Workers is a must-have dependency
> for all scenarios.  It seems some basic scenarios can be enabled without the
> Service Worker if the website can directly create a PushRegistrationManager.
> I'm looking at Fig. 1 in the spec.  If the user agent can be the broker
> between the "Push client" and the webpage, it seems some generic actions can
> be defined without the Service Workers - for example immediately display a
> toast notification with the push message.
>
>
>
> It is very likely I have missed some basic design principle behind the
> current API design.  It'd be great if someone could share insights on the
> scenarios and the normative dependency on the Service Workers.
>
>
>
> All the Best, Shijun
>
>



RE: Push API and Service Workers

2014-10-15 Thread Shijun Sun
My understanding here is that we want to leverage the "push client" in the OS.  
That will provide new capabilities without dependency on a direct connection 
between the app and the app server.

-Original Message-
From: Domenic Denicola [mailto:dome...@domenicdenicola.com] 
Sent: Wednesday, October 15, 2014 2:59 PM
To: Shijun Sun; public-webapps
Subject: RE: Push API and Service Workers

I'm not an expert either, but it seems to me that push without service workers 
(or some other means of background processing) is basically just server-sent 
events. That is, you could send "push notifications" to an active webpage over 
a server-sent events channel (or web socket, or long-polling...), which would 
allow it to display a toast notification with the push message.

So from my perspective, implementing the push API without service workers would 
be pretty pointless, as it would give no new capabilities.





RE: Push API and Service Workers

2014-10-15 Thread Domenic Denicola
I'm not an expert either, but it seems to me that push without service workers 
(or some other means of background processing) is basically just server-sent 
events. That is, you could send "push notifications" to an active webpage over 
a server-sent events channel (or web socket, or long-polling...), which would 
allow it to display a toast notification with the push message.
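
For concreteness, a minimal sketch of that active-page pattern using 
EventSource plus the Notifications API; the endpoint URL and message format 
are made up for illustration:

// Active page only: receive "push-like" messages over server-sent events
// and surface each one as a notification. URL and payload are illustrative.
var source = new EventSource('/notifications');
source.onmessage = function (event) {
  var data = JSON.parse(event.data);
  if (Notification.permission === 'granted') {
    new Notification(data.title, { body: data.body });
  }
};

None of this works once the page is gone, which is the gap service workers 
(or some other background-processing mechanism) would fill.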

So from my perspective, implementing the push API without service workers would 
be pretty pointless, as it would give no new capabilities.





Push API and Service Workers

2014-10-15 Thread Shijun Sun
Hi,

I'm with the IE Platform team at Microsoft.  I joined the WebApps WG very 
recently.  I am looking into the Push API spec, and got some questions.  Hope 
to get help from experts in the WG.

The current API definition is based on an extension of the Service Workers.  
I'd like to understand whether the Service Workers is a must-have dependency 
for all scenarios.  It seems some basic scenarios can be enabled without the 
Service Worker if the website can directly create a PushRegistrationManager.  
I'm looking at Fig. 1 in the spec.  If the user agent can be the broker between 
the "Push client" and the webpage, it seems some generic actions can be defined 
without the Service Workers - for example immediately display a toast 
notification with the push message.

It is very likely I have missed some basic design principle behind the current 
API design.  It'd be great if someone could share insights on the scenarios and 
the normative dependency on the Service Workers.

All the Best, Shijun



Re: [gamepad] Allow standard-mapped devices to have extra buttons or axes & allow multiple mappings per device

2014-10-15 Thread Brandon Jones
Whew... trying to process all of Katelyn's email. Forgive me if I overlook
some things, but I'm going to try and hit a few points that have been on my
mind:

On Wed Oct 15 2014 at 12:43:14 AM Katelyn Gadd  wrote:
>
> If there is one standard mapping, and it covers most users, this means
> the natural path for developers is to only support the standard
> mapping. They won't add keybinding/button remapping, they won't add
> support for multiple layouts, they might not even add support for
> choosing which of your gamepads you want to use. This is not an
> unrealistic fear: It is the status quo in virtually all console games
> and in many PC games. In some cases you can rebind by editing .ini
> files, but that is out of the question for web games since their
> configuration typically lies in not-user-editable JS on the server or
> in blobs stored somewhere (localStorage, indexed db, cookies) that are
> not easily user-editable. This 'developers only support the easiest
> thing' future is relatively likely and is actually worse than the
> previous status quo of raw-controllers-only.
>

To be brutally honest I wish that even _this_ was the status quo. In my
experience thus far:

 * Very few games on the web bother with the gamepad API, which I think
boils down to a marketing problem. I've actually had game devs say things
like "So, do you think the web will ever have a way to interact with
gamepads/joysticks/wheels/etc?" If nobody knows the API exists it's hard to
complain much about its deficiencies. Unfortunately I'm not sure how we
solve this, but assuming that we do...

* Many of the games that I DO see utilizing the API seem to take the
approach of "Well, the dev had a KnockoffBox720Pad, so everything is hard
coded to whatever that reported." It would be great if devs even bothered
looking at the mapping field, but I don't see too many cases where they do.
Again, this feels like a marketing problem. :P
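
Even a minimal check along these lines would be an improvement (sketch only; 
bindStandardLayout and promptUserToRebind are hypothetical game-side helpers):

// Prefer the standard mapping when the UA provides it; otherwise fall back
// to a user-facing rebinding step instead of hard-coding one device's layout.
var pads = navigator.getGamepads();
for (var i = 0; i < pads.length; i++) {
  var pad = pads[i];
  if (!pad) continue;
  if (pad.mapping === 'standard') {
    bindStandardLayout(pad);    // hypothetical helper
  } else {
    promptUserToRebind(pad);    // hypothetical helper
  }
}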


> I think the solution to this is simple: Ensure that there is more than
> one standard mapping. Find a way to make sure that developers have to
> do at least a basic amount of effort to handle things correctly. If
> the simple hack isn't sufficient for wider audiences, they will have
> an incentive to do a good job. This doesn't have to be a punitive
> measure - there are diverse controllers out there that people might
> want to use, from standard gamepads to racing wheels to flight sticks.
> Something as simple as supporting the additional inputs of the
> dualshock 3 & 4 is a good rationale here (they both have gyros &
> accelerometers; the 4 has a touchpad and touchpanel button.)
>

I'm not against adding a new mapping type. "flightstick" seems like the
obvious choice there, though the button layouts on those can vary pretty
radically. At the very least you could standardize axes layout, primary
fire, secondary fire, and hat switch layout and allow for whatever button
order seems natural after that. (Slight tangent: Does anyone know if
HOTAS-style controllers report both the throttle and stick as a single
device?)

I do agree that having more than one standard mapping encourages more
robust usage, assuming we overcome the previously mentioned problems of API
visibility.


> we have the *miserable* constraint where you have to push a button to
> make your controller show up. This changes the ordinal identifiers of
> gamepads from session to session, and - even worse - means that a
> controller cannot be activated by analog stick inputs, which is a real
> problem for games that start with a menu (like two of my major ports).
>

I think we could actually use axes as activation inputs with a little more
processing. The big reason for not doing it right now is that on a lot of
devices inputs that report as an axis (triggers on XInput) have a resting
value other than 0. This may also be a problem if a gamepad has been set
down in such a way that a stick is off-center (though to be fair the same
problem exists for buttons. I have a gamepad with a stuck "X" button that
immediately activates whenever I go to a gamepad-enabled site.)

My proposed solution would be to not report an input's value until it has
reported 0, or at least a value below a deadzone (post-user-agent-mapping), at
least once. This would mean that joystick axes wouldn't report unless they had
been in their resting state at least once since the page started reporting,
and a stuck button would be a dead input instead of a continuously firing
one. The benefit is that we can then treat any non-zero inputs after that
point as intentional, including axes movement. Thus picking up a controller
and pushing the stick up/down to navigate a menu would properly activate
the gamepad API. It also wouldn't require any API change, which is
generally preferable in any case where the API has been out in the wild.
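
In userland terms the gating amounts to something like this (a sketch of the 
idea only; the real filtering would live in the UA, post-mapping):

var DEADZONE = 0.1; // illustrative threshold

// Track, per input, whether it has been seen at rest at least once.
var armed = {};

function filteredValue(padIndex, inputIndex, rawValue) {
  var key = padIndex + ':' + inputIndex;
  if (Math.abs(rawValue) <= DEADZONE) {
    armed[key] = true; // seen at rest; later non-zero values count as intentional
  }
  return armed[key] ? rawValue : 0; // stuck or never-rested inputs stay dead
}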

> This defies user expectations and is just plain confusing. Polling an
> actual device to get its st

RE: [streams-api] Seeking status of the Streams API spec

2014-10-15 Thread Domenic Denicola
From: Paul Cotton [mailto:paul.cot...@microsoft.com] 

> It seems to me that layering a Stream interface in "an independent spec" 
> would "support other APIs" like MSE.   Why cannot this be done? 

There is no layering to be done here. There are two separate designs: the 
flawed single-Stream interface, and the multiple-interface ReadableStream and 
WritableStream designs.

The platform-specific pieces referred to were not separate designs like 
the Stream object, but instead things like how to integrate ReadableStream and 
WritableStream with Fetch, MSE, Web Audio, etc., or how to add compatibility 
layers for transitioning from old pseudo-stream interfaces like object URLs 
and srcObject to actual streams.


Re: [streams-api] Seeking status of the Streams API spec

2014-10-15 Thread Aaron Colwell
On Wed, Oct 15, 2014 at 11:24 AM, Domenic Denicola <
dome...@domenicdenicola.com> wrote:

> From: Paul Cotton [mailto:paul.cot...@microsoft.com]
>
> > Would it be feasible to resurrect this interface as a layer on top of
> [1] so that W3C specifications like MSE that have a dependency on the
> Streams interface are not broken?
>
> The decision we came to in web apps some months ago was that the
> interfaces in that spec would disappear in favor of WritableStream,
> ReadableStream, and the rest. The idea of a single Stream interface was not
> sound; that was one of the factors driving the conversations that led to
> that decision.
>
>
This decision makes sense to me. I just need to sit down and read the new
Streams spec to come up with a new proposal. The simplest solution looks
like it would be to just change Stream to ReadableStream in the
existing appendStream() signature and then update the relevant algorithms
to properly interact with that object. Like I said, I don't yet have enough
knowledge to fully understand the implications of that. I plan to
spend some quality time with the Stream spec sometime this week so I can
get a better handle on this. I will probably have questions for Domenic and
Takeshi, but I know where to find them. :)
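
If that path works out, usage might end up looking roughly like the following. 
This is purely illustrative and assumes the existing appendStream() simply 
starts accepting a ReadableStream; segmentUrl and the MIME type are 
placeholders:

// Illustrative only: route fetched bytes into a SourceBuffer without
// surfacing them to JS, assuming appendStream() now takes a ReadableStream.
var mediaSource = new MediaSource();
mediaSource.addEventListener('sourceopen', function () {
  var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8,vorbis"');
  fetch(segmentUrl).then(function (response) {
    sourceBuffer.appendStream(response.body);
  });
});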

I believe the old Stream interface is only a problem for IE. I don't think
any other browser has shipped it. While I am sensitive to the backwards
compatibility issues, it isn't clear to me that we should stick to the old
signature just because IE chose to ship a Stream object before the spec was
in a more stable state. In some ways I feel like it is better that there
are different object names because then this could just be handled like a
method overload instead of trying to figure out how to distinguish 2
implementations with the same object name.

Aaron


RE: [streams-api] Seeking status of the Streams API spec

2014-10-15 Thread Paul Cotton
> The decision we came to in web apps some months ago was that the interfaces 
> in that spec would disappear in favor of WritableStream, ReadableStream, and 
> the rest.

This seems to contradict the direction that was published in Feb 2014 [1] in 
which it was clearly stated:

" In addition to the base Stream spec, the remaining platform-specific pieces 
which do not fit into the shared-base spec will live in an independent spec. 
This includes things such as support in other APIs (XHR, MediaStreaming, etc) 
or DOM specific scenarios - (createObjectURL()). The current W3C Streams API 
will focus on this aspect of the API surface, while leaving the core 
functionality to be defined in the base spec."

It seems to me that layering a Stream interface in "an independent spec" would 
"support other APIs" like MSE.   Why cannot this be done? 

/paulc

[1] http://lists.w3.org/Archives/Public/public-webapps/2014JanMar/0218.html 


Paul Cotton, Microsoft Canada
17 Eleanor Drive, Ottawa, Ontario K2E 6A3
Tel: (425) 705-9596 Fax: (425) 936-7329

-Original Message-
From: Domenic Denicola [mailto:dome...@domenicdenicola.com] 
Sent: Wednesday, October 15, 2014 2:25 PM
To: Paul Cotton; Takeshi Yoshino
Cc: Jerry Smith (WINDOWS); Anne van Kesteren; public-webapps; Arthur Barstow; 
Feras Moussa; public-html-me...@w3.org; Aaron Colwell
Subject: RE: [streams-api] Seeking status of the Streams API spec

From: Paul Cotton [mailto:paul.cot...@microsoft.com] 

> Would it be feasible to resurrect this interface as a layer on top of [1] so 
> that W3C specifications like MSE that have a dependency on the Streams 
> interface are not broken?

The decision we came to in web apps some months ago was that the interfaces in 
that spec would disappear in favor of WritableStream, ReadableStream, and the 
rest. The idea of a single Stream interface was not sound; that was one of the 
factors driving the conversations that led to that decision.



RE: [streams-api] Seeking status of the Streams API spec

2014-10-15 Thread Domenic Denicola
From: Paul Cotton [mailto:paul.cot...@microsoft.com] 

> Would it be feasible to resurrect this interface as a layer on top of [1] so 
> that W3C specifications like MSE that have a dependency on the Streams 
> interface are not broken?

The decision we came to in web apps some months ago was that the interfaces in 
that spec would disappear in favor of WritableStream, ReadableStream, and the 
rest. The idea of a single Stream interface was not sound; that was one of the 
factors driving the conversations that led to that decision.



RE: [streams-api] Seeking status of the Streams API spec

2014-10-15 Thread Paul Cotton
> I replaced the W3C Streams API spec WD with a pointer to the WHATWG Streams 
> spec and a few sections discussing what we should add to the spec for browser 
> use cases.

This change means that the W3C Editor’s draft no longer defines the Stream 
interface as was previously supported.  Would it be feasible to resurrect this 
interface as a layer on top of [1] so that W3C specifications like MSE that 
have a dependency on the Streams interface are not broken?

/paulc

[1] https://github.com/whatwg/streams

Paul Cotton, Microsoft Canada
17 Eleanor Drive, Ottawa, Ontario K2E 6A3
Tel: (425) 705-9596 Fax: (425) 936-7329

From: Takeshi Yoshino [mailto:tyosh...@google.com]
Sent: Tuesday, October 14, 2014 11:06 PM
To: Paul Cotton
Cc: Jerry Smith (WINDOWS); Anne van Kesteren; public-webapps; Arthur Barstow; 
Feras Moussa; public-html-me...@w3.org; Domenic Denicola; Aaron Colwell
Subject: Re: [streams-api] Seeking status of the Streams API spec

Re: establishing an integration plan for the consumers and producers listed in 
the W3C spec, we haven't done anything beyond what Domenic introduced in this 
thread.

I wrote a draft of the XHR+ReadableStream integration spec, and it is 
implemented in Chrome, but the plan is not to ship it but to wait for the Fetch 
API, as discussed at WHATWG.


On Wed, Oct 15, 2014 at 9:10 AM, Paul Cotton <paul.cot...@microsoft.com> wrote:
This thread was recently re-started at 
http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0084.html

Domenic's latest document is at https://streams.spec.whatwg.org/  The W3C 
document has NOT been updated since 
http://www.w3.org/TR/2013/WD-streams-api-20131105/ .

To avoid confusing people (perhaps too late), I replaced the W3C Streams API 
spec WD with a pointer to the WHATWG Streams spec and a few sections discussing 
what we should add to the spec for browser use cases.


/paulc

Paul Cotton, Microsoft Canada
17 Eleanor Drive, Ottawa, Ontario K2E 6A3
Tel: (425) 705-9596 Fax: (425) 936-7329


-Original Message-
From: Jerry Smith (WINDOWS)
Sent: Tuesday, October 14, 2014 8:03 PM
To: Domenic Denicola; Aaron Colwell
Cc: Anne van Kesteren; Paul Cotton; Takeshi Yoshino; public-webapps; Arthur 
Barstow; Feras Moussa; public-html-me...@w3.org
Subject: RE: [streams-api] Seeking status of the Streams API spec

Where is the latest Streams spec?  
https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm doesn't have much 
about WritableStreams.

Jerry

-Original Message-
From: Domenic Denicola [mailto:dome...@domenicdenicola.com]
Sent: Tuesday, October 14, 2014 10:18 AM
To: Aaron Colwell
Cc: Anne van Kesteren; Paul Cotton; Takeshi Yoshino; public-webapps; Arthur 
Barstow; Feras Moussa; public-html-me...@w3.org
Subject: RE: [streams-api] Seeking status of the Streams API spec

From: Aaron Colwell [mailto:acolw...@google.com]

> MSE is just too far along, has already gone through a fair amount of churn, 
> and has major customers like YouTube and Netflix that I just don't want to 
> break or force to migrate...again.

Totally understandable.

> I haven't spent much time looking at the new Stream spec so I can't really 
> say yet whether I agree with you or not. The main reason why people wanted to 
> be able to append a stream is to handle larger, open range, appends without 
> having to make multiple requests or wait for an XHR to complete before data 
> could be appended. While I understand that you had your reasons to expand the 
> scope of Streams to be more general, MSE really just needs them as a "handle" 
> to route bytes being received with XHR to the SourceBuffer w/o having to 
> actually surface them to JS. It would be really unfortunate if this was 
> somehow lost in the conversion from the old spec.

The way to do this in Streams is to pipe the fetch stream to a writable stream:

fetch(url)
  .then(response => response.body.pipeTo(writableStream).closed)
  .then(() => console.log("all data written!"))
  .catch(e => console.log("error fetching or piping!", e));

By piping between two UA-controlled streams, you can establish an 
off-main-thread relationship between them. This is why it would be ideal for 
SourceBuffer (or a future alternative to it) to be a WritableStream, especially 
given that it already has abort(), appendBuffer(), and various state-like 
properties that are very similar to what a WritableStream instance has. The 
benefit here being that people could then use SourceBuffers as generic 
destinations for any writable-stream-accepting code, since piping to a writable 
stream is idiomatic.
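
i.e., hypothetically (this does not work today; it only illustrates what 
SourceBuffer-as-WritableStream would buy):

fetch(url)
  .then(response => response.body.pipeTo(sourceBuffer).closed)
  .then(() => console.log("segment fully appended!"));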

But that said, given the churn issue I can understand it not being feasible or 
desirable to take that path.

> Perhaps, although I expect that there may be some resistance to dropping this 
> at this point. Media folks were expecting the Streams API to progress in such 
> a way that would

RE: [DOM-Level-3-Events] Synthetic mouse events triggering default action

2014-10-15 Thread Travis Leithead
-Original Message-
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren
> On Wed, Oct 15, 2014 at 2:40 AM, Bogdan Brinza  wrote:
>> http://www.w3.org/TR/DOM-Level-3-Events/#trusted-events
>
>That text is utterly broken as we've discussed several times now.

Yeah, I've got a bug to fix that. Will probably refactor to match basically 
what Sicking wrote in 12230.

>Events do not cause action. Actions cause events (that can then prevent 
>further action). There is one issue here with some implementations around a 
>limited set >of events, discussed to great length in this bug:
>https://www.w3.org/Bugs/Public/show_bug.cgi?id=12230

It looks like that bug is trending in the right direction. We have additional 
data to drop into that bug now to help Rick and others identify additional 
behaviors and sites that depend on them.


[Bug 22228] Clarification of timeout attribute late overrides

2014-10-15 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=22228

Anne  changed:

           What|Removed     |Added
 ------------------------------------------
         Status|REOPENED    |RESOLVED
     Resolution|---         |FIXED

--- Comment #5 from Anne  ---
https://github.com/whatwg/xhr/commit/2677cc2e0fe79d290437c3ea9ff370a5d795294b

Sorry for taking so long in fixing this regression.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



PSA: publish WD of Screen Orientation on Oct 21

2014-10-15 Thread Arthur Barstow

Hi All,

Mounir would like to publish a new WD of Screen Orientation and the 
target date for that publication is October 21:


  

If anyone has any major concerns about this plan, please speak up as 
soon as possible.


-Thanks, AB



Re: [DOM-Level-3-Events] Synthetic mouse events triggering default action

2014-10-15 Thread Anne van Kesteren
On Wed, Oct 15, 2014 at 2:40 AM, Bogdan Brinza  wrote:
> http://www.w3.org/TR/DOM-Level-3-Events/#trusted-events

That text is utterly broken as we've discussed several times now.
Events do not cause action. Actions cause events (that can then
prevent further action). There is one issue here with some
implementations around a limited set of events, discussed to great
length in this bug:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=12230


-- 
https://annevankesteren.nl/



Re: [gamepad] Allow standard-mapped devices to have extra buttons or axes & allow multiple mappings per device

2014-10-15 Thread Florian Bösch
I've been looking into mapping input devices for a fair while, and I think
it's not a straightforward problem to solve for a UA (or even an OS).

Difficulties include, but are not limited to:

   - Buttons reported by the driver as axes
   - Axes reported by the driver as buttons
   - Buttons which are also axes (on the device)
   - What an app-developer would want to map an input as
   - Combos
   - New devices and browser update cycles (do you really want to put
   browsers with slower update cycles at a disadvantage on newer devices?)
   - Differences in how different OSes report devices
   - Devices which cannot be fit into standard mapping schemes (basically
   everything but gamepads, and even there it doesn't work)

It is my belief that UAs should leave mapping alone, and if mapping is
applied, always make it possible for a developer to get the unmapped
inputs, so that the marketplace (of libraries/APIs/Services) can take care
of that problem.
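
For example, a userland layer over the raw inputs could look roughly like this 
(a sketch only; deviceProfiles is a hypothetical community-maintained table 
keyed by the reported id string):

// Map a raw gamepad through a per-device profile when one exists, otherwise
// hand back the unmapped buttons/axes so the app (or user) can deal with it.
function readGamepad(pad, deviceProfiles) {
  var profile = deviceProfiles[pad.id];
  if (!profile) {
    return { mapped: false, buttons: pad.buttons, axes: pad.axes };
  }
  return {
    mapped: true,
    jump:  pad.buttons[profile.jump].pressed,
    moveX: pad.axes[profile.moveX],
    moveY: pad.axes[profile.moveY]
  };
}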


Re: [gamepad] Allow standard-mapped devices to have extra buttons or axes & allow multiple mappings per device

2014-10-15 Thread Katelyn Gadd
Hi,

I agree with Chad's suggestions: the spec should be expanded to expose
more than the core set of axes & buttons, and richer metadata would be
extremely useful to developers.

I've been using the gamepad API (as shipped in Chrome, at least - I
haven't had time to adapt to Firefox's implementation, because I used
gamepad.js and it's a broken pile...) for a while now, in a few
shipped JSIL-based game ports. My experiences & opinions inform the
following.

I apologize if this is a bit hard to read, by the way - all this
crunch time for the game I'm working on has fried my brain a bit. :-)

To try and express my PoV on this (previously stated to WG
members/participants, but not directly on this list - sorry, too much
on my plate to add another workgroup to my subscriptions):

I think best-effort mapping to 'standard' layouts is a great idea. I
have some objections to how it is done, but it is an improvement over
what we had before. My main concern is that the current design for
remapping sets us up for a descent into a quagmire that can't easily
be fixed. The scenario I anticipate happening:

Gamepads only have one standard mapping, supported by the vast
majority of available controllers (let's just say 360 pads and
dualshock 3/4 controllers, since those put together probably cover at
least 90% of the market). If you expand the remapping to all XInput
devices, that widens it even further - though this is problematic in a
way I will get to later.

If there is one standard mapping, and it covers most users, this means
the natural path for developers is to only support the standard
mapping. They won't add keybinding/button remapping, they won't add
support for multiple layouts, they might not even add support for
choosing which of your gamepads you want to use. This is not an
unrealistic fear: It is the status quo in virtually all console games
and in many PC games. In some cases you can rebind by editing .ini
files, but that is out of the question for web games since their
configuration typically lies in not-user-editable JS on the server or
in blobs stored somewhere (localStorage, indexed db, cookies) that are
not easily user-editable. This 'developers only support the easiest
thing' future is relatively likely and is actually worse than the
previous status quo of raw-controllers-only.

I think the solution to this is simple: Ensure that there is more than
one standard mapping. Find a way to make sure that developers have to
do at least a basic amount of effort to handle things correctly. If
the simple hack isn't sufficient for wider audiences, they will have
an incentive to do a good job. This doesn't have to be a punitive
measure - there are diverse controllers out there that people might
want to use, from standard gamepads to racing wheels to flight sticks.
Something as simple as supporting the additional inputs of the
dualshock 3 & 4 is a good rationale here (they both have gyros &
accelerometers; the 4 has a touchpad and touchpanel button.)

When the common case is slightly more complicated, it forces
developers to at least *understand* the breadth of the problem and
make rational choices. It also encourages the development of
high-quality, userspace JS wrappers/abstraction layers, so fixes &
improvements can happen in those libraries. I would argue that the
success of XInput is not due to its excessive, somewhat-detrimental
'least common denominator' design, but because it aggressively
targeted & fixed the most common problems with DirectInput.

OK, so, there's my rationale for why the current approach to mapping
should change. Now to dive in deeper to some other issues I see with
the gamepad API.

My first objection to the design & implementation is that it is based
on passive enumeration & device acquisition. You have to poll to find
out what gamepads there are and interact with them. This obviously
produces the threat of fingerprinting, and to mitigate fingerprinting
we have the *miserable* constraint where you have to push a button to
make your controller show up. This changes the ordinal identifiers of
gamepads from session to session, and - even worse - means that a
controller cannot be activated by analog stick inputs, which is a real
problem for games that start with a menu (like two of my major ports).
This defies user expectations and is just plain confusing. Polling an
actual device to get its state is perfectly reasonable, but polling
the overall state of your input devices is not so. Device availability
is something that should be event-based (it is at a driver level
anyway) and it should be exposed to content as events.
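
For reference, the spec does define gamepadconnected/gamepaddisconnected 
events (implementation support varies); relying on them instead of polling 
for discovery looks roughly like:

// Event-based availability: learn about pads as they appear or disappear,
// instead of polling navigator.getGamepads() just to discover them.
var pads = {};

window.addEventListener('gamepadconnected', function (e) {
  pads[e.gamepad.index] = e.gamepad;
});

window.addEventListener('gamepaddisconnected', function (e) {
  delete pads[e.gamepad.index];
});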

The fingerprinting problem is addressed with the solution I proposed
here: Content should *request* the input it needs, at which point the
useragent is able to figure out how to provide it. To mitigate
fingerprinting concerns, this can be done based on user consent, and
the input presented to the content can end up fully sanitized, because
the user content has no need to