Re: Device Orientation API future

2018-01-11 Thread Blair MacIntyre
> > Specifically:  I was wondering about the real impact of the webvr polyfill 
> > not working, on Firefox users.  My mention of the work implementing WebVR 
> > was pointing out that we will hopefully not need to worry about the 
> > webvr-polyfill working on Gecko-based browsers in the not-too-distant future, 
> > whenever we have full platform coverage for a real WebVR/WebXR 
> > implementation.
> 
> My ideal here is that we have an explicit user action / gesture or prompt 
> for: Magic window, WebVR/XR and device orientation. We might choose to make 
> several prompts rolled into one for VR but ultimately it will still require 
> the user to understand the risks. I'm not really sure there is an obvious way 
> of protecting the user from magic window being just as dangerous as device 
> orientation.

I think that breakdown of categories is a good set.  Right now, WebVR/XR 
(HMDs) has been using one approach.  “Magic window”-like apps have not been 
implemented in WebVR, but rather built with device orientation (akin to the polyfill), 
so anything that looks like that (right now) is just as dangerous, as you say.

My hope is that the “Magic window” style of AR/VR is supported directly by the 
WebXR API.  There will be some pushback about the increased friction (right now, 
experiences like that start working immediately because of device orientation) 
but I think on the whole, the community group understands the need for some 
sort of user action / gesture.

> > if we’re including iOS in the list of platforms we may want to try and 
> > remove device orientation from
> 
> I don't think we need to remove it just provide UI in the short term to 
> require the user to approve.

Great.  I would agree with that approach.

> > when WebXR (including “Magic Window” support) will ship in all versions of 
> > Firefox
> 
> I'm unsure here too, I'm not after restricting these websites from working at 
> least in the short term. I think the functionality should still exist if we 
> can come up with the right controls to these APIs.
> 
> I suspect we are mostly talking about fennec but I'm also not sure if any 
> laptops/tablets use these APIs if the sensors exist. If the APIs are in use 
> again I think the prompts should be clear to the user about the risks they 
> have.
> 
> One thing that we could try is taking the user through an on-boarding tutorial 
> when they first see a website that uses this API. The tutorial could explain 
> to them what the API will be called, how they will be prompted and what the 
> risks are to them.

I think we (folks in the MR team) would be interested to explore that with you; 
 I personally would be, partially because I suspect we may need such an 
“education” approach with future sensors and capabilities.  If we look down the 
road at the kind of sensor data advanced AR displays collect (using sensors to 
build models of the world around the user, for example), we will eventually 
want to explore how to expose that data into the web (as part of WebXR, or some 
other API).  Explaining what this means to users will be hard, much like device 
orientation.

Thanks for your patience with all of this!




> 
> On Thu, Jan 11, 2018 at 6:15 PM, Blair MacIntyre <bmacint...@mozilla.com 
> <mailto:bmacint...@mozilla.com>> wrote:
> Oh, I see what you are saying.   I think there is some confusion here 
> (perhaps on my part only).
> 
> I do not know if the main use of (and motivation for) the sensor APIs is 
> webvr, but I have not been involved in it.  I thought that (newer) API was 
> brought up in this discussion as a suggestion for a replacement for the 
> deviceorientation APIs.  (Again, I’m unaware of the history or motivations 
> behind that API.)
> 
> There has been some supposition in this discussion that the main use of the 
> device orientation APIs is the WebVR polyfill.  I do not know if that is 
> true, I don’t think Chris or I said that — I haven’t seen any mention of any 
> data to support or not support that.  It is clear that _a_ use of the device 
> orientation APIs is supporting the WebVR polyfill.   But it is also used for 
> panoramic photo viewers, 360 video viewers, and probably other (legitimate) 
> things.   Regardless, I fully understand the security/privacy concerns.
> 
> My message this morning was intended to (perhaps) reframe the discussion and 
> (perhaps) let us move forward.  Specifically:  I was wondering about the real 
> impact of the webvr polyfill not working, on Firefox users.  My mention of 
> the work implementing WebVR was pointing out that we will hopefully not need 
> to worry about the webvr-polyfill working on Gecko-based browsers in the 
> not-too-distant future, whenever we have full platform coverage for a real 
> WebVR/WebXR implementati

Re: Device Orientation API future

2018-01-11 Thread Blair MacIntyre
Oh, I see what you are saying.   I think there is some confusion here (perhaps 
on my part only).

I do not know if the main use of (and motivation for) the sensor APIs is webvr, 
but I have not been involved in it.  I thought that (newer) API was brought up 
in this discussion as a suggestion for a replacement for the deviceorientation 
APIs.  (Again, I’m unaware of the history or motivations behind that API.)   

There has been some supposition in this discussion that the main use of the 
device orientation APIs is the WebVR polyfill.  I do not know if that is true, 
I don’t think Chris or I said that — I haven’t seen any mention of any data to 
support or not support that.  It is clear that _a_ use of the device 
orientation APIs is supporting the WebVR polyfill.   But it is also used for panoramic 
photo viewers, 360 video viewers, and probably other (legitimate) things.   
Regardless, I fully understand the security/privacy concerns.

My message this morning was intended to (perhaps) reframe the discussion and 
(perhaps) let us move forward.  Specifically:  I was wondering about the real 
impact of the webvr polyfill not working, on Firefox users.  My mention of the 
work implementing WebVR was pointing out that we will hopefully not need to 
worry about the webvr-polyfill working on Gecko-based browsers in the 
not-too-distant future, whenever we have full platform coverage for a real 
WebVR/WebXR implementation.   

What is/was unclear to me is:
- if we’re including iOS in the list of platforms we may want to try and remove 
device orientation from.   This matters insofar as we WON’T be able to implement 
WebVR/WebXR there.
- when WebXR (including “Magic Window” support) will ship in all versions of 
Firefox.  I could “guess” but that’s not useful (I have no control over that).  

But, if we laid out a plan that said “in the short term we’ll do X, which may 
not be ideal, when WebXR is available we’ll do Y”, that might help.  I hope 
that the second step is not too far in the future, but thinking of it that way 
at least doesn’t lock us into “we need to find a satisfactory solution that 
keeps the webxr-polyfill working indefinitely” since it doesn't need to work 
indefinitely.

Please forgive me for the lack of clarity.  And, of course, if that sort of 
approach isn’t acceptable, just say so.


> On Jan 11, 2018, at 12:53 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> 
> On Thu, Jan 11, 2018 at 6:48 PM, Blair MacIntyre <bmacint...@mozilla.com> 
> wrote:
>>> In that case I'm not entirely sure why we'd also pursue new
>>> variants separately.
>> 
>> I’m not sure what this means?
> 
> That if our main usage for the new sensor APIs (those discussed in
> https://github.com/w3ctag/design-reviews/issues/207) is WebVR/XR and
> we don't have any other uses that are compelling enough, and WebVR/XR
> will come with their own APIs for this, there's no reason for us to
> worry about the new sensor APIs.
> 
> 
> -- 
> https://annevankesteren.nl/



Re: Device Orientation API future

2018-01-11 Thread Blair MacIntyre
I don’t understand this comment.  

> On Jan 11, 2018, at 12:50 PM, Martin Thomson <m...@mozilla.com> wrote:
> 
> As Anne said, I don't know why you would define a new API rather than
> enhancing the existing one, other than NIH.  But I guess the damage is
> now done.
> 
> On Fri, Jan 12, 2018 at 4:48 AM, Blair MacIntyre <bmacint...@mozilla.com> 
> wrote:
>>> On Thu, Jan 11, 2018 at 5:30 PM, Blair MacIntyre <bmacint...@mozilla.com> 
>>> wrote:
>>>> First, this discussion pertains to FF on Windows, Mac, Android and Linux, 
>>>> I assume?  FF for iOS just uses the wkWebView and it’s up to Apple to 
>>>> decide on things like this.  Is this correct?
>>> 
>>> I believe there's some tricks we could pull on iOS in theory.
>> 
>> Perhaps.  But is that part of the discussion?  I ask because
>>> 
>>> 
>>>> From a WebVR perspective, the polyfill (that uses device-orientation) 
>>>> defers to the built in WebVR API if it exists.
>>> 
>>> So WebVR/XR has its own equivalents for these APIs? I was not aware of
>>> that.
>> 
>> No, it’s different:  WebVR/XR provide precise 3D orientation and position 
>> (assuming 3D position tracking is available) of the display.  Typically, 
>> that’s a head-worn display (i.e., a Vive or Rift or whatever).  Currently, 
>> WebVR has been implemented only for head-worn displays.  The polyfill 
>> was used to fill in the gaps: providing “VR” on those paper “cardboard” 
>> display holders, for example.
>> 
>> Moving forward, WebXR will include the notion of “Magic Window” displays, 
>> meaning “you’re holding the device in your hands and it tries to give the 
>> appearance of a portal into the virtual or AR world”.  So, “tracked 3D 
>> content”.
>> 
>> For the WebVR/XR API to work, it must provide a super-set of the 
>> device-orientation capabilities to the application.  There are separate 
>> discussions about the security aspects of WebVR/XR:  it will not be 
>> accessible without a permission or user gesture, as this API currently is.
>> 
>> So, it’s not an “equivalent” API, but rather provides the information 
>> needed to do 3D AR/VR directly, without relying on getting device 
>> orientation from this API.
>> 
>> 
>>> In that case I'm not entirely sure why we'd also pursue new
>>> variants separately.
>> 
>> I’m not sure what this means?
>> 
>>> 
>>> 
>>> --
>>> https://annevankesteren.nl/
>> 



Re: Device Orientation API future

2018-01-11 Thread Blair MacIntyre
> On Thu, Jan 11, 2018 at 5:30 PM, Blair MacIntyre <bmacint...@mozilla.com> 
> wrote:
>> First, this discussion pertains to FF on Windows, Mac, Android and Linux, I 
>> assume?  FF for iOS just uses the wkWebView and it’s up to Apple to decide 
>> on things like this.  Is this correct?
> 
> I believe there's some tricks we could pull on iOS in theory.

Perhaps.  But is that part of the discussion?  I ask because 
> 
> 
>> From a WebVR perspective, the polyfill (that uses device-orientation) defers 
>> to the built in WebVR API if it exists.
> 
> So WebVR/XR has its own equivalents for these APIs? I was not aware of
> that.

No, it’s different:  WebVR/XR provide precise 3D orientation and position 
(assuming 3D position tracking is available) of the display.  Typically, that’s a 
head-worn display (i.e., a Vive or Rift or whatever).  Currently, WebVR has 
been implemented only for head-worn displays.  The polyfill was used to fill in 
the gaps: providing “VR” on those paper “cardboard” display holders, for 
example.

Moving forward, WebXR will include the notion of “Magic Window” displays, 
meaning “you’re holding the device in your hands and it tries to give the 
appearance of a portal into the virtual or AR world”.  So, “tracked 3D 
content”.  

For the WebVR/XR API to work, it must provide a super-set of the 
device-orientation capabilities to the application.  There are separate 
discussions about the security aspects of WebVR/XR:  it will not be accessible 
without a permission or user gesture, as this API currently is.

So, it’s not an “equivalent” API, but rather provides the information 
needed to do 3D AR/VR directly, without relying on getting device orientation 
from this API.
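
To make the difference concrete, here is a rough TypeScript sketch of the two data 
shapes (the WebVR 1.1 types may not be in current type libraries, hence the casts; 
this is illustrative only, not anyone’s shipping code):

    // WebVR/XR: a full pose per rendered frame -- orientation as a quaternion,
    // plus a 3D position when position tracking is available.
    async function logVRPose(): Promise<void> {
      const displays: any[] = await (navigator as any).getVRDisplays();
      if (displays.length === 0) return;
      const frameData = new (window as any).VRFrameData();
      displays[0].getFrameData(frameData);
      console.log("orientation:", frameData.pose.orientation);  // quaternion [x, y, z, w]
      console.log("position:", frameData.pose.position);        // null on 3DOF-only devices
    }

    // deviceorientation: three Euler angles in degrees, pushed as events, with no
    // position and (today) no permission or user gesture required.
    window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
      console.log("alpha/beta/gamma:", e.alpha, e.beta, e.gamma);
    });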


> In that case I'm not entirely sure why we'd also pursue new
> variants separately.

I’m not sure what this means?

> 
> 
> -- 
> https://annevankesteren.nl/



Re: Device Orientation API future

2018-01-11 Thread Blair MacIntyre
I’ve been thinking about this since my last message, and I wanted to step back 
and clarify something for myself.

First, this discussion pertains to FF on Windows, Mac, Android and Linux, I 
assume?  FF for iOS just uses the WKWebView and it’s up to Apple to decide on 
things like this.  Is this correct?

From a WebVR perspective, the polyfill (that uses device-orientation) defers to 
the built-in WebVR API if it exists.  We have WebVR on Windows, and will 
hopefully(?) have it on all these other platforms soon-ish (I am unsure of 
timelines for this).  
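
As a rough TypeScript sketch (not the polyfill’s actual internals), the deferral 
amounts to a check like this:

    // If the browser exposes WebVR with at least one display, use it; otherwise
    // fall back to synthesizing orientation from deviceorientation events.
    type OrientationSource = "webvr" | "deviceorientation" | "none";

    async function pickOrientationSource(): Promise<OrientationSource> {
      if ("getVRDisplays" in navigator) {
        const displays: any[] = await (navigator as any).getVRDisplays();
        if (displays.length > 0) return "webvr";
      }
      if ("DeviceOrientationEvent" in window) return "deviceorientation";
      return "none";
    }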

On top of this, device orientation really is mostly used on mobile, so Android 
is likely the main platform of concern here?  Some Windows/Linux machines 
(tablets?) have orientation, I assume (although I haven’t looked at it in a few 
years on such a device), but most don’t.

So is this discussion mostly about Android?  One question might be:  how long 
until we ship WebVR of some form on Android?  Most people don’t have “VR” 
displays for their Android phones (e.g., Daydream, GearVR, etc.), but I’m 
wondering if part of the mitigation we’re thinking about might be to 
have our WebVR implementation work on all devices, not just ones with “real” VR 
support.  The next API (now called WebXR) will include support for what the 
community is calling “Magic Window” VR/AR, which is just AR/VR without a 
head-mounted display.

All of this is to say:  if we assume that within 2018, we could conceivably 
have WebXR of some form available in FF on all our platforms, to handle the 
cases the WebVR polyfill currently handles, does this change this discussion?

The main use will then be for things like panoramic images / 360 video viewers 
— when we aren’t talking “VR”, perhaps throttling might work, or perhaps other 
approaches might be suitable that aren’t appropriate for VR. 


> On Jan 11, 2018, at 6:30 AM, Jonathan Kingston  wrote:
> 
> We have three categories of solutions suggested here:
> - Throttling
> - An explicit gesture to approve using the API
> - A prompt
> 
> We might be able to do some/all of those depending on the situation. Is
> there anything else I have missed that has been suggested?
> 
> I honestly would like to request we do some user studies on different
> content strings for prompts and see if users understand the risks.
> Prompts are a UX pattern that are convention already, inventing new ones
> will take more experimenting to perfect. So if we can find the right
> content where users understand some of the time, this is better than not
> giving users the ability to ever understand where their data is going.
> 
> Thanks
> 
> On Thu, Jan 11, 2018 at 7:20 AM, Anne van Kesteren  wrote:
> 
>> On Thu, Jan 11, 2018 at 5:39 AM, Chris Van Wiemeersch 
>> wrote:
>>> Anne and Martin, can you think of changes to request for the Sensor API
>>> that we would resolve or reasonably improve the existing fingerprinting
>>> concerns?
>> 
>> It sounds like Chrome's approach is throttling, which would probably
>> work, but it doesn't work for WebVR, right? (At which point we're back
>> at looking at a permission prompt and being unsure how to phrase the
>> question.)
>> 
>> 
>> --
>> https://annevankesteren.nl/
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform



Re: Device Orientation API future

2018-01-04 Thread Blair MacIntyre
I’m unclear which side of the line we want to fall on: supporting 
existing sites or requiring them to change.

If we are going to break existing websites, then perhaps looking at the Generic 
Sensor API (as CVan suggests in his email) is a more rational approach;  is it 
implemented in FF yet?  Plans for it?

For device orientation, my assumption (perhaps incorrect?) has been that we’d 
be trying to support existing sites.  The worry I have with the gross motion 
idea is that the UX would be terrible.  Right now, you start a site (VR, 360 
image, etc) and can look around:  in my own experience, the motion is rarely 
fast.  So the site would appear stuck, and the user may never discover it’s 
stuck.

But perhaps, if we’re willing to change the browser itself to provide some 
feedback, this could work (rough sketch after the list below):
- I go to a site, it requests motion
- motion events are not sent initially
- FF waits a bit to see if the user shakes or grossly moves the phone (e.g., 
perhaps a newer site says “shake the phone to activate panoramic viewing”)
- if it does happen, FF slides in/down a message saying something similar 
(e.g., “site wants to watch the movements of your device;  shake phone or click 
yes to confirm, no to deny”)
- if shake doesn’t happen in short time, or they click no, that’s that.  If 
they shake or click yes, motion is used.
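
Purely to illustrate that gating logic in TypeScript — in reality it would live in 
the browser, not in page script, and the thresholds below are made up:

    const SHAKE_THRESHOLD = 15;    // m/s^2; gravity alone is ~9.8, so this needs a real shake
    const WAIT_WINDOW_MS = 10000;  // how long to wait for the gesture before giving up

    function gateMotionOnShake(onGranted: () => void, onDenied: () => void): void {
      let settled = false;
      const probe = (e: DeviceMotionEvent) => {
        if (settled) return;
        const a = e.accelerationIncludingGravity;
        const magnitude = a ? Math.hypot(a.x ?? 0, a.y ?? 0, a.z ?? 0) : 0;
        if (magnitude > SHAKE_THRESHOLD) {
          settled = true;
          window.removeEventListener("devicemotion", probe);
          onGranted();  // this is where the browser would slide in its confirmation message
        }
      };
      window.addEventListener("devicemotion", probe);
      setTimeout(() => {
        if (settled) return;
        settled = true;
        window.removeEventListener("devicemotion", probe);
        onDenied();     // no shake (or an explicit "no"): keep withholding events
      }, WAIT_WINDOW_MS);
    }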

But, perhaps this flow is too confusing.  

For the perms API, I imagined it might just work with devicemotion:  setting up 
the callback could trigger a perms request, and the data would only start 
flowing on acceptance.  
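
Something like this sketch, where requestMotionPermission() is entirely hypothetical 
(no such hook exists today) and just stands in for whatever the Permissions API 
might grow:

    // Hypothetical: no such permission hook exists today.
    declare function requestMotionPermission(): Promise<"granted" | "denied">;

    type MotionHandler = (e: DeviceMotionEvent) => void;

    async function enableMotion(handler: MotionHandler): Promise<boolean> {
      if ((await requestMotionPermission()) !== "granted") return false;
      window.addEventListener("devicemotion", handler);  // data only flows after acceptance
      return true;
    }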

> On Jan 3, 2018, at 11:52 PM, Martin Thomson <m...@mozilla.com> wrote:
> 
> On Thu, Jan 4, 2018 at 1:09 PM, Blair MacIntyre <bmacint...@mozilla.com> 
> wrote:
>> We could chat about it, sure;  how do you envision it working without 
>> breaking old websites?
> 
> With the understanding that this is purely spitballing...
> 
> We would stop providing events (or provide them with extremely low
> frequency [1]), but if the currently focused context has an event
> handler registered for orientation events, we would enable events once
> the orientation changes by a large amount or quickly.  The thresholds
> might need some tuning, but a shake or large movement should work.
> 
> That means that sites that expect and only receive subtle movement
> would stop receiving events.  Sites that don't receive input focus
> would stop receiving events (that prevents an embedded iframe from
> getting events).  But sites that legitimately use the API will only
> appear to be a little "sticky" initially.  We might also persist this
> "implicit" permission to remove that stickiness for sites that are
> used often (or reduce the activation thresholds over repeat visits).
> 
> We should also look at getting a hook into the permission API so that
> the current state can be queried.  But that API doesn't really
> understand this sort of model, so that might be tricky to work out.



Re: Device Orientation API future

2018-01-03 Thread Blair MacIntyre
We could chat about it, sure;  how do you envision it working without breaking 
old websites?

> On Jan 3, 2018, at 5:43 PM, Martin Thomson <m...@mozilla.com> wrote:
> 
> On Thu, Jan 4, 2018 at 2:52 AM, Blair MacIntyre <bmacint...@mozilla.com> 
> wrote:
>> I was more concerned about the idea (or, at least what I thought might be
>> suggested) that you only get orientation if they give location permission.
>> This seems overkill:  even if I know what the data means, I can see uses of
>> orientation that I’d be comfortable with but that I wouldn’t be comfortable
>> giving my geolocation.  that’s all I was talking about.
> 
> I guess that someone needs to work out how to control access to
> orientation without invoking that prompt then.  I think that we could
> easily give access to orientation with geolocation, but I can see that
> there are plenty of cases for orientation *without* geolocation.
> Could we explore the gross movement idea some more?



Re: Device Orientation API future

2018-01-03 Thread Blair MacIntyre
I would tend to think that GPS location is more sensitive than device 
orientation, and so exposing device orientation if they’ve given location perms 
seems like a good avenue to explore, yes.

I was more concerned about the idea (or, at least what I thought might be 
suggested) that you only get orientation if they give location permission.  
This seems overkill:  even if I know what the data means, I can see uses of 
orientation that I’d be comfortable with but that I wouldn’t be comfortable 
giving my geolocation.  That’s all I was talking about.

> On Jan 3, 2018, at 10:48 AM, Jonathan Kingston <j...@mozilla.com> wrote:
> 
> When the language for the permission prompt isn't going to be clear about 
> what the user is exposing (screen, camera and mic) we should be talking about 
> risks.
> 
> For GPS we only ever talk about "location", I still don't think that is a far 
> stretch from head/position tracking.
> 
> On Wed, Jan 3, 2018 at 2:47 PM, Blair MacIntyre <bmacint...@mozilla.com 
> <mailto:bmacint...@mozilla.com>> wrote:
> I don’t think tying orientation to GPS is really a viable approach.
> 
> The main use case for the orientation API, I think, is not AR;  it’s 360 
> images and videos, and “cardboard VR”, right now.
> 
> > On Jan 1, 2018, at 5:01 PM, Martin Thomson <m...@mozilla.com 
> > <mailto:m...@mozilla.com>> wrote:
> >
> > The suggestion that was made in the past was to tie orientation to
> > geolocation.  I think that this would be obvious enough to pass.
> > Orientation is basically a refinement of position.  It clearly makes
> > sense for AR applications.  Pure VR applications might only care about
> > relative orientation and so might suffer a little.
> >
> > I realize that friction is always a concern, but the amount of
> > side-channel information that leaks through the API is hard to ignore.
> > I think that a prompt is wise, while we investigate ways in which we
> > might improve the UX.
> >
> > For instance, we could attempt to interpret gross movement as a
> > deliberate indication of intent.  Then sites could use this to
> > implement their own permission process ("shake your phone/head to
> > start").
> >
> > On Fri, Dec 22, 2017 at 2:52 AM, Jonathan Kingston <j...@mozilla.com 
> > <mailto:j...@mozilla.com>> wrote:
> >> Following the intent to deprecate filed on Sunday for the Ambient Light and
> >> Proximity sensor APIs
> >> <https://groups.google.com/forum/#!topic/mozilla.dev.platform/DcSi_wLG4fc 
> >> <https://groups.google.com/forum/#!topic/mozilla.dev.platform/DcSi_wLG4fc>>,
> >> we propose to discuss the future of the Device Orientation API.
> >>
> >> DeviceOrientation
> >> <https://w3c.github.io/deviceorientation/spec-source-orientation.html 
> >> <https://w3c.github.io/deviceorientation/spec-source-orientation.html>>
> >> (deviceorientation, deviceorientationabsolute, and devicemotion events) has
> >> far more usage than the other two sensor APIs and so we need to be more
> >> careful with it to prevent breakage.
> >>
> >> Currently this API is restricted to first party domain scripts only,
> >> however Chrome has filed an intent to ship to have a feature policy to
> >> enable this in third party scripts
> >> <https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/RX0GN4PyCF8/6XVhJ_oTCgAJ
> >>  
> >> <https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/RX0GN4PyCF8/6XVhJ_oTCgAJ>>.
> >> This would mean that advertisements and others would have unrestricted
> >> access to the users sensor information assuming they’re included through an
> >> iframe with the relevant allow attribute set.
> >>
> >> Risks
> >>
> >> Some of the keylogging risks are outlined in papers [1] and [2], however
> >> there are also risks of the user being identified by physical or
> >> environmental factors like mapping the swing of the device to walking gait
> >> patterns and the angle and shaking of the phone to match to patterns in
> >> altitude and terrain type.
> >>
> >> The current API provides unprompted floating point precision of sensor data
> >> at 60hz to the website.
> >>
> >> Generic sensor API
> >>
> >> These APIs are being replaced by the work on the generic sensor API as
> >> outlined in the following TAG thread
> >> <https://github.com/w3ctag/design-reviews/issues/207 
> >> <https://github.com/w3ctag/design-reviews/issues/207>&g

Re: Device Orientation API future

2018-01-03 Thread Blair MacIntyre
I don’t think tying orientation to GPS is really a viable approach.

The main use case for the orientation API, I think, is not AR;  it’s 360 images 
and videos, and “cardboard VR”, right now.  

> On Jan 1, 2018, at 5:01 PM, Martin Thomson  wrote:
> 
> The suggestion that was made in the past was to tie orientation to
> geolocation.  I think that this would be obvious enough to pass.
> Orientation is basically a refinement of position.  It clearly makes
> sense for AR applications.  Pure VR applications might only care about
> relative orientation and so might suffer a little.
> 
> I realize that friction is always a concern, but the amount of
> side-channel information that leaks through the API is hard to ignore.
> I think that a prompt is wise, while we investigate ways in which we
> might improve the UX.
> 
> For instance, we could attempt to interpret gross movement as a
> deliberate indication of intent.  Then sites could use this to
> implement their own permission process ("shake your phone/head to
> start").
> 
> On Fri, Dec 22, 2017 at 2:52 AM, Jonathan Kingston  wrote:
>> Following the intent to deprecate filed on Sunday for the Ambient Light and
>> Proximity sensor APIs
>> ,
>> we propose to discuss the future of the Device Orientation API.
>> 
>> DeviceOrientation
>> 
>> (deviceorientation, deviceorientationabsolute, and devicemotion events) has
>> far more usage than the other two sensor APIs and so we need to be more
>> careful with it to prevent breakage.
>> 
>> Currently this API is restricted to first party domain scripts only,
>> however Chrome has filed an intent to ship to have a feature policy to
>> enable this in third party scripts
>> .
>> This would mean that advertisements and others would have unrestricted
>> access to the users sensor information assuming they’re included through an
>> iframe with the relevant allow attribute set.
>> 
>> Risks
>> 
>> Some of the keylogging risks are outlined in papers [1] and [2], however
>> there are also risks of the user being identified by physical or
>> environmental factors like mapping the swing of the device to walking gait
>> patterns and the angle and shaking of the phone to match to patterns in
>> altitude and terrain type.
>> 
>> The current API provides unprompted floating point precision of sensor data
>> at 60hz to the website.
>> 
>> Generic sensor API
>> 
>> These APIs are being replaced by the work on the generic sensor API as
>> outlined in the following TAG thread
>> , though it’s
>> currently unclear how to properly deal with the risks of sensors other than
>> throttling. It’s unclear that throttling sufficiently addresses the risks
>> and also makes them a poor choice for VR.
>> 
>> Chrome has stated their plan for the UX of the generic sensor API
>> 
>> and it doesn’t address the unprompted access to sensors, nor do we feel
>> showing a new indicator about sensor usage goes far enough to mitigate the
>> risk.
>> 
>> We feel that Firefox should prompt users in some manner when accessing
>> granular sensor information. Until these concerns are mitigated it seems we
>> shouldn’t open up access to these sensors via a feature policy to third
>> parties.
>> 
>> Ideas to reduce user risk from the current API:
>> 
>> - Dialling down the precision of this event or frequency it is fired from
>> 60hz to 5hz however this would limit it’s usage in Web VR.
>> 
>> - Restrict to secure contexts; this reduces some risk in particular with
>> man-in-the-middle proxies that modify traffic, but is not going to address
>> the overall issue on its own
>> 
>> - We could place these events behind a permission prompt preventing drive
>> by usage; a big problem with this suggestion is that it’s unclear what to
>> ask the user
>> 
>> - Restrict access to only the active tab
>> 
>> Kind regards,
>> 
>> Anne van Kesteren, Jonathan Kingston, and Frederik Braun
>> 
>> [1] https://www.usenix.org/legacy/event/hotsec11/tech/final_files/Cai.pdf
>> 
>> [2] https://dl.acm.org/citation.cfm?id=2714650
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform



Re: Device Orientation API future

2017-12-21 Thread Blair MacIntyre
(I’m CC:ing a few of the MR team folks, although I’m sure many of them are on 
the dev-platform list already).

Speaking for myself (not the official stance of the WebVR team), I share these 
concerns and like the trajectory you suggest.  

I would HOPE that in the long run (some number of years?) all browsers will 
support the WebVR APIs, and new uses of this API for VR will stop needing this 
API.  But that will take some time, and (unfortunately) ignores that there are 
many sites that currently use this to do “3 degree of freedom” (orientation 
only) AR views.  Eliminating the API breaks all of these sites and experiences.  
This includes non-VR uses like looking at panoramic photos or images.

Of the 4 suggestions at the bottom, the first seems problematic (as you say, it 
doesn’t really eliminate the problem, and breaks the WebVR usage).  But the 
other three seem reasonable.

Adding a permission prompt would increase friction, perhaps, and it’s not clear 
what to ask (“This website wants to use the motion of your device”?).  

I’ll let others chime in.

> On Dec 21, 2017, at 10:52 AM, Jonathan Kingston  wrote:
> 
> Following the intent to deprecate filed on Sunday for the Ambient Light and
> Proximity sensor APIs
> ,
> we propose to discuss the future of the Device Orientation API.
> 
> DeviceOrientation
> 
> (deviceorientation, deviceorientationabsolute, and devicemotion events) has
> far more usage than the other two sensor APIs and so we need to be more
> careful with it to prevent breakage.
> 
> Currently this API is restricted to first party domain scripts only,
> however Chrome has filed an intent to ship to have a feature policy to
> enable this in third party scripts
> .
> This would mean that advertisements and others would have unrestricted
> access to the users sensor information assuming they’re included through an
> iframe with the relevant allow attribute set.
> 
> Risks
> 
> Some of the keylogging risks are outlined in papers [1] and [2], however
> there are also risks of the user being identified by physical or
> environmental factors like mapping the swing of the device to walking gait
> patterns and the angle and shaking of the phone to match to patterns in
> altitude and terrain type.
> 
> The current API provides unprompted floating point precision of sensor data
> at 60hz to the website.
> 
> Generic sensor API
> 
> These APIs are being replaced by the work on the generic sensor API as
> outlined in the following TAG thread
> , though it’s
> currently unclear how to properly deal with the risks of sensors other than
> throttling. It’s unclear that throttling sufficiently addresses the risks
> and also makes them a poor choice for VR.
> 
> Chrome has stated their plan for the UX of the generic sensor API
> 
> and it doesn’t address the unprompted access to sensors, nor do we feel
> showing a new indicator about sensor usage goes far enough to mitigate the
> risk.
> 
> We feel that Firefox should prompt users in some manner when accessing
> granular sensor information. Until these concerns are mitigated it seems we
> shouldn’t open up access to these sensors via a feature policy to third
> parties.
> 
> Ideas to reduce user risk from the current API:
> 
> - Dialling down the precision of this event or frequency it is fired from
> 60hz to 5hz however this would limit it’s usage in Web VR.
> 
> - Restrict to secure contexts; this reduces some risk in particular with
> man-in-the-middle proxies that modify traffic, but is not going to address
> the overall issue on its own
> 
> - We could place these events behind a permission prompt preventing drive
> by usage; a big problem with this suggestion is that it’s unclear what to
> ask the user
> 
> - Restrict access to only the active tab
> 
> Kind regards,
> 
> Anne van Kesteren, Jonathan Kingston, and Frederik Braun
> 
> [1] https://www.usenix.org/legacy/event/hotsec11/tech/final_files/Cai.pdf
> 
> [2] https://dl.acm.org/citation.cfm?id=2714650
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform



Re: Device orientation/motion events privacy issues

2017-09-22 Thread Blair MacIntyre
>>> We discussed this a bit with Anne on IRC.  It seems like this API is a good 
>>> use case for a permission prompt to the user.  Since the API works by 
>>> registering an event listener, the only realistic option seems to be 
>>> Permission.request() before registering the event listeners.  Unfortunately 
>>> it seems that a while ago we have pushed back on this API 
>>> , but it seems that this use 
>>> case wasn't considered back then.  Anne said he'll look into opening up 
>>> that discussion again to see if we can use a permission prompt for this API…
>> I’d love to see a discussion about this — I’ve been thinking about the 
>> question of “informed consent” by users to this “less obviously problematic” 
>> data (to a typical person:  it seems more obvious why geolocation might be 
>> more of a problem than device orientation) in the context of augmented 
>> reality on the web.   We’re also thinking about other data that might be 
>> exposed eventually by AR sensors.
>> 
>> But in theory, for the AR/VR use cases, I’m not against asking user 
>> permission:  for me, one of the strengths of doing AR/VR on the web is the 
>> fact that the UA can give users control over what data each site/experience 
>> has access to.  I’d actually love to go further, and allow users to see 
>> what’s being used and toggle it on/off while the experience is running (we 
>> experimented with location access in the Argon4 AR web browser this summer, 
>> letting the user toggle location access on/off easily without reloading the 
>> page).
>> 
>> One question I would have is how to deal with permission fatigue.  If an 
>> AR/VR app generates 3 or 4 separate permission requests (location, 
>> deviceorientation, camera, and perhaps other sensors eventually), is it 
>> possible to think about how to aggregate these into one or more groups that 
>> might also explain to users why all of these are needed?  (“AR applications 
>> need access to camera, location and orientation”)   (I’m not sure if this 
>> has been talked about in the past).
> 
> Yeah this has come up in the past.  The difficulty at the API level is that 
> these things all have different API entry points that trigger the permission 
> prompt, so at the time that the browser is about to prompt for permission X 
> it can't predict that a page may soon prompt for permission Y.  We could 
> however build our permission prompt UI in a way that could deal with multiple 
> prompts in a better way than showing individual door hangers.
> 
> But since we already have various different permission prompts, this is an 
> existing problem, so solving it shouldn't be a prerequisite for the 
> discussion in the current thread.  :-)

Good points, I agree, it shouldn’t be a blocker … it just seemed like this 
might be one of those situations that really exacerbates it.  

>> But I wonder, at some point, what apps will still need this?  If AR/VR 
>> don’t, and apps like “viewing 360 video or panoramic images” can use the 
>> AR/VR APIs to access the data … there might be a lot more we can do with 
>> this API at that point, to reduce its impact.
> Games are a common use case also AFAIK.
> 
> What kinds of modifications to the API did you have in mind?


That last statement was mostly aimed at the idea that “if the applications that 
require high-resolution data, perhaps including games, could shift to using 
WebVR/WebAR APIs to get at orientation, then filtered data, or 
permission prompts, would impact fewer of the use cases”.

Beyond that, I was mostly just thinking about the things that have been 
suggested
- permission prompts
- perhaps allowing pages to request lower fidelity data, or reduced data:  if 
I’m a game that really just wants to tilt left/right, perhaps I can just 
request that?  Or (as suggested) reduce frequency.   The location API has 
multiple levels of fidelity you can request.
- since the APIs are often used for relative motion, perhaps we could add a 
random starting orientation delta, so they don’t align with the world.  

Not sure how feasible/useful they are, or what the programmer ergonomics 
would be, but it certainly reduces the data being exposed.  And 
perhaps some lower-precision, or “offset / filtered”, versions could be returned 
without user permission prompts, while others could result in prompts.
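
As a toy TypeScript sketch of the last two ideas (the offset, the 5 Hz figure, and 
the handler name are all invented for illustration, not a proposal):

    const HEADING_OFFSET = Math.random() * 360;  // fixed random yaw offset, chosen per page load
    const MIN_INTERVAL_MS = 200;                 // deliver at ~5 Hz instead of ~60 Hz
    let lastDelivery = 0;

    window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
      const now = performance.now();
      if (now - lastDelivery < MIN_INTERVAL_MS) return;  // drop the extra samples
      lastDelivery = now;
      // Offset the compass heading so the page never sees true world alignment,
      // while relative motion still works.
      const alpha = e.alpha === null ? null : (e.alpha + HEADING_OFFSET) % 360;
      handleReducedOrientation(alpha, e.beta, e.gamma);
    });

    // Hypothetical consumer of the reduced-fidelity data.
    declare function handleReducedOrientation(
      alpha: number | null, beta: number | null, gamma: number | null): void;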




Re: Device orientation/motion events privacy issues

2017-09-22 Thread Blair MacIntyre
>> What's the reason for this? I don't know for sure, but it may be necessary 
>> for things like AR/VR to have higher resolution than that.
> The reason is to limit the frequency of sensor data the web application 
> receives to allow it to guesstimate the changes to the device position to 
> limit how accurately it can guess how the device is being used.  It was just 
> an idea I copied from the spec for discussion, not sure if it is effective or 
> not really.
> 
> We discussed this a bit with Anne on IRC.  It seems like this API is a good 
> use case for a permission prompt to the user.  Since the API works by 
> registering an event listener, the only realistic option seems to be 
> Permission.request() before registering the event listeners.  Unfortunately 
> it seems that a while ago we have pushed back on this API 
> , but it seems that this use 
> case wasn't considered back then.  Anne said he'll look into opening up that 
> discussion again to see if we can use a permission prompt for this API…

I’d love to see a discussion about this — I’ve been thinking about the question 
of “informed consent” by users to this “less obviously problematic” data (to a 
typical person:  it seems more obvious why geolocation might be more of a 
problem than device orientation) in the context of augmented reality on the 
web.   We’re also thinking about other data that might be exposed eventually by AR 
sensors. 

But in theory, for the AR/VR use cases, I’m not against asking user permission: 
 for me, one of the strengths of doing AR/VR on the web is the fact that the 
UA can give users control over what data each site/experience has access to.  
I’d actually love to go further, and allow users to see what’s being used and 
toggle it on/off while the experience is running (we experimented with location 
access in the Argon4 AR web browser this summer, letting the user toggle 
location access on/off easily without reloading the page).

One question I would have is how to deal with permission fatigue.  If an AR/VR 
app generates 3 or 4 separate permission requests (location, deviceorientation, 
camera, and perhaps other sensors eventually), is it possible to think about 
how to aggregate these into one or more groups that might also explain to users 
why all of these are needed?  (“AR applications need access to camera, location 
and orientation”)   (I’m not sure if this has been talked about in the past).

(For those who are interested: a group of us over in Emerging Technology have 
been working on expanding WebVR to include AR/MR use cases; we’re documenting 
this “WebXR” proposal in github.com/mozilla/webxr-api, and building a sample 
implementation in github.com/mozilla/webxr-polyfill. We’ve also got a sample 
iOS app, for demonstrating/using it on iOS, that leverages ARKit, in 
github.com/mozilla/webxr-ios).

I mention this because, at some point in the future, there will hopefully be VR 
and AR/MR APIs (perhaps via this WebXR proposal, perhaps via a different one) 
in most browsers, and at that point, the uses for this API may diminish.  
These events are still used by various polyfills (e.g., the WebVR polyfill uses 
deviceorientation), since WebVR isn’t in all browsers yet, and will be used by 
WebXR and other similar efforts for some time as well.  

But I wonder, at some point, what apps will still need this?  If AR/VR don’t, 
and apps like “viewing 360 video or panoramic images” can use the AR/VR APIs to 
access the data … there might be a lot more we can do with this API at that 
point, to reduce its impact.

 



Re: security problems [WAS: Intent to remove: sensor APIs]

2017-08-02 Thread Blair MacIntyre
> At least these things should be purely optional and providing an
> *easy* way to filter that data. (same for the geolocation stuff).


FWIW, I wouldn’t mind being involved in a discussion about this, if people want 
to seriously consider putting it behind a “user-permission prompt” (similar to 
geolocation) or “user-action requirement” (similar to WebVR and some aspects of 
mobile video playing) of some sort.  

There has been discussion of this issue in the WebVR community, for example, 
noting that in WebVR, you don’t get any device reports without a  user action 
requesting the “VR”.  But, there is the tension between making the APIs usable, 
permission fatigue on the part of users, etc.  On top of that, there is very 
likely a need to not just “ask once at the start” but toggle access to 
sensitive info on/off as the user uses a web app (e.g., in the experimental 
Argon4 “AR-enabled” web browser, we have the ability to toggle location data 
on/off at any time without having to reload).

I think as we move toward exposing AR technology (like Tango, ARKit, Windows 
Holographic) in web user-agents, we may need to rethink how we obtain and 
manage the data users give to pages.  

I want the web to work well in these new application areas;  but I also want 
the characteristics of the web we love (i.e., the ability to feel relatively 
safe as you move around and follow links) to survive as well.  I believe that 
respecting user privacy and supporting their ability to control information 
flow may actually be the thing that makes the web a preferred platform for 
AR/VR, since the various platforms are giving all data to apps automatically, 
which creates a “take it or leave it” attitude regarding privacy and sensor 
information.   This is a major driver for me for how a WebAR API may be structured.

Anyway, if folks want to discuss this, let me know.  We should probably move 
off this thread?


Re: Intent to remove: sensor APIs

2017-08-02 Thread Blair MacIntyre
> On Wed, Aug 2, 2017 at 4:39 PM, Blair MacIntyre <bmacint...@mozilla.com> 
> wrote:
>> Are we still talking about deviceorientation?
> 
> As I said twice and Frederik repeated, we're not, other than asking if
> anyone has a plan for how to make it interoperable.

Yes, I know;  I was just responding to Enrico’s question. :)

> Note that it's far
> from a W3C standard: https://www.w3.org/TR/orientation-event/. Doesn't
> seem like anything got approved there.

Fair enough, sorry for my incorrect assertion, I should have checked!   So, 
this is an example of the danger, perhaps, of everybody implementing a working 
proposal before it’s approved, and then having it adopted and used widely.



Re: Intent to remove: sensor APIs

2017-08-02 Thread Blair MacIntyre
Are we still talking about deviceorientation?

It’s used to determine the 3D orientation of the device, so that we can tell 
the direction it is facing.  Developers use it to render 3D graphics (WebGL, or 
CSS3D using a perspective DIV) around the user.  E.g., look at one of my project 
samples, like https://samples.argonjs.io/directionsWebGL, which uses device 
location and deviceorientation (this simple sample puts 3D labels in the 
cardinal directions, and uses the position to illuminate them based on the 
current sun location).  The WebVR polyfill uses it to determine viewing 
direction, to simulate 3D device orientation.

It’s used for panoramic image viewing (orient the pano with the camera 
movement), and Google Street View uses it for similar motion control. 
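
A minimal TypeScript sketch of that pattern (real viewers convert alpha/beta/gamma 
into a proper rotation matrix and handle screen orientation; the element id and the 
angle mapping here are just for illustration):

    // Rotate a "panorama" element as the device rotates.
    const pano = document.getElementById("pano");  // hypothetical element id
    if (pano && pano.parentElement) {
      pano.parentElement.style.perspective = "500px";
      window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
        const yaw = e.alpha ?? 0;           // compass-style heading, in degrees
        const pitch = (e.beta ?? 0) - 90;   // treat "held upright" as level
        pano.style.transform = `rotateX(${pitch}deg) rotateY(${-yaw}deg)`;
      });
    }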

Regarding security:  perhaps it is, I have seen discussions of this sort.  But, 
it would seem that ship sailed when the W3C approved it, and now it’s common 
and assumed and relied upon. Removing it in Firefox would render Firefox 
incompatible with a growing use of the web, especially mobile (including 
Windows tablets).  This might be a discussion the security team wants to have, 
I guess:  is the Firefox team worried enough about the threats opened by this 
API to justify breaking a large class of applications, and making Firefox 
unusable for AR/VR moving forward?

> On Aug 2, 2017, at 9:54 AM, Enrico Weigelt, metux IT consult 
>  wrote:
> 
> On 02.08.2017 13:01, Salvador de la Puente wrote:
>> I strongly encourage you to take a look at the telemetry stats regarding
>> the usage of deviceorientation API and other. I don't know the penetration
>> of proximity and ambient light APIs but deviceorientation is definitively
>> used.
> 
> Just curious: for what exactly is it used ?
> 
> For rendering / GUI, I'd assume that the job of the windowing system /
> compositor - the application would just see a window geometry change.
> 
> Making that information visible to websites (even worse: movement
> tracking via g-sensor, etc), definitively looks like security nightmare
> which even the Stasi never dared dreaming of.
> 
> --mtx
> 



Re: Intent to remove: sensor APIs

2017-07-24 Thread Blair MacIntyre
I’m not sure what you’re asking:  I’ve been using the deviceorientation API 
like this for many years, as have plenty of other people.  It’s absolutely 
needed.
 
--
Blair MacIntyre
Principal Research Scientist
bmacint...@mozilla.com <mailto:bmacint...@mozilla.com>




> On Jul 24, 2017, at 8:04 PM, Enrico Weigelt, metux IT consult 
> <enrico.weig...@gr13.net <mailto:enrico.weig...@gr13.net>> wrote:
> 
> On 24.07.2017 20:46, Blair MacIntyre wrote:
> 
>> We are working on adding AR capabilities to the browser, and this will 
>> (similarly)
>> need to know device orientation.
> 
> Please make sure, we can easily compile completely w/o that.
> 
> 
> --mtx
> 



Re: Intent to remove: sensor APIs

2017-07-24 Thread Blair MacIntyre
On Jul 24, 2017, at 4:38 PM, Enrico Weigelt, metux IT consult 
 wrote:
> 
> On 24.07.2017 15:07, Mike Hoye wrote:
>> 
>> I have a sense that as AR gets richer and more nuanced that ambient
> 
> Are we still talking about browsers ?


Yes.  

There are plenty of websites that use deviceorientation, for example to let you 
pan around a panoramic photo or a 3D “scene” using the orientation of your 
mobile device.  

Most websites that use WebVR prefer to use “real” WebVR APIs, but on most 
mobile browsers (which don’t support them) they use deviceorientation to provide a 
“3DOF” (orientation-only) view of the world.

We are working on adding AR capabilities to the browser, and this will 
(similarly) need to know device orientation.




Re: Intent to remove: sensor APIs

2017-07-24 Thread Blair MacIntyre
True, true.  For example, if the ambient light sensing could deliver the kind 
of “estimation of ambient lighting” that Apple’s ARKit does, we could use that 
in rendering.

But, one question will be:  which of these capabilities should just be part of 
“WebAR”, and which can be used effectively independent of WebAR?  Especially 
when we think of issues discussed on this thread (e.g., threat models and 
security), having “things needed for AR” folded into WebAR, and accessible in 
the context of whatever permission model WebAR ends up using, may be the way we 
want to go.

--
Blair MacIntyre
Principal Research Scientist
bmacint...@mozilla.com <mailto:bmacint...@mozilla.com>




> On Jul 24, 2017, at 11:07 AM, Mike Hoye <mh...@mozilla.com 
> <mailto:mh...@mozilla.com>> wrote:
> 
> 
> I have a sense that as AR gets richer and more nuanced that ambient light and 
> proximity sensing will become important as well, even if we're not there yet.
> 
> - mhoye
> 
> On 2017-07-24 10:39 AM, Blair MacIntyre wrote:
>> I was just about to say the same thing.  This API is essential for our AR 
>> work;  the fact that Firefox is different than other browsers is 
>> problematic, but there are javascript libraries that help with it.  Getting 
>> rid of it would be really bad.
>> 
>>> On Jul 24, 2017, at 9:57 AM, Ben Kelly <bke...@mozilla.com 
>>> <mailto:bke...@mozilla.com>> wrote:
>>> 
>>> On Mon, Jul 24, 2017 at 5:10 AM, Anne van Kesteren <ann...@annevk.nl 
>>> <mailto:ann...@annevk.nl>> wrote:
>>> 
>>>> * Device orientation
>>>> 
>>> Isn't this one required to build a decent web experience on mobile for some
>>> sites?  It seems pretty common on mobile to adjust the UX based on whether
>>> the device is in portrait/landscape orientation.
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org <mailto:dev-platform@lists.mozilla.org>
>>> https://lists.mozilla.org/listinfo/dev-platform
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org <mailto:dev-platform@lists.mozilla.org>
>> https://lists.mozilla.org/listinfo/dev-platform
> 
> 



Re: Intent to remove: sensor APIs

2017-07-24 Thread Blair MacIntyre
I was just about to say the same thing.  This API is essential for our AR work; 
 the fact that Firefox is different from other browsers is problematic, but 
there are JavaScript libraries that help with it.  Getting rid of it would be 
really bad.

> On Jul 24, 2017, at 9:57 AM, Ben Kelly  wrote:
> 
> On Mon, Jul 24, 2017 at 5:10 AM, Anne van Kesteren  wrote:
> 
>> * Device orientation
>> 
> 
> Isn't this one required to build a decent web experience on mobile for some
> sites?  It seems pretty common on mobile to adjust the UX based on whether
> the device is in portrait/landscape orientation.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
