Re: [webvr] [gamepad] Missing VRPose for tracked controllers

2016-05-24 Thread Florian Bösch
On Tue, May 24, 2016 at 8:50 PM, Brandon Jones  wrote:
>
> but Yaw probably just initializes to whatever position the user started
> with.
>
Applications for these kinds of controllers usually have some "reset
forward" mechanism. Sometimes the device settings handle it (as is the case
with the Oculus HMD), sometimes it's up to the application's programmers to
do it (as was the case with the DK1 and DK2). The Razer Hydra had a
calibration utility. I'd expect Sixense to come with an SDK where users
can configure their space.
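
For concreteness, a minimal sketch of such a "reset forward" step, assuming a
Y-up coordinate system and an [x, y, z, w] quaternion layout (both are
assumptions, and the helper names are illustrative):

// Capture the current yaw so that "forward" becomes wherever the user was
// pointing when the reset was triggered.
let yawOffset = 0;

// Yaw about the Y axis; exact for pure-yaw orientations.
function yawFromQuaternion(q: Float32Array): number {
  const [x, y, z, w] = q;
  return Math.atan2(2 * (w * y + x * z), 1 - 2 * (y * y + z * z));
}

function resetForward(currentOrientation: Float32Array): void {
  yawOffset = yawFromQuaternion(currentOrientation);
}

function correctedYaw(currentOrientation: Float32Array): number {
  return yawFromQuaternion(currentOrientation) - yawOffset;
}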


Re: [webvr] Re: [gamepad] Missing VRPose for tracked controllers

2016-05-24 Thread Florian Bösch
You should discuss this on the WebVR mailing list as well; it's the main
communication channel for this topic, and since it affects their spec it
should be coordinated.

On Tue, May 24, 2016 at 9:22 AM, Sven Neuhaus <sven...@sven.de> wrote:

> Hello Florian,
>
> Thanks for pointing out the WebVR spec draft.
>
> The WebVR draft dated April 1st contains a Gamepad interface expansion
> (§2.11); however, it only extends it with a displayId.
> It should also add a VRPose for tracked controllers.
>
> I think adding a VRPose could benefit non-VR applications as well
> (think of the Nintendo Wii controllers!), so my suggestion to add it to
> the Gamepad API still stands.
>
> Regards,
> -Sven Neuhaus
>
> Am 23.05.2016 um 15:52 schrieb Florian Bösch:
> > The WebVR API models HMD pose and will model the gesture controllers.
> > https://mozvr.com/webvr-spec/
> >
> > On Tue, May 17, 2016 at 9:41 AM, Sven Neuhaus <sven...@sven.de> wrote:
>
> > I read the Gamepad API description at
> > https://developer.mozilla.org/en-US/docs/Web/API/Gamepad
> >
> > I think the Gamepad API should support a VRPose for gamepad controllers
> > like the ones included with the HTC Vive and the upcoming Oculus Touch
> > controllers.
> >
> > I suggest that you add a getPose() method that returns a VRPose object
> > for controllers that support tracking.
> >
> > The "orientation" property of the VRPose object could also be useful
> > for some gamepads that include IMUs for orientation tracking.
>
>


Re: [gamepad] Missing VRPose for tracked controllers

2016-05-23 Thread Florian Bösch
The WebVR API models HMD pose and will model the gesture controllers.
https://mozvr.com/webvr-spec/

On Tue, May 17, 2016 at 9:41 AM, Sven Neuhaus  wrote:

> Hello,
>
> I read the gamepad API description at
> https://developer.mozilla.org/en-US/docs/Web/API/Gamepad
>
> I think the Gamepad API should support a VRPose for gamepad controllers
> like the ones included with the HTC Vive and the upcoming Oculus Touch
> controllers.
>
> I suggest that you add a getPose() method that returns a VRPose object
> for controllers that support tracking.
>
> The "orientation" property of the VRPose object could also be useful for
> some gamepads that include IMUs for orientation tracking.
>
> Best regards,
> -Sven Neuhaus
>
>
>
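
A purely illustrative sketch of the shape being suggested here; neither
getPose() nor this VRPose layout is part of the shipped Gamepad API, and the
names simply mirror the WebVR draft:

// Hypothetical extension of the Gamepad interface for tracked controllers.
interface VRPose {
  position: Float32Array | null;        // [x, y, z] in metres
  orientation: Float32Array | null;     // quaternion [x, y, z, w]
  linearVelocity: Float32Array | null;
  angularVelocity: Float32Array | null;
}

interface TrackedGamepad extends Gamepad {
  getPose(): VRPose | null;             // null while the controller is untracked
}

function pollController(pad: TrackedGamepad): void {
  const pose = pad.getPose();
  if (pose && pose.orientation) {
    // feed orientation (and position, if available) into the app's controller model
  }
}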


Re: [gamepad] New feature proposals: pose, touchpad, vibration

2016-04-25 Thread Florian Bösch
On Mon, Apr 25, 2016 at 4:31 AM, Chris Van Wiemeersch 
wrote:
>
> If you take a look at all the content libraries out there for the Gamepad
> API, there's a ridiculous amount of logic and special casing web developers
> are having to do just between the Firefox and Chrome implementations - and
> between Windows and Mac.
>
Either the existing specification is not adhered to, or it contains no fixed
language for how things are done where browsers differ; maybe such language
should be added?
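
To illustrate the kind of special casing meant here, a small sketch that
falls back to a per-UA button table when a pad does not report the
"standard" mapping (the fallback indices below are placeholders, not
measured values):

interface ButtonMap { jump: number; fire: number; }

// Per-UA fallback tables; real libraries carry many of these, keyed by
// browser, OS and controller id.
const FALLBACK_MAPS: Record<string, ButtonMap> = {
  firefox: { jump: 1, fire: 0 }, // placeholder indices
  chrome:  { jump: 0, fire: 1 }, // placeholder indices
};

function buttonMapFor(pad: Gamepad): ButtonMap {
  if (pad.mapping === "standard") {
    // The standard mapping pins button indices, so no special casing needed.
    return { jump: 0, fire: 1 };
  }
  const ua = navigator.userAgent.toLowerCase();
  return ua.includes("firefox") ? FALLBACK_MAPS.firefox : FALLBACK_MAPS.chrome;
}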


Re: File API - where are the missing parts?

2016-02-23 Thread Florian Bösch
On Tue, Feb 23, 2016 at 7:06 PM, Joshua Bell <jsb...@google.com> wrote:

> On Tue, Feb 23, 2016 at 7:12 AM, Florian Bösch <pya...@gmail.com> wrote:
>
>> On Tue, Feb 23, 2016 at 2:48 AM, Jonas Sicking <jo...@sicking.cc> wrote:
>>
>>> Is the last bullet here really accurate? How can you use existing APIs
>>> to listen to file modifications?
>>>
>> I have not tested this on all UAs, but in Google Chrome what you can do
>> is set an interval to check a file's lastModified date and, if a
>> modification is detected, read it in again with a FileReader; that
>> works fine.
>>
>
> Huh... we should probably specify and/or fix that.
>
Specify rather than fix, please.


> There are also APIs implemented in several browsers for opening a whole
>>> directory of files from a webpage. This has been possible for some time in
>>> Chrome, and support was also recently added to Firefox and Edge. I'm not
>>> sure how interoperable these APIs are across browsers though :(
>>>
>>
> IIRC, Edge's API[1] mimics Chrome's, and you can polyfill Firefox's API
> [2] on top of Chrome/Edge's[3]. So in theory if Firefox's pleases
> developers they can adopt the polyfill, and other browsers can transition
> to native support.
>
> [1] https://lists.w3.org/Archives/Public/public-wicg/2015Sep/.html
> [2] https://wicg.github.io/directory-upload/proposal.html
> [3] https://github.com/WICG/directory-upload/blob/gh-pages/polyfill.js
>
> ... or just read Ali's excellent summary:
>
> https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0245.html
>
> (But that's all a tangent to Florian's main use cases...)
>
It's good to know this is on a standards track.

> True, but if we determine that permissions must be granted then the API
> needs to be designed to handle it, e.g. entry points to the API surface are
> through a requestPermission() API, everything is async, etc.
>

Ack


> One concern is: what capabilities are granted by this action? Can the
> web-app re-save the file? Can it re-read the file? Does that permission
> persist across sessions? For example, if I save a document template from a
> site I would not expect the site to be able to read the file after I've
> edited with an unrelated file editor.
>
>>
>>- Save many files to a user pickable folder: same as above
>>- Working directory: this is more something that would go on in the
>>background of a UA, it would have to establish a "working directory" per
>>tab rather than UA-wide. No UX issues with that.
>>
> Agreed. Likely doesn't even need to be specified - it'd just be a "least
> surprise" behavior by the UA.
>
The save-to-directory case is harder to handle because it impinges on
overwriting. After some thought, I'd move it to the more difficult UX cases.


> * Since "File > Open" is supported today (via ) we must
> be careful about exposing functionality that has similar UX (i.e. a native
> file open dialog) but that implicitly grants extra permissions (e.g. being
> able to modify the file). This points to either additional UX during the
> action, UX when the app wants to write, or a more general permission
> granted to the origin for some scope (file? directory?).
>
I'd think this would be a non-normative implementation note for UAs. The
mechanism for the API to deny an action should be fairly straightforward.


> * Should permissions persist? If you're working in an editor and reload
> the tab, being hit with a flurry of permission prompts is less than ideal.
> But if you visit it again in a day or a year? And, similar to the
> "template" case above, what if you use a web-based editor to modify a file,
> then revisit the site a year later.
>
I don't think long-term persistence of file-location permissions is a
feasible idea. But the option to choose persistence within a session seems a
viable compromise. You'll still need to click the dialog away once, but then
you can work uninterrupted.


Re: File API - where are the missing parts?

2016-02-23 Thread Florian Bösch
On Tue, Feb 23, 2016 at 2:48 AM, Jonas Sicking  wrote:

> Is the last bullet here really accurate? How can you use existing APIs to
> listen to file modifications?
>
I have not tested this on all UAs, but in Google Chrome what you can do is
set an interval to check a file's lastModified date and, if a modification
is detected, read it in again with a FileReader; that works fine.
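
A minimal sketch of that polling approach (behaviour as observed in Chrome;
the one-second interval and the callback are illustrative):

// Poll a File's lastModified timestamp and re-read the file when it changes.
// `file` is a File obtained from e.g. an <input type="file"> element.
function watchFile(file: File, onChange: (text: string) => void): number {
  let lastModified = file.lastModified;
  return window.setInterval(() => {
    if (file.lastModified !== lastModified) {
      lastModified = file.lastModified;
      const reader = new FileReader();
      reader.onload = () => onChange(reader.result as string);
      reader.readAsText(file);
    }
  }, 1000);
}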


> There are also APIs implemented in several browsers for opening a whole
> directory of files from a webpage. This has been possible for some time in
> Chrome, and support was also recently added to Firefox and Edge. I'm not
> sure how interoperable these APIs are across browsers though :(
>
There does not seem to be a standard for this, or is there? It's essential
functionality for importing OBJ and Collada files, because they are
composites of a main file and other files (such as material definitions or
textures).
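
For reference, a sketch of the non-standard directory picking available at
the time (Chrome's webkitdirectory attribute, later also implemented by
Firefox and Edge); webkitRelativePath is what an OBJ/Collada loader could use
to resolve references to material and texture files:

const input = document.createElement("input");
input.type = "file";
input.setAttribute("webkitdirectory", ""); // non-standard, but widely implemented
input.addEventListener("change", () => {
  // Index every file in the chosen folder by its path relative to that folder.
  const byPath = new Map<string, File>();
  for (const file of Array.from(input.files ?? [])) {
    byPath.set((file as File & { webkitRelativePath: string }).webkitRelativePath, file);
  }
  // A loader can now resolve e.g. "scene.mtl" or "textures/wood.png"
  // referenced from "scene.obj" via lookups in byPath.
});
document.body.appendChild(input);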


> Another important missing capability is the ability to modify an existing
> file. I.e. write 10 bytes in the middle of a 1GB file, without having to
> re-write the whole 1GB to disk.
>
Good point


> However, before getting into such details, it is very important when
> discussing read/writing is to be precise about which files can be
> read/written.
>
> For example IndexedDB supports storing File (and Blob) objects inside
> IndexedDB. You can even get something very similar to incremental
> reads/writes by reading/writing slices.
>
> Here's a couple of libraries which implement filesystem APIs, which
> support incremental reading and writing, on top of IndexedDB:
>
> https://github.com/filerjs/filer
> https://github.com/ebidel/idb.filesystem.js
>
> However, IndexedDB, and thus any libraries built on top of it, only
> supports reading and writing files inside a origin-specific
> browser-sandboxed directory.
>
> This is also true for the Filesystem API implemented in Google Chrome
> that you are linking to. And it applies to the Filesystem API proposal
> at [1].
>
> Writing files outside of any sandboxes requires not just an API
> specification, but also some sane, user understandable UX.
>
> So, to answer your questions, I would say:
>
> The APIs that you are linking to do not in fact meet the use cases that
> you are pointing to in your examples. Neither does [1], which is the
> closest thing that we have to a successor.
>
> The reason that no work has been done to meet the use cases that you are
> referring to, is that so far no credible solutions have been proposed for
> the UX problem. I.e. how do we make it clear to the user that they are
> granting access to the webpage to overwrite the contents of a file.
>
> [1] http://w3c.github.io/filesystem-api/
>
To be clear, I'm referring specifically to the ability of a user to pick
any destination on their mass-storage device to manage their data. This
might not be as sexy and easy as IndexedDB & Co., but it's essential
functionality for users to be able to organize their files where they want
them, with a minimum of fuss.

I'm aware that there are thorny questions regarding UX (although UX itself
is rarely, if ever, specified in a W3C standard, is it?). But that does not
affect all of the missing pieces. Notably not these:

   - Save a file incrementally (and with the ability to abort): not a UX
   problem, because the mechanism to save files exists; it's just
   insufficiently specified to allow for streaming writes.
   - Save to a user-pickable destination: also not a UX problem; the
   standard solution here is to present the user with the operating
   system's native file-save dialog.
   - Save many files to a user-pickable folder: same as above.
   - Working directory: this is more something that would go on in the
   background of a UA; it would have to establish a "working directory" per
   tab rather than UA-wide. No UX issues with that.

Additionally, this should be minimally controversial UX-wise:

   - Overwrite a file (either previously saved or opened): I think it would
   be a legitimate UX implementation to show an appropriate dialog at the
   time of overwriting that indicates what is being overwritten; it's just a
   fast track to the save-as file picker (and the UX can be improved by
   persisting the choice, if that is deemed an acceptable risk).

So it doesn't strike me that these missing features would create massive UX
problems; indeed, most of them create no UX problem at all.
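
A purely hypothetical shape for the save-as-plus-incremental-write case
described above; none of these names exist in any specification, they only
illustrate the kind of asynchronous, user-mediated API being discussed:

interface HypotheticalFileHandle {
  write(chunk: ArrayBuffer): Promise<void>; // appends incrementally
  abort(): Promise<void>;                   // discard the partial file
  close(): Promise<void>;
}

// Would show the operating system's native save dialog.
declare function hypotheticalShowSaveDialog(suggestedName: string): Promise<HypotheticalFileHandle>;

async function exportLargeFile(chunks: AsyncIterable<ArrayBuffer>): Promise<void> {
  const handle = await hypotheticalShowSaveDialog("scene.obj");
  try {
    for await (const chunk of chunks) {
      await handle.write(chunk); // streaming write, no need to buffer the whole file
    }
    await handle.close();
  } catch (e) {
    await handle.abort(); // the "ability to abort" case
    throw e;
  }
}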


Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-02 Thread Florian Bösch
1) Encryption between Alice and Bob by means of an asymmetric
public/private key exchange protocol cannot be secure if both also exchange
the keys and neither has a way to verify that the keys they got are the
correct ones. Chuck, who might control the gateway over which either Alice
or Bob communicates, can simply substitute his own public key for either of
the two. The whole point of certificates is that the sought-out endpoint can
provide a set of credentials signed by a certificate authority, which is in
a chain of trust up to the root authority, which is implicitly trusted.

2) You cannot obtain the benefits of UDP (out-of-order packets processed as
fast as they arrive) and yet retain the benefits of asymmetric
public/private key encryption schemes that rely on packets arriving in
order. Attempting to get both will result in detrimental performance or
nonexistent security.

On Wed, Dec 2, 2015 at 2:05 PM, Aymeric Vitte <vitteayme...@gmail.com>
wrote:

>
>
> Le 02/12/2015 13:18, Florian Bösch a écrit :
> > On Wed, Dec 2, 2015 at 12:50 PM, Aymeric Vitte <vitteayme...@gmail.com> wrote:
> >
> > Then you should follow your rules and apply this policy to WebRTC, ie
> > allow WebRTC to work only with http.
> >
> >
> > Just as a sidenote, WebRTC also does UDP and there's no TLS over UDP.
> > Also WebRTC does P2P, and there's no certificates/authorities there (you
> > could encrypt, but I don't think it does even when using TCP/IP (which
> > it doesn't in case of streaming video over UDP).
>
> See https://github.com/Ayms/node-Tor#security, WebRTC uses DTLS with
> self-signed certificates + a third-party mechanism supposed to secure the
> connection.
>
> As a matter of fact this is almost exactly the same mechanism used by
> the Tor network, where the CERTS cells use the long term ID key of a Tor
> node to make sure that you are discussing with that one.
>
> This does not prevent of course from discussing with a malicious node
> not identified as such with valid long term ID keys, which is not a
> problem for Tor (but is a problem for WebRTC), as long as it behaves as
> expected, and if it does not, this will be detected.
>
> The above mechanism is specific to the Tor network, for other uses of
> the Tor protocol an alternative is explained here:
> https://github.com/Ayms/node-Tor#pieces-and-sliding-window for WebRTC
>
> And again, adding a TLS layer on top of all this is of no use at all.
>
> --
> Get the torrent dynamic blocklist: http://peersm.com/getblocklist
> Check the 10 M passwords list: http://peersm.com/findmyass
> Anti-spies and private torrents, dynamic blocklist:
> http://torrent-live.org
> Peersm : http://www.peersm.com
> torrent-live: https://github.com/Ayms/torrent-live
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
>


Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-02 Thread Florian Bösch
>
> In DTLS, each handshake message is assigned a specific sequence
> number within that handshake. When a peer receives a handshake
> message, it can quickly determine whether that message is the next
> message it expects. If it is, then it processes it. If not, it
> queues it for future handling once all previous messages have been
> received.
>
>
The point of receiving UDP packets with fresh, unqueued data is performance.
If you queue the packet for future handling, you've thrown that away, which
raises the question: why not just use TCP/IP (which guarantees ordering)?

On Wed, Dec 2, 2015 at 3:54 PM, Richard Barnes <rbar...@mozilla.com> wrote:

>
>
> On Wed, Dec 2, 2015 at 9:36 AM, Florian Bösch <pya...@gmail.com> wrote:
>
>> 1) Encryption between Alice and Bob by means of an asymmetric
>> public/private key exchange protocol cannot be secure if both also exchange
>> the keys and none has a method to verify the keys they got are the correct
>> ones. Chuck who might control the gateway over which either Alice or Bob
>> communicate can simply substitute his own public key for either of the two.
>> The whole point of certificates is that the sought out endpoint can provide
>> a set of credentials that're signed by a certificate authority, which is in
>> a chain of trust up to the root authority which is implicitly trusted.
>>
>> 2) You cannot obtain the benefits of UDP (out of order packages as fast
>> as they come) and yet retain the benefits of asymmetric public/private key
>> encryption schemes which rely on packages arriving in order. Attempting to
>> get both will result in detrimental performance or non existent security.
>>
>
> I think that if you read the DTLS spec, you'll see that it handles
> reordering just fine.
>
> https://tools.ietf.org/html/rfc6347#section-3.2.2
>
>
>
>>
>> On Wed, Dec 2, 2015 at 2:05 PM, Aymeric Vitte <vitteayme...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> Le 02/12/2015 13:18, Florian Bösch a écrit :
>>> > On Wed, Dec 2, 2015 at 12:50 PM, Aymeric Vitte <vitteayme...@gmail.com
>>> > <mailto:vitteayme...@gmail.com>> wrote:
>>> >
>>> > Then you should follow your rules and apply this policy to WebRTC,
>>> ie
>>> > allow WebRTC to work only with http.
>>> >
>>> >
>>> > Just as a sidenote, WebRTC also does UDP and there's no TLS over UDP.
>>> > Also WebRTC does P2P, and there's no certificates/authorities there
>>> (you
>>> > could encrypt, but I don't think it does even when using TCP/IP (which
>>> > it doesn't in case of streaming video over UDP).
>>>
>>> See https://github.com/Ayms/node-Tor#security, WebRTC uses DTLS with
>>> self-signed certificates + a third-party mechanism supposed to secure the
>>> connection.
>>>
>>> As a matter of fact this is almost exactly the same mechanism used by
>>> the Tor network, where the CERTS cells use the long term ID key of a Tor
>>> node to make sure that you are discussing with that one.
>>>
>>> This does not prevent of course from discussing with a malicious node
>>> not identified as such with valid long term ID keys, which is not a
>>> problem for Tor (but is a problem for WebRTC), as long as it behaves as
>>> expected, and if it does not, this will be detected.
>>>
>>> The above mechanism is specific to the Tor network, for other uses of
>>> the Tor protocol an alternative is explained here:
>>> https://github.com/Ayms/node-Tor#pieces-and-sliding-window for WebRTC
>>>
>>> And again, adding a TLS layer on top of all this is of no use at all.
>>>
>>> --
>>> Get the torrent dynamic blocklist: http://peersm.com/getblocklist
>>> Check the 10 M passwords list: http://peersm.com/findmyass
>>> Anti-spies and private torrents, dynamic blocklist:
>>> http://torrent-live.org
>>> Peersm : http://www.peersm.com
>>> torrent-live: https://github.com/Ayms/torrent-live
>>> node-Tor : https://www.github.com/Ayms/node-Tor
>>> GitHub : https://www.github.com/Ayms
>>>
>>
>>
>


Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-02 Thread Florian Bösch
On Wed, Dec 2, 2015 at 12:50 PM, Aymeric Vitte 
wrote:
>
> Then you should follow your rules and apply this policy to WebRTC, ie
> allow WebRTC to work only with http.
>

Just as a side note, WebRTC also does UDP, and there's no TLS over UDP.
Also, WebRTC does P2P, and there are no certificates/authorities there (you
could encrypt, but I don't think it does even when using TCP/IP, which it
doesn't use in the case of streaming video over UDP).


Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-11-30 Thread Florian Bösch
On Mon, Nov 30, 2015 at 10:45 PM, Richard Barnes 
wrote:

> 1. Authentication: You know that you're talking to who you think you're
> talking to.
>

And then Dell installs their own root authority on machines they ship, or
your CA of choice gets pwned, or the NSA uses some undisclosed backdoor in
the EC constants they managed to smuggle in, or somebody combines a DNS
poison/grab with a non-verified (because of a piss-poor CA) duplicate
certificate, or you hit one of the myriad bugs that have plagued TLS
implementations (particularly certain large and complex ones that are
basically one big ball of gnud, which shall remain unnamed).


Re: [pointerlock] Oct 2015 Pointer Lock Status

2015-11-01 Thread Florian Bösch
On Sun, Nov 1, 2015 at 9:08 PM, Vincent Scheib  wrote:
>
> Thanks for clarifying. Basic usage is demonstrated in the wild but some
> edge cases should have clear demonstration in the test suite. I will
> generate those as other project priorities allow (and would of course
> review any from others).
>

One of the things that frequently goes wrong with pointer lock in the wild
is that authors request it but don't check the result of the authorization.
They then go on to write applications that only work correctly with pointer
lock (for example, FPS controls), while UAs deny the request (because the
user needs to click an allow button). The dialog to allow pointer lock is
not modal and is presented as a small prompt at the top of the screen, so
it's frequently missed.
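
A minimal sketch of checking the outcome of a pointer lock request instead
of assuming it succeeded (the element and the fallback behaviour are
illustrative):

const canvas = document.querySelector("canvas")!;

canvas.addEventListener("click", () => {
  canvas.requestPointerLock();
});

document.addEventListener("pointerlockchange", () => {
  if (document.pointerLockElement === canvas) {
    // lock granted: engage FPS-style mouse-look controls
  } else {
    // lock released or never granted: fall back to click-and-drag controls
  }
});

document.addEventListener("pointerlockerror", () => {
  // the UA (or the user) denied the request; keep non-locked controls active
});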


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
On Thu, Jun 25, 2015 at 3:13 PM, Wez w...@google.com wrote:

 I think there's obvious value in support for arbitrary content-specific
 formats, but IMO the spec should at least give guidance on how to present
 the capability in a safe way.

Which is exactly the core of my question. If you intend to make it, say,
safe to put OpenEXR into the clipboard (as opposed to letting an app just
put any bytes there), the UA has to understand OpenEXR. Since I don't see
how the UA can understand every conceivable format in existence, both future
and past, I don't see how that would work.


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
Or should we just place that into application/octet-stream and hope
whoever listens for the clipboard scans the magic bytes of an OpenEXR?

On Thu, Jun 25, 2015 at 2:56 PM, Florian Bösch pya...@gmail.com wrote:

 Well let's say some webapp generates an OpenEXR and wants to put it into
 the clipboard as image/x-exr which would make sense cause any eventual
 program that'd support OpenEXR would probably look for that mime type.
 You've said you're going to restrict image types to jpeg, png and gif, and
 so my question is, how exactly do you intend to support OpenEXR?

 On Thu, Jun 25, 2015 at 2:51 PM, Wez w...@google.com wrote:

 Sorry Florian, but I don't see what that has to do with whether or not
 the Clipboard Events spec mandates that web content can generate their own
 JPEG or PNG and place it directly on the local system clipboard.

 What is it that you're actually proposing?

 On Thu, 25 Jun 2015 at 13:31 Florian Bösch pya...@gmail.com wrote:

 No idea. Also doesn't matter jack. There could be some now or in the
 future. There's a variety of programs that support HDRi (photoshop,
 lightroom, hdri-studio, etc.). It's fairly logical that at some point some
 or another variant of HDR format will make its way into clipboards. The
 same applies to pretty much any other data format be that a file or
 something else.





Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
No idea. Also doesn't matter jack. There could be some now or in the
future. There's a variety of programs that support HDRI (Photoshop,
Lightroom, hdri-studio, etc.). It's fairly logical that at some point one
or another variant of an HDR format will make its way into clipboards. The
same applies to pretty much any other data format, be that a file or
something else.


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
On Thu, Jun 25, 2015 at 2:58 PM, Florian Bösch pya...@gmail.com wrote:

 the magic bytes of an OpenEXR?


Which is 0x762f3101 btw.
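
A sketch of such a magic-byte check, assuming the data arrives as a Blob
(the byte sequence 0x76 0x2f 0x31 0x01 quoted above):

async function looksLikeOpenEXR(blob: Blob): Promise<boolean> {
  // Read only the first four bytes and compare them against the OpenEXR magic.
  const head = new Uint8Array(await blob.slice(0, 4).arrayBuffer());
  return head.length === 4 &&
    head[0] === 0x76 && head[1] === 0x2f &&
    head[2] === 0x31 && head[3] === 0x01;
}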


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
Well, let's say some web app generates an OpenEXR file and wants to put it
into the clipboard as image/x-exr, which would make sense because any
eventual program that supports OpenEXR would probably look for that MIME
type. You've said you're going to restrict image types to JPEG, PNG and GIF,
so my question is: how exactly do you intend to support OpenEXR?

On Thu, Jun 25, 2015 at 2:51 PM, Wez w...@google.com wrote:

 Sorry Florian, but I don't see what that has to do with whether or not the
 Clipboard Events spec mandates that web content can generate their own JPEG
 or PNG and place it directly on the local system clipboard.

 What is it that you're actually proposing?

 On Thu, 25 Jun 2015 at 13:31 Florian Bösch pya...@gmail.com wrote:

 No idea. Also doesn't matter jack. There could be some now or in the
 future. There's a variety of programs that support HDRi (photoshop,
 lightroom, hdri-studio, etc.). It's fairly logical that at some point some
 or another variant of HDR format will make its way into clipboards. The
 same applies to pretty much any other data format be that a file or
 something else.




Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
Surely you realize that if the specification were to state that only "safe"
data may be exposed to the clipboard, this can only be interpreted as
denying any formats but those a UA can interpret and deem well-formed. If
such a thing were done, that would leave any user of the clipboard no
recourse but to resort to application/octet-stream and ignore any other
metadata as the merry magic-header guessing game gets underway. All you'd
have achieved would be to muddle any meaning of the MIME type and force
applications to work around an unenforceable restriction.

On Thu, Jun 25, 2015 at 3:21 PM, Wez w...@google.com wrote:

 And, again, I don't see what that has to do with whether the spec mandates
 that user agents let apps place JPEG, PNG or GIF directly on the local
 system clipboard. The spec doesn't currently mandate OpenEXR be supported,
 so it's currently up to individual user agents to decide whether they can
 support that format safely.

 On Thu, 25 Jun 2015 at 14:16 Florian Bösch pya...@gmail.com wrote:

 On Thu, Jun 25, 2015 at 3:13 PM, Wez w...@google.com wrote:

 I think there's obvious value in support for arbitrary content-specific
 formats, but IMO the spec should at least give guidance on how to present
 the capability in a safe way.

 Which is exactly the core of my question. If you intend to make it say,
 safe to put OpenEXR into the clipboard (as opposed to letting an app just
 put any bytes there), the UA has to understand OpenEXR. Since I don't see
 how the UA can understand every conceivable format in existence both future
 and past, I don't see how that should work.




Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
Browsers are very visible applications. Most other applications in
existence tend to work around their foibles in one fashion or another. If
browsers were to sprout another such foible by forcing people to discard
MIME-type specification for content because browsers don't let them use it,
it would give rise to widely confusing and homebrewed workarounds, until out
of that broil another MIME-type standard emerged that browsers then sought
to repress.

On Thu, Jun 25, 2015 at 4:30 PM, Florian Bösch pya...@gmail.com wrote:

 I'm pretty sure it can't be in the interest of this specification to force
 application authors to bifurcate the mime-type into one that can't be used
 reliably, and another informal one that's prepended to the octet-stream.
 Relevant XKCD quote omitted.

 On Thu, Jun 25, 2015 at 4:27 PM, Florian Bösch pya...@gmail.com wrote:

 Surely you realize that if the specification where to state to only
 safely expose data to the clipboard, this can only be interpreted to deny
 any formats but those a UA can interprete and deem well-formed. If such a
 thing where to be done, that would leave any user of the clipboard no
 recourse but to resort to application/octett-stream and ignore any other
 metadata as the merry magic header guessing game gets underway. For all
 you'd have achieved was to muddle any meaning of the mime-type and forced
 applications to work around an unenforceable restriction.

 On Thu, Jun 25, 2015 at 3:21 PM, Wez w...@google.com wrote:

 And, again, I don't see what that has to do with whether the spec
 mandates that user agents let apps place JPEG, PNG or GIF directly on the
 local system clipboard. The spec doesn't currently mandate OpenEXR be
 supported, so it's currently up to individual user agents to decide whether
 they can support that format safely.

 On Thu, 25 Jun 2015 at 14:16 Florian Bösch pya...@gmail.com wrote:

 On Thu, Jun 25, 2015 at 3:13 PM, Wez w...@google.com wrote:

 I think there's obvious value in support for arbitrary
 content-specific formats, but IMO the spec should at least give guidance 
 on
 how to present the capability in a safe way.

 Which is exactly the core of my question. If you intend to make it say,
 safe to put OpenEXR into the clipboard (as opposed to letting an app just
 put any bytes there), the UA has to understand OpenEXR. Since I don't see
 how the UA can understand every conceivable format in existence both future
 and past, I don't see how that should work.






Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
It's very simple. Applications need to know what's in the clipboard to know
what to do with it. There is also a vast variety of things that could find
their way into the clipboard in terms of formats, both formal and informal.
MIME types are one of the things applications would use to do that.

If a UA were to restrict what MIME types you can put into the clipboard,
that forces the clipboard user to use application/octet-stream. In
consequence, that forces any such-willing application to forgo the MIME-type
information from the OS's clipboard API and figure out what's in it from the
content. In turn, this would give rise to another way to mark up MIME types
in-line with the content. And once you've forced such ad-hoc solutions to
emerge by meddling with what people can put in the clipboard, you'll have no
standing to put that genie back in the bottle; again, relevant XKCD quote
omitted.

On Thu, Jun 25, 2015 at 4:48 PM, Wez w...@google.com wrote:

 You've mentioned resorting to application/octet-stream several times in
 the context of this discussion, where AFAICT the spec actually only
 describes using it as a fall-back for cases of file references on the
 clipboard for which the user agent is unable to determine the file type.

 So IIUC you're suggesting that user agents should implement
 application/octet-stream (as is also mandated by the spec, albeit without
 a clear indication of what it means in this context) by putting the content
 on the clipboard as an un-typed file?

 Again, I'm unclear as to what the alternative is that you're proposing?

 On Thu, 25 Jun 2015 at 15:27 Florian Bösch pya...@gmail.com wrote:

 Surely you realize that if the specification where to state to only
 safely expose data to the clipboard, this can only be interpreted to deny
 any formats but those a UA can interprete and deem well-formed. If such a
 thing where to be done, that would leave any user of the clipboard no
 recourse but to resort to application/octett-stream and ignore any other
 metadata as the merry magic header guessing game gets underway. For all
 you'd have achieved was to muddle any meaning of the mime-type and forced
 applications to work around an unenforceable restriction.

 On Thu, Jun 25, 2015 at 3:21 PM, Wez w...@google.com wrote:

 And, again, I don't see what that has to do with whether the spec
 mandates that user agents let apps place JPEG, PNG or GIF directly on the
 local system clipboard. The spec doesn't currently mandate OpenEXR be
 supported, so it's currently up to individual user agents to decide whether
 they can support that format safely.

 On Thu, 25 Jun 2015 at 14:16 Florian Bösch pya...@gmail.com wrote:

 On Thu, Jun 25, 2015 at 3:13 PM, Wez w...@google.com wrote:

 I think there's obvious value in support for arbitrary
 content-specific formats, but IMO the spec should at least give guidance 
 on
 how to present the capability in a safe way.

 Which is exactly the core of my question. If you intend to make it say,
 safe to put OpenEXR into the clipboard (as opposed to letting an app just
 put any bytes there), the UA has to understand OpenEXR. Since I don't see
 how the UA can understand every conceivable format in existence both future
 and past, I don't see how that should work.





Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
No, what I'm saying is that if you restrict MIME types (or don't explicitly
prohibit such restriction) but require application/octet-stream, then
application/octet-stream becomes the dumping ground for undesirable MIME
types. And that would be bad, because it makes it much harder for
applications to deal with content. But if that's the only way UAs are going
to act, then applications will work around it by using elaborate guessing
code based on magic bytes, and perhaps some application developers will use
their own MIME-type annotation prepended to the octet-stream.

If you inconvenience people but don't make it impossible to work around the
inconvenience, then people will work around the inconvenience. It can't be
the intention to encourage them to work around it. So you've got to either
not inconvenience them, or make the workaround impossible.

On Thu, Jun 25, 2015 at 5:07 PM, Wez w...@google.com wrote:

 Florian, you keep referring to using application/octet-stream - that's not
 a format that all user agents support (although the spec says they should
 ;), nor is there any mention in the spec of what it means to place content
 on the clipboard in that format (given that platform native clipboards each
 have their own content-type annotations).

 So it sounds like you're saying we should also remove
 application/octet-stream as a mandatory format?

 On Thu, 25 Jun 2015 at 15:55 Florian Bösch pya...@gmail.com wrote:

 It's very simple. Applications need to know what's in the clipboard to
 know what to do with it. There is also a vast variety of things that could
 find itself in the clipboard in terms of formats, both formal and informal.
 Mime types are one of these things that applications would use to do that.

 If a UA where to restict what mime type you can put into the clipboard,
 that forces the clipboard user to use application/octet-stream. And in
 consequence, that forces any such-willing application to forgoe the
 mime-type information from the OS'es clipboard API and figure out what's in
 it from the content. In turn this would give rise to another way to markup
 mime-types in-line with the content. And once you've forced such ad-hoc
 solutions to emerge for meddling with what people can put in the clipboard,
 you'll have no standing to put that geenie back in the bottle, again,
 relevant XKCD quote omitted.

 On Thu, Jun 25, 2015 at 4:48 PM, Wez w...@google.com wrote:

 You've mentioned resorting to application/octet-stream several times
 in the context of this discussion, where AFAICT the spec actually only
 describes using it as a fall-back for cases of file references on the
 clipboard for which the user agent is unable to determine the file type.

 So IIUC you're suggesting that user agents should implement
 application/octet-stream (as is also mandated by the spec, albeit without
 a clear indication of what it means in this context) by putting the content
 on the clipboard as an un-typed file?

 Again, I'm unclear as to what the alternative is that you're proposing?

 On Thu, 25 Jun 2015 at 15:27 Florian Bösch pya...@gmail.com wrote:

 Surely you realize that if the specification where to state to only
 safely expose data to the clipboard, this can only be interpreted to deny
 any formats but those a UA can interprete and deem well-formed. If such a
 thing where to be done, that would leave any user of the clipboard no
 recourse but to resort to application/octett-stream and ignore any other
 metadata as the merry magic header guessing game gets underway. For all
 you'd have achieved was to muddle any meaning of the mime-type and forced
 applications to work around an unenforceable restriction.

 On Thu, Jun 25, 2015 at 3:21 PM, Wez w...@google.com wrote:

 And, again, I don't see what that has to do with whether the spec
 mandates that user agents let apps place JPEG, PNG or GIF directly on the
 local system clipboard. The spec doesn't currently mandate OpenEXR be
 supported, so it's currently up to individual user agents to decide 
 whether
 they can support that format safely.

 On Thu, 25 Jun 2015 at 14:16 Florian Bösch pya...@gmail.com wrote:

 On Thu, Jun 25, 2015 at 3:13 PM, Wez w...@google.com wrote:

 I think there's obvious value in support for arbitrary
 content-specific formats, but IMO the spec should at least give 
 guidance on
 how to present the capability in a safe way.

 Which is exactly the core of my question. If you intend to make it
 say, safe to put OpenEXR into the clipboard (as opposed to letting an app
 just put any bytes there), the UA has to understand OpenEXR. Since I 
 don't
 see how the UA can understand every conceivable format in existence both
 future and past, I don't see how that should work.






Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
I'm pretty sure it can't be in the interest of this specification to force
application authors to bifurcate the mime-type into one that can't be used
reliably, and another informal one that's prepended to the octet-stream.
Relevant XKCD quote omitted.

On Thu, Jun 25, 2015 at 4:27 PM, Florian Bösch pya...@gmail.com wrote:

 Surely you realize that if the specification where to state to only
 safely expose data to the clipboard, this can only be interpreted to deny
 any formats but those a UA can interprete and deem well-formed. If such a
 thing where to be done, that would leave any user of the clipboard no
 recourse but to resort to application/octett-stream and ignore any other
 metadata as the merry magic header guessing game gets underway. For all
 you'd have achieved was to muddle any meaning of the mime-type and forced
 applications to work around an unenforceable restriction.

 On Thu, Jun 25, 2015 at 3:21 PM, Wez w...@google.com wrote:

 And, again, I don't see what that has to do with whether the spec
 mandates that user agents let apps place JPEG, PNG or GIF directly on the
 local system clipboard. The spec doesn't currently mandate OpenEXR be
 supported, so it's currently up to individual user agents to decide whether
 they can support that format safely.

 On Thu, 25 Jun 2015 at 14:16 Florian Bösch pya...@gmail.com wrote:

 On Thu, Jun 25, 2015 at 3:13 PM, Wez w...@google.com wrote:

 I think there's obvious value in support for arbitrary content-specific
 formats, but IMO the spec should at least give guidance on how to present
 the capability in a safe way.

 Which is exactly the core of my question. If you intend to make it say,
 safe to put OpenEXR into the clipboard (as opposed to letting an app just
 put any bytes there), the UA has to understand OpenEXR. Since I don't see
 how the UA can understand every conceivable format in existence both future
 and past, I don't see how that should work.





Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
Yet you restrict mime-types AND you support application/octet-stream?

On Thu, Jun 25, 2015 at 7:34 PM, Daniel Cheng dch...@google.com wrote:

 For reasons I've already mentioned, this isn't going to happen because
 there is no so-called dumping ground.

 No one is going to risk their paste turning into thousands of lines of
 gibberish because they tried to stuff binary data in text/plain.

 Daniel


 On Thu, Jun 25, 2015 at 8:23 AM Florian Bösch pya...@gmail.com wrote:

 No, what I'm saying is that if you restrict mime types (or don't
 explicitly prohibit such restriction), but require
 application/octet-stream, that application/octet-stream becomes the
 undesirable mime-type dumping ground. And that would be bad because that
 makes it much harder for applications to deal with content. But if that's
 the only way UAs are going to act, then applications will work around that
 by using elaborate guessing code based on magic bytes, and perhaps some
 application developers will use their own mime-type annotation pretended to
 the octet-stream.

 If you inconvenience people, but don't make it impossible to work around
 the inconvenience, then people will work around the inconvenience. It can't
 be the intention to encourage them work around it. So you've got to either
 not inconvenience them, or make working around impossible.

 On Thu, Jun 25, 2015 at 5:07 PM, Wez w...@google.com wrote:

 Florian, you keep referring to using application/octet-stream - that's
 not a format that all user agents support (although the spec says they
 should ;), nor is there any mention in the spec of what it means to place
 content on the clipboard in that format (given that platform native
 clipboards each have their own content-type annotations).

 So it sounds like you're saying we should also remove
 application/octet-stream as a mandatory format?

 On Thu, 25 Jun 2015 at 15:55 Florian Bösch pya...@gmail.com wrote:

 It's very simple. Applications need to know what's in the clipboard to
 know what to do with it. There is also a vast variety of things that could
 find itself in the clipboard in terms of formats, both formal and informal.
 Mime types are one of these things that applications would use to do that.

 If a UA where to restict what mime type you can put into the clipboard,
 that forces the clipboard user to use application/octet-stream. And in
 consequence, that forces any such-willing application to forgoe the
 mime-type information from the OS'es clipboard API and figure out what's in
 it from the content. In turn this would give rise to another way to markup
 mime-types in-line with the content. And once you've forced such ad-hoc
 solutions to emerge for meddling with what people can put in the clipboard,
 you'll have no standing to put that geenie back in the bottle, again,
 relevant XKCD quote omitted.

 On Thu, Jun 25, 2015 at 4:48 PM, Wez w...@google.com wrote:

 You've mentioned resorting to application/octet-stream several times
 in the context of this discussion, where AFAICT the spec actually only
 describes using it as a fall-back for cases of file references on the
 clipboard for which the user agent is unable to determine the file type.

 So IIUC you're suggesting that user agents should implement
 application/octet-stream (as is also mandated by the spec, albeit 
 without
 a clear indication of what it means in this context) by putting the 
 content
 on the clipboard as an un-typed file?

 Again, I'm unclear as to what the alternative is that you're proposing?

 On Thu, 25 Jun 2015 at 15:27 Florian Bösch pya...@gmail.com wrote:

 Surely you realize that if the specification where to state to only
 safely expose data to the clipboard, this can only be interpreted to 
 deny
 any formats but those a UA can interprete and deem well-formed. If such a
 thing where to be done, that would leave any user of the clipboard no
 recourse but to resort to application/octett-stream and ignore any 
 other
 metadata as the merry magic header guessing game gets underway. For all
 you'd have achieved was to muddle any meaning of the mime-type and forced
 applications to work around an unenforceable restriction.

 On Thu, Jun 25, 2015 at 3:21 PM, Wez w...@google.com wrote:

 And, again, I don't see what that has to do with whether the spec
 mandates that user agents let apps place JPEG, PNG or GIF directly on 
 the
 local system clipboard. The spec doesn't currently mandate OpenEXR be
 supported, so it's currently up to individual user agents to decide 
 whether
 they can support that format safely.

 On Thu, 25 Jun 2015 at 14:16 Florian Bösch pya...@gmail.com wrote:

 On Thu, Jun 25, 2015 at 3:13 PM, Wez w...@google.com wrote:

 I think there's obvious value in support for arbitrary
 content-specific formats, but IMO the spec should at least give 
 guidance on
 how to present the capability in a safe way.

 Which is exactly the core of my question. If you intend to make it
 say, safe to put OpenEXR into the clipboard

Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
I'm sure you're aware that you can encode any binary blob as UTF-8
text/plain. If you don't support application/octet-stream, then text/plain
just becomes the dumping ground.
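
A sketch of that workaround, base64-encoding arbitrary bytes into plain
ASCII that can travel through a text/plain clipboard entry (the function
names are illustrative):

function bytesToClipboardText(bytes: Uint8Array): string {
  let binary = "";
  bytes.forEach(b => { binary += String.fromCharCode(b); });
  return btoa(binary); // plain ASCII, safe to label as text/plain
}

function clipboardTextToBytes(text: string): Uint8Array {
  const binary = atob(text);
  return Uint8Array.from(binary, ch => ch.charCodeAt(0));
}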

On Thu, Jun 25, 2015 at 7:39 PM, Daniel Cheng dch...@google.com wrote:

 No UA supports it today. No UA is likely to support it anytime soon.

 Daniel

 On Thu, Jun 25, 2015 at 10:38 AM Florian Bösch pya...@gmail.com wrote:

 Yet you restrict mime-types AND you support application/octet-stream?

 On Thu, Jun 25, 2015 at 7:34 PM, Daniel Cheng dch...@google.com wrote:

 For reasons I've already mentioned, this isn't going to happen because
 there is no so-called dumping ground.

 No one is going to risk their paste turning into thousands of lines of
 gibberish because they tried to stuff binary data in text/plain.

 Daniel


 On Thu, Jun 25, 2015 at 8:23 AM Florian Bösch pya...@gmail.com wrote:

 No, what I'm saying is that if you restrict mime types (or don't
 explicitly prohibit such restriction), but require
 application/octet-stream, that application/octet-stream becomes the
 undesirable mime-type dumping ground. And that would be bad because that
 makes it much harder for applications to deal with content. But if that's
 the only way UAs are going to act, then applications will work around that
 by using elaborate guessing code based on magic bytes, and perhaps some
 application developers will use their own mime-type annotation pretended to
 the octet-stream.

 If you inconvenience people, but don't make it impossible to work
 around the inconvenience, then people will work around the inconvenience.
 It can't be the intention to encourage them work around it. So you've got
 to either not inconvenience them, or make working around impossible.

 On Thu, Jun 25, 2015 at 5:07 PM, Wez w...@google.com wrote:

 Florian, you keep referring to using application/octet-stream - that's
 not a format that all user agents support (although the spec says they
 should ;), nor is there any mention in the spec of what it means to place
 content on the clipboard in that format (given that platform native
 clipboards each have their own content-type annotations).

 So it sounds like you're saying we should also remove
 application/octet-stream as a mandatory format?

 On Thu, 25 Jun 2015 at 15:55 Florian Bösch pya...@gmail.com wrote:

 It's very simple. Applications need to know what's in the clipboard
 to know what to do with it. There is also a vast variety of things that
 could find itself in the clipboard in terms of formats, both formal and
 informal. Mime types are one of these things that applications would use 
 to
 do that.

 If a UA where to restict what mime type you can put into the
 clipboard, that forces the clipboard user to use 
 application/octet-stream.
 And in consequence, that forces any such-willing application to forgoe 
 the
 mime-type information from the OS'es clipboard API and figure out what's 
 in
 it from the content. In turn this would give rise to another way to 
 markup
 mime-types in-line with the content. And once you've forced such ad-hoc
 solutions to emerge for meddling with what people can put in the 
 clipboard,
 you'll have no standing to put that geenie back in the bottle, again,
 relevant XKCD quote omitted.

 On Thu, Jun 25, 2015 at 4:48 PM, Wez w...@google.com wrote:

 You've mentioned resorting to application/octet-stream several
 times in the context of this discussion, where AFAICT the spec actually
 only describes using it as a fall-back for cases of file references on 
 the
 clipboard for which the user agent is unable to determine the file type.

 So IIUC you're suggesting that user agents should implement
 application/octet-stream (as is also mandated by the spec, albeit 
 without
 a clear indication of what it means in this context) by putting the 
 content
 on the clipboard as an un-typed file?

 Again, I'm unclear as to what the alternative is that you're
 proposing?

 On Thu, 25 Jun 2015 at 15:27 Florian Bösch pya...@gmail.com wrote:

 Surely you realize that if the specification where to state to only
 safely expose data to the clipboard, this can only be interpreted to 
 deny
 any formats but those a UA can interprete and deem well-formed. If 
 such a
 thing where to be done, that would leave any user of the clipboard no
 recourse but to resort to application/octett-stream and ignore any 
 other
 metadata as the merry magic header guessing game gets underway. For all
 you'd have achieved was to muddle any meaning of the mime-type and 
 forced
 applications to work around an unenforceable restriction.

 On Thu, Jun 25, 2015 at 3:21 PM, Wez w...@google.com wrote:

 And, again, I don't see what that has to do with whether the spec
 mandates that user agents let apps place JPEG, PNG or GIF directly on 
 the
 local system clipboard. The spec doesn't currently mandate OpenEXR be
 supported, so it's currently up to individual user agents to decide 
 whether
 they can support that format safely.

 On Thu

Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
My point is that if you leave no other way out, that is what will happen.

On Thu, Jun 25, 2015 at 7:57 PM, Daniel Cheng dch...@google.com wrote:

 That's the case today already, and I haven't seen this happening.

 Daniel

 On Thu, Jun 25, 2015 at 10:48 AM Florian Bösch pya...@gmail.com wrote:

 I'm sure you're aware that you can encode any binary blob as UTF-8
 text/plain. If you don't support application/octet-stream, then that just
 becomes the dumping ground.

 On Thu, Jun 25, 2015 at 7:39 PM, Daniel Cheng dch...@google.com wrote:

 No UA supports it today. No UA is likely to support it anytime soon.

 Daniel

 On Thu, Jun 25, 2015 at 10:38 AM Florian Bösch pya...@gmail.com wrote:

 Yet you restrict mime-types AND you support application/octet-stream?

 On Thu, Jun 25, 2015 at 7:34 PM, Daniel Cheng dch...@google.com
 wrote:

 For reasons I've already mentioned, this isn't going to happen because
 there is no so-called dumping ground.

 No one is going to risk their paste turning into thousands of lines of
 gibberish because they tried to stuff binary data in text/plain.

 Daniel


 On Thu, Jun 25, 2015 at 8:23 AM Florian Bösch pya...@gmail.com
 wrote:

 No, what I'm saying is that if you restrict mime types (or don't
 explicitly prohibit such restriction), but require
 application/octet-stream, that application/octet-stream becomes the
 undesirable mime-type dumping ground. And that would be bad because 
 that
 makes it much harder for applications to deal with content. But if that's
 the only way UAs are going to act, then applications will work around 
 that
 by using elaborate guessing code based on magic bytes, and perhaps some
 application developers will use their own mime-type annotation pretended 
 to
 the octet-stream.

 If you inconvenience people, but don't make it impossible to work
 around the inconvenience, then people will work around the inconvenience.
 It can't be the intention to encourage them work around it. So you've got
 to either not inconvenience them, or make working around impossible.

 On Thu, Jun 25, 2015 at 5:07 PM, Wez w...@google.com wrote:

 Florian, you keep referring to using application/octet-stream -
 that's not a format that all user agents support (although the spec says
 they should ;), nor is there any mention in the spec of what it means to
 place content on the clipboard in that format (given that platform 
 native
 clipboards each have their own content-type annotations).

 So it sounds like you're saying we should also remove
 application/octet-stream as a mandatory format?

 On Thu, 25 Jun 2015 at 15:55 Florian Bösch pya...@gmail.com wrote:

 It's very simple. Applications need to know what's in the clipboard
 to know what to do with it. There is also a vast variety of things that
 could find itself in the clipboard in terms of formats, both formal and
 informal. Mime types are one of these things that applications would 
 use to
 do that.

 If a UA where to restict what mime type you can put into the
 clipboard, that forces the clipboard user to use 
 application/octet-stream.
 And in consequence, that forces any such-willing application to forgoe 
 the
 mime-type information from the OS'es clipboard API and figure out 
 what's in
 it from the content. In turn this would give rise to another way to 
 markup
 mime-types in-line with the content. And once you've forced such ad-hoc
 solutions to emerge for meddling with what people can put in the 
 clipboard,
 you'll have no standing to put that geenie back in the bottle, again,
 relevant XKCD quote omitted.

 On Thu, Jun 25, 2015 at 4:48 PM, Wez w...@google.com wrote:

 You've mentioned resorting to application/octet-stream several
 times in the context of this discussion, where AFAICT the spec 
 actually
 only describes using it as a fall-back for cases of file references 
 on the
 clipboard for which the user agent is unable to determine the file 
 type.

 So IIUC you're suggesting that user agents should implement
 application/octet-stream (as is also mandated by the spec, albeit 
 without
 a clear indication of what it means in this context) by putting the 
 content
 on the clipboard as an un-typed file?

 Again, I'm unclear as to what the alternative is that you're
 proposing?

 On Thu, 25 Jun 2015 at 15:27 Florian Bösch pya...@gmail.com
 wrote:

 Surely you realize that if the specification where to state to
 only safely expose data to the clipboard, this can only be 
 interpreted to
 deny any formats but those a UA can interprete and deem well-formed. 
 If
 such a thing where to be done, that would leave any user of the 
 clipboard
 no recourse but to resort to application/octett-stream and ignore 
 any
 other metadata as the merry magic header guessing game gets 
 underway. For
 all you'd have achieved was to muddle any meaning of the mime-type 
 and
 forced applications to work around an unenforceable restriction.

 On Thu, Jun 25, 2015 at 3:21 PM, Wez w...@google.com wrote:

 And, again, I don't

Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-25 Thread Florian Bösch
I think you underestimate the integration needs that web apps will acquire,
and the lengths they will go to when faced with a business need to make it
work, once the Clipboard API becomes common developer knowledge.


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-24 Thread Florian Bösch
And how exactly do you intend to support, for instance, OpenEXR?

On Wed, Jun 24, 2015 at 5:44 PM, Wez w...@google.com wrote:

 Hallvord,

 Yes, content would be limited to providing text, image etc data to the
 user agent to place on the clipboard, and letting the user agent synthesize
 whatever formats (JPEG, PNG etc) other apps require. That has the advantage
 of preventing malicious content using esoteric flags or features to
 compromise recipients, but conversely means that legitimate content cannot
 use format-specific features, e.g. content would not be able to write a
 JPEG containing a comment block, geo tags or timestamp information.



 Wez


 On Sat, 13 Jun 2015 at 11:57 Hallvord Reiar Michaelsen Steen 
 hst...@mozilla.com wrote:

 On Thu, Jun 11, 2015 at 7:51 PM, Wez w...@google.com wrote:

 Hallvord,

 The proposal isn't to remove support for copying/pasting images, but to
 restrict web content from placing compressed image data in one of these
 formats on the clipboard directly - there's no issue with content pasting
 raw pixels from a canvas, for example, since scope for abusing that to
 compromise the recipient is extremely limited by comparison to JPEG, PNG or
 GIF.


 Well, but as far as I can tell we don't currently *have* a way JS can
 place pixels from a canvas on the clipboard (except by putting a piece of
 data labelled as image/png or similar there). So if you're pushing back
 against the idea that JS can place random data on the clipboard and label
 it image/png, how exactly would you propose JS-triggered copy of image data
 to work? Say, from a CANVAS-based image editor?
 -Hallvord




Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-24 Thread Florian Bösch
No, but the specification doesn't require you to exclude it. So how are
applications going to swap OpenEXR if you only let them stick in JPEGs,
PNGs and GIFs?

On Wed, Jun 24, 2015 at 8:46 PM, Wez w...@google.com wrote:

 I don't think OpenEXR is one of the formats required by the Clipboard
 Events spec, is it..?

 On Wed, Jun 24, 2015, 18:49 Florian Bösch pya...@gmail.com wrote:

 And how exactly do you intend to support for instance OpenEXR?

 On Wed, Jun 24, 2015 at 5:44 PM, Wez w...@google.com wrote:

 Hallvord,

 Yes, content would be limited to providing text, image etc data to the
 user agent to place on the clipboard, and letting the user agent synthesize
 whatever formats (JPEG, PNG etc) other apps require. That has the advantage
 of preventing malicious content using esoteric flags or features to
 compromise recipients, but conversely means that legitimate content cannot
 use format-specific features, e.g. content would not be able to write a
 JPEG containing a comment block, geo tags or timestamp information.



 Wez


 On Sat, 13 Jun 2015 at 11:57 Hallvord Reiar Michaelsen Steen 
 hst...@mozilla.com wrote:

 On Thu, Jun 11, 2015 at 7:51 PM, Wez w...@google.com wrote:

 Hallvord,

 The proposal isn't to remove support for copying/pasting images, but
 to restrict web content from placing compressed image data in one of these
 formats on the clipboard directly - there's no issue with content pasting
 raw pixels from a canvas, for example, since scope for abusing that to
 compromise the recipient is extremely limited by comparison to JPEG, PNG 
 or
 GIF.


 Well, but as far as I can tell we don't currently *have* a way JS can
 place pixels from a canvas on the clipboard (except by putting a piece of
 data labelled as image/png or similar there). So if you're pushing back
 against the idea that JS can place random data on the clipboard and label
 it image/png, how exactly would you propose JS-triggered copy of image data
 to work? Say, from a CANVAS-based image editor?
 -Hallvord





Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-11 Thread Florian Bösch
What about JPEG 2000, Exif, TIFF, RIF, BMP, PM, PGM, PBM, PNM, HDR, EXR,
BPG, psd, xcf, etc.?


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-11 Thread Florian Bösch
On Thu, Jun 11, 2015 at 8:32 PM, Daniel Cheng dch...@google.com wrote:

 On Thu, Jun 11, 2015 at 11:13 AM Florian Bösch pya...@gmail.com wrote:

 What about JPEG 2000, Exif, TIFF, RIF, BMP, PM, PGM, PBM, PNM, HDR, EXR,
 BPG, psd, xcf, etc.?

 I'm not sure what you're trying to say here.

What about raster image formats the browser doesn't happen to implement?


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-11 Thread Florian Bösch
Besides, if the HTML clipboard is crippled beyond usability by security
paranoia, you'll just use good ol' Flash to copy your random bytes to the
clipboard again.


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-11 Thread Florian Bösch
If you can't put an image/png into a clipboard from JS, you just put it
into an application/octet-stream, which many image editors will load
happily. If that doesn't work, you just stick your PNG into a text/plain,
which many image editors will still load just fine.
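
A minimal sketch of that workaround (a copy handler labelling arbitrary
bytes as text/plain; pngString is a placeholder for the encoded image data):

document.addEventListener('copy', function (e) {
  // label whatever bytes we have as plain text; receiving apps sniff anyway
  e.clipboardData.setData('text/plain', pngString);
  e.preventDefault(); // suppress the default copy payload
});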


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-11 Thread Florian Bösch
Wait, why are you talking about removing an ostensibly useful feature
(declaring a mime-type in a paste for certain mime types) because the end
result could end up in the user's clipboard, where it could be pasted into
applications that aren't equipped to handle random assemblages of bytes,
even though they are specifically written to handle random assemblages of
bytes...

Wouldn't you have to remove Ctrl+C, right-click → copy image, etc. from
the UA as well?


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-11 Thread Florian Bösch
On a further note: if UAs (which are among the more prevalent applications
out there being used) intentionally disable declaring mime-types for some
classes of content, so that it can't be pasted into applications that might
not be equipped to handle those mime-types, application programmers (such as
Adobe, GIMP etc.) will do something else:


   - The first 4 bytes of a PNG: \89PNG
   - Bytes 9 through 13 of a JPEG: JFIF
   - etc.

Every notable non-text format in common use today contains magic headers
that make it easy to identify what a file is without having the mime-type or
file extension.
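
A minimal sketch of that kind of content sniffing (assuming an ArrayBuffer
of pasted bytes; the function name is made up for illustration):

function sniffImageType(buffer) {
  var b = new Uint8Array(buffer);
  // PNG signature: 0x89 'P' 'N' 'G' at offset 0
  if (b[0] === 0x89 && b[1] === 0x50 && b[2] === 0x4E && b[3] === 0x47) {
    return 'image/png';
  }
  // JPEG starts with the SOI marker 0xFF 0xD8
  if (b[0] === 0xFF && b[1] === 0xD8) {
    return 'image/jpeg';
  }
  return null; // unknown, fall back to whatever label the clipboard carried
}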

Omission of metadata information is

   - not going to address your security concern since applications do
   routinely read in random bytes and figure out what they are
   - it's not going to make applications behave any more securely (or
   reliably) as it'll promote even more of them to resort to guessing because
   information is omitted


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-11 Thread Florian Bösch
Oh, also while you're at crippling things, please also exclude copying any
text that contains "http://:", because that borks Skype.


Re: PSA: publishing new WD of Gamepad on April 14

2015-04-10 Thread Florian Bösch
In principle I'm all for events on buttons, but here comes an additional
complication :).

Sometimes buttons are axes. They can be legitimate axes, like the triggers
on gamepads, or incidental axes, like every button on some weird steering
wheel I got. They can also be recognized as buttons by the driver/UA, or
they might be wrongly recognized as just axes. The button might have a
physical trigger point (but still be an axis), or it might not. Point is,
it's messy.

Which means that it has to be possible to detect axis thresholds manually
and accurately. The driver/browser might do it, or it might not.

So events for axes as they come in from the driver are still useful. But in
order to satisfy both usecases (clearing the event queue for a frame and
getting convenient polled axis values) you need both the polling and the
events.

I'd imagine it could work like this:

navigator.gamepads.onChange = function(event){
  // event.gamepad instanceof Gamepad
  // event.axis instanceof GamepadAxis
  // event.button instanceof GamepadButton
  // event.timestamp
  // event.type is one of navigator.gamepads.UP | DOWN | MOVE
};

var update = function(){
  // here all the queued events get dispatched
  var gamepads = navigator.gamepads.poll();
  for(var i = 0; i < gamepads.length; i++){
    // the usual handling of the polled state; all events have been
    // processed by this point
  }
};



On Fri, Apr 10, 2015 at 6:43 PM, Ashley Gullen ash...@scirra.com wrote:

 Well, how about just events for the pressable buttons then? That would
 alleviate most of my concerns with the polling model, and I think you're
 right it's harder to apply it to axes given that there are effectively two
 inputs working simultaneously.

 Ashley


 On 9 April 2015 at 22:29, Florian Bösch pya...@gmail.com wrote:

 The polling model for axes has a significant advantage as I'll illustrate.

 Suppose you're steering a cursor of some kind in 2 dimensions. That
 cursor would also draw a trail/line whatever. Here's what happens if you
 apply this logic on events per axis: You get a staircase. Why? Because the
 X-axis event arrives, you move the cursor to the side, draw a line, then
 the Y-axis event arrives, you move the cursor up, you draw a line - a
 staircase.

 If you poll the position of all axes, this effect will not happen, you
 draw the correct diagonal line.

 In order to emulate the correct behavior with axis events, you'd have to
 capture all events, and keep the value of axes separately so you can draw
 to the appropriate position. But that can't work, because you don't have
 control over the polling behavior, that is, you don't know when all events
 are processed.

 This problem is broadly in the category of correlated event updates, and
 polling the status (of at least all axes) at a given time solves this
 nicely. So this functionality should not go away. In fact, it could be well
 argued that even if you do dispatch events (where they're primarily useful
 as in buttons), you should still be able to initiate the poll, so you
 control when all events are delivered (instead of having them delivered at
 an inopportune/uncontrolled time).

 The main drawback of not having events isn't that you'll have to keep the
 state separately. That's easy. The main problem is that because multiple
 events make up one final state, you can miss things, such as a button press.

 On Thu, Apr 9, 2015 at 11:15 PM, Ashley Gullen ash...@scirra.com wrote:

 Why doesn't the Gamepad API fire events for button pushes or axis
 movements? For example when pressing a mouse button or moving the mouse the
 browser fires mousedown and mousemove. The Gamepad API however requires
 you to passively poll the state regularly (probably in rAF) and look for
 changes yourself. Why does it not fire events like gamepadbuttondown or
 gamepadaxischange? This would have a few advantages:

 - it would be consistent with the way all other input events are handled
 on the web platform
 - it is easier to program for. As it stands since there is nothing like
 a gamepadbuttondown event, so if you want one, you have to implement it
 yourself by polling the state, keeping the previous polled state, and
 comparing the differences looking for a previously up but currently down
 state and then run your handler.
 - browsers have a couple of important features that can only work in a
 user gesture, such as opening a popup window, copying to the clipboard,
 or - critically for games! - starting audio (or video) playback on mobile.
 Since the Gamepad API does not fire events, this does not integrate nicely
 with the existing user gesture model, and therefore currently no browser
 allows these features to be triggered by gamepad input. Considering the use
 case of a gamepad controlling a browser game on a tablet, it's pretty
 embarrassing that you can't play audio without resorting to some other kind
 of input, like regularly leaning forwards to touch the screen.

 This could involve significant changes

Re: PSA: publishing new WD of Gamepad on April 14

2015-04-09 Thread Florian Bösch
The polling model for axes has a significant advantage as I'll illustrate.

Suppose you're steering a cursor of some kind in 2 dimensions. That cursor
would also draw a trail/line whatever. Here's what happens if you apply
this logic on events per axis: You get a staircase. Why? Because the X-axis
event arrives, you move the cursor to the side, draw a line, then the
Y-axis event arrives, you move the cursor up, you draw a line - a
staircase.

If you poll the position of all axes, this effect will not happen, you draw
the correct diagonal line.

In order to emulate the correct behavior with axis events, you'd have to
capture all events, and keep the value of axes separately so you can draw
to the appropriate position. But that can't work, because you don't have
control over the polling behavior, that is, you don't know when all events
are processed.

This problem is broadly in the category of correlated event updates, and
polling the status (of at least all axes) at a given time solves this
nicely. So this functionality should not go away. In fact, it could be well
argued that even if you do dispatch events (where they're primarily useful
as in buttons), you should still be able to initiate the poll, so you
control when all events are delivered (instead of having them delivered at
an inopportune/uncontrolled time).

The main drawback of not having events isn't that you'll have to keep the
state separately. That's easy. The main problem is that because multiple
events make up one final state, you can miss things, such as a button press.
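
A minimal sketch of the state-keeping and edge detection this forces on
authors today (assuming a single connected gamepad; everything except
navigator.getGamepads() and requestAnimationFrame is a placeholder):

var previousPressed = [];

function pollGamepad() {
  var pad = navigator.getGamepads()[0];
  if (pad) {
    for (var i = 0; i < pad.buttons.length; i++) {
      var pressed = pad.buttons[i].pressed;
      var wasPressed = previousPressed[i] || false;
      if (pressed && !wasPressed) { /* button i went down since last frame */ }
      if (!pressed && wasPressed) { /* button i went up since last frame */ }
      previousPressed[i] = pressed;
    }
    // a press shorter than one frame can still be missed entirely here
  }
  requestAnimationFrame(pollGamepad);
}
requestAnimationFrame(pollGamepad);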

On Thu, Apr 9, 2015 at 11:15 PM, Ashley Gullen ash...@scirra.com wrote:

 Why doesn't the Gamepad API fire events for button pushes or axis
 movements? For example when pressing a mouse button or moving the mouse the
 browser fires mousedown and mousemove. The Gamepad API however requires
 you to passively poll the state regularly (probably in rAF) and look for
 changes yourself. Why does it not fire events like gamepadbuttondown or
 gamepadaxischange? This would have a few advantages:

 - it would be consistent with the way all other input events are handled
 on the web platform
 - it is easier to program for. As it stands since there is nothing like a
 gamepadbuttondown event, so if you want one, you have to implement it
 yourself by polling the state, keeping the previous polled state, and
 comparing the differences looking for a previously up but currently down
 state and then run your handler.
 - browsers have a couple of important features that can only work in a
 user gesture, such as opening a popup window, copying to the clipboard,
 or - critically for games! - starting audio (or video) playback on mobile.
 Since the Gamepad API does not fire events, this does not integrate nicely
 with the existing user gesture model, and therefore currently no browser
 allows these features to be triggered by gamepad input. Considering the use
 case of a gamepad controlling a browser game on a tablet, it's pretty
 embarrassing that you can't play audio without resorting to some other kind
 of input, like regularly leaning forwards to touch the screen.

 This could involve significant changes to the spec, but I think it's
 necessary. It looks a bit like a first draft that never got reconsidered.

 Ashley Gullen
 Scirra.com



 On 9 April 2015 at 12:52, Arthur Barstow art.bars...@gmail.com wrote:

 Hi All,

 A new Working Draft publication of Gamepad is planned for April 14 using
 the following version as the basis:

   https://w3c.github.io/gamepad/publish/gamepad.html

 If anyone has any major concerns about this, please reply right away.

 A few notes about this spec:

 * This spec is now using Github https://github.com/w3c/gamepad and the
 ED is https://w3c.github.io/gamepad/gamepad.html. PRs are welcome and
 encouraged.

 * The permissions of the spec's now obsolete Hg repository will be set to
 read-only.

 * After Ted copies the open Bugzilla bugs to Github, the spec's Bugzilla
 component will be marked `Historical` and set to read-only.

 -Thanks, ArtB






Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-02 Thread Florian Bösch
On Thu, Apr 2, 2015 at 2:40 PM, Anders Rundgren 
anders.rundgren@gmail.com wrote:

 Obviously we need a model where the code is vetted for
 DoingTheRightThing(tm).


This is essentially about two things: trust and the capability to vet.
Both of these things cannot be solved conclusively, or without severe
drawbacks as I'll show.

The prevailing model of trust for vetting apps is app-stores. There the
trust is hierarchical: "I trust Apple, therefore I trust what they put in
the app-store."  A slightly more elaborate hierarchical trust scheme is
SSL, but it's really the same thing. This model has several problems:

   - If Apple gets pwned, everybody who trusted Apple is screwed. This
   might be judged as a six-sigma event in the case of Apple, but in the case
   of SSL certificate authorities it's a frequent occurrence.
   - The one on top of the (shallow or deep) hierarchy of trust gets to
   extract rent from everybody else. Apple takes $99/year + 30% with some
   conditions. Certificate authorities charge anything between $10 and several
   thousands for their services.
   - Responsibility for vetting flows to the top, where it creates a vetting
   bottleneck. It's for this reason that it can take you weeks, or months if
   you're unlucky, to get your app in the app store. It's quite perplexing to
   be technically able to push updates a dozen times a day, yet you can't,
   because every update is gonna cost you money and two weeks (tm) till it
   hits your audience.

The only alternative to a hierarchical trust system is a graph of trust
relationships which is used to aggregate trust between two nodes in it.
This is in principle a fine system; however, it too has a severe flaw. It
cannot account for nodes that successfully pretend to be good, and then one
day turn bad. The revocation of trust in such a graph takes considerable
time since it depends on all connected nodes adjusting their trust
relationship. By the time that has happened, considerable damage may have
been done.

It's for these reasons that trust/vetting based solutions cannot be used in
a heterogenous M:N market that the web finds itself in. It creates hard to
quantify risks, inconveniences everyone and puts up barriers to entry.


Re: Pointer lock spec

2015-04-01 Thread Florian Bösch
On Wed, Apr 1, 2015 at 1:49 AM, Vincent Scheib sch...@google.com wrote:

 You raised this point in 2011, resulting in my adding this spec section
 you reference. The relevant bit being:
 
 ... a concern of specifying what units mouse movement data are provided
 in. This specification defines .movementX/Y precisely as the same values
 that could be recorded when the mouse is not under lock by changes in
 .screenX/Y. Implementations across multiple user agents and operating
 systems will easily be able to meet that requirement and provide
 application developers and users with a consistent experience. Further,
 users are expected to have already configured the full system of hardware
 input and operating system options resulting in a comfortable control the
 system mouse cursor. By specifying .movementX/Y in the same units mouse
 lock API applications will be instantly usable to all users because they
 have already settled their preferences.
 

As of yet nobody has provided higher resolution values though.


 As an application developer I agree the unprocessed data would be nice to
 have, but I don't see it as essential. The benefits of system calibrated
 movement are high. Not requiring users to configure every application is
 good. And, as the Chrome implementation maintainer who has been in
 conversation with several application developers (Unity, Unreal,
 PlayCanvas, GooTechnologies, Verold, come to mind easily) this has not been
 raised yet as an issue.

I distinctly remember playing games (and reading articles about) mouse
coordinates in pixel-clamped ranges. Particularly when buying my first
high-resolution mouse this was quite an issue with some games: now I had a
very high precision pointing instrument, but viewpoint changes were pixel
clamped. To get better resolution, I had to go to the OS settings and
ratchet up mouse sensitivity to the max, then go to the game settings and
counteract that sensitivity setting so it became operable, hence extracting
more ticks out of a flawed system. Of course once I closed the game again,
I had to undo the OS mouse sensitivity setting in order to make the
desktop usable.

It might be less of an issue than back when 1024x768 was the state of the
art. But 1920x1080 is less than twice the horizontal/vertical resolution of
back then, so I'm pretty sure this is still a significant issue, and
pointers haven't gotten any less precise since then (however OSes haven't
gotten any smarter with pointers).


 I'm not certain how to address this in the specification. I agree that
 poor rendering latency will impact the use of an application drawn cursor
 or other use, and that some applications and devices may have poor
 experiences. That said, what should we change in the specification, or what
 notes would you suggest are added to FAQ on this topic?

This is essentially a whole-system integration issue. In order to fix it,
the whole stack (drivers, I/O systems, kernels, shells, browsers, hardware)
needs to get its act together:

   - I/O RTT latencies (input → screen) of more than 10ms are not an
   appropriate state of affairs for the year 2015. The benchmark number would
   be OS cursors, which are around 30ms. Even native games struggle to get
   below 60ms, and for browsers it's usually worse.
   - *One* millisecond on a modern computer (PCIe, Sandy Bridge, SSD drive)
   runs 6000 floating point operations, 16'000 transfers on the bus, 160'000
   cycles, ~40'000 RAM loads/stores
   - L1 latency ~0.25ns, RAM latency ~80ns, SSD random access ~0.1ms

So to put that into perspective, the I/O latencies we have today (let's go
with ~100ms for browsers) are ~1.25 million times bigger than RAM latency
and 1000x bigger than permanent storage latency. It's about 8x longer than
it takes you to ping google across 6 hops. I/O latencies in today's systems
are insanely high. And the numbers haven't gotten much better in the last 20
years (in fact you could argue they've gotten a lot worse, but that's a
topic for another day).

So I think any specification dealing with I/O should contain very strong
language addressing latency. It can specify whole-system latency
requirements as a level-of-support query. If a whole system is able to
achieve < 10ms latencies, the API should be able to indicate that fact
(let's say support level gold), if it reaches < 60ms, that's say silver,
and > 60ms is support level turd. What's simply not sustainable is that the
frankly insane situation in regard to I/O latencies goes unmentioned,
unfixed and not made transparent. We tried that for the last oh, 20 years.
It. Doesn't. Work.


Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Florian Bösch
On Wed, Apr 1, 2015 at 11:22 AM, Nilsson, Claes1 
claes1.nils...@sonymobile.com wrote:

 Hi all,



 Related to the recent mail thread about the SysApps WG and its
 deliverables I would like to make a report of the status of the TCP and UDP
 Socket API, http://www.w3.org/2012/sysapps/tcp-udp-sockets/.



 Note that this specification is still being worked on. Latest merged PR
 was March 30. I think it is time for a new Public Working Draft.



 This API is used to send and receive data over the network using TCP or
 UDP.

 Examples of use cases for the API are:

- An email client which communicates with SMTP, POP3 and IMAP servers
- An irc client which communicates with irc servers
- Implementing an ssh app
- Communicating with existing consumer hardware, like internet
connected TVs
- Game servers
- Peer-to-peer applications
- Local network multicast service discovery, e.g. UPnP/SSDP and mDNS

 Some of these use cases are served suitably by WebSockets and WebRTC (once
it reaches a good deployment state). Of course there are drawbacks to that
(a bit of overhead, some weird semantics, some restrictions and zero legacy
integration).

The TCP and UDP Socket API is a phase 1 deliverable of the SysApps WG.
 SysApps was originally chartered to provide a runtime and security model so
 that it would be possible to open up sensitive APIs to SysApps enabled
 runtimes. Accordingly, it was assumed that the TCP and UDP Socket API would
 be exposed to such a “trusted runtime”. Looking at existing TCP and UDP
 Socket APIs they are implemented in proprietary web runtimes, FFOS and
 Chrome, which provide a security model for installed packaged web runtimes.

I don't particularly like the idea of privileged webapps unless absolutely
necessary.


 I recently added “permission methods”, partly inspired by the W3C Push
 API. A webapp could for example request permission to create a TCP
 connection to a certain host. The ambition is to isolate the permission
 system from the socket interfaces specifications and the manner in which
 permission to use this API is given differs depending on the type of web
 runtime the API is implemented in. For example, a web runtime for secure
 installed web applications may be able to open up this API so that no
  explicit user consent is needed, while an implementation in a web browser
 may use a combination of web security mechanisms, such as secure transport
 (https:), content security policies (CSP), signed manifest, certificate
 pinning, and user consent to open up the API.

I'd like to point out the permissionities syndrome. There are two parts to
this syndrome: the first is the use of an ever-growing list of complex
permissions for users to manage. Good examples of that are:
http://codeflow.org/issues/permissions.jpg , http://i.imgur.com/pTzdLnI.png
, http://i.imgur.com/MY5o9MP.png etc. The second part is that recent
research has shown that showing people security prompts makes them turn off
their brain, literally:
http://www.extremetech.com/computing/201698-mri-scans-of-the-brain-show-why-we-ignore-security-warnings

Also note, most people don't know what a browser is, they certainly don't
know what a host is, and even if they knew, they couldn't gauge the
security implications of what it means they're saying yes to.


Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Florian Bösch
It's a fair point, but without an origin-authoritative opt-in it's not
gonna happen no matter what. Imagine, say, the displeasure of
awesomeEmail2000.com if through some manner of XSS exploit (say in Google
ads) suddenly millions of web visitors connect to their email server
simultaneously...

On Wed, Apr 1, 2015 at 6:44 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Apr 1, 2015 at 6:37 PM, Florian Bösch pya...@gmail.com wrote:
  On Wed, Apr 1, 2015 at 6:02 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  Not saying that we can use CORS to solve this, or that we should
  extend CORS to solve this. My point is that CORS works because it was
  specified and implemented across browsers. If we'd do something like
  what Domenic proposes, I think that would be true here too.
 
  However, in my experience the use case for the TCPSocket and UDPSocket
  APIs is to connect to existing hardware and software systems. Like
  printers or mail servers. Server-side opt-in is generally not possible
  for them.
 
  Isn't the problem that these existing systems can't be changed (let's
 say an
  IRC server) to support say WebSockets, and thus it'd be convenient to be
  able to TCP to it. I think that is something CORS-like could actually
 solve.
  You could deploy (on the same origin) a webserver that handles the opt-in
  for that origin/port/protocol and then the webserver can open a
 connection
  to it. For example:
 
  var socket = new Socket(); socket.connect('example.com', 194);
 
  -
 
  RAW-SOCKET-OPTIONS HTTP/1.1
  port: 194
  host: example.com
 
  -
 
  HTTP/1.1 200 OK
  Access-Control-Allow-Origin: example.com
 
  - browser opens a TCP connection to example.com 194.
 
  So you don't need to upgrade the existing system for server
 authorization.
  You just need to deploy a (http compatible) authoritative source on the
 same
  origin that can give a browser the answer it desires.

 Again, the use case here is to enable someone to develop, for example,
 a browser base mail client which has support for POP/IMAP/SMTP.

 It's going to be very hard for that email client to get any
 significant user base if their install steps are:

 1. Go to awesomeEmail2000.com
 2. Contact your mail provider and ask them to install a http server on
 their mail server
 3. There is no step three :)

 / Jonas



Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Florian Bösch
On Wed, Apr 1, 2015 at 6:02 PM, Jonas Sicking jo...@sicking.cc wrote:

 Not saying that we can use CORS to solve this, or that we should
 extend CORS to solve this. My point is that CORS works because it was
 specified and implemented across browsers. If we'd do something like
 what Domenic proposes, I think that would be true here too.

 However, in my experience the use case for the TCPSocket and UDPSocket
 APIs is to connect to existing hardware and software systems. Like
 printers or mail servers. Server-side opt-in is generally not possible
 for them.

Isn't the problem that these existing systems can't be changed (let's say
an IRC server) to support, say, WebSockets, and thus it'd be convenient to
be able to speak TCP to them? I think that is something CORS-like could
actually solve. You could deploy (on the same origin) a webserver that
handles the opt-in for that origin/port/protocol, and then the browser can
open a connection to it. For example:

var socket = new Socket(); socket.connect('example.com', 194);

-

RAW-SOCKET-OPTIONS HTTP/1.1
port: 194
host: example.com

-

HTTP/1.1 200 OK
Access-Control-Allow-Origin: example.com

- browser opens a TCP connection to example.com 194.

So you don't need to upgrade the existing system for server authorization.
You just need to deploy a (http compatible) authoritative source on the same
origin that can give a browser the answer it desires.


Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Florian Bösch
On Wed, Apr 1, 2015 at 9:00 PM, Anders Rundgren 
anders.rundgren@gmail.com wrote:

 Who would like to get something like that in their face when buying stuff
 on the web?


14% of users recognize changes in content of a security prompt. An MRI scan
shows that at the second security prompt in a session a clear drop in
visual processing occurs.

The authors of the study suggest that security prompts be made such that
they are as annoying and hard to habituate against as possible
(Permissionities syndrome on steroids).

This message's author suggests that if you do that, it's the fastest way to
annoy the heck out of everybody in no time flat and get them to stop using
whatever it is you're programming. Vista on steroids.


Re: Proposal for a Permissions API

2015-03-22 Thread Florian Bösch
On Sat, Mar 21, 2015 at 10:47 PM, Florian Bösch pya...@gmail.com wrote:

 2) MRI scans show that user attention dramatically drops when presented
 with a security prompt:
 http://arstechnica.com/security/2015/03/mris-show-our-brains-shutting-down-when-we-see-security-prompts/


It's also likely the case that (as others have suggested) if you're doing a
double security prompt (or even a triple one) a la:

   1. Hey, we need this permission to get started, do you want to grant it?
   2. Click here to make us request this permission from you; remember to
   click allow in the next dialog
   3. The actual permission dialog asking for the permission.

Then the attention-loss effect is probably amplified N-fold.


Re: Proposal for a Permissions API

2015-03-21 Thread Florian Bösch
Time to revise this topic. Two data points:

1) Particularly with pointerlock (but also with other permission prompts
that sneak up on the user) I often get complaints from users along the
lines of "I tried your stuff, but it didn't work." or "I tried your stuff,
but it asked me to do X, I don't think it works."

2) MRI scans show that user attention dramatically drops when presented
with a security prompt:
http://arstechnica.com/security/2015/03/mris-show-our-brains-shutting-down-when-we-see-security-prompts/

Permission/Security prompts are bad UX. Particularly the kind you need to
prompt the user with along the way. And within that, even worse are the
ones that pop up again and again (like the fullscreen popup).

On Wed, Oct 1, 2014 at 7:34 PM, Jeffrey Yasskin jyass...@google.com wrote:

 On Wed, Sep 3, 2014 at 3:29 AM, Mounir Lamouri mou...@lamouri.fr wrote:
  On Wed, 3 Sep 2014, at 04:41, Jonas Sicking wrote:
  I'm generally supportive of this direction.
 
  I'm not sure that that the PermissionStatus thing is needed. For
  example in order to support bluetooth is might be better to make the
  call look like:
 
  permissions.has(bluetooth, fitbit).then(...);
 
  That's more Permission than PermissionStatus, right?
 
  What you proposed here would probably be something like that in WebIDL:
  Promise has(PermissionName name, any options);
 
  But really, we could make that option bag be a dictionary because it
  gives good context about what you are passing like what does fitbit
  means here? Is that a black listed device or a white listed one? The one
  you want to target?
 
  I agree that it might be unusual to have a required name than might
  often be used alone but it makes the API way more javascript-y and self
  explanatory. IMO, this call is nicer to read than the one you wrote
  above:
permissions.has({ name: 'bluetooth', devices: 'fitbit' });
  because I understand what the call is trying to do. In addition, as you
  pointed, it gives a lot of flexibility.

 Belatedly, I'd like to suggest a slightly different model. Instead of
 trying to stuff arbitrary queries into the permissions.has() call,
 maybe expose the current permissions as data, and let the application
 inspect them using custom code. This is likely to work better for
 Bluetooth, since we're planning to have pages request devices by the
 Services they expose, not their deviceIds, and a page may want to
 check for either an available device exposing some services, or that a
 device they've already opened hasn't been revoked.

 Getting permission revocation to update a UI correctly is also an
 interesting problem. You could expose an event on permission change,
 but given that templating frameworks are moving toward
 Object.observe() to update themselves in response to model object
 changes, that would require developers to write extra code to
 propagate the permission changes into their models.

 So what if navigator.permissions just _was_ a suitable model object?
 Make it, say, a Map from permission-name to an object defined by the
 permission's standard, and extend Map to expose enough synthetic
 change records that Object.observe(a_map) is useful.

 Jeffrey




Re: Pointer lock spec

2015-02-27 Thread Florian Bösch
I'd like to comment on the pointer lock functionality some.

12.4 notes that capturing a (native) pointer inside of a rectangle is
difficult. I've done some research into this topic and I can attest that
it's not straightforward. Some platforms have support for these semantics;
others (I think it was OSX and Windows) would require repositioning the
pointer in an event loop, which has some drawbacks.

12.5 states that there are difficulties in translating
high-resolution/unaccelerated mouse pointer values into sensible units.
While I appreciate that concern, I don't think it matters nearly as much.
The first reason is that the user's interaction with the pointing device
isn't aspect-distorted in the first place (that is, a horizontal movement of
the pointing device covers the same distance on screen as a vertical
movement). What we're really discussing is mouse sensitivity then, which
could vary greatly once mouse acceleration and sensitivity (by the OS) are
taken out of the picture. However, applications can (and will) provide
their own settings for these values. The reason to bypass
acceleration/sensitivity of the OS for a pointer is specifically because
that treatment often doesn't make sense (say for a camera control function
in a 3D environment).

More general comments:

The specification does not discuss latency and pointer emulation. This is a
significant concern, as routinely native pointers have a latency of around
20-30ms, whereas the RTT for pointer-captured events and blit to display
have latencies between 70-150ms. This makes it difficult to use the pointer
lock API for emulating a mouse pointer (or indeed to use it for any kind of
pointing that requires accuracy and speed, like say FPS shooters).

A particular concern with a frequent usecase (3d interaction by mouse drag)
is the presence of pointerlock messages. Because we need native pointers
(in order to interact with DOM elements, to avoid high pointer input lag
and to allow a user to interact with things outside of the viewport (such
as tabs, file managers etc.)), it is desirable to stay out of a pointerlock
mode for general interaction. But when the application features a 3D
viewport (such as is found in CAD, 3D modelling etc.)  it is desirable to
enter pointer lock for mouse dragging in those viewports, as it avoids text
selection and screen border bump issues. A common way to do this is to
capture the mouse on mouse down, and release capture on mouse up. However,
this usecase faces some hurdles:

   1. If the user never entered pointerlock on the site, then pointerlock
   will not be entered until the user clicked allow in the confirmation
   dialog (this interaction is confusing to users)
   2. When the user clicks allow, the mouse will be captured, but the release
   event might not have been dispatched, so that the event order suggests the
   pointer is captured, though that would run counter to the semantics (I'm not
   sure what the current UA behavior there is, but this was the case in some
   UAs as of a year ago)
   3. Regardless of previous pointer capture confirmation, some UAs choose
   to display a popup with information every time somebody enters pointerlock,
   which makes frequent entry/exit fr
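
A minimal sketch of the capture-on-drag pattern described above (canvas
stands in for the 3D viewport element; rotateCamera is a placeholder, and
vendor prefixes are omitted):

canvas.addEventListener('mousedown', function () {
  canvas.requestPointerLock(); // may trigger the permission prompt/notice
});
document.addEventListener('mouseup', function () {
  document.exitPointerLock();
});
document.addEventListener('mousemove', function (e) {
  if (document.pointerLockElement === canvas) {
    rotateCamera(e.movementX, e.movementY); // relative deltas while locked
  }
});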

I'd also like to expand a bit on the topic of high resolution pointer
information, which means both high temporal and high spatial resolution.
For example, the Razer Ouroboros mouse (
http://www.razerzone.com/gaming-mice/razer-ouroboros). It supports a
polling frequency of 1000Hz and has a resolution of 8200dpi. OSes (after
they're done mangling pointers through their machinery) clamp pointer
positions to pixels, and they clamp frequency of events at least to pixel
crossover of the pointer and to no more than 60hz or so. It would be highly
desirable to get sub-pixel accuracy of a pointing device (as is supported
by nearly all mice produced in the last 10 years) as well as higher spatial
frequency than the one imposed by the OSes pointer machinations. These
things are desirable because they allow an application that runs say a 60hz
display frequency, to temporally sub-sample a pointer location from say 16
different events. So even though at the start and end of a frame, the
pointer might be in the same position again, a movement intra-frame could
still be registered, smoothed and used. More generally this is useful to
provide a smooth feel and rapid response to applications that deal with a
need to use pointers for fast and/or accurate pointing (such as for
instance FPS shooters).


On Thu, Feb 26, 2015 at 8:21 PM, Vincent Scheib sch...@google.com wrote:

 Thanks, Philip, changes made.

 On Thu, Feb 26, 2015 at 10:58 AM, Philip Jägenstedt phil...@opera.com
 wrote:

 Also, the EventHandler type should not be nullable, it's already
 typedef'd to be nullable in https://html.spec.whatwg.org/#eventhandler

 On Fri, Feb 27, 2015 at 1:56 AM, Philip Jägenstedt phil...@opera.com
 wrote:
  https://dvcs.w3.org/hg/pointerlock/raw-file/default/index.html
 
  

Re: The futile war between Native and Web

2015-02-16 Thread Florian Bösch
On Mon, Feb 16, 2015 at 9:08 AM, Jeffrey Walton noloa...@gmail.com wrote:

 I'd hardly consider an account holder's data as high value. Medium at
 best and likely low value. But that's just me.

Of course if the data is compromised it means that an attacker can also
remote-control your e-banking interface, and issue payments and so forth.
I'm sure that's not high value either?


Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Florian Bösch
On Fri, Feb 6, 2015 at 7:38 PM, Michaela Merz michaela.m...@hermetos.com
wrote:

  it would be the job of the browser development community to find a way
 to make such calls less harmful.

If there was a way to make synchronous calls less harmful, it'd have been
implemented a long time ago. There isn't.

You could service synchronous semantics with co-routine based schedulers.
It wouldn't block the main thread, but there'd still be nothing going on
while your single-threaded code waits for the XHR to complete, and so it's
still bad UX. Solving the bad UX would require you to deal with the
scheduler (spawn microthreads that do other things so it's not bad UX).
Regardless, ES-discuss isn't fond of co-routines, so that's not gonna
happen.


Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Florian Bösch

 I had an Android device, but now I have an iPhone. In addition to the
 popup problem, and the fake X on ads, the iPhone browsers (Safari,
 Chrome, Opera) will start to show a site, then they will lock up for 10-30
 seconds before finally becoming responsive.


Via. Ask Slashdot:
http://ask.slashdot.org/story/15/02/04/1626232/ask-slashdot-gaining-control-of-my-mobile-browser

Note: Starting with Gecko 30.0 (Firefox 30.0 / Thunderbird 30.0 / SeaMonkey
 2.27), synchronous requests on the main thread have been deprecated due to
 the negative effects to the user experience.



Via
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests

Heads up! The XMLHttpRequest2 spec was recently changed to prohibit sending
 a synchronous request whenxhr.responseType is set. The idea behind the
 change is to help mitigate further usage of synchronous xhrs wherever
 possible.


Via http://updates.html5rocks.com/2012/01/Getting-Rid-of-Synchronous-XHRs


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Florian Bösch
On Thu, Feb 5, 2015 at 2:39 PM, Takeshi Yoshino tyosh...@google.com wrote:

 To prevent WebSocket from being abused to attack existing HTTP servers
 from malicious non-simple cross-origin requests, we need to have WebSocket
 clients to do some preflight to verify that the server is not an HTTP
 server that don't understand CORS. We could do e.g. when a custom header is
 specified,

No further specification is needed because CORS already covers the case of
endpoints that do not understand CORS (deny by default). Hence the above
assertion is superfluous.


 So, anyway, I think we need to make some change on the WebSocket spec.

Also bogus assertion.


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Florian Bösch
On Thu, Feb 5, 2015 at 2:35 PM, Anne van Kesteren ann...@annevk.nl wrote:

 Wouldn't that require the endpoint to support two protocols? That
 sounds suboptimal.


CORS and WebSockets are two separate protocols which each work on their
own; there is no change required to either to make one work with the
other, and both adequately deal with non-implementation by the endpoint. A
webserver with support for CORS and WebSockets already implements both
protocols, and so no additional burden is imposed regardless.


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Florian Bösch
The websocket wire protocol only comes into effect after a successful
handshake. The handshake involves a request to the endpoint by the client
(typically a GET) and a response by the endpoint (101 switching protocols).

As such, websockets themselves do not concern themselves with headers and
the other miscellanea of HTTP beyond the handshake protocol (which is tacked
onto HTTP).

CORS is also tacked onto HTTP, and so if you enforce CORS on http, this
will automatically apply to the handshake of a websocket, which goes over
HTTP. And so still no change is required to the websocket protocol.
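
For reference, the handshake in question looks roughly like this (key/accept
values are the example pair from RFC 6455; host, path and origin are
placeholders):

GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Origin: http://example.org
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=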

On Thu, Feb 5, 2015 at 2:41 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Feb 5, 2015 at 2:39 PM, Florian Bösch pya...@gmail.com wrote:
  On Thu, Feb 5, 2015 at 2:35 PM, Anne van Kesteren ann...@annevk.nl
 wrote:
  Wouldn't that require the endpoint to support two protocols? That
  sounds suboptimal.
 
  CORS and Websockets are two separate protocols which each work off and by
  themselves, there is no change required to either to make one work with
 the
  other, and both adequately deal with non-implementation by the endpoint.
 A
  webserver with support for CORS and Websockets already implements both
  protocols, and so no additional burden is imposed regardless.

 The protocols in question are HTTP and WebSocket... CORS is very much
 unrelated to most of this. It's just the solution we have for HTTP at
 the moment and WebSocket has something similar, just not for headers
 and that is the problem.


 --
 https://annevankesteren.nl/



Re: Allow custom headers (Websocket API)

2015-02-05 Thread Florian Bösch
On Thu, Feb 5, 2015 at 2:44 PM, Takeshi Yoshino tyosh...@google.com wrote:

 IIUC, CORS prevents clients from issuing non-simple cross-origin request
 (even idempotent methods) without verifying that the server understands
 CORS. That's realized by preflight.


Incorrect, the browser will perform idempotent requests (for instance img
or XHR GET) across domains without a preflight request. It will however not
make the data available to the client (javascript specifically) unless CORS
is satisfied (XHR GET will error out, and img will throw a glError on
gl.texImage2D if CORS isn't satisfied).
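
A minimal sketch of the image case (assuming a WebGL context gl already
exists; the URL is a placeholder):

var img = new Image();
img.onload = function () {
  // The cross-origin GET itself happened without any preflight, but using
  // the pixels is blocked: texImage2D throws for a tainted image unless the
  // response opted in via CORS (and img.crossOrigin was set accordingly).
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
};
img.src = 'https://other-origin.example/texture.png';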


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Florian Bösch
Well,

1) Clients do apply CORS to WebSocket requests already (and might've
started doing so quite some time ago) and everything's fine and you don't
need to change anything.

2) Clients do not apply CORS to WebSocket requests, and you're screwed,
because any change you make will break existing deployments.

Either way, this will result in no change made, so you can bury it right
here.

On Thu, Feb 5, 2015 at 2:12 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Feb 5, 2015 at 1:27 PM, Florian Bösch pya...@gmail.com wrote:
  CORS is an adequate protocol to allow for additional headers, and
 websocket
  requests could be subjected to CORS (I'm not sure what the current client
  behavior is in that regard, but I'm guessing they enforce CORS on
 websocket
  requests as well).

 I think you're missing something. A WebSocket request is subject to
 the WebSocket protocol, which does not take the same precautions as
 the Fetch protocol does used elsewhere in the platform. And therefore
 we cannot provide this feature until the WebSocket protocol is fixed
 to take the same precautions.


 --
 https://annevankesteren.nl/



Re: Allow custom headers (Websocket API)

2015-02-05 Thread Florian Bösch
On Thu, Feb 5, 2015 at 12:59 PM, Anne van Kesteren ann...@annevk.nl wrote:

 That is not sufficient to allow custom headers. Cross-origin (and
 WebSocket is nearly always cross-origin I think) custom headers
 require a preflight and opt-in on a per-header basis.

Access-Control-Allow-Headers is not a preflight request per header, it's
one preflight request for all custom headers.

CORS allows idempotent requests to be made without a preflight request. A
websocket setup is a GET request with the necessary headers for the
handshake set.
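
For illustration, a single preflight covering several custom headers might
look like this (header names, host and origin are placeholders):

OPTIONS /endpoint HTTP/1.1
Host: api.example.com
Origin: https://app.example.org
Access-Control-Request-Method: GET
Access-Control-Request-Headers: X-Custom-A, X-Custom-B

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.org
Access-Control-Allow-Headers: X-Custom-A, X-Custom-B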

Please don't break websockets and HTTP as they're specified and implemented
today. Thank you.


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Florian Bösch
On Thu, Feb 5, 2015 at 1:22 PM, Anne van Kesteren ann...@annevk.nl wrote:

 I'm not sure how this is relevant. We are discussing adding the
 ability to the WebSocket API to set custom headers and whether the
 current protocol is adequate for that.


CORS is an adequate protocol to allow for additional headers, and websocket
requests could be subjected to CORS (I'm not sure what the current client
behavior is in that regard, but I'm guessing they enforce CORS on websocket
requests as well).


Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 4:26 AM, Michaela Merz michaela.m...@hermetos.com
wrote:

 First: We need signed script code. We are doing a lot of stuff with
 script - we could safely do even more, if we would be able to safely
 deliver script that has some kind of a trust model.

TLS exists.


 I am thinking about
 signed JAR files - just like we did with java applets not too long ago.
 Maybe as an extension to the CSP enviroment .. and a nice frame around
 the browser telling the user that the site is providing trusted / signed
 code.

Which is different than TLS how?


 Signed code could allow more openness, like true full screen,

Fullscreen is possible today,
https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode


 or simpler ajax downloads.

Simpler how?


 Second: It would be great to finally be able to accept incoming
 connections.

WebRTC allows the browser to accept incoming connections. The WebRTC data
channel covers both TCP and UDP connectivity.
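
A minimal sketch of the receiving end (offer/answer signalling and vendor
prefixes are elided; the variable names are placeholders):

var pc = new RTCPeerConnection();
pc.ondatachannel = function (e) {
  e.channel.onmessage = function (msg) {
    console.log('received', msg.data); // data pushed by the remote peer
  };
};
// The offer/answer and ICE candidate exchange still has to happen over some
// signalling channel before any data flows.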


 There's access to cameras and microphones - why not allow
 us the ability to code servers in the browser?

You can. There are even P2P overlay networks being done with WebRTC,
although they're mostly hampered by the existing support for WebRTC data
channels, which isn't great yet.


Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 5:00 AM, Michaela Merz michaela.m...@hermetos.com
wrote:

 If signed code would allow
 special features - like true fullscreen

https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode



 or direct file access

http://www.html5rocks.com/en/tutorials/file/filesystem/


Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 6:35 AM, Michaela Merz michaela.m...@hermetos.com
wrote:

  Well .. it would be a all scripts signed or no script signed kind of
 a deal. You can download malicious code everywhere - not only as scripts.
 Signed code doesn't protect against malicious or bad code. It only
  guarantees that the code is actually from the certificate owner .. and
 has not been altered without the signers consent.


On Wed, Nov 19, 2014 at 5:00 AM, Michaela Merz michaela.m...@hermetos.com
 wrote:

 it would make sense. Signed code would make script much more resistant to
 manipulation and therefore would help in environments where trust and/or
 security is important.

 We use script for much, much more than we did just a year or so ago.


On Wed, Nov 19, 2014 at 6:41 AM, Michaela Merz michaela.m...@hermetos.com
 wrote:

 TLS doesn't protect you against code that has been altered server side -
 without the signers consent. It would alert the user, if unsigned updates
 would be made available.


Signing allows you to verify that an entity did produce a run of bytes, and
not another entity. Entity here meaning the holder of the private key who
put his signature onto that run of bytes. How do you know this entity did
that? Said entity also broadcast their public key, so that the recipient
can compare.

TLS solves this problem somewhat by securing the delivery channel. It
doesn't sign content, but via TLS it is (at least proverbially) impossible
for somebody else to deliver content over a channel you control.

Ajax downloads still require a download link (with the bloburl) to be
 displayed requiring an additional click. User clicks download .. ajax
 downloads the data, creates blob url as src which the user has to click to
 'copy' the blob onto the userspace drive. Would be better to skip the final
 part.

Signing, technically, would have an advantage where you wish to deliver
content over a channel that you cannot control, such as over WebRTC, from
files, and so forth.

In regard to accept: I wasn't aware of the fact that I can accept a socket
 on port 80 to serve a HTTP session. You're saying I could with what's
 available today?

You cannot. You can however let the browser accept an incoming connection
under the condition that they're browsing the same origin. The port doesn't
matter as much, as WebRTC largely relegates it to an implementation detail
of the channel negotiator so that two of the same origins can communicate.



Suppose you get a piece of signed content, over whatever way it was
delivered. Suppose also that this content you got has the ability to read
all your private data, or reformat your machine. So it's basically about
trust. You need to establish a secure channel of communication to obtain a
public key that matches a signature, in such a way that an attackers
attempt to self-sign malicious content is foiled. And you need to have a
way to discover (after having established that the entity is the one who
was intended and that the signature is correct), that you indeed trust that
entity.

These are two very old problems in cryptography, and they cannot be solved
by cryptography. There are various approaches to this problem in use today:

   - TLS and its web of trust: The basic idea being that there is a
   hierarchy of signatories. It works like this. An entity provides a
   certificate for the connection, signing it with their private key. Since
   you cannot establish a connection without a public key that matches the
   private key, verifying the certificate is easy. This entity in turn, refers
   to another entity which provided the signature for that private key. They
   refer to another one, and so forth, until you arrive at the root. You
   implicitly trust root. This works, but it has some flaws. At the edge of
   the web, people are not allowed to self-sign, so they obtain their (pricey)
   key from the next tier up. But the next tier up can't go and bother the
   next tier up every time they need to provide a new set of keys to the edge.
   So they get blanket permission to self-sign, implying that it's possible
   for the next tier up to establish and maintain a trust relationship to
   them. As is easily demonstrable, this can, and often does, go wrong,
   where some CA gets compromised. This is always bad news to whomever
   obtained a certificate from them, because now a malicious party can pass
   themselves off as them.
   - App-stores and trust by royalty: This is really easy to describe: the app
   store you obtain something from signs the content, and you trust the
   app-store, and therefore you trust the content. This can, and often does, go
   wrong, as Android/iOS malware amply demonstrates.

TLS cannot work perfectly, because it is built on implied trust along the
chain, and this can get compromised. App-stores cannot work perfectly
because the ability to review content is quickly exceeded by the flood of
content. Even if app-stores were provided with the full source, 

Re: What I am missing

2014-11-18 Thread Florian Bösch
There are some models that are a bit better than trust by royalty
(app-stores) and trust by hierarchy (TLS). One of them is trust flowing
along flow limited edges in a graph (as in Advogato). This model however
isn't free from fault, as when a highly trusted entity gets compromised,
there's no quick or easy way to revoke that trust for that entity. Also, a
trust graph such as this doesn't solve the problem of stake. We trust say,
the twitter API, because we know that twitter has staked a lot into it. If
they violate that trust, they suffer proportionally more. A graph doesn't
solve that problem, because it cannot offer a proof of stake.

Interestingly, there are way to provide a proof of stake (see various
cryptocurrencies that attempt to do that). Of course proof of stake
cryptocurrencies have their own problems, but that doesn't entirely
invalidate the idea. If you can prove you have a stake of a given size,
then you can enhance a flow limited trust graph insofar as to make it less
likely an entity gets compromised. The difficulty with that approach of
course is, it would make acquiring high levels of trust prohibitively
expensive (as in, getting the privilege to access the filesystem could run
you into millions of $ of stake shares).


Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi marc.fa...@gmail.com wrote:

 So there is no way for an unsigned script to exploit security holes in a
 signed script?

Of course there's a way. But by the same token, there's a way a signed
script can exploit security holes in another signed script. Signing itself
doesn't establish any trust, or security.


 Funny you mention crypto currencies as an idea to get inspiration
 from...Trust but verify is detached from that... a browser can monitor
 what the signed scripts are doing and if it detects a potentially malicious
 pattern it can halt the execution of the script and let the user decide if
 they want to continue...

That's not working for a variety of reasons. The first reason is that
identifying what a piece of software does intelligently is one of those
really hard problems. As in Strong-AI hard. Failing that, you can monitor
what APIs a piece of software makes use of, and restrict access to those.
However, that's already satisfied without signing by sandboxing.
Furthermore, it doesn't entirely solve the problem, as any Android user will
know. You get a ginormous list of permissions a given piece of software
would like to use and the user just clicks yes. Alternatively, you get
malware that's not trustworthy, that nobody managed to properly review,
because the untrustworthy part was buried/hidden by the author somewhere
deep down, to activate only long after trust extension by fiat has happened.

But even if you'd assume that this somehow would be an acceptable model,
what do you define as malicious? Reformatting your machine would be
malicious, but so would posting on your Facebook wall. What constitutes
a malicious pattern is actually more of a social than a technical problem.


Re: [gamepad] Add an event-based input mechanism

2014-10-13 Thread Florian Bösch
Note that events for axis input can (when wrongly handled) lead to
undesirable behavior. For instance, suppose you have a 2-axis input you use
to plot a line on screen. If you poll the positions and then draw a line
from the last position to the current position, you will get a smooth line.
However, if you receive events for the axes, each arriving separately, and
use each to plot a line, you will get a stepped, stair-like line.

If nothing but an event-based mechanism is present, this would force an
author to catch all axis events and collate them once no more events are
queued. That would require knowing when no more events are coming, which
cannot be done in a callback-based mechanism (as the event queue can't be
queried).

It's conceivable that some use-cases of buttons can run into similar
scenarios (combos and whatnot). Although in those cases a developer could
probably keep track of the pressed buttons themselves. However, this means
that not only button presses but also button releases have to be provided,
so that combos can be reliably detected by the developer.
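
As a hedged sketch only: the 'axischange' event, its fields, and the idea
that a Gamepad object is an event target are hypothetical, following the
event-based proposal under discussion; plotLineTo stands in for the
application's own drawing.

var pending = [0, 0];
var scheduled = false;
gamepad.addEventListener('axischange', function (e) {   // hypothetical API
  pending[e.axisIndex] = e.value;
  if (!scheduled) {
    scheduled = true;
    // collate everything that arrived this frame into one plotted segment
    requestAnimationFrame(function () {
      scheduled = false;
      plotLineTo(pending[0], pending[1]);
    });
  }
});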

On Mon, Oct 13, 2014 at 9:03 AM, Chad Austin caus...@gmail.com wrote:

 Just as I mentioned in my previous email to this list, I recently was
 asked to review the Gamepad API draft specification.  My background is
 games though I've done some scientific computing with alternate input
 devices too.

 http://chadaustin.me/2014/10/the-gamepad-api/

 I'd like to make a second proposed change to the gamepad API, an
 event-based mechanism for receiving button and axis changes.

 The full rationale is explained in the linked article, but the summary is:

 1) an event-based API entirely avoids the issue of missed button presses
 2) event-based APIs don't require non-animating web pages to use
 requestAnimationFrame just to poll gamepad state
 3) an event-based API could (and should!) give access to high-precision
 event timing information for use in gesture recognition and scientific
 computing
 4) event-based APIs can reduce processing latency when a button changes
 state rapidly in low-frame-rate situations

 --
 Chad Austin
 http://chadaustin.me



Re: Proposal for a Permissions API

2014-09-05 Thread Florian Bösch
On Fri, Sep 5, 2014 at 11:14 AM, Mounir Lamouri mou...@lamouri.fr wrote:

 Note that the Permissions API model isn't requiring all APIs to abide by
 its model. Having no permissions at all for an API is a decent model if
 possible. For example, having a permission concept for input
 type='file' doesn't make much sense. Other APIs could use the
 permission model but have some UA mostly returning 'granted' because
 they have an opt-out model instead of opt-in, such as most
 implementations of fullscreen.


A thought on that. Entering fullscreen/pointerlock is still annoying
(because it comes up with that dialog). But the user has already signaled
his intent by pressing a button. The dialogs exist to prevent users from
being tricked/clickjacked (at least somewhat). At least for fullscreen this
could be made smoother by having a permission-carrying UI overlay of sorts:
elements that you can't style but can put into your web page (for instance a
canonical fullscreen button). When the user clicks it, fullscreen is
entered, and no further prompts would be required.


Re: Proposal for a Permissions API

2014-09-04 Thread Florian Bösch
This is an issue for the user:

   - http://codeflow.org/issues/permissions.html
   - http://codeflow.org/issues/permissions.jpg
   - In Firefox it's a succession of popups

It's also an issue for developers, because the semantics and
methods for requesting, getting, being denied and managing permissions
differ. Sometimes permissions aren't queryable at all.

It's my stated opinion that ignoring these issues will not make them go
away. And delaying addressing the UX and consistency issues just contributes
to a proliferation of bad UX and inconsistent, difficult-to-use APIs.



On Thu, Sep 4, 2014 at 9:05 PM, Kis, Zoltan zoltan@intel.com wrote:

 Hello,

 On Thu, Sep 4, 2014 at 8:23 PM, Edward O'Connor eocon...@apple.com
 wrote:
  Mounir wrote:
 
  Permissions API would be a single entry point for a web page to check
  if using API /foo/ would prompt, succeed or fail.
 
  It would be a mistake to add such an API to the platform. A unified API
  for explicit permissioning is an attractive nuisance which future spec
  authors will be drawn to.
 
  We should be avoiding adding features to the platform that have to
  resort to explicit permissioning. Instead of adding features which
  require prompting for permission, we should be designing features—like
  drag  drop or input type=file—that don't require prompting for
  permission at all.
 
  I don't think much has changed since this last came up, in the context
  of Notifications:
 
 
 http://lists.w3.org/Archives/Public/public-web-notification/2012Mar/0029.html
 

 This makes sense when applicable, but I think the number of uses cases
 where permissions can be inferred from user actions is rather small.

 Although I agree that too many prompts (and prompts in general) are
 annoying, the proposals in the referred minutes [1] actually address
 that annoyance by allowing apps to skip them, if they are playing
 along some rules. For both developers and users this gives enough
 incentive to be taken seriously.

 I like the idea presented in Mounir's doc about separating permission
 semantics from the API/mechanism of handling them. Granularity of
 needed permissions has always been a hard compromise to set, and would
 be difficult to standardize. For instance, take your example where you
 talk about separating permission grants (something I always hoped
 for). When the user can deny some of the permissions, it may cause
 dependency issues, which the apps could resolve either silently (in
 good case), or through a user dialog - but for the latter it would
 -again- need an API. Also, an API could help user decision, e.g. by
 the ability to give a short description on how exactly the feature is
 used (e.g. how/when the camera is used), and taking it further, if
 that could be expressed in a both presentable and formalized way, then
 it could be even enforced by the system. That is where the needed
 granularity plays an important role. Standardizing that would be hard,
 and it's not independent from the set of policies which need to be
 supported.

 Speaking about policies, choosing one (e.g. the remember for a day
 or similar) policy is not universal, and there may be smarter ones in
 a platform, e.g. an algorithm which chooses about prompting policy as
 referred in the mentioned minutes [1]. Probably we don't need to
 support an infinity of them either, but a certain set of web
 policies could be supported. Mounir's doc addresses some of the things
 needed for this, and fuels the slightly ambitious hope of
 standardizing a mechanism making possible to implement multiple
 policies (or no policies).
 Let's see, but I wouldn't like to see it cut off this early :).

 [1]
 https://docs.google.com/a/intel.com/document/d/1sPNHXRy7tkj0F2gdTtRHj9oFRNhQkaQy2A9t-JnSvsU/preview

 Best regards,
 Zoltan





Re: Proposal for a Permissions API

2014-09-04 Thread Florian Bösch
On Thu, Sep 4, 2014 at 10:18 PM, Marcos Caceres mar...@marcosc.com wrote:

 This sets up an unrealistic straw-man. Are there any real sites that would
 need to show all of the above all at the same time?

Let's say you're writing a video editor; you'd like:

   - to get access to the geolocation API so that you can geotag the videos
   - to get access to the notifications API so that you can inform the user
   when rendering has finished
   - to get user media to capture material
   - to put a window in fullscreen (perhaps on a second monitor) or to view
   footage without other decorations

Of course it's a bit contrived, but it's an example of where we're heading.
APIs don't stop being introduced as of today, and some years down the
road I'm sure more APIs that require permissions will be introduced, which
moves an example like this from the realm of the unlikely to the pretty
common.


Re: Proposal for a Permissions API

2014-09-02 Thread Florian Bösch
I welcome this proposal because the permission dialog creep is certainly
worrying.

Opponents of some kind of permission management have pointed out that
collated dialogs tend to just get ignored by users and blindly approved (as
an example they list Android permission handling).

While that may be true to some extent, this, for me, isn't really about
educating users. Popping up a series of permission dialogs isn't any better
than a collated dialog (just more annoying to the user).

It's becoming sort of a dire situation, because permissions are required
today for:

   - Geo location API
   - Push API
   - Notifications API
   - Ability to store more than 5mb with WebSQL/IndexedDB/Filesystem
   - Use a webcam
   - Go into fullscreen
   - Capture the mouse pointer
   - Clipboard API
   - WebVR HMD access
   - and many more

Additionally, there are APIs which currently do not ask for permission, for
UX reasons, and implement different means of obtaining it. For instance the
gamepad API, which derives permission from the user interacting with a
gamepad-compatible device. These may benefit from a unified permission API.

There are also some challenges in permission handling due to legacy, where
the individualized way that APIs handle permissions can create
incompatibilities. For instance, the fullscreen API prompts the user for
permission, but it goes into fullscreen right away; the user then has to
dismiss the dialog with yes or no. On yes the dialog vanishes, on no
fullscreen is exited. The reason for this behavior has to do with two
contradictory requirements: fullscreen (as in video, for instance) is a very
common operation and it would be annoying from a UX point of view to make
the user confirm a prompt just to enter it. However, fullscreen is also a
security risk, because a malicious developer could pretend to be the browser
(by placing appropriate window decorations) and then trick the user into
entering confidential details.

As a short list of requirements, I think these tasks should be taken over by
a permission framework (a rough sketch of the query side follows the list):

   - dialog for the user to manage permissions (and their persistence)
   - query of the list of permissions that can be requested of the current
   context
   - query of the status of the permissions of the current context
   - polling the user for a set of new permissions
   - getting notified when permissions change during the runtime of a context
   outside of polling the user
   - dialog for the user to grant permissions when requested, either all in
   one go or individually, including the option to remember the choice
   - getting notified of the permission status as a consequence of a user poll
   - a way to reconcile the particular idiosyncrasies that currently exist
   in the variety of permission handling into a cohesive behavior that does
   not invalidate the original decision to implement the permission for a
   particular API in that fashion (otherwise the authors of these APIs will
   revolt, as the solution arrived at was often hard fought)
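
To make the query/notification points concrete, here is a rough sketch of
what that could look like, assuming a promise-based entry point roughly
along the lines of the straw man quoted below (names are illustrative, not
a shipped API at this point):

navigator.permissions.query({ name: 'geolocation' }).then(function (status) {
  // status.state would be something like 'granted', 'denied' or 'prompt'
  console.log('geolocation permission:', status.state);
  status.onchange = function () {
    // fires when the permission changes outside of an explicit user poll
    console.log('geolocation permission is now', this.state);
  };
});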



On Tue, Sep 2, 2014 at 11:10 PM, Dave Raggett d...@w3.org wrote:

 Hi Mounir,

 Have you considered making this return a promise, as per Nikhil's proposal:

https://github.com/w3c/push-api/issues/3#issuecomment-42997477

 p.s. I will bring your idea to the trust  permissions in the open web
 platform meeting, we're holding in Paris this week, see:

 http://www.w3.org/2014/07/permissions/

 Cheers,

Dave


 On 02/09/14 14:51, Mounir Lamouri wrote:

 TL;DR:
 Permissions API would be a single entry point for a web page to check if
 using API /foo/ would prompt, succeed or fail.

 You can find the chromium.org design document in [1].

 # Use case #

 The APIs on the platform are lacking a way to check whether the user has
 granted them. Except the Notifications API, there is no API that I know
 of that expose such information (arguably, somehow, the Quota API does).
 The platform now has a lot of APIs using permissions which is expressed
 as permission prompts in all browsers. Without the ability for web pages
 to know whether a call will prompt, succeeded or fail beforehand,
 creating user experience on par with native applications is fairly hard.

 # Straw man proposal #

 This proposal is on purpose minimalistic and only contains features that
 should have straight consensus and strong use cases, the linked document
 [1] contains ideas of optional additions and list of retired ideas.

 ```
 /* Note: WebIDL doesn’t allow partial enums so we might need to use a
 DOMString
   * instead. The idea is that new API could extend the enum and add their
   own
   * entry.
   */
 enum PermissionName {
 };

 /* Note: the idea is that some APIs would extend this dictionary to add
 some
   * API-specific information like a “DOMString accuracy” for an
   hypothetical
   * geolocation api that would have different accuracy granularity.
   */
 dictionary Permission {
required PermissionName name;
 };

 /* Note: the name 

Re: [Gamepad] Liveness of Gamepad objects

2014-04-30 Thread Florian Bösch
There are two aspects that should not be overlooked.

   1. Some events only make sense in unison. For instance the input of a
   2-axis knob. On many OS implementations, change events for each axis arrive
   separately in short succession. However, to an application programmer,
   getting first the X-axis change and then the Y-axis change may not make
   sense. A common result is that instead of a diagonal movement, a stepped
   movement is displayed. The technique many native application programmers
   employ for this kind of problem is to coalesce events they deem to belong
   together into one event.
   2. Input-to-output latency is a significant concern for input devices
   used to produce an output. One way to minimize the latency is called "time
   warp": most of the frame's tasks that are not directly influenced by the
   input are completed first, then the program idles until nearly the end of
   the frame, polls the input (which arrives fresh) and completes the rest of
   the (ideally constant-time) tasks just before the frame runs out (a rough
   sketch of this late-poll pattern follows the list).
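
A minimal sketch of that late-poll idea with the current polling-style
Gamepad API (simulateWorld, applyView and renderView are placeholders for
the application's own work, not real functions):

function frame() {
  requestAnimationFrame(frame);
  simulateWorld();                        // input-independent work first
  var pad = navigator.getGamepads()[0];   // poll as late as possible
  if (pad) {
    applyView(pad.axes[0], pad.axes[1]);  // view update from fresh input
  }
  renderView();                           // cheap, view-dependent pass last
}
requestAnimationFrame(frame);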


Re: [Gamepad] Liveness of Gamepad objects

2014-04-29 Thread Florian Bösch
I think both semantics are workable. I'd likely prefer the gamepad state to
be immutable from JS, because assigning state there is smelly. I'd also
prefer the option that incurs less GC overhead if possible. Beyond that, I
just think the implementations should be semantically and symbolically
identical.


On Tue, Apr 29, 2014 at 9:19 PM, Mark S. Miller erig...@google.com wrote:




 On Tue, Apr 29, 2014 at 11:07 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/29/14, 1:46 PM, Mark S. Miller wrote:

 How would either make GC observable?


 Consider the following code:

   navigator.getGamepads()[0].foo = 5;
   var intervals = 0;
   var id = setInterval(function() {
 ++intervals;
 if (navigator.getGamepads()[0].foo != 5) {
   alert(What happened after  + intervals +  intervals?);
   clearInterval(id);
 }
   }, 1000);

 In Chrome's current implementation, where getGamepads() returns a new
 object each time getGamepads()[0] is a new object each time this will
 consistently alert What happened after 1 intervals?.

 In Firefox's current implementation this will not alert at all unless the
 set of connected gamepads changes.

 In an implementation which brokenly GCed and lazily recreated the JS
 reflections of Gamepad objects, the alert could happen after some random
 number of intervals, depending on GC timing.


 I see. Let's not do that then ;).




 Does that help?

 -Boris




 --
 Cheers,
 --MarkM



Re: [gamepad] Haptic Feedback/Controller Vibration

2014-04-04 Thread Florian Bösch
(note that when I list an inconceivable amount of ridiculous device APIs to
add, it's meant as satire of the idea that you should make a specialized
API for every assemblage of sensors, motors and displays)


On Fri, Apr 4, 2014 at 4:45 PM, Florian Bösch pya...@gmail.com wrote:

 On Fri, Apr 4, 2014 at 4:35 PM, Kostiainen, Anssi 
 anssi.kostiai...@intel.com wrote:

 One way to spec that would be to make Vibration its own interface, and
 say GamePad implements Vibration. For more advanced use cases (borrowing
 one from your list):

 interface SteeringWheel {
   readonly attribute Vibration[] vibras;
 };


 I think that the idea of a list of vibrators is fine. I'm explicitly
 against adding a specific device interface for every conceivable device
 configuration (of which there are probably thousands). I'd like the API to
 be simple and non-monolithic, and to concentrate on the components,
 of which there are about half a dozen, 2 being already covered (buttons and
 axes) and a 3rd being what this thread is about (vibrators), such that more
 could be added in the future.


 Based on what I hear, it sounds like we'd likely need its own interface
 that is more capable than the current Vibration.

 I don't know what the Vibration specification specifies; as long as it's got
 speed, that seems fine.



Re: [gamepad] Haptic Feedback/Controller Vibration

2014-04-03 Thread Florian Bösch
On Thu, Apr 3, 2014 at 5:19 PM, Ted Mielczarek t...@mozilla.com wrote:

 Spec'ing standard rumble motors that are found on all modern controllers
 seems sensible. Spec'ing a way to access a microphone/speaker that's
 present on a controller seems sensible. I think anything more complicated
 than that is likely out of scope.


The gamepad API today supports joysticks, racing wheels, gamepads and a
variety of oddball devices. Do you want to restrict this to just gamepads?
If you do, you're still in trouble because:

   - The PS4 gamepad has a multi-touch pad (up to 2 touches), a built-in
   speaker and a configurable multicolor front light
   - The Steam controller featured a multi-touch pad and screen (not in the
   latest prototypes)
   - The Ouya controller has a single touch pad
   - The Razer Sabertooth comes with an integrated OLED screen

But even so, what do you want to end up with?

   - Joystick API
   - Racing Wheel API
   - Flight Simulator API
  - Sub API for pedals
  - Sub API for thrust quadrants
  - Sub API for instrument panels
  - Sub API for radio controls
  - Sub API for trim panel
  - Sub API for missile selection
  - Sub API for HUD knobs
   - Gamepad API
   - 3D mice API
   - Tablet API
   - Volume Knob API
   - Devices we don't know what they are yet API


Re: [gamepad] Haptic Feedback/Controller Vibration

2014-04-03 Thread Florian Bösch
Every controller is an assemblage of input and output hardware. It's vastly
easier to survey the kinds of components you'll encounter, which are:

   - Buttons (down or up)
   - Axes (scalar value)
   - Rumbles (speed)
   - Motors (negative/positive force and/or scalar value for position)
   - Screens (bitmap)
   - Lights (intensity and/or color)
   - Touchpads (touch events)
   - Text displays (text)

That's not a monolithic API; that's how every other game controller API
in existence on any platform works (like evdev, DirectInput, etc.). The
attempt to wrap specific use cases is what produces a monolithic monster API
that tries to be everything and the kitchen sink in the end.


On Thu, Apr 3, 2014 at 5:55 PM, Patrick H. Lauke re...@splintered.co.ukwrote:

 On 03/04/2014 16:43, Florian Bösch wrote:

 But even so, what do you want to end up with?


 What do YOU want to end up with? A single monolithic API that covers each
 imaginable type of controller (which may also feature output components
 like additional lights, OLED displays, etc)?

 P
 --
 Patrick H. Lauke

 www.splintered.co.uk | https://github.com/patrickhlauke
 http://flickr.com/photos/redux/ | http://redux.deviantart.com
 twitter: @patrick_h_lauke | skype: patrick_h_lauke




Re: [gamepad] Haptic Feedback/Controller Vibration

2014-04-03 Thread Florian Bösch
On Thu, Apr 3, 2014 at 8:07 PM, Ted Mielczarek t...@mozilla.com wrote:

  Note: DirectInput has been deprecated in favor of XInput, a much simpler
 API that maps directly to the Xbox 360 controller:

 http://msdn.microsoft.com/en-us/library/windows/desktop/ee417001%28v=vs.85%29.aspx


XInput is kinda useless for a wide variety of controllers.



 Trying to design an API that's everything to everyone is extremely hard
 and likely to produce unsatisfactory results. We chose to focus on the most
 useful subset of things that are common to all controllers. I believe the
 API we've spec'ed is useful in spite of not covering everything that exists
 in the world. I am not opposed to the idea of extending it to cover other
 common features of game controllers, but I do think we'll stick to concepts
 that are widely accepted (like vibration) and not bleeding-edge things
 supported by a single device (touchpads, colored LED). If these things gain
 traction then it makes sense to discuss spec'ing them. Otherwise you run
 the risk of spec'ing something that has only one hardware example, thus
 making the API hard to generalize to other devices that introduce similar
 (but not identical) features in the future.


I don't think you need to design an API that's everything to everyone. I
don't even think you need to be specific about the use case at all. The
concept of offering lists of axes and lists of buttons is, and should
remain, fairly generic and useful. It *today* supports joysticks, 3D
mice, the Microsoft Strategic Commander, etc.

You can add rumble support in the same way that you enumerate other
capabilities. A device might have one or two rumbles, so you offer a list
of rumblers. In the future you could add a list of feedback servos, a list
of touchpads, a list of screens and so on.
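
As an illustration only (the rumbles property and its set() method are
hypothetical, not part of the current draft), usage could then look roughly
like this:

var pad = navigator.getGamepads()[0];
if (pad && pad.rumbles && pad.rumbles.length > 0) {
  pad.rumbles[0].set(0.75);   // spin the first rumble motor at 75% speed
}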

Device class specificity in an API isn't a terribly good idea on the
support level. You can always prop up more specific APIs on top of the
underlying API. But if you fuck up the underlying API you make it hard for
the ecosystem to sort out convenient specific solutions.


Re: Proposal Virtual Reality View Lock Spec

2014-03-27 Thread Florian Bösch
Replied to Brandon Jones but not public, reposted below.

On Wed, Mar 26, 2014 at 7:18 PM, Brandon Jones bajo...@google.com wrote:

 As for things like eye position and such, you'd want to query that
 separately (no sense in sending it with every device), along with other
 information about the device capabilities (Screen resolution, FOV, Lens
 distortion factors, etc, etc.) And you'll want to account for the scenario
 where there are more than one device connected to the browser.


There isn't an easy way to describe the FOV and lens distortion in a few
numbers. People are starting to experiment with 220° Fresnel lens optics;
those will have a distortion that's going to be really weird. I think you'll
want to have a device identifier so you always have a fallback you can hack
together manually for devices that couldn't spec themselves into
distortion factors.


 Also, if this is going to be a high quality experience you'll want to be
 able to target rendering to the HMD directly and not rely on OS mirroring
 to render the image. This is a can of worms in and of itself: How do you
 reference the display? Can you manipulate a DOM tree on it, or is it
 limited to WebGL/Canvas2D? If you can render HTML there how do the
 appropriate distortions get applied, and how do things like depth get
 communicated? Does this new rendering surface share the same Javascript
 scope as the page that launched it? If the HMD refreshes at 90hz and your
 monitor refreshes at 60hz, when does requestAnimationFrame fire? These are
 not simple questions, and need to be considered carefully to make sure that
 any resulting API is useful.


The OS-mirror split-screen solution has a lot going for it.
Stereoscopic/multi-head rendering is pretty much a driver nightmare, and
it's slow as hell most of the time.


 Even if your code is rendering at a consistent 60hz that means you're
 seeing ~67ms of lag, which will result in a motion-sickness-inducing
 swimming effect where the world is constantly catching up to your head
 position. And that's not even taking into account the question of how well
 Javascript/WebGL can keep up with rendering two high resolution views of a
 moderately complex scene, something that even modern gaming PCs can
 struggle with.

I believe the lag to be substantially higher than 67ms in browsers.


 That's an awful lot of work for technology that, right now, does not have
 a large user base and for which the standards and conventions are still
 being defined. I think that you'll have a hard time drumming up support for
 such an API until the technology becomes a little more widespread.

The devkit 1 sold around 7000 units over the campaign, at a rate of about
one order every 3-4 minutes. The devkit 2, which only went on sale 7 days
ago, was announced the day before yesterday to have sold 75'000 units, a
rate of around one every 5-6 seconds. I believe that's more devices than
Google Glass shifted in its entire multi-year history so far...


On Wed, Mar 26, 2014 at 7:18 PM, Brandon Jones bajo...@google.com wrote:

 So there's a few things to consider regarding this. For one, I think your
 ViewEvent structure would need to look more like this:

 interface ViewEvent : UIEvent {
 readonly attribute Quaternion orientation; // Where Quaternion is 4
 floats. Prevents gimble lock.
 readonly attribute float offsetX; // offset X from the calibrated
 center 0 in millimeters
 readonly attribute float offsetY; // offset Y from the calibrated
 center 0 in millimeters
 readonly attribute float offsetZ; // offset Z from the calibrated
 center 0 in millimeters
 readonly attribute float accelerationX; // Acceleration along X axis
 in m/s^2
 readonly attribute float accelerationY; // Acceleration along Y axis
 in m/s^2
 readonly attribute float accelerationZ; // Acceleration along Z axis
 in m/s^2
 }

 You have to deal with explicit units for a case like this and not
 clamped/normalized values. What would a normalized offset of 1.0 mean? Am I
 slightly off center? At the other end of the room? It's meaningless without
 a frame of reference. Same goes for acceleration. You can argue that you
 can normalize to 1.0 == 9.8 m/s^2 but the accelerometers will happily
 report values outside that range, and at that point you might as well just
 report in a standard unit.

 As for things like eye position and such, you'd want to query that
 separately (no sense in sending it with every device), along with other
 information about the device capabilities (Screen resolution, FOV, Lens
 distortion factors, etc, etc.) And you'll want to account for the scenario
 where there are more than one device connected to the browser.

 Also, if this is going to be a high quality experience you'll want to be
 able to target rendering to the HMD directly and not rely on OS mirroring
 to render the image. This is a can of worms in and of itself: How do you
 reference the display? Can you manipulate a DOM tree on it, or is it
 

Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-24 Thread Florian Bösch
On Mon, Feb 24, 2014 at 1:16 AM, Thibaut Despoulain
thib...@artillery.comwrote:

 I've written a test for this here:
 http://codeflow.org/issues/software-cursor.html

 My observation from testing on linux is that I can't distinguish latency
 for the software cursor from the OS cursor (or not by much anyway) in
 google chrome. In firefox however there's noticable lag. Mileage may vary
 for other platforms.


 This is true, but sadly your test runs on an empty animation frame. If
 your main thread is doing a lot of work already (barely hitting the 60fps
 mark, as it is the case for demanding games), the lag will be much more
 perceptible as you will most likely drop a frame every now and then.


I regard dropping below 60fps as an application defect for a realtime
interactive application where the user's input is time-critical.

*Reasons things drop below 60fps*

   - You trigger GC-ing
   - Your JS main thread takes up more than 16ms
   - Your GPU processing time takes up more than 16ms

*Detecting the problem*

   - You'd detect GC-ing by measuring the rate at which
   requestAnimationFrame is fired. Gaps that appear periodically are typically
   GC-related.
   - JS main-thread time can be accurately measured with performance.now()
   - GPU time will be measurable with EXT_disjoint_timer_query
   http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/
   (a rough sketch of the first two checks follows this list)
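
A minimal sketch of those two checks (update, render and reduceDetail are
placeholders for the application's own work, not real functions):

var last = 0;
var budget = 12;   // ms of main-thread time allowed per frame
function frame(now) {
  requestAnimationFrame(frame);
  if (last && now - last > 20) {
    // a gap well above 16.7ms; if these show up periodically, suspect GC
    console.warn('dropped frame, gap of ' + (now - last).toFixed(1) + 'ms');
  }
  last = now;
  var start = performance.now();
  update();
  render();
  if (performance.now() - start > budget) {
    reduceDetail();   // e.g. dial down LOD / quality
  }
}
requestAnimationFrame(frame);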

*Solving the problem*

   - Incremental GCs for JS will help. But they're not here yet, so one thing
   you can do right now is to be very careful about the use of [],
   {}, function(){}, new, DOM, createElement, innerHTML etc. It is possible to
   eliminate GC-caused frame drops almost entirely that way (a minimal
   object-pool sketch follows this list).
   - Set yourself a time budget for the JS main thread (well below
   16ms) that you'll want to hit even on the lower end of the expected
   hardware. First try to keep everything below that budget in tests. If
   something simply cannot be done in that time, split it up into multi-frame
   processing or shuffle it out to a web worker. Finally, if you still run into
   the problem, keep constant tabs on the JS time per frame, and when you run
   over it reduce some processing (as in LOD, quality degradation etc.)
   - EXT_disjoint_timer_query is not yet available, but once it becomes
   available you can use it to perform accurate testing on the lower end of
   the hardware spectrum to identify rendering issues. And you can also use it
   to dynamically react to pending performance issues by dialing down the LOD.
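
A minimal object-pool sketch for the first point: reuse the same objects
every frame instead of allocating fresh {} or [] instances, which keeps the
GC quiet (names here are illustrative, not from any library):

var pool = [];
var used = 0;
function acquireVec() {
  if (used === pool.length) pool.push({ x: 0, y: 0 });
  return pool[used++];
}
function releasePool() {   // call once per frame, after the frame's work
  used = 0;
}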

*Critical features vendors have to offer for this use case, and soon*

   - Showing the OS cursor, but confining it to a bounding region as
   determined by a DOM element (such as body, or other elements)
   - Implementing incremental GCs
   - Implementing EXT_disjoint_timer_query
   - Reducing input-to-output lag

I will point out that I have consistently called for fixes in these areas
during the last 2-3 years in the various discussions/specs/tickets on these
topics. So it's not surprising to me that these issues surface now that
somebody actually tries to make a real-world, real-time application in a
browser. I'm very glad that Artillery is doing the fine work of pushing this
boundary. I'd also like to ask the vendors to go the last mile as quickly
as feasible. Solving these issues is, in the end, not just crucial for
realtime applications. Web applications get a lot of criticism for not
being able to compete in quality with native applications. This stems in
large part from the fact that it is hard to make web applications as
responsive and snappy as native applications.


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-24 Thread Florian Bösch
On Mon, Feb 24, 2014 at 5:18 PM, Glenn Maynard gl...@zewt.org wrote:

 It's not the application's job to keep the mouse cursor responsive, it's
 the system's.  Hiding the system mouse cursor and drawing one manually is
 always a bad idea.


That's a wonderful argument. And now we look at an FPS game, or an Oculus
Rift title, or something that requires something other than a picture cursor,
like, say, an on-terrain selection, a bounding-box selection, a 3D
ray-picking selection/cursor, or anything like that.

Always a bad idea, sure. How about you think about that again, hm?


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-24 Thread Florian Bösch
On Mon, Feb 24, 2014 at 5:40 PM, Glenn Maynard gl...@zewt.org wrote:

 (More reasons: it's very likely that you'll end up implementing a cursor
 with different motion and acceleration, a different feel, than the real
 mouse cursor.  It also breaks accessibility features, like mouse trails.)

Oh I agree, if your use case fits a mouse cursor of the style that the OS
offers, it's definitely preferable to have the OS mouse cursor. And I
distinctly remember arguing, prior to the pointerlock specification, that it
would be immensely useful to have the ability to show the cursor but
capture the pointer inside an area. And I was pointed, at the time, to the
solution of drawing a software mouse cursor... So now that we're having this
discussion, I found it appropriate to point out that possibility (drawing
the software cursor), which met some, let's call them, difficulties.
Now, I also just made a nice list which mentions 4 things to do, among
them adding this ability. I consider this settled, because I think we all
agree on it. We might just need to agree on the limitations as they
pertain to relative mouse movement reporting when you're showing the OS
cursor. And I think that's relatively easy to agree on: you can't rely on
relative motion outside of the constrained area if you show the OS cursor.


 This doesn't seem to relate to the discussion, which is about mouse
 pointers.

It is about input-to-output, essentially. Input as it comes from your
pointing device, and output as it reflects in your application. The OS
mouse cursor is but one example of such a reflection. There are many more
use cases that would benefit from low input-to-output latency, one
example of which is the variety of software cursors, which may take the
shape of a picture or something else entirely. But moreover there are
use cases such as view controls that have the exact same need, notably
FPS shooters, for instance, or Oculus Rift titles and so forth. So it
matters a great deal to them, because you can alleviate some of the issues
while simultaneously solving a whole host of other issues.


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-24 Thread Florian Bösch
On Mon, Feb 24, 2014 at 6:47 PM, Brendan Eich bren...@secure.meer.netwrote:

 Glenn Maynard wrote:

 It's not the application's job to keep the mouse cursor responsive, it's
 the system's.  Hiding the system mouse cursor and drawing one manually is
 always a bad idea.


 Agreed!


Like I say, some use cases are fine with OS cursors. But that doesn't mean
that vendors are somehow absolved from improving input-to-output latency
issues, even if pointerlock is updated to allow showing the OS cursor, which
I'm all for. There are a lot of use cases that involve pointing devices,
pointing metaphors, or view controls, virtual helmets, and so forth, that
cannot function properly with a high input-to-output latency. For this
reason it's imperative not only to address the ability to make the OS
cursor visible, but also to continue working on low-latency input-to-output
handling.


 In the same vein, programmers cannot avoid GC pauses without relying on
 pause-free or at least incremental GC (which BTW some browsers' JS engines
 have already, e.g., SpiderMonkey in Firefox), or as a real alternative,
 cross-compiling C or C++ for example via Emscripten, to allocate heap
 memory from a typed array.

Florian, your goals are good, but the means to those ends must involve
 better runtimes or compilers -- not on JS programmers working harder to
 avoid GC while still somehow allocating objects frequently and even
 implicitly.


I agree that things aren't today how they should be for realtime
applications with GCs. And it's true that GCs are getting better. But as
things stand today, a JS programmer has to work harder to make a
glitch-, stutter- and jerk-free realtime application. A better GC can improve
this situation. However, that doesn't mean that you can forget about GCing
and frame budgets. A realtime programmer will always have to be conscious
not to overload the GC, even if it's incremental. When an incremental GC
cannot get rid of garbage faster than it is produced, it has to resort to
more drastic pauses to rectify the situation. Fortunately, the act of being
GC-conscious for a non-incremental GC and for an incremental one is very
similar: you try to avoid triggering it. So you can start that work today;
it will not be in vain once GCs get better in some far-flung future.


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-24 Thread Florian Bösch
On Mon, Feb 24, 2014 at 7:07 PM, Vincent Scheib sch...@google.com wrote:

 Windows has ClipCursor() and Linux has XGrabPointer(). Once we know we can
 implement the functionality, we can discuss how to express this in an API.


Would using Quartz CGWarpMouseCursorPosition work, where you'd clamp the
passed position into the desired rectangle?

https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/Quartz_Services_Ref/Reference/reference.html#//apple_ref/c/func/CGWarpMouseCursorPosition


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-24 Thread Florian Bösch
Right, so you'd CGAssociateMouseAndMouseCursorPosition(false) and then use
CGWarpMouseCursorPosition or CGDisplayMoveCursorToPoint to move it where
you want it to be, but clamped inside the rect. As long as you keep pumping
the event loop that does this separately (as fast as possible) it shouldn't
be perceptibly different from an OS cursor.


On Mon, Feb 24, 2014 at 7:41 PM, Vincent Scheib sch...@google.com wrote:




 On Mon, Feb 24, 2014 at 10:37 AM, Florian Bösch pya...@gmail.com wrote:

 On Mon, Feb 24, 2014 at 7:07 PM, Vincent Scheib sch...@google.comwrote:

 Windows has ClipCursor() and Linux has XGrabPointer(). Once we know we
 can implement the functionality, we can discuss how to express this in an
 API.


 Would using Quarz CGWarpMouseCursorPosition work where you'd clamp the
 passed position into the desired rectangle?


 https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/Quartz_Services_Ref/Reference/reference.html#//apple_ref/c/func/CGWarpMouseCursorPosition



 I believe no, because it would allow the pointer to escape the region
 before being warped back, permitting escape if clicked at that time as well.



Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-24 Thread Florian Bösch
On Mon, Feb 24, 2014 at 8:21 PM, Glenn Maynard gl...@zewt.org wrote:

 I think that going fullscreen is the right approach, since locking the
 mouse into the window while not fullscreen is really weird and rare, at
 least in Windows.

It's quite common for games to have a cursor, grab the pointer and not be
fullscreen. Of course most games that allow this use software cursors,
and apparently don't have many problems with it.


  By going fullscreen, this hooks into the same UI design to allow the user
 to escape.  Even if this was supported in a window, there'd still have to
 be some UI to tell the user how to exit, which could end up having the same
 problem.

A side note: if you have more than one monitor, going fullscreen will
obviously not lock the pointer on screen. It's perhaps not terribly common to
have more than one monitor, but it's also not that rare.


I've been annoyed by the edge-of-screen browser behavior too.  It's a part
 of the screen where you might want to put something, like navigation UI.  I
 haven't come up with a better solution, though.  I don't think having a
 stronger fullscreen mode that asks the user for more permission will fly.
  Browsers try very hard to avoid asking for special permissions--people
 will just agree without reading it, then won't know how to escape from the
 app.

 I think that for your use case of edge scrolling, having a fullscreen
 notice appear at the top is OK (if a little ugly), as long as it's
 transparent to mouse events so you can still see the mouse moving around
 (or else you might see the mouse move to 20 pixels from the top, then never
 see it actually reach the top, so you'd never start scrolling).  Menus and
 address bars appearing seems like a bug.  That makes sense for the
 fullscreen you get by hitting F11 in Windows or Command-Shift-F in OSX, but
 application fullscreen should just act like a game, and keep as much as
 possible out of the way.


For a fast-paced game where you might click and select and do whatnot,
having a slide-down from the top of the window when you hit the border is
not acceptable. People will click it accidentally a lot, for instance when
doing a selection. Not being able to offer a fullscreen button in the game
is also bad UX. You end up with explanations for the user like "Please press
F11 to get into fullscreen". You should never have to explain to a user
what ritual he has to perform if you can instead trigger that action
without the ritual.


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-23 Thread Florian Bösch
On Sun, Feb 23, 2014 at 1:57 AM, Thibaut Despoulain thib...@artillery.com
 wrote:

 The issue with pointerlock is that it requires the app to draw its own
 cursor instead of the OS cursor

I fully agree with the motivation; it is usually preferable to give the user
an OS-themed cursor (not always, but often).


- Significant pointer lag between movement input and actual movement
on screen compared to an OS-drawn cursor (yes, even at 60fps),

 I've written a test for this here:
http://codeflow.org/issues/software-cursor.html

My observation from testing on Linux is that I can't distinguish the latency
of the software cursor from the OS cursor (or not by much, anyway) in
Google Chrome. In Firefox, however, there's noticeable lag. Mileage may vary
on other platforms.



- Inability to use DOM elements with mouse events for a game
overlay/HUD.

The test I've written here, http://codeflow.org/issues/software-cursor.html,
also tests mouse event synthesis (as hinted at by an example in the
pointerlock spec), and it works satisfactorily in Chrome and Firefox on
Linux. To get all the subtleties right, one would also have to synthesize
most other mouse events, I guess (mouseover, mouseout, mousemove etc.).


On Sun, Feb 23, 2014 at 2:15 AM, Brandon Jones bajo...@google.com wrote:

 Chrome a software cursor will experience visible lag, which these types of
 games are highly sensitive to.

Not as much lag as I expected, see this test:
http://codeflow.org/issues/software-cursor.html


 I'm not sure about the latency of Firefox or others

Pretty bad, see this test: http://codeflow.org/issues/software-cursor.html



 , but in general a hardware cursor is preferable any time you have a
 visible cursor. As a result this is still a use case worth considering.

It's not always preferable (sometimes people want custom cursors). But in
general it is preferable.


When providing the OS cursor there are several issues to be addressed:

   - If you are in fullscreen and capture the cursor to the viewport but show
   the OS cursor, that does *NOT* mean you'll want the annoying slide-down
   that some browsers provide if you hit the top edge. If you wanted that,
   you'd never have restricted the cursor in the first place. I would suggest
   as a resolution that if somebody requests cursor restriction, and gets it,
   the slide-down is never shown regardless of whether the OS cursor is shown
   or not.
   - It should be possible to show and hide the OS cursor separately, but
   retain pointerlock. The reason for that is that you might want to avoid
   synthesizing your own cursor, but you may also want to keep pointerlock
   because your game switched, for instance, from an FPS interaction mode to
   some UI interaction. I would suggest this could be resolved with an
   additional API call of the form document.hideCursor() and
   document.showCursor() or something similar. If the sequence is
   myContainer.requestPointerLock(); document.showCursor(); the cursor is
   shown, but confined to the container. Outside of pointerlock, these calls
   would have no effect.


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-23 Thread Florian Bösch
On Sun, Feb 23, 2014 at 9:55 AM, Florian Bösch pya...@gmail.com wrote:


- Inability to use DOM elements with mouse events for a game
overlay/HUD.

 The test I've written here
 http://codeflow.org/issues/software-cursor.html also tests mouse event
 synthesis (as hinted at by an example in the pointerlock spec) and it works
 satisfactory in chrome and firefox on linux. To get all subtleties right
 one would also have to do most other mouse events I guess (mouseover,
 mouseout, mousemove etc.).


I did some more testing on this. Synthetic mouse events are limited in that
they cannot trigger some state, like hover styles. Other things, like
focus/blur, require special handling. It's also left to the synthetic event
creator to synthesize compound events like over/out/enter/leave and their
semantics, as supplying the primitive mouse events (click, mousemove) doesn't
automatically synthesize the others.
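
A rough sketch of routing pointer-locked movement back into DOM events
(ignoring the vendor prefixes of the day; the compound events and default
actions mentioned above still have to be handled separately):

var cx = 0, cy = 0;
document.addEventListener('mousemove', function (e) {
  if (!e.isTrusted) return;                 // ignore our own synthetic events
  if (!document.pointerLockElement) return;
  cx += e.movementX;
  cy += e.movementY;
  var target = document.elementFromPoint(cx, cy);
  if (target) {
    target.dispatchEvent(new MouseEvent('mousemove', {
      bubbles: true,
      clientX: cx,
      clientY: cy
    }));
  }
});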


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-23 Thread Florian Bösch
On Sun, Feb 23, 2014 at 4:18 PM, Brandon Jones bajo...@google.com wrote:

 - it's possible to theme the OS cursor using custom images with CSS.
 https://developer.mozilla.org/en-US/docs/Web/CSS/cursor/url

Although that doesn't absolve vendors from fixing the latency issue even if
native pointers were to be made available during pointerlock. The reason
is that cursors come in more flavors than an image. For example, they could
be some variety of 3D-rendered representation useful for the game in
question.


 - The reason the cursor is hidden when the pointer is locked is that some
 OSes don't have the ability to report relative mouse movement correctly at
 screen edges. This requires the cursors to constantly be reset to the
 center of the screen, which obviously would look strange if the cursor was
 visible.

Isn't that only a concern if you want to capture the cursor, not when you
display the OS cursor?


 - You already mentioned some issues with synthetic mouse events, but
 unfortunately it's worse than you suspect. For example: sending a synthetic
 click event to a checkbox doesn't actually change it's checked state. (Not
 the last time I tried anyway) select controls also have a hard time with
 synthetic events, and there's a whole host of other sub rely broken things.
 :(

Is there a motivation not to make it work? Clickjacking?


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-22 Thread Florian Bösch
On Sat, Feb 22, 2014 at 11:22 PM, Florian Bösch pya...@gmail.com wrote:

 Caveat, I think Firefoxes implementaiton of the Pointerlock API might not
 follow the specification yet to make it possible to have pointerlock
 without fullscreen.


Just checked this against a test I wrote a while ago,
http://codeflow.org/issues/pointerlock-test.html, and consulted the
pointerlock stats for IE. You can get pointerlock in both Firefox and
Chrome without fullscreen (you can also get pointerlock and fullscreen
simultaneously). IE seems to support pointerlock in versions that have
WebGL, but they seem to have added it only recently, so support is still low
(32%).


Re: [fullscreen] Problems with mouse-edge scrolling and games

2014-02-22 Thread Florian Bösch
Pointerlock should solve these problems in the following fashion:

   - When the user clicks into the app, request pointerlock
   - Use it to give the user a cursor drawn by you
   - That way you can keep the interaction inside your game and accurately
   detect borders etc. (a rough sketch follows this list)
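
A rough sketch of that approach (ignoring the vendor prefixes of the day;
canvas and drawCursor stand in for the application's own element and
rendering):

canvas.addEventListener('click', function () {
  canvas.requestPointerLock();
});

var cx = 0, cy = 0;
document.addEventListener('mousemove', function (e) {
  if (document.pointerLockElement !== canvas) return;
  cx = Math.min(Math.max(cx + e.movementX, 0), canvas.width);
  cy = Math.min(Math.max(cy + e.movementY, 0), canvas.height);
  // drawCursor(cx, cy) is the application's own cursor rendering; hitting
  // cx === 0 or cx === canvas.width is the edge-scroll trigger
});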

I run http://webglstats.com/ and it also records the availability of
pointerlock. This is relative to everybody who has WebGL, and it currently
stands at 94.1%.

Caveat: I think Firefox's implementation of the Pointerlock API might not
yet follow the specification in making it possible to have pointerlock
without fullscreen. I know that it's possible to get pointerlock in Google
Chrome without fullscreen.

Links:

   - http://www.html5rocks.com/en/tutorials/pointerlock/intro/
   - http://www.w3.org/TR/pointerlock/




On Sat, Feb 22, 2014 at 2:00 AM, Ian Langworth i...@artillery.com wrote:

 Hi everyone,

 We're building a console- and native-quality game in the browser using
 JavaScript and WebGL. You can see a very early version of the game in this
 video: https://youtu.be/NiCy5igO9-I . It's a realtime strategy (RTS)
 game, like StarCraft[1] or Command  Conquer[2], and moving the cursor to
 the edge of the screen is the primary way that users move around the map.

 We have a single requirement: Moving the cursor to, and possibly past, the
 edge of the window or screen should pan the camera in that direction.

 In windowed mode there are a couple of problems: Players might click
 outside the window accidentally, pixels outside the window are wasted
 screen real estate, mousemove events aren't fired when the cursor is
 outside the window, and sometimes the cursor moves so quickly that it
 misses the area of the window where we do edge detection.

 The better option is to go fullscreen with the Fullscreen API, but this
 has problems as well. There are certain behaviors that occur when moving
 the cursor to the top and bottom of the screen, which I've illustrated in
 the following videos using a quick edge-scrolling demo[3].. (Please ignore
 the problems with cursor detection in the Windows videos. It only occurred
 while the screen recording software was active.):

Chrome, Windows 7
https://www.dropbox.com/s/t9kyo8s5am76ezg/chrome%20win7%20edges.avi
The fullscreen notice appears every time the cursor is moved to the top.

Chrome, Mac OS X
https://www.dropbox.com/s/ke1mr5te2dwgvor/chrome%20osx%20edges.mov
A poor experience with the menu and address bars appearing at the top
 and Dock appearing at the bottom.

Firefox, Windows 7

 https://www.dropbox.com/s/iku8croaphsgcwd/firefox%20win7%20edges.avi
No problems with the fullscreen experience!

Firefox, Mac OS X
https://www.dropbox.com/s/0bkdx71ir0yhw88/firefox%20osx%20edges.mov
Menu bar appears at top, along with some mysterious window chrome, and
 the Dock appears at the bottom as well.

 These behaviors are to remind the user about fullscreen and lessen the
 chance of phishing. But, as you might imagine, they are troublesome when
 you're in the heat of battle and are trying to crush your opponent. I would
 like to solicit the list for opinions on how we can improve the fullscreen
 UI for games and give players the best experience possible. Some options
 that come to mind:

   (A) Provide an alternate fullscreen API or option which provides a more
 complete experience, but with more dire warnings to the user. For example,
 pass in a flag to requestFullscreen() which gives the page full control
 over the screen (no fullscreen reminder notifications, menu bars, or Docks)
 and even the keyboard (so games can use the Escape key), but entering this
 mode presents a much more intense warning to the user like the invalid
 certificate warnings.

   (B) Maintain the current API, but put an option in the preferences or
 flags which makes the browser's fullscreen mode more complete and without
 interruption. Hardcore gamers will likely accept this extra step if it
 means getting an optimal experience.

   (C) Propose a new Mouse-Edge Detection API, which might solve these
 problems and provide games with better cursor detection overall.

 I appreciate any and all feedback.

 [1] http://youtu.be/Qb0VzbxdP4U?t=10m14s

 [2] http://youtu.be/l41hG-fVDN4?t=3m34s

 [3] http://jsfiddle.net/statico/F8sjw/show/




Re: [Gamepad] spec status

2013-05-08 Thread Florian Bösch
I think the user-unfriendliness derives from the fact that you can't open a
page which you've played before and have it just work. Maybe the UA could
remember the devices you enabled?


Re: ZIP archive API?

2013-05-07 Thread Florian Bösch
On Tue, May 7, 2013 at 8:09 AM, Jonas Sicking jo...@sicking.cc wrote:

  You're arguing for allowing accessing files inside ZIPs by URL, which
 means
  you're going to have to do the work anyway, since you'd be able to
 create a
  blob URL, reference a file inside it using XHR, and get a Blob as a
 result.
  This is a small subset of that.

 No, the work to write and maintain an API for ZIP
 compress/decompression is pretty orthogonal, implementation-wise, to a
 protocol handler for ZIP decompression.


In order to implement zip you would use one of the ready-made libraries
supporting it, such as libzip (http://www.nih.at/libzip/index.html) or
minizip (http://www.winimage.com/zLibDll/minizip.html).

Both libraries (and any other library you might encounter) already define
an API and an implementation. In order to support ZIP URLs you would use
the API. In order to support a JS zip API, you would expose the API. These
are not orthogonal; they are correlated.


Re: ZIP archive API?

2013-05-06 Thread Florian Bösch
The main reason to use an archive (other than the space savings), for me, is
to be able to transfer the tens of thousands of small items that go into
producing WebGL applications of non-trivial scope.


On Mon, May 6, 2013 at 1:27 PM, Robin Berjon ro...@w3.org wrote:

 On 03/05/2013 21:05 , Florian Bösch wrote:

 It can be implemented by a JS library, but the three reasons to let the
 browser provide it are Convenience, speed and integration.


 Also, one of the reasons we compress things is because they're big.*
 Unpacking in JS is likely to mean unpacking to memory (unless the blobs are
 smarter than that), whereas the browser has access to strategies to
 mitigate this, e.g. using temporary files.

 Another question to take into account here is whether this should only be
 about zip. One of the limitations of zip archives is that they aren't
 streamable. Without boiling the ocean, adding support for a streamable
 format (which I don't think needs be more complex than tar) would be a big
 plus.



 * Captain Obvious to the rescue!


 --
 Robin Berjon - http://berjon.com/ - @robinberjon



Re: ZIP archive API?

2013-05-03 Thread Florian Bösch
I'm interested in a JS API that does the following:

Unpacking:
- Receive an archive from a Dataurl, Blob, URL object, File (as in
filesystem API) or Arraybuffer
- List its content and metadata
- Unpack members to Dataurl, Blob, URL object, File or Arraybuffer

Packing:
- Create an archive
- Put in members passing a Dataurl, Blob, URL object, File or Arraybuffer
- Serialize archive to Dataurl, Blob, URL object, File or Arraybuffer

To avoid the whole worker/proxy thing and to allow authors to selectively
choose how they want to handle the data, I'd like to see synchronous and
asynchronous versions of each. I'd make synchronicity an argument/flag or
something, to avoid API clutter like packSync, packAsync, writeSync,
writeAsync, and rather use write(data, callback|boolean).

- Python's zipfile API is OK, except the getinfo/setinfo stuff is a bit over
the top: http://docs.python.org/3/library/zipfile.html
- Python's tarfile API is less cluttered and easier to use:
http://docs.python.org/3/library/tarfile.html
- zip.js isn't really usable as it doesn't support the full range of types
(Dataurl, Blob, URL object, File or Arraybuffer) and for asynchronous
operation needs to rely on a worker, which is bothersome to set up:
http://stuk.github.io/jszip/

My own implementation of the tar format only targets array buffers and
works synchronously, as in:

var archive = new TarFile(arraybuffer);
var memberArrayBuffer = archive.get('filename');
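
Not the actual TarFile implementation, just a sketch of the same idea,
assuming plain ustar headers (512-byte header, name at offset 0, octal size
at offset 124, data padded to 512-byte blocks, zero block terminating the
archive):

function parseTar(arraybuffer) {
  var bytes = new Uint8Array(arraybuffer);
  var members = {};
  var offset = 0;
  function str(start, length) {
    var end = start;
    while (end < start + length && bytes[end] !== 0) end++;
    return String.fromCharCode.apply(null, bytes.subarray(start, end));
  }
  while (offset + 512 <= bytes.length) {
    var name = str(offset, 100);
    if (!name) break;   // reached the terminating zero blocks
    var size = parseInt(str(offset + 124, 12).trim(), 8) || 0;
    members[name] = bytes.subarray(offset + 512, offset + 512 + size);
    offset += 512 + Math.ceil(size / 512) * 512;
  }
  return members;
}

var members = parseTar(arraybuffer);
var memberBytes = members['filename'];   // a Uint8Array view into the buffer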



On Fri, May 3, 2013 at 2:37 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, May 2, 2013 at 1:15 AM, Paul Bakaus pbak...@zynga.com wrote:
  Still waiting for it as well. I think it'd be very useful to transfer
 sets
  of assets etc.

 Do you have anything in particular you'd like to see happen first?
 It's pretty clear we should expose more here, but as with all things
 we should do it in baby steps.


 --
 http://annevankesteren.nl/



Re: ZIP archive API?

2013-05-03 Thread Florian Bösch
It can be implemented by a JS library, but the three reasons to let the
browser provide it are Convenience, speed and integration.

Convenience is the first reason: browsers by and large already have complete
bindings to compression algorithms and archive formats, so letting the
browser simply expose the software it already ships makes better sense than
requiring every JS user to supply their own version.

Speed may not matter too much on some platforms, but it matters a great deal
on underpowered devices such as mobiles.

Integration is where support for archives goes beyond being an API: URLs (in
link.href, script.src, img.src, iframe.src, audio.src, video.src, css url(),
etc.) could point into an archive. This cannot be done in JS.
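
(A minimal sketch of the JS-only workaround today, assuming an archive object
like the TarFile above and a hypothetical member name: every member has to be
unpacked, wrapped in a Blob and handed out as an object URL before markup can
reference it. Native integration would let URLs point into the archive
directly.)

var memberArrayBuffer = archive.get('textures/wall.png'); // hypothetical member
var blob = new Blob([memberArrayBuffer], {type: 'image/png'});
var img = document.createElement('img');
img.src = URL.createObjectURL(blob); // object URL stands in for a real archive URL
document.body.appendChild(img);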



On Fri, May 3, 2013 at 8:04 PM, Jonas Sicking jo...@sicking.cc wrote:

 The big question we kept running up against at Mozilla is why couldn't
 this simply be implemented as a JS library?

 If performance is the argument we need to back that up with data.

 / Jonas
 On May 3, 2013 10:51 AM, Paul Bakaus pbak...@zynga.com wrote:

  Hi Anne, Florian,

  I think the first baby step, or MVP, is the unpacking that Florian
 mentions below. I would definitely like to have the API available on both
 workers and normal context.

  Thanks,
 Paul

   From: Florian Bösch pya...@gmail.com
 Date: Fri, 3 May 2013 14:52:36 +0200
 To: Anne van Kesteren ann...@annevk.nl
 Cc: Paul Bakaus pbak...@zynga.com, Charles McCathie Nevile 
 cha...@yandex-team.ru, public-webapps WG public-webapps@w3.org,
 Andrea Marchesini amarches...@mozilla.com
 Subject: Re: ZIP archive API?

  I'm interested in a JS API that does the following:

  Unpacking:
 - Receive an archive from a Dataurl, Blob, URL object, File (as in
 filesystem API) or Arraybuffer
 - List its content and metadata
 - Unpack members to Dataurl, Blob, URL object, File or Arraybuffer

  Packing:
 - Create an archive
 - Put in members passing a Dataurl, Blob, URL object, File or Arraybuffer
 - Serialize archive to Dataurl, Blob, URL object, File or Arraybuffer

  To avoid the whole worker/proxy thing and to allow authors to
 selectively choose how they want to handle the data, I'd like to see
 synchronous and asynchronous versions of each. I'd make synchronicity an
 argument/flag or something to avoid API clutter like packSync, packAsync,
 writeSync, writeAsync, and rather like write(data, callback|boolean).

  - Python's zipfile API is ok, except the getinfo/setinfo stuff is a bit
 over the top: http://docs.python.org/3/library/zipfile.html
 - Python's tarfile API is less cluttered and easier to use:
 http://docs.python.org/3/library/tarfile.html
 - zip.js isn't really usable as it doesn't support the full range of
 types (Dataurl, Blob, URL object, File or Arraybuffer), and for asynchronous
 operation it needs to rely on a worker, which is bothersome to set up:
 http://stuk.github.io/jszip/

  My own implementation of the tar format only targets array buffers and
 works synchronously, as in.

  var archive = new TarFile(arraybuffer);
 var memberArrayBuffer = archive.get('filename');



 On Fri, May 3, 2013 at 2:37 PM, Anne van Kesteren ann...@annevk.nlwrote:

 On Thu, May 2, 2013 at 1:15 AM, Paul Bakaus pbak...@zynga.com wrote:
  Still waiting for it as well. I think it'd be very useful to transfer
 sets
  of assets etc.

  Do you have anything in particular you'd like to see happen first?
 It's pretty clear we should expose more here, but as with all things
 we should do it in baby steps.


 --
 http://annevankesteren.nl/





Re: [Gamepad] spec status

2013-05-02 Thread Florian Bösch
I'd like to note that the current semantic (in Google Chrome) of "press a
button to connect the device" is not very user friendly. Not all buttons
register as buttons (some register as axes) and won't do anything. Some
devices (like the Oculus Rift) are also devoid of buttons to press.
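
(A minimal polling sketch, assuming the navigator.getGamepads() /
navigator.webkitGetGamepads() shape of the API: a device that only reports
axes, or has no buttons at all, never produces the connection gesture and
simply never shows up here.)

var pollGamepads = function() {
  var getter = navigator.getGamepads || navigator.webkitGetGamepads;
  var pads = getter ? getter.call(navigator) : [];
  for (var i = 0; i < pads.length; i++) {
    if (pads[i]) {
      console.log('gamepad', i, pads[i].id); // only appears after a button press
    }
  }
  requestAnimationFrame(pollGamepads);
};
pollGamepads();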


On Thu, May 2, 2013 at 4:45 PM, Ted Mielczarek t...@mozilla.com wrote:

 Hello there,

 The Gamepad spec hasn't seen a lot of activity lately, and it's
 primarily my fault. I got tied up in other work and didn't have time to
 work on my implementation in Firefox or the spec. I dug out of that and
 managed to land an initial implementation which is currently available
 in Firefox nightly builds, and I've been making small changes to the
 spec to clean up some issues. While fixing up my implementation to match
 the spec and testing vs. Scott's implementation in Chrome I've found a
 few spec issues that we'll have to work through. I'll detail these in
 separate emails shortly.

 I'd like to get both the spec and my implementation into a shippable
 state in the very near future, so I don't want to add any significant
 new features. I'd just like to clarify and tighten up the language so we
 can be sure to ship interoperable implementations and also not make
 future spec work difficult.

 Regards,
 -Ted






Re: ZIP archive API?

2013-04-30 Thread Florian Bösch
I am very interested in working with archives. I'm currently using them as a
delivery format from the server (like Quake packs) and as an import and
export format for WebGL apps.


On Tue, Apr 30, 2013 at 3:18 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Apr 30, 2013 at 1:07 PM, Charles McCathie Nevile
 cha...@yandex-team.ru wrote:
  Hi all, at the last TPAC there was discussion of a ZIP archive proposal.
  This has come and gone in various guises.
 
  Are there people currently interested in being able to work with ZIP in a
  web app? Are there implementors and is there an editor?

 We have https://wiki.mozilla.org/WebAPI/ArchiveAPI which is
 implemented as well (cc'd Andrea, who implemented it).

 There's also https://bugzilla.mozilla.org/show_bug.cgi?id=681967 about
 a somewhat-related proposal by Paul.


 --
 http://annevankesteren.nl/




Re: The need to re-subscribe to requestAnimationFrame

2013-03-08 Thread Florian Bösch
Btw. just as a side note, the document (i.e. the window) whose
requestAnimationFrame you call kind of matters. If you're calling it from a
document that the canvas isn't in, then you'll get flickering. That may sound
funny, but it's actually not that far-fetched and is a situation you can run
into if you're transferring a canvas to a popup window or iframe. With a
requestInterval kind of function you're pretty much screwed in that case.
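
(A minimal sketch of the popup case, assuming the canvas lives in the popup's
document: the loop has to be driven by the popup's own requestAnimationFrame,
not the opener's.)

var popup = window.open('', 'view', 'width=640,height=480');
var canvas = popup.document.createElement('canvas');
popup.document.body.appendChild(canvas);
var frame = function() {
  // ... render into canvas ...
  popup.requestAnimationFrame(frame); // scheduled against the popup's document
};
popup.requestAnimationFrame(frame);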


On Fri, Mar 8, 2013 at 7:43 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mar 2, 2013 6:32 AM, Florian Bösch pya...@gmail.com wrote:
 
  You can also wrap your own requestAnimationFrameInterval like so:
 
  var requestAnimationFrameInterval = function(callback){
var runner = function(){
  callback();
  requestAnimationFrame(runner);
};
runner();
  }
 
  This will still stop if there's an exception thrown by callback, but it
 lets you write a cleaner invocation like so:
 
  requestAnimationFrameInterval(function(){
// do stuff
  });
 
  It does not give you a way to stop that interval (except throwing an
 exception), but you can add your own if you're so inclined.
 
  Notably, you could not flexibly emulate requestAnimationFrame (single)
 via requestAnimationFrameInterval, so if you're gonna pick one semantic to
 implement, it's the former rather than the latter.

 For what it's worth, this would have been another (maybe better) way to
 address the concern that current spec tries to solve by requiring
 reregistration.

 I.e. we could have defined a

 id = requestAnimationFrameInterval(callback)
 cancelAnimationFrameInterval(id)

 Set of functions which automatically cancel the interval if an exception
 is thrown.

 That reduces the current risk that people write code that reregister at
 the top, and then has a bug further down which causes an exception to be
 thrown.

 / Jonas

 
 
  On Sat, Mar 2, 2013 at 3:15 PM, Glenn Maynard gl...@zewt.org wrote:
 
  On Sat, Mar 2, 2013 at 5:03 AM, David Bruant bruan...@gmail.com
 wrote:
 
  If someone wants to reuse the same function for
 requestionAnimationFrame, he/she has to go through:
  requestAnimationFrame(function f(){
  requestAnimationFrame(f);
  // do stuff
  })
 
 
  FYI, this pattern is cleaner, so you only have to call
 requestAnimationFrame in one place:
 
  function draw() {
  // render
  requestAnimationFrame(draw);
  }
  draw();
 
  --
  Glenn Maynard
 
 



Re: The need to re-subscribe to requestAnimationFrame

2013-03-02 Thread Florian Bösch
You can also wrap your own requestAnimationFrameInterval like so:

var requestAnimationFrameInterval = function(callback){
  var runner = function(){
callback();
requestAnimationFrame(runner);
  };
  runner();
}

This will still stop if there's an exception thrown by callback, but it
lets you write a cleaner invocation like so:

requestAnimationFrameInterval(function(){
  // do stuff
});

It does not give you a way to stop that interval (except throwing an
exception), but you can add your own if you're so inclined.
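
(If you do want a cancel, here is a minimal sketch — the names are made up —
of an id-based pair in the spirit of setInterval/clearInterval:)

var rafIntervals = {};
var rafIntervalId = 0;
var requestAnimationFrameInterval = function(callback) {
  var id = ++rafIntervalId;
  rafIntervals[id] = true;
  var runner = function() {
    if (!rafIntervals[id]) return; // cancelled explicitly
    callback();                    // an exception here also ends the loop
    requestAnimationFrame(runner);
  };
  runner();
  return id;
};
var cancelAnimationFrameInterval = function(id) {
  delete rafIntervals[id];
};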

Notably, you could not flexibly emulate requestAnimationFrame (single) via
requestAnimationFrameInterval, so if you're gonna pick one semantic to
implement, it's the former rather than the latter.


On Sat, Mar 2, 2013 at 3:15 PM, Glenn Maynard gl...@zewt.org wrote:

 On Sat, Mar 2, 2013 at 5:03 AM, David Bruant bruan...@gmail.com wrote:

 If someone wants to reuse the same function for requestionAnimationFrame,
 he/she has to go through:
 requestAnimationFrame(function f(){
 requestAnimationFrame(f);
 // do stuff
 })


 FYI, this pattern is cleaner, so you only have to call
 requestAnimationFrame in one place:

 function draw() {
 // render
 requestAnimationFrame(draw);
 }
 draw();

 --
 Glenn Maynard




Re: Re: Keyboard events for accessible RIAs and Games

2013-02-24 Thread Florian Bösch
This looks fine to me; I have a question though. Are we sure that the
locale-dependent printable keys are uniquely identified by code alone (and
not also by location on the keyboard)?
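
(To make the use case concrete — a minimal sketch where queryKeyCap stands in
for the call proposed in this thread (it is not a shipped API), e.code is the
physical-key attribute under discussion, and the handler names are made up: a
game binds to the physical code but wants the locale's printable label for
its UI.)

var binding = 'KeyW'; // physical location, layout-independent
document.addEventListener('keydown', function(e) {
  if (e.code === binding) moveForward(); // hypothetical game handler
});
// For on-screen help, ask what the current layout prints on that key.
// queryKeyCap is the proposal from this thread, not an existing API.
var label = KeyboardEvent.queryKeyCap ? KeyboardEvent.queryKeyCap(binding) : binding;
showHint('Press ' + label + ' to move forward'); // hypothetical UI helper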


On Sun, Feb 24, 2013 at 5:36 PM, Gary Kacmarcik (Кошмарчик) 
gary...@chromium.org wrote:

 I've updated the UIEvents document with an initial draft for
 queryKeyCap/queryLocale

 https://dvcs.w3.org/hg/d4e/raw-file/tip/source_respec.htm (Section 4.1)


 On Mon, Feb 18, 2013 at 8:09 AM, Gary Kacmarcik (Кошмарчик) 
 gary...@chromium.org wrote:

 I'll be updating the document this week. I'll send an update to the list
 after that happens.


 On Sat, Feb 16, 2013 at 7:40 AM, Florian Bösch pya...@gmail.com wrote:

 Any progress on the speccing of queryKeyCap?


 On Fri, Feb 1, 2013 at 9:14 PM, Gary Kacmarcik (Кошмарчик) 
 gary...@chromium.org wrote:

 On Fri, Feb 1, 2013 at 11:42 AM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

  I think we should give it another try by including it in our UI
 Events spec. I like the idea of adding the static queryKeyCap(code) API to
 Keyboard events. I wonder about the name though. A key capability 
 doesn't
 sound right. Are we querying for a key's locale name? e.g.,
 queryKeyLocaleName(code)?


  SGTM. I'll add a section to the spec for this.

 The KeyCap name refers to the cap placed over the keyswitch of the
 physical keyboard.  It's not a great name since there's no guarantee that
 the physical keyboard matches the current locale (although they usually
 do). However, the other (more accurate) names that I was able to come up
 with at the time were all rather unwieldy.

 Taking your name as a base, I think we'd need something like
 queryLocaleChar(locale, code) or queryLocaleKey since we'd be returning the
 equivalent of the 'char' (or 'key') attribute.

 Thinking briefly about this now:
 * If we return 'char' equivalents, we won't return dead keys or other
 virtual keys.
 * If we return 'key' values, then we need to address how to handle
 non-printable keys like Shift and Home that people might expect to be
 translated.
 * I think we'll also want the ability to specify modifier keys to apply
 to the 'code' (to generate shifted or AltGr'ed versions).

 I'll formalize this a bit more and send something out for comments.

 -Gary







Re: Re: Keyboard events for accessible RIAs and Games

2013-02-16 Thread Florian Bösch
Any progress on the speccing of queryKeyCap?


On Fri, Feb 1, 2013 at 9:14 PM, Gary Kacmarcik (Кошмарчик) 
gary...@chromium.org wrote:

 On Fri, Feb 1, 2013 at 11:42 AM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

  I think we should give it another try by including it in our UI Events
 spec. I like the idea of adding the static queryKeyCap(code) API to
 Keyboard events. I wonder about the name though. A key capability doesn't
 sound right. Are we querying for a key's locale name? e.g.,
 queryKeyLocaleName(code)?


 SGTM. I'll add a section to the spec for this.

 The KeyCap name refers to the cap placed over the keyswitch of the
 physical keyboard.  It's not a great name since there's no guarantee that
 the physical keyboard matches the current locale (although they usually
 do). However, the other (more accurate) names that I was able to come up
 with at the time were all rather unwieldy.

 Taking your name as a base, I think we'd need something like
 queryLocaleChar(locale, code) or queryLocaleKey since we'd be returning the
 equivalent of the 'char' (or 'key') attribute.

 Thinking briefly about this now:
 * If we return 'char' equivalents, we won't return dead keys or other
 virtual keys.
 * If we return 'key' values, then we need to address how to handle
 non-printable keys like Shift and Home that people might expect to be
 translated.
 * I think we'll also want the ability to specify modifier keys to apply to
 the 'code' (to generate shifted or AltGr'ed versions).

 I'll formalize this a bit more and send something out for comments.

 -Gary




Re: Allow ... centralized dialog up front

2013-02-06 Thread Florian Bösch
On Wed, Feb 6, 2013 at 2:09 AM, Charles McCathie Nevile 
cha...@yandex-team.ru wrote:

 **
 This may be true. But pointer-lock is an example of something that needs
 the entire UX to be thought through. simply switching from one to the other
 without the user knowing is also poor UX, since it risks making the user
 think their system is broken. Add to this a user working with e.g.
 mousekeys, or a magnifier at a few hundred percent plus high-contrast.

 The problems are not simple, and it is unlikely the solutions will be
 either. Ian's claim that everything can be done seamlessly without making
 it seem like a security dialog may be over-confident, and as Robin points
 out the first UI developed (well, the second actually) might not be the
 best approach in the long run, but it is certainly the direction we should
 be aiming.

This particular problem is exceedingly simple.

A pointerlock can be used in two ways: #1 it can be used to permanently lock
the pointer until the user escapes it (by Esc or other means). That is a
model well suited to games. #2 A pointerlock also has uses besides games
(such as controlling viewports, value input controls, high-precision color
selectors, etc.), mainly found in productivity applications. Permanently
locking the pointer would require the user to explicitly initiate an exit
from pointerlock whenever he wants to use the mouse outside of the
productivity application. That is the antithesis of productive.

The current pointerlock request model has problems for use case #2:
- The dialog to enable pointerlock is only presented at first use. First use
is trying to drag something, which will always fail under that circumstance.
- The user's attention is anywhere but at the top of the browser where
notifications appear, so the permission dialog to enable pointerlock often
goes unnoticed.
- Triggering the pointerlock dialog up front is impossible since it has to
be the result of a user interaction.

Use case #2 is not broken, per se. But it does require a prospective web
application developer to put in his own up-front dialog explaining how the
browser is broken and what series of incantations and button presses the
user has to perform to operate the black box (the browser) so that it works.
This, in short, is the antithesis of UX. You find yourself explaining how to
do something awkward in order to make things work, rather than simply doing
something that isn't awkward.

I suspect that similar issues can be found in a lot of this permission-gated
functionality, where one use case works fine with whatever paradigm happens
to be currently implemented, but others lead to a cul-de-sac of having to do
awkward things without any way to make them not awkward.


 So where are we? The single up front dialogue doesn't work. We know
 that. Multiple contextual requests go from being effective to being
 counter-productive at some magic tipping point that is hard to predict.

 To take an example, let's say I have a chat application that can use
 web-cam and geolocation. Some user agents might decide to put the
 permissions up front when you first load the app. And some users will be
 fine with that. Some will be happy to let it use geolocation when it wants,
 but will want to turn the camera on and off explicitly (note that Skype -
 one of the best-known video chat apps there is - allows this as a matter of
 course. I don't know of anyone who has ever complained).

 Some app stores might refuse to offer the service unless you have
 already accepted that you will let any app from the store use geolocation
 and camera. Others will be quite happy with a user agent that (like skype -
 or Opera) puts the permissions interface in front of the user to modify at
 will. And there are various other possible configurations.

 At any rate, having a way of declaring the things that will be requested
 (as I mentioned a zillion messages ago, most platforms have implemented
 this somewhere, sometimes several times) would at least simplify the task
 for implementors of deciding which approach to use, or how to blend the
 various different possibilities.

I agree with this, and just to clarify it, again.

Browsers stand no chance of doing anything about these really difficult UX
issues without a better API, one that is not locked into one particular UI
idea of piecemeal accumulation of permissions.


Re: Allow ... centralized dialog up front

2013-02-03 Thread Florian Bösch
So how exactly do you imagine this going down when an application that uses
half a dozen such capabilities starts? Clicking through half a dozen allow
- allow - allow - allow - allow - allow, do you really think the user is
going to bother with what the 5th or 6th allow is about? You'll end up
annoying the user and the developer, and scaring people off a page. Somehow
I can't see that as the point of new capabilities you can offer on a page.
Furthermore, some capabilities (like pointerlock) actively interfere with
the idea that when you need it you can click it (such as the concept of
pointer-lock-drag, which requests pointerlock on mousedown and releases it
on mouseup, sketched below), where the "click it when you need it" idea will
always fail on first use. Not exactly confidence inspiring either, as a UX.
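
(A minimal sketch of pointer-lock-drag, using the unprefixed Pointer Lock API
and a hypothetical slider element: lock on mousedown to get relative motion,
release on mouseup. Under the current model the very first drag fails because
the permission prompt hasn't been answered yet.)

var slider = document.getElementById('slider'); // hypothetical control
slider.addEventListener('mousedown', function() {
  slider.requestPointerLock();
});
document.addEventListener('mousemove', function(e) {
  if (document.pointerLockElement === slider) {
    updateValue(e.movementX); // hypothetical: apply relative motion to the value
  }
});
document.addEventListener('mouseup', function() {
  if (document.pointerLockElement === slider) document.exitPointerLock();
});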


On Mon, Feb 4, 2013 at 1:28 AM, Tobie Langel to...@fb.com wrote:

 On 2/2/13 12:16 PM, Florian Bösch pya...@gmail.com wrote:

 Usually games (especially 3D applications) would like to get capabilities
 that they can use out of the way up front so they don't have to care
 about it later on.

 This is not an either / or problem.

 First, lets clarify that the granting of a permission (and for how long it
 is granted) can be dependent on a variety of factors defined by the user
 and or the user agent and is out of control of the developer and of any
 spec body to standardize.

 Different User Agents will behave differently depending on what market
 they target. Different users will react differently depending on their
 security and privacy thresholds, the trust relationship they have with the
 URL they're visiting, etc.

 The permission to carry out a certain task on the user's behalf (such as
 taking a picture) might change at any time for any number of reasons (such
 as the device's camera being unplugged or broken). There's only one
 solution to this: code defensively.

 APIs that require specific user permissions are designed so that the
 user's can be prompted every time the API is required to be used. Whether
 the device chooses to do so or not is implementation specific (and again,
 depends on external factors such as user settings, etc.).

 Generally, this solution has proven to be both flexible and secure.

 Handling permissions up front has three unwanted effects:

 1. Users tend to not read the upfront permission settings that much thus
 often accidentally granting more privileges than they would like to.
 That's a security and privacy issue.
 2. Users tend to reject apps which have too many permission requests or
 permission requests that feel out of scoop of the app. Eg. A chess game
 asking for permissions to use the camera is rather off-putting until you
 realize it uses it to take snapshots of a chess-board and suggest next
 moves. This awareness generally comes with app usage, or because you're
 aware of the feature set of the app through information provided by the
 developer (marketing) or third parties (reviews, friends, etc.).
 3. Upfront permission lists rapidly get out of sync with real application
 requirements. What happens then?

 In fact, Upfront permission requirement only really makes sense when the
 user has already built a relationship of trust with the developer of the
 application or trusts a third party that has means of enforcing good
 behavior from the app developer (e.g. through an app store system).

 A hybrid approach that considers upfront permissions as hints of
 permission requirements to come offers the best of both worlds. It allows
 developers to ask permissions upfront for things that make sense given the
 context (e.g. a camera app would require camera access upfront) and at a
 later stage for features that might not be so obviously connected to the
 app's main use case or present a bigger risk for the user. It also allows
 the User Agent to treat these hints as it wishes, e.g. by prompting the
 user upfront, by automatically granting some permissions using various
 kinds of heuristics, or by deciding to only prompt the user when the
 feature is actually going to be used.

 It is worth noting that the developer will still need to code defensively
 for such an approach, as the user (or user agent) might very well not
 grant all permissions upfront and still require prompting at a later
 stage. Previously granted permissions might also be recalled at any time.

 This approach doesn't require the User Agent to let the developer know
 which permissions the user has granted upfront nor would that be useful
 given permissions can change at any time.


 --tobie




Re: Allow ... centralized dialog up front

2013-02-02 Thread Florian Bösch
I do not particularly care what research you will find to support the
UI-flow that the existence of a requestAPIs API would eventually give rise
to. I will simply say this: the research presented, and pretty much common
sense as well, easily shows that the current course is foolhardy and
ungainly for both user and developer.


On Sat, Feb 2, 2013 at 3:37 AM, Charles McCathie Nevile 
cha...@yandex-team.ru wrote:

 **
 On Fri, 01 Feb 2013 15:29:16 +0100, Florian Bösch pya...@gmail.com
 wrote:

 Repetitive permission dialog popups at random UI-flows will not solve the
 permission fatigue any more than a centralized one does. However a
 centralized permission dialog will solve the following things fairly
 effectively:

 - repeated popup fatigue


 Sure. And that is valuable in principle.

 - extension of trust towards a site regardless of what they ask for (do I
 trust that Indie game developer? Yes! Do I trust google? No! or vice versa)


 I don't think so. As Adrienne said, as I have experienced myself, without
 understanding what the permission is for trust can be reduced as easily as
 increased.

  - make it easy for developers not to add UI-flows into their application
 leading to things the user didn't want to give (Do we want a menu entry
 save to local storage if the user checked off the box to allow local
 storage? I think not.)


 - make it easy for developers to not waste users time by pretending to
 have a working application, which requires things the user didn't want to
 give. (Do we really want to offer our geolocated, web-camera using chat app
 to users who didn't want to give permission to to either? I think not. Do
 we want to make him find that out after he's been entering our UI-flow and
 been pressing buttons 5 minutes later? I think not.)

 These are not so clear. As a user, I *do* want to have applications to
 which I will give, and revoke, at my discretion, certain rights. Twitter
 leaps to mind as something that wants access to geolocation, something I
 occasionally grant. for specific requests but blanket refuse in general.
 The hypothetical example you offer is something that in general it seems
 people are happy to offer to a user who has turned off both capabilities.

 I think the ability for a page to declare permission requests in a
 standard way, the same as applications and extensions, is worth pursuing,
 because there are now a number of vendors using stuff that seems to only
 differ by syntax.

 The user agent presentation is a more complex question. I believe there is
 more research done and being done than you seem to credit, and as Hallvord
 said, I think this is an area where users evolve too.

 For the reasons outlined already in the thread I don't think an
 Android-style here are all the requests is as good a solution in practice
 as it seems, and there is a need for continued research as well as
 implementations we can test.

 cheers

 Chaals





 On Fri, Feb 1, 2013 at 3:22 PM, Charles McCathie Nevile 
 cha...@yandex-team.ru wrote:

 On Fri, 01 Feb 2013 15:16:04 +0100, Florian Bösch pya...@gmail.com
 wrote:

 On Fri, Feb 1, 2013 at 3:02 PM, Adrienne Porter Felt 
 adriennef...@gmail.com wrote:

 My user research on Android found that people have a hard
 time connecting upfront permission requests to the application feature that
 needs the permission. This meant that people have no real basis by which to
 allow or deny the request, except for their own supposition.  IMO, this
 implies that the better plan is to temporally tie the permission request to
 the feature so that the user can connect the two.

 In some circumstances this works, in others, it does not. Consider that
 not every capability has a UI-flow, and that some UI flows are fairly
 obscure. More often than not a page will initiate a flurry of permission
 dialogs up front to get it out of the way. Some of the UI-flows to use a
 capability happen deep inside an application activity and can be severely
 distracting, or crippling to the application.

 If a developer wants to use the blow-by-blow popup dialogs, he can still
 do so by simply not calling an API to get done with the business up front.
 But for those who know their application will not work without features X,
 Y, Z, A, B and C there is no point. They already know their app is not
 going to work. They already know they would have to pester the user 6 times
 with successive popups. They already know that they will severely distract
 the user or cripple themselves by making the user click through 6 popups
 whenever it becomes necessary. They already know that 80% of their users
 will quit their page after the 3rd popup asking random questions. Why
 should there not be a way to prevent all that from happening?

 The stock answer (and I think it is too glib, and we should be thinking
 harder about this) is

 because those who just want the user to agree to give away their
 security and privacy will be able to rely on permission fatigue. Which they
 can create

Re: Allow ... centralized dialog up front

2013-02-02 Thread Florian Bösch
I thought this was obvious but maybe not. Of course I had in mind that:

- A user gets some centralized place to manage his sites.
- He can change permissions.
- If the site's preferences change, the permissions pop up again.
- There is some way for the user to re-engage the permission dialog, on his
own, for the site he's on.

But none of this is really in the domain of a standard; it's up to the
vendors how best to structure their UX.

Now, indexability/markup, I kinda like that. I've thought (at times) about
creating a news site or search crawler that looks for examples of
technologies. And sometimes I wish I could have Google filter out pages that
use certain technologies when searching for something. It would be a nice
semantic.

Now there are still use cases where a developer wants to group a bunch of
permissions up front, and later come back with a different bunch. Or where a
developer derives the set of permissions he'll need from the set of
permissions his frameworks/libraries advise. That would favor a permission
API. On the other hand, most developers would probably be happy to state
permissions once for the page, and the markup could just be a remote control
for the API.
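
(Purely hypothetical sketch — none of these names exist in any spec — of what
that remote control could look like: the page declares its needs, e.g. via
<meta name="permissions" content="pointerlock fullscreen geolocation">, and a
script asks for the same set up front and adapts the UI to what was actually
granted.)

navigator.requestPermissions( // hypothetical API, mirrors the declared set
  ['pointerlock', 'fullscreen', 'geolocation'],
  function(granted) {
    if (!granted.pointerlock) hideDragControls();        // hypothetical UI hook
    if (!granted.geolocation) showManualLocationInput(); // hypothetical fallback
  }
);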


On Sat, Feb 2, 2013 at 10:38 AM, Keean Schupke ke...@fry-it.com wrote:

 I would like the permissions to be changeable. Not a one time dialog that
 appears and irrevocably commits me to my choices, but a page with
 enable/disable toggles I can return and review the permissions and change
 at any time.

 How about instead of a request API the required permissions are in tags so
 they can be machine readable on page load.

 The browser can read the required permissions tags as page load and create
 a settings page for the app where each permission can be toggled.

 This had the advantage that search engines etc can include permission
 requirements in searches. (I want a diary app that does not use my
 camera...)

 Cheers,
 Keean.

 Cheers,
 Keean.
 On 2 Feb 2013 09:09, Florian Bösch pya...@gmail.com wrote:

 I do not particularly care what research you will find to support the
 UI-flow that the existence of a requestAPIs API will eventually give rise
 to. I do say simply this, the research presented, and pretty much common
 sense as well easily shows that the current course is foolhardy and ungainly
 on both user and developer.


 On Sat, Feb 2, 2013 at 3:37 AM, Charles McCathie Nevile 
 cha...@yandex-team.ru wrote:

 **
 On Fri, 01 Feb 2013 15:29:16 +0100, Florian Bösch pya...@gmail.com
 wrote:

 Repetitive permission dialog popups at random UI-flows will not solve the
 permission fatigue any more than a centralized one does. However a
 centralized permission dialog will solve the following things fairly
 effectively:

 - repeated popup fatigue


 Sure. And that is valuable in principle.

 - extension of trust towards a site regardless of what they ask for (do I
 trust that Indie game developer? Yes! Do I trust google? No! or vice versa)


 I don't think so. As Adrienne said, as I have experienced myself, without
 understanding what the permission is for trust can be reduced as easily as
 increased.

  - make it easy for developers not to add UI-flows into their application
 leading to things the user didn't want to give (Do we want a menu entry
 save to local storage if the user checked off the box to allow local
 storage? I think not.)


 - make it easy for developers to not waste users time by pretending to
 have a working application, which requires things the user didn't want to
 give. (Do we really want to offer our geolocated, web-camera using chat app
 to users who didn't want to give permission to to either? I think not. Do
 we want to make him find that out after he's been entering our UI-flow and
 been pressing buttons 5 minutes later? I think not.)

 These are not so clear. As a user, I *do* want to have applications to
 which I will give, and revoke, at my discretion, certain rights. Twitter
 leaps to mind as something that wants access to geolocation, something I
 occasionally grant. for specific requests but blanket refuse in general.
 The hypothetical example you offer is something that in general it seems
 people are happy to offer to a user who has turned off both capabilities.

 I think the ability for a page to declare permission requests in a
 standard way, the same as applications and extensions, is worth pursuing,
 because there are now a number of vendors using stuff that seems to only
 differ by syntax.

 The user agent presentation is a more complex question. I believe there
 is more research done and being done than you seem to credit, and as
 Hallvord said, I think this is an area where users evolve too.

 For the reasons outlined already in the thread I don't think an
 Android-style here are all the requests is as good a solution in practice
 as it seems, and there is a need for continued research as well as
 implementations we can test.

 cheers

 Chaals





 On Fri, Feb 1, 2013 at 3:22 PM, Charles

Re: Allow ... centralized dialog up front

2013-02-02 Thread Florian Bösch
On Sat, Feb 2, 2013 at 11:16 AM, Keean Schupke ke...@fry-it.com wrote:

 I think a static declaration is better for security, so if a permission is
 not there I don't think it should be allowed to request it later. Of course
 how this is presented to the user is entirely separate, an the UI could
 defer the request until the first time the restricted feature is used, or
 allow all permissions that might be needed to be reviewed and
 enabled/disabled in one place.

That kills any benefit a developer could derive. The very idea is that you
can figure out up front what your user is going to let you do, and take
appropriate steps (not adding parts of the UI, presenting a suitable message
that the application won't work, etc.), and that if a user has agreed up
front, you can rely on that API and don't need to double-check at every step
and add a gazillion pointless onFeatureYaddaYaddaAllowCallback handlers.

