Leaving Mozilla

2016-08-05 Thread Jonas Sicking
Hi All,

A little over a month ago I got married. My wife and I are planning on
doing an extended honeymoon, starting now and ending sometime early next
year.

I'm not certain where we'll end up after the honeymoon, or what either of
us will work with. Because of this, my last day at Mozilla was Wednesday
Aug 3rd.

Working on the web with you all has been an amazing experience. Please keep
moving the web forward and make it a pleasant experience for all developers
to build amazing and beautiful content that is enjoyable for users to take
part in.

I'll unsubscribe to this and other w3c lists shortly, but if you want to
reach me I'll still be on this email address.

For those who are interested, here's a video, not of my wife's and my first
dance, but of our first circus performance.

/ Jonas


Re: [XHR]

2016-03-20 Thread Jonas Sicking
On Wed, Mar 16, 2016 at 10:29 AM, Tab Atkins Jr.  wrote:
> No, streams do not solve the problem of "how do you present a
> partially-downloaded JSON object".  They handle chunked data *better*,
> so they'll improve "text" response handling,

Binary handling should also be improved with streams.

> but there's still the
> fundamental problem that an incomplete JSON or XML document can't, in
> general, be reasonably parsed into a result.  Neither format is
> designed for streaming.

Indeed.

> (This is annoying - it would be nice to have a streaming-friendly JSON
> format.  There are some XML variants that are streaming-friendly, but
> not "normal" XML.)

For XML there is SAX. However, I don't think XML sees enough usage
these days for it to be worth adding native SAX support to the
platform. Better to rely on libraries to handle that use case.

While JSON does see a lot of usage these days, I've not heard of much
usage of streaming JSON. But maybe others have?

Something like SAX but for JSON would indeed be cool, but I'd rather
see it done as libraries to demonstrate demand before we add it to the
platform.
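For what it's worth, a streaming-friendly JSON convention already exists at the library level: newline-delimited JSON (NDJSON), where each line is one complete JSON value. A minimal sketch of such a library-level incremental parser (names are illustrative, not a proposed API):

```javascript
// Minimal incremental parser for newline-delimited JSON (NDJSON):
// feed it chunks as they arrive; it invokes the callback once per
// complete value, buffering any trailing partial line.
function makeNdjsonParser(onValue) {
  let buffer = '';
  return function feed(chunk) {
    buffer += chunk;
    let newline;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line) onValue(JSON.parse(line));
    }
  };
}
```

A page could call `feed()` from an XHR progress handler or a stream reader, getting parsed values long before the response completes.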

/ Jonas



Re: [XHR]

2016-03-19 Thread Jonas Sicking
On Wed, Mar 16, 2016 at 1:54 PM, Gomer Thomas
 wrote:
> but I need a cross-browser solution in the near future

Another solution that I think would work cross-browser is to use
"text/plain;charset=ISO-8859-15" as content-type.

That way I *think* you can simply read xhr.responseText to get an
ever-growing string with the data downloaded so far. Each character in
the string represents one byte of the downloaded data. So to get the
byte at index 15, use xhr.responseText.charCodeAt(15).
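A sketch of this trick follows. One assumption worth flagging: the widely used variant of this hack overrides the charset to "x-user-defined", which maps every byte b to U+F700+b, so masking the char code with 0xff recovers the byte exactly; plain ISO-8859-15 remaps a few byte values, so the mask is only fully reliable with "x-user-defined".

```javascript
// Convert a "binary string" (one character per byte, as produced by
// reading responseText under a single-byte charset) into byte values.
function bytesFromBinaryString(text) {
  const out = new Uint8Array(text.length);
  for (let i = 0; i < text.length; i++) {
    out[i] = text.charCodeAt(i) & 0xff; // mask off the charset's offset
  }
  return out;
}

// Browser-side sketch (not runnable outside a page):
//
//   const xhr = new XMLHttpRequest();
//   xhr.open('GET', url);
//   xhr.overrideMimeType('text/plain; charset=x-user-defined');
//   xhr.onprogress = () => {
//     const bytes = bytesFromBinaryString(xhr.responseText);
//     // bytes now holds everything downloaded so far
//   };
//   xhr.send();
```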

/ Jonas



Re: [XHR]

2016-03-19 Thread Jonas Sicking
Sounds like you want access to partial binary data.

There are some proprietary features in Firefox which let you do this
(added ages ago). See [1]. However, for a cross-platform solution we're
still waiting for streams to be available.

Hopefully that will be soon, but of course support across all major
browsers will take a while. Even longer if you want to be compatible
with old browsers still in common use.

[1] https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/responseType

/ Jonas

On Wed, Mar 16, 2016 at 12:27 PM, Gomer Thomas
 wrote:
>In my case the object being transmitted is an ISO BMFF file (as a 
> blob), and I want to be able to present the samples in the file as they 
> arrive, rather than wait until the entire file has been received.
>Regards, Gomer
>
>--
>Gomer Thomas Consulting, LLC
>9810 132nd St NE
>Arlington, WA 98223
>Cell: 425-309-9933
>
>
>-Original Message-
> From: Hallvord Reiar Michaelsen Steen [mailto:hst...@mozilla.com]
> Sent: Wednesday, March 16, 2016 4:04 AM
> To: Gomer Thomas 
> Cc: WebApps WG 
> Subject: Re: [XHR]
>
>On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas 
>  wrote:
>
>> According to IETF RFC 7230 all HTTP recipients “MUST be able to parse
>> the chunked transfer coding”. The logical interpretation of this is
>> that whenever possible HTTP recipients should deliver the chunks to
>> the application as they are received, rather than waiting for the
>> entire response to be received before delivering anything.
>>
>> In the latest version this can only be done for “text” responses. For
>> any other type of response, the “response” attribute returns “null”
>> until the transmission is completed.
>
>How would you parse for example an incomplete JSON source to expose an 
> object? Or incomplete XML markup to create a document? Exposing partial 
> responses for text makes sense - for other types of data perhaps not so much.
>-Hallvord
>
>



Re: File API - where are the missing parts?

2016-02-23 Thread Jonas Sicking
On Tue, Feb 23, 2016 at 10:06 AM, Joshua Bell  wrote:
> I'm also very interested in hearing from other browser implementers; Chrome
> is in the odd position of having made investments in related areas
> (FileSystem API and FileWriter API) that did not see adoption in other
> browsers, which is a disincentive to proceed here.

I think we should really separate the type of APIs that Chrome
implemented, which grants access to sandboxed filesystems, from APIs
that lets a page interact with the user's filesystem.

The two are generally much more different than they are similar.

I think for Mozilla the interest in implementing APIs which provide
access to an origin-sandboxed filesystem is relatively low right now.

I think we're much more interested in APIs which allow reading and
writing to the user's filesystem. I couldn't give any timelines for
when we'd implement any proposals, but I'd personally be happy to
provide feedback to drafts. And I'd be even more happy to see other
browsers experiment with the various UX challenges involved.

/ Jonas



Re: File API - where are the missing parts?

2016-02-23 Thread Jonas Sicking
On Tue, Feb 23, 2016 at 7:12 AM, Florian Bösch <pya...@gmail.com> wrote:
> On Tue, Feb 23, 2016 at 2:48 AM, Jonas Sicking <jo...@sicking.cc> wrote:
>>
>> Is the last bullet here really accurate? How can you use existing APIs to
>> listen to file modifications?
>
> I have not tested this on all UAs, but in Google Chrome what you can do is
> to set an interval to check a files.lastModified date, and if a modification
> is detected, read it in again with a FileReader and that works fine.

That is in violation of the spec. file.lastModified should be constant
for the lifetime of the File object.

Sadly the File object is grossly misnamed. It doesn't represent an OS
filesystem file at all. It's more like a large chunk of data with some
metadata. Really we likely should have just had Blob and NamedBlob :(

>> There are also APIs implemented in several browsers for opening a whole
>> directory of files from a webpage. This has been possible for some time in
>> Chrome, and support was also recently added to Firefox and Edge. I'm not
>> sure how interoperable these APIs are across browsers though :(
>
> There does not seem to be a standard about this, or is there? It's an
> essential functionality to be able to import OBJ and Collada files because
> they are composites of the main file and other files (such as material
> definitions or textures).

There is no standard for this, no. But there's clearly appetite for one
given that all browsers now implement this functionality.

>> However, before getting into such details, it is very important when
>> discussing read/writing is to be precise about which files can be
>> read/written.
>>
>> For example IndexedDB supports storing File (and Blob) objects inside
>> IndexedDB. You can even get something very similar to incremental
>> reads/writes by reading/writing slices.
>>
>> Here's a couple of libraries which implement filesystem APIs, which
>> support incremental reading and writing, on top of IndexedDB:
>>
>> https://github.com/filerjs/filer
>> https://github.com/ebidel/idb.filesystem.js
>>
>> However, IndexedDB, and thus any libraries built on top of it, only
>> supports reading and writing files inside an origin-specific
>> browser-sandboxed directory.
>>
>> This is also true for the Filesystem API implemented in Google Chrome
>> that you are linking to. And it applies to the Filesystem API proposal
>> at [1].
>>
>> Writing files outside of any sandboxes requires not just an API
>> specification, but also some sane, user understandable UX.
>>
>> So, to answer your questions, I would say:
>>
>> The APIs that you are linking to do not in fact meet the use cases
>> that you are pointing to in your examples. Neither does [1], which is
>> the closest thing that we have to a successor.
>>
>> The reason that no work has been done to meet the use cases that you are
>> referring to, is that so far no credible solutions have been proposed for
>> the UX problem. I.e. how do we make it clear to the user that they are
>> granting access to the webpage to overwrite the contents of a file.
>>
>> [1] http://w3c.github.io/filesystem-api/
>
> To be clear, I'm referring specifically to the ability of a user to pick any
> destination on his mass-storage device to manage his data.

I suspected you were. But you should know that the Google Chrome
proposals that you were referring to do not relate to those use cases
at all.

This is a very common misconception sadly.

> I'm aware that there's thorny questions regarding UX (although UX itself is
> rarely if ever specified in a W3C standard is it?). But that does not impact
> all missing pieces. Notably not these:
>
> Save a file incrementally (and with the ability to abort): not a UX problem
> because the mechanism to save files exists, it's just insufficiently
> specified to allow for streaming writes.

Indeed. This would be a solvable problem without introducing new UX
concepts. The main missing piece here is likely the lack of Stream
objects, which is what you'd want for incremental writing.

Fortunately that part is much closer to a solved problem these days.

So likely the main thing holding this use case up is simply having a proposal.

> Save as pickable destination: also not a UX problem, the standard solution
> here is to present the user with a standard operating system specific file
> save dialog.

I'm not sure I understand how this is different from what <a download>
does?

Is the problem that some browsers do not let the user choose the
directory to save in?

It would be interesting to know why browsers don't let users choose the
directory for <a download>, and if they'd have concerns supporting
someth

Re: File API - where are the missing parts?

2016-02-22 Thread Jonas Sicking
On Sun, Feb 21, 2016 at 3:32 AM, Florian Bösch  wrote:

>
> *What this covers*
>
>- Read one or many files in their entirety in one go (in python that
>would be open('foobar').read())
>- Save a completed binary string in its entirety in one go to the
>download folder (no overwrite)
>- Listen to file modifications
>
Is the last bullet here really accurate? How can you use existing APIs
to listen to file modifications?

There are also APIs implemented in several browsers for opening a whole
directory of files from a webpage. This has been possible for some time in
Chrome, and support was also recently added to Firefox and Edge. I'm not
sure how interoperable these APIs are across browsers though :(

I would also add that existing APIs do allow partial reads. You can
slice a blob to get a smaller blob, and then read that. Slicing is a
cheap operation and should not copy any data.

Using this you can actually emulate incremental reads, though it is not as
easy as full streaming reads, and it might have some slight performance
differences.
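The slicing approach can be sketched as follows (using the newer Blob.arrayBuffer() for brevity; at the time of this thread one would instead call FileReader.readAsArrayBuffer on each slice):

```javascript
// Read a Blob incrementally by slicing. slice() is cheap (no copy),
// so only one chunk's worth of data is materialized at a time.
async function* readBlobInChunks(blob, chunkSize = 64 * 1024) {
  for (let offset = 0; offset < blob.size; offset += chunkSize) {
    const slice = blob.slice(offset, offset + chunkSize);
    yield new Uint8Array(await slice.arrayBuffer());
  }
}
```

A consumer can `for await` over the chunks and abort simply by breaking out of the loop, which is the emulation of incremental reads described above.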



> *What is missing*
>
>- Read a file incrementally (and with the ability to abort)
>- Save a file incrementally (and with the ability to abort)
>- Overwrite a file (either previously saved or opened)
>- Save as pickable destination
>- Save many files to a user pickable folder
>- Working directory
>
Another important missing capability is the ability to modify an
existing file, i.e. write 10 bytes in the middle of a 1GB file, without
having to re-write the whole 1GB to disk.

However, before getting into such details, it is very important when
discussing read/writing is to be precise about which files can be
read/written.

For example IndexedDB supports storing File (and Blob) objects inside
IndexedDB. You can even get something very similar to incremental
reads/writes by reading/writing slices.

Here's a couple of libraries which implement filesystem APIs, which support
incremental reading and writing, on top of IndexedDB:

https://github.com/filerjs/filer
https://github.com/ebidel/idb.filesystem.js

However, IndexedDB, and thus any libraries built on top of it, only
supports reading and writing files inside an origin-specific
browser-sandboxed directory.

This is also true for the Filesystem API implemented in Google Chrome
that you are linking to. And it applies to the Filesystem API proposal
at [1].

Writing files outside of any sandboxes requires not just an API
specification, but also some sane, user understandable UX.

So, to answer your questions, I would say:

The APIs that you are linking to do not in fact meet the use cases that
you are pointing to in your examples. Neither does [1], which is the
closest thing that we have to a successor.

The reason that no work has been done to meet the use cases that you are
referring to, is that so far no credible solutions have been proposed for
the UX problem. I.e. how do we make it clear to the user that they are
granting access to the webpage to overwrite the contents of a file.

[1] http://w3c.github.io/filesystem-api/

/ Jonas


Re: Art steps down - thank you for everything

2016-01-28 Thread Jonas Sicking
On Thu, Jan 28, 2016 at 7:45 AM, Chaals McCathie Nevile
 wrote:
> Thanks Art for everything you've done for the group for so long.

Hi Art,

Yes, thank you very much for chairing the WG for so long. Under your
chairing, this group has been one of the W3C WGs that has moved the web
forward the most.

Thank you so much for your ever calm leadership! It's been great to
get to work with you!

/ Jonas



Re: [Editing] [DOM] Adding static range API

2016-01-10 Thread Jonas Sicking
On Sat, Jan 9, 2016 at 6:55 PM, Ryosuke Niwa  wrote:
>
>> On Jan 9, 2016, at 6:25 PM, Olli Pettay  wrote:
>>
>> Hard to judge this proposal before seeing an API using StaticRange objects.
>>
>> One thing though, if apps were to create an undo stack of their own, they 
>> could easily have their own Range-like API implemented in JS. So if that is 
>> the only use case, probably not worth to add anything to make the platform 
>> more complicated. Especially since StaticRange API might be good for some 
>> script library, but not for some other.
>
> The idea is to return this new StaticRange object in `InputEvent` interface's 
> `targetRanges` IDL attribute, and let Web apps hold onto them without 
> incurring the cost of constant updates.

Could you simply return a dictionary with the parent nodes and parent indexes?
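Such a dictionary could be as simple as the four boundary values, captured as an inert snapshot (illustrative shape only, not the specced StaticRange interface):

```javascript
// Capture a live Range's boundary points as an inert plain object.
// Holding on to the snapshot costs nothing as the DOM changes,
// which is the point of avoiding live Range updates.
function captureRange(range) {
  return {
    startContainer: range.startContainer,
    startOffset: range.startOffset,
    endContainer: range.endContainer,
    endOffset: range.endOffset,
  };
}
```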

/ Jonas



Re: Callback when an event handler has been added to a custom element

2015-11-06 Thread Jonas Sicking
On Fri, Nov 6, 2015 at 12:44 PM, Olli Pettay  wrote:
> On 11/06/2015 09:28 PM, Justin Fagnani wrote:
>>
>> You can also override addEventListener/removeEventListener on your
>> element. My concern with that, and possibly an event listener change
>> callback, is
>> that it only works reliably for non-bubbling events.
>
> How even with those? One could just add capturing event listener higher up
> in the tree.
> You need to override addEventListener on EventTarget, and also relevant
> onfoo EventHandler setters on Window and Document and *Element prototypes,
> but unfortunately even that doesn't catch onfoo content attributes ( onclick="doSomething">). But one could use MutationObserver then to
> observe changes to DOM.

This problem also applies to the original proposal in this thread.

Even if we add the ability to detect when an eventlistener is added to
a custom element, what happens if an eventlistener is added to an
ancestor node? Or if the custom element is moved into a new element
which already has an ancestor with such an event listener.

/ Jonas



Re: Indexed DB + Promises

2015-09-29 Thread Jonas Sicking
On Tue, Sep 29, 2015 at 10:51 AM, Domenic Denicola  wrote:
> This seems ... reasonable, and quite possibly the best we can do. It has
> several notable rough edges:
>
> - The need to remember to use .promise, instead of just having functions 
> whose return values you can await directly

One way that we could solve this would be to make IDBRequest a
thenable. I.e. put a .then() function directly on IDBRequest.
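Until something like that ships, this is easy to approximate in a library. A sketch of wrapping an IDBRequest-style object (onsuccess/onerror handlers plus result/error properties) in a Promise:

```javascript
// Wrap an IDBRequest-like object in a Promise so callers can await it.
// The handlers read .result / .error at settle time, matching how
// IDBRequest exposes its outcome.
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```

Usage would look like `const db = await promisifyRequest(indexedDB.open('mydb'))`, modulo the upgrade-needed handling a real wrapper also needs.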

> - The two-stage error paths (exceptions + rejections), necessitating 
> async/await to make it palatable

I'd be curious to know if, in the case of IDB, this is a problem in
practice. I do agree that it's good for promise based APIs to only
have one error path, but IDB is pretty conservative about when it
throws.

Do people actually wrap their IDB calls in try/catch today?

Certainly throwing at all isn't perfect, but is it a big enough
problem that it warrants using a library?

> - The waitUntil/IIAFE pattern in the incrementSlowly example, instead of a 
> more natural `const t = await openTransaction(); try { await 
> useTransaction(t); } finally { t.close(); }` structure

I'm actually quite concerned about using a t.close() pattern. It seems
to make it very easy to end up with minor bugs which totally hang an
application because a transaction is left open.

But maybe developers prefer it strongly enough that they'll use a
library which provides it.

> I guess part of the question is, does this add enough value, or will authors 
> still prefer wrapper libraries, which can afford to throw away backward 
> compatibility in order to avoid these ergonomic problems?

Yeah, I think this is a very good question. I personally have no idea.

/ Jonas



Re: [WebIDL] T[] migration

2015-07-16 Thread Jonas Sicking
On Thu, Jul 16, 2015 at 8:45 AM, Travis Leithead
travis.leith...@microsoft.com wrote:
> Recommendations:
> · HTML5
> · Web Messaging
>
> Other references:
> · CSS OM
> · Web Sockets
> · WebRTC

Note that in practice I would think that most implementations return
objects which have a .item() function on them. So replacing with just
a plain FrozenArray<T> likely would not be web compatible :(

We likely need to introduce something like FrozenArrayWithItem<T>,
which returns a subclass of Array that has a .item() function.

Or we need to add Array.prototype.item, but that would require
convincing TC39 that it's a good idea, and would need testing for
web compatibility.
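A rough illustration of how such a FrozenArrayWithItem<T> could behave in script (illustrative only, not a spec proposal):

```javascript
// An Array subclass carrying the legacy .item() accessor, frozen after
// construction so it behaves like a FrozenArray.
class FrozenArrayWithItem extends Array {
  static of(...elements) {
    // Array.of with a subclass receiver constructs a subclass instance.
    const arr = super.of(...elements);
    return Object.freeze(arr);
  }
  item(index) {
    // Legacy collections return null, not undefined, out of range.
    return index >= 0 && index < this.length ? this[index] : null;
  }
}
```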

/ Jonas



Re: The key custom elements question: custom constructors?

2015-07-16 Thread Jonas Sicking
On Thu, Jul 16, 2015 at 9:49 AM, Domenic Denicola d...@domenic.me wrote:
> From: Anne van Kesteren [mailto:ann...@annevk.nl]
>
>> I think the problem is that nobody has yet tried to figure out what
>> invariants that would break and how we could solve them. I'm not too
>> worried about the parser as it already has script synchronization, but
>> cloneNode(), ranges, and editing, do seem problematic. If there is a
>> clear processing model, Mozilla might be fine with running JavaScript
>> during those operations.
>
> Even if it can be specced/implemented, should it? I.e., why would this
> be OK where MutationEvents are not?

I think there were two big problems with MutationEvents.

From an implementation point of view, the big problem was that we
couldn't use an implementation strategy like:

1. Perform requested task
2. Get all internal datastructures and invariants updated.
3. Fire MutationEvents callback.
4. Return to JS.

Since step 4 can run arbitrary webpage logic, it's fine that step 3,
which is run right before, does as well. I.e. we could essentially
treat step 3 and 4 as the same.

This was particularly a problem for DOMNodeRemoved, since it was
required to run *before* the requested task was performed. But it was
also somewhat a problem for DOMNodeInserted, since it could be
interpreted as something that should be done interleaved with other
operations, for example when a single DOM API call caused multiple
nodes to be inserted.

Like Anne says, if it was better defined when the callbacks should
happen, and that it was defined that they all happen after all
internal datastructures had been updated, but before the API call
returns, then that would have been much easier to implement.


The second problem is that it causes webpages to have to deal with
reentrancy issues. Synchronous callbacks are arguably just as big of a
problem for webpages as it is for browser engines. It meant that the
callback which is synchronously called when a node is inserted might
remove that node. Or might remove some other node, or do a ton of
other changes.

Callbacks which are called synchronously have a huge responsibility to
not do crazy things. This gets quite complex as code bases grow. A
synchronous callback might do something that seems safe in and of
itself, but that in turn triggers a couple of other synchronous
callbacks, which trigger yet more callbacks, which reenters something
unexpected.

The only way to deal with this is for webpages to do the absolute
minimum they can in the synchronous callback, and do everything
else asynchronously. That is what implementations try to do. The
code that's run during element construction tries to touch only a
minimal number of things in the outside world, ideally nothing.

This is a problem inherent with synchronous callbacks and I can't
think of a way to improve specifications or implementations to help
here. It's entirely the responsibility of web authors to deal with
this complexity.

/ Jonas



Re: The key custom elements question: custom constructors?

2015-07-16 Thread Jonas Sicking
On Thu, Jul 16, 2015 at 12:16 PM, Domenic Denicola d...@domenic.me wrote:
> From: Jonas Sicking [mailto:jo...@sicking.cc]
>
>> Like Anne says, if it was better defined when the callbacks should
>> happen, and that it was defined that they all happen after all internal
>> datastructures had been updated, but before the API call returns, then
>> that would have been much easier to implement.
>
> Right, but that's not actually possible with custom constructors. In
> particular, you need to insert the elements into the tree *after*
> calling out to author code that constructs them. What you mention is
> more like the currently-specced custom elements lifecycle callback
> approach.
>
> Or am I misunderstanding?

No, that is a good point. The constructor callback does indeed need to
run between when an element is created and when it is inserted.

I think though that in this case it might be possible that we depend
on very little mutable state between the callbacks. I.e. that none of
the state that we depend on while the algorithm is running can be
mutated by the page.

In essence the only state we need to keep is the pointers to the
elements, and the parents that the elements should be inserted into.
And since pages can't delete any objects, those pointers will remain
valid throughout.

The only thing I can think of that might be tricky is around tables,
where sometimes we don't insert elements last in the child list of
their parent. In that case we also carry state about where to insert a
given child. That state might get outdated since the page can mess
around with the DOM.

>> This is a problem inherent with synchronous callbacks and I can't
>> think of a way to improve specifications or implementations to help
>> here. It's entirely the responsibility of web authors to deal with
>> this complexity.
>
> Well, specifications could just not allow synchronous callbacks of this
> sort, which is kind of what we're discussing in this thread.

Agreed. What I meant was that if we're specifying synchronous
callbacks, there's nothing we can do in the specification to help with
this type of problem.

> That would help avoid the horrors of
>
> class XFoo extends HTMLElement {
>   constructor(stuff) {
>     super();
>
>     // Set up some default content that happens to use another custom element
>     this.innerHTML = `<x-bar><p>${stuff}</p></x-bar>`;
>
>     // All foos should also appear in a list off on the side!
>     // Let's take care of that automatically for any consumers!
>     document.querySelector('#list-of-foos').appendChild(this.cloneNode(true));
>   }
> }
>
> which seems like a well-meaning thing that authors could do, without
> knowing what they've unleashed.

Exactly.

/ Jonas



Re: Cross-page locking mechanism for indexedDB/web storage/FileHandle ?

2015-07-15 Thread Jonas Sicking
Yeah, I think a standalone primitive for asynchronous atomics makes
sense. The big risk is of course that deadlocks can occur, but there's
no real way to completely avoid that while keeping a flexible platform.
These deadlocks would be asynchronous, so no thread will hang, but you
can easily end up with two pages looking hung to the user.

The best thing that I think we can do to help here is to enable pages
to issue a single function call which grabs multiple locks.

We could also enable registering a callback which is called when the
platform detects a deadlock. However we won't always be able to detect
deadlocks, so I'm not sure that it's a good idea.

I also don't have enough experience to know which atomics are worth
exposing, and which ones we should leave to libraries.
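One way a library (or eventually the platform) could offer the grab-several-locks-in-one-call primitive is to always acquire the locks in a canonical (sorted) order, which rules out the classic lock-ordering deadlock. A sketch with made-up names:

```javascript
// An async lock table. withLocks() acquires every named lock in sorted
// order (so two calls can never wait on each other in opposite order),
// runs the callback, then releases in reverse order.
class AsyncLockTable {
  constructor() {
    this.tails = new Map(); // lock name -> promise for the current holder
  }
  async withLocks(names, fn) {
    const releases = [];
    for (const name of [...names].sort()) {
      releases.push(await this._acquire(name));
    }
    try {
      return await fn();
    } finally {
      for (const release of releases.reverse()) release();
    }
  }
  _acquire(name) {
    const prev = this.tails.get(name) || Promise.resolve();
    let release;
    const next = new Promise(resolve => { release = resolve; });
    this.tails.set(name, next);
    // The caller gets its release function once the previous holder
    // has released, i.e. once it actually owns the lock.
    return prev.then(() => release);
  }
}
```

Note the sorted acquisition: two pages asking for ['a','b'] and ['b','a'] both acquire 'a' first, so neither can hold one lock while waiting for the other.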

/ Jonas



On Wed, Jul 15, 2015 at 10:12 AM, Joshua Bell jsb...@google.com wrote:
> Based on similar feedback, I've been noodling on this too. Here are my
> current thoughts:
>
> https://gist.github.com/inexorabletash/a53c6add9fbc8b9b1191
>
> Feedback welcome - I was planning to send this around shortly anyway.
>
> On Wed, Jul 15, 2015 at 3:07 AM, 段垚 duan...@ustc.edu wrote:
>
>> Hi all,
>>
>> I'm developing a web-based editor which can edit HTML documents locally
>> (stored in indexedDB).
>> An issue I encountered is that there is no reliable way to ensure that
>> at most one editor instance (an instance is a web page) can open a
>> document at the same time.
>>
>> * An editor instance may create a flag entry in indexedDB or
>> localStorage for each opened document to indicate this document is
>> locked, and remove this flag when the document is closed. However, if
>> the editor is closed forcibly, this flag won't be removed, and the
>> document can't be opened any more!
>>
>> * An editor instance may use the storage event of localStorage to ask
>> if this document has been opened by any other editor instance. If there
>> is no response for a while, it can open the document. However, the
>> storage event is async, so we are not sure how long the editor has to
>> wait before opening the document.
>>
>> * IndexedDB and FileHandle do have locks, but such locks can only live
>> for a short time, so they can't lock an object during the entire
>> lifetime of an editor instance.
>>
>> In a native editor application, it may use file locking
>> (https://en.wikipedia.org/wiki/File_locking) to achieve this purpose.
>> So maybe it is valuable to add a similar locking mechanism to
>> indexedDB/web storage/FileHandle?
>>
>> I propose a locking API for web storage:
>>
>>   try {
>>     localStorage.lock('file1.html');
>>     myEditor.open('file1.html'); // open and edit the document
>>   } catch (e) {
>>     alert('file1.html is already opened by another editor');
>>   }
>>
>> Storage.lock() locks an entry if it has not been locked, and throws if
>> it has been locked by another page.
>> The locked entry is unlocked automatically after the page holding the
>> lock is unloaded. It can also be unlocked by calling Storage.unlock().
>>
>> What do you think?
>>
>> Regards,
>> Duan Yao







Re: Directory Upload Proposal

2015-05-14 Thread Jonas Sicking
On Wed, May 13, 2015 at 11:51 PM, Jonas Sicking jo...@sicking.cc wrote:
> One important question though is what would <input type=file
> directories> do on platforms that don't have a directory UI concept?
> Like most mobile platforms?

Err.. that should say:

What would <input type=directory> do on platforms that don't have a
directory UI concept?

/ Jonas



Re: Directory Upload Proposal

2015-05-14 Thread Jonas Sicking
On Wed, May 13, 2015 at 11:39 PM, Jonas Sicking jo...@sicking.cc wrote:

>> On that note, there is actually a 5th option that we can entertain. We
>> could have three different kinds of file inputs: one type for files,
>> another for directories, and yet another for handling both files and
>> directories. This means that if a developer has a use case for only
>> wanting a user to pick a directory (even on Mac OS X), they would have
>> that option.
>
> Like with options 2 and 3, I would definitely be ok with this option too.

One more thought.

I think option 5 is essentially option 2 plus also adding support for
<input type=directory>.

I.e. if we do option 5 I'd prefer to do it by having <input type=file
directories> work as described in option 2, but then also have <input
type=directory>.

One important question though is what would <input type=file
directories> do on platforms that don't have a directory UI concept?
Like most mobile platforms?

/ Jonas



Re: Directory Upload Proposal

2015-05-14 Thread Jonas Sicking
On Wed, May 13, 2015 at 2:19 PM, Ali Alabbas a...@microsoft.com wrote:
> Thank you for the feedback, Jonas. After speaking to the OneDrive team
> at Microsoft, they let me know that their use case for this would
> involve hiding the file input and just spoofing a click to invoke the
> file/folder picker. This makes me believe that perhaps there should be
> less of an emphasis on UI and more on providing the functionality that
> developers need.

I do agree with you that most websites will likely want to hide the
browser-provided UI. And for those websites options 2 and 3 are pretty
much equivalent.

But my theory has been that some websites will want to use the
browser-provided UI. For those websites I think option 2 is
advantageous.

But I don't really have much data to go on, other than that I still do
see quite a few websites using the browser-provided UI for <input
type=file>.

> On that note, there is actually a 5th option that we can entertain. We
> could have three different kinds of file inputs: one type for files,
> another for directories, and yet another for handling both files and
> directories. This means that if a developer has a use case for only
> wanting a user to pick a directory (even on Mac OS X), they would have
> that option.

Like with options 2 and 3, I would definitely be ok with this option too.

It isn't clear to me though that there are actually use-cases for
enabling the user to *just* pick a directory, even on platforms where
the file-or-directory picker is available?

/ Jonas


 Thanks,
 Ali

 On Friday, May 8, 2015 at 3:29 PM, Jonas Sicking jo...@sicking.cc wrote:

On Tue, May 5, 2015 at 10:50 AM, Ali Alabbas a...@microsoft.com wrote:
> I recommend that we change the dir attribute to directories and keep
> directory the same as it is now to avoid clashing with the existing dir
> attribute on the HTMLInputElement. All in favor?

There's no current "directory" attribute, and the current "dir"
attribute stands for "direction" and not "directory", so I'm not sure
which clash you are worried about?

But I'm definitely fine with "directories". I've used that in the
examples below.

> As for the behavior of setting the directories attribute on a file input,
> we have the following options:
>
> 1) Expose two buttons in a file input (choose files and choose
> directory) (see Mozilla's proposal [1])
> - To activate the choose directory behavior of invoking the
> directory picker there would need to be a method on the
> HTMLInputElement, e.g. chooseDirectory()
> - To activate the choose files behavior of invoking the files
> picker, we continue to use click() on the file input
>
> 2) Expose two buttons in file input for Windows/Linux (choose files
> and choose directory) and one button for Mac OS X (choose files and
> directories)
> - Allows users of Mac OS X to use its unified File/Directory picker
> - Allows users of Windows/Linux to specify if they want to pick files
> or a directory
> - However, there would still need to be a method to activate the
> choose directory button's behavior of invoking the directory picker
> (e.g. chooseDirectory() on the HTMLInputElement)
> - This results in two different experiences depending on the OS
>
> 3) Expose only one button; Windows/Linux (choose directory) and Mac
> OS X (choose files and directories)
> - Allows users of Mac OS X to use its unified File/Directory picker
> - Windows/Linux users are only able to select a directory
> - click() is used to activate these default behaviors (no need for an
> extra method such as chooseDirectory() on the HTMLInputElement
> interface)
> - For Windows/Linux, in order to have access to a file picker,
> app/site developers would need to create another file input without
> setting the directories attribute
> - Can have something like isFileAndDirectorySupported so that
> developers can feature detect and decide if they need to have two
> different file inputs for their app/site (on Windows/Linux) or if they
> just need one (on Mac OS X) that can allow both files and directories
>
> 4) Expose only one button (choose directory)
> - User must select a directory regardless of OS or browser (this
> normalizes the user experience and the developer design paradigm)
> - To make users pick files rather than directories, the developer
> simply does not set the directories attribute, which would show the
> default file input
> - Developers that want to allow users the option to select directory
> or files need to provide two different inputs regardless of OS or
> browser

Hi Ali,

I think the only really strong requirement that I have is that I'd like to 
enable the platform widget on OSX which allows picking a file or a directory. 
This might also be useful in the future on other platforms if they grow such 
a widget.

I understand that this will mean extra work on the part of the developer, 
especially in the case when the developer renders their own UI. However 
authors generally tend to prefer to optimize a good UI experience over saving 
a few lines of code. This seems

Re: Directory Upload Proposal

2015-05-08 Thread Jonas Sicking
On Tue, May 5, 2015 at 10:50 AM, Ali Alabbas a...@microsoft.com wrote:
 I recommend that we change the "dir" attribute to "directories" and keep 
 "directory" the same as it is now to avoid clashing with the existing "dir" 
 attribute on the HTMLInputElement. All in favor?

There's no current "directory" attribute, and the current "dir"
attribute stands for "direction" and not "directory", so I'm not sure
which clash you are worried about?

But I'm definitely fine with "directories". I've used that in the
examples below.

 As for the behavior of setting the directories attribute on a file input, 
 we have the following options:

 1) Expose two buttons in a file input (choose files and choose directory) 
 (see Mozilla's proposal [1])
 - To activate the choose directory behavior of invoking the directory 
 picker there would need to be a method on the HTMLInputElement e.g. 
 chooseDirectory()
 - To activate the choose files behavior of invoking the files picker, we 
 continue to use click() on the file input

 2) Expose two buttons in file input for Windows/Linux (choose files and 
 choose directory) and one button for Mac OS X (choose files and 
 directories)
 - Allows users of Mac OS X to use its unified File/Directory picker
 - Allows users of Windows/Linux to specify if they want to pick files or a 
 directory
 - However, there would still need to be a method to activate the choose 
 directory button's behavior of invoking the directory picker (e.g. 
 chooseDirectory() on the HTMLInputElement)
 - This results in two different experiences depending on the OS

 3) Expose only one button; Windows/Linux (choose directory) and Mac OS X 
 (choose files and directories)
 - Allows users of Mac OS X to use its unified File/Directory picker
 - Windows/Linux users are only able to select a directory
 - click() is used to activate these default behaviors (no need for an extra 
 method such as chooseDirectory() on the HTMLInputElement interface)
 - For Windows/Linux, in order to have access to a file picker, app/site 
 developers would need to create another file input without setting the 
 directories attribute
 - Can have something like isFileAndDirectorySupported so that developers can 
 feature detect and decide if they need to have two different file inputs for 
 their app/site (on Windows/Linux) or if they just need one (on Mac OS X) that 
 can allow both files and directories

 4) Expose only one button (choose directory)
 - User must select a directory regardless of OS or browser (this normalizes 
 the user experience and the developer design paradigm)
 - To make users pick files rather than directories, the developer simply does 
 not set the directories attribute which would show the default file input
 - Developers that want to allow users the option to select directory or files 
 need to provide two different inputs regardless of OS or browser

Hi Ali,

I think the only really strong requirement that I have is that I'd
like to enable the platform widget on OSX which allows picking a file
or a directory. This might also be useful in the future on other
platforms if they grow such a widget.

I understand that this will mean extra work on the part of the
developer, especially in the case when the developer renders their own
UI. However authors generally tend to prefer to optimize a good UI
experience over saving a few lines of code. This seems especially true
in this case given that the author has chosen to provide their own UI
rather than use the browser-provided one.

So that leaves us with options 2 and 3.

I think both of these are options that I can live with. And for
authors that render their own UI the difference is only one of syntax.
And most likely authors will wrap our API in a library since none of
the APIs here are particularly nice.

However I do think that 3 has some disadvantages for authors that *do*
use the browser provided UI.

The main one being that the UI gets kind of awkward when rendering one
<input type=file multiple> and one <input type=file directories>.
From a user point of view, it ends up being sort of half-way between a
UI where you select one thing, and a UI where you can add as many
files/directories as you want. Specifically, the user can select no
more than 2 things, of which one and only one must be a directory.

So the double-input UI ends up neither being a good UI to allow the
user to select one thing, nor to allow the user to select an
arbitrary number of things.

The other downside with 3 when used with the browser-provided UI is
that the website will still have to use javascript to dynamically
generate one or two inputs. This is needed in order to avoid having
two exactly alike UI widgets on old browsers, or on browsers that don't
have a separate directory picker.
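The dynamic one-or-two-inputs decision could be sketched roughly like this. This is illustrative only: the "directories" attribute and the "isFileAndDirectorySupported" flag are names proposed in this thread, not shipped APIs, and the function takes any object shaped like an HTMLInputElement so old browsers simply lack the properties.

```javascript
// Sketch only: decide which picker controls a page should generate.
// "directories" and "isFileAndDirectorySupported" are proposal names
// from this thread, not shipped APIs. `input` is anything shaped like
// an HTMLInputElement, e.g. document.createElement('input').
function pickerPlan(input) {
  if (!('directories' in input)) {
    // Old browser: a single multi-file input is all we can offer.
    return ['files'];
  }
  if (input.isFileAndDirectorySupported) {
    // Unified picker (e.g. OS X): one control covers both cases.
    return ['files-and-directories'];
  }
  // Separate pickers (e.g. Windows/Linux): generate two inputs.
  return ['files', 'directory'];
}
```

A page would call this once at startup and generate the corresponding inputs, avoiding the two-identical-widgets problem on old browsers.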

That said, 2 definitely has the disadvantage that it'll force us to
cram two buttons into the small layout size that is used by an
<input type=file>. But I think this is solvable. One simple solution is to
initially render two 

Re: Shadow DOM: state of the distribution API

2015-05-06 Thread Jonas Sicking
On Wed, May 6, 2015 at 11:07 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Wed, May 6, 2015 at 7:57 PM, Jonas Sicking jo...@sicking.cc wrote:
 Has at-end-of-microtask been debated rather than 1/2? Synchronous
 always has the downside that the developer has to deal with
 reentrancy.

 1/2 are triggered by the component author.

So component author code isn't triggered when a webpage does
element.appendChild() if 'element' is a custom element?

FWIW, I am by no means trying to diminish the task that adding async
layout APIs would be. Though it might not be that different in size
compared to the '3' proposal.

/ Jonas



Re: Permissions API vs local APIs

2015-05-06 Thread Jonas Sicking
On Wed, May 6, 2015 at 9:00 AM, Doug Turner do...@mozilla.com wrote:
 The way I would look at this is based on timeframe -- if we're not 
 implementing the Permissions API until 2017 or something, I'd just leave the 
 functionality in the PushAPI spec.  If the Permission API is right around the 
 corner, I would remove it from the PushAPI spec.

FWIW, the permission API as it currently stands is pretty trivial to
implement. So I don't see a reason to delay until 2017 or even Q3
2015.

/ Jonas



Re: Shadow DOM: state of the distribution API

2015-05-06 Thread Jonas Sicking
On Wed, May 6, 2015 at 2:05 AM, Anne van Kesteren ann...@annevk.nl wrote:

 1) Synchronous, no flattening of content. A host element's shadow
 tree has a set of slots each exposed as a single content element to
 the outside. Host elements nested inside that shadow tree can only
 reuse slots from the outermost host element.

 2) Synchronous, flattening of content. Any host element nested
 inside a shadow tree can get anything that is being distributed.
 (Distributed content elements are unwrapped and not distributed
 as-is.)

 3) Lazy. A distinct global (similar to the isolated Shadow DOM story)
 is responsible for distribution so it cannot observe when distribution
 actually happens.

Has at-end-of-microtask been debated rather than 1/2? Synchronous
always has the downside that the developer has to deal with
reentrancy.

End-of-microtask does have the downside that API calls which
synchronously return layout information get the wrong values. Or
rather, values that might change at end of microtask.

But calling sync layout APIs is generally a bad idea for perf anyway.
If we introduced async versions of getComputedStyle,
getBoundingClientRect etc, then we could make those wait to return a
value until all content had been distributed into insertion points.

Of course, adding async layout accessors is a non-trivial project, but
it's long overdue.
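A userland approximation of such an async accessor is easy to sketch. This is a hypothetical wrapper, not a proposed API: it merely defers the read to the end of the current microtask, whereas a real async accessor would resolve only after pending distribution and layout work had settled.

```javascript
// Sketch: read layout asynchronously instead of forcing a synchronous
// flush. A real async accessor would wait for distribution/layout to
// settle; here we simply defer the read to the end of the microtask.
function getBoundingClientRectAsync(element) {
  return Promise.resolve().then(() => element.getBoundingClientRect());
}
```

In a browser this would be called as `getBoundingClientRectAsync(el).then(rect => ...)`, giving the engine a chance to finish distribution before the value is observed.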

/ Jonas



Re: Permissions API vs local APIs

2015-05-06 Thread Jonas Sicking
I think mozilla would be fine with taking the permission API as a
dependency and implement that at the same time. Implementing the
permission API should be fairly trivial for us.

But we should verify this with the people actually working on the push API.

/ Jonas

On Wed, May 6, 2015 at 3:13 AM, Miguel Garcia migu...@chromium.org wrote:
 Agreed, I think we need a backwards compatible solution until the permission
 API gets some traction but once Mozilla ships it I think new APIs should
 just use the permission API.

 On Wed, May 6, 2015 at 10:59 AM, Michael van Ouwerkerk
 mvanouwerk...@google.com wrote:



 On Wed, May 6, 2015 at 6:25 AM, Anne van Kesteren ann...@annevk.nl
 wrote:

 On Wed, May 6, 2015 at 12:32 AM, Mike West mk...@google.com wrote:
  I agree with Jonas. Extending the permission API to give developers a
  single
  place to check with a single consistent style seems like the right way
  to
  go.

 Yet others at Google are pushing the "expose them twice" strategy...
 Perhaps because the Permissions API is not yet ready?


 Yes, we wanted to ensure this is in the Push API because that seems to
 have more implementation momentum from browser vendors than the Permissions
 API. We didn't want developers to do hacky things in the meantime. I agree
 that once the Permissions API has critical mass, that should be the single
 place for checking permissions.

 Regards,

 Michael




 --
 https://annevankesteren.nl/






Re: Permissions API vs local APIs

2015-05-05 Thread Jonas Sicking
On Mon, May 4, 2015 at 10:42 PM, Anne van Kesteren ann...@annevk.nl wrote:
 Over in 
 https://lists.w3.org/Archives/Public/public-whatwg-archive/2015May/0006.html
 Jonas pointed out that having two APIs for doing the same thing is
 nuts. We should probably decide whether we go ahead with the
 Permissions API or keep doing permission checks on a per-API basis.

I personally think having a single API, rather than half a dozen
navigator.*.hasPermission() APIs, is better. If for no other reason
than that it's likely going to be significantly easier to keep a
single API consistent than half a dozen different ones.
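The consolidated shape under discussion looks roughly like this. Hedged: the Permissions API was still a draft at the time; the wrapper below works against anything exposing a `query({name})` method, such as `navigator.permissions` in browsers that implement it.

```javascript
// One generic permission check instead of half a dozen
// navigator.*.hasPermission() methods. `permissions` is any object
// with query({name}) -> Promise<{state}>, e.g. navigator.permissions
// in browsers implementing the (then-draft) Permissions API.
async function canUse(permissions, name) {
  const status = await permissions.query({ name });
  return status.state === 'granted';
}
```

In a browser this would be called as `canUse(navigator.permissions, 'push')`, and the same call shape covers geolocation, notifications, and any future permission-gated API.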

I'll also note that the reception on twitter from developers for the
permission API seemed quite positive.

/ Jonas



Re: :host pseudo-class

2015-05-04 Thread Jonas Sicking
On Sun, Apr 26, 2015 at 8:37 PM, L. David Baron dba...@dbaron.org wrote:
 On Saturday 2015-04-25 09:32 -0700, Anne van Kesteren wrote:
 I don't understand why :host is a pseudo-class rather than a
 pseudo-element. My mental model of a pseudo-class is that it allows
 you to match an element based on a boolean internal slot of that
 element. :host is not that since e.g. * does not match :host as I
 understand it. That seems super weird. Why not just use ::host?

 Copying WebApps since this affects everyone caring about Shadow DOM.

 We haven't really used (in the sense of shipping across browsers)
 pseudo-elements before for things that are both tree-like (i.e., not
 ::first-letter, ::first-line, or ::selection) and not leaves of the
 tree.  (Gecko doesn't implement any pseudo-elements that can have
 other selectors to their right.  I'm not sure if other engines
 have.)

 I'd be a little worried about ease of implementation, and doing so
 without disabling a bunch of selector-related optimizations that
 we'd rather have.

 At some point we probably do want to have this sort of
 pseudo-element, but it's certainly adding an additional dependency
 on to this spec.

My understanding is that the question here isn't what is being
matched, but rather what syntax to use for the selector. I.e. in both
cases the thing that the selector is matching is the DocumentFragment
which is the root of the shadow DOM.

If implementing :host is easier than ::host, then it seems like the
implementation could always convert the pseudo-element into a
pseudo-class at parse time. That should make the implementation the
same other than in the parser. Though maybe the concern here is about
parser complexity?

/ Jonas



Re: Directory Upload Proposal

2015-04-28 Thread Jonas Sicking
On Tue, Apr 28, 2015 at 4:28 PM, Travis Leithead
travis.leith...@microsoft.com wrote:
 Aaron opened an issue for this on GitHub [1] and I agree that it is a 
 problem and we should definitely rename it to something else! One option 
 might be to change dir to directory, but we would need a different name for 
 directory (the attribute that gets back the virtual root holding the 
 selected files and folders).

 I wonder, is it necessary to have a separate dir/directory attribute from 
 multiple? Adding a new DOM attribute will allow for feature detecting this 
 change. UA's can handle the presentation of a separate directory picker if 
 necessary--why force this distinction on the web developer?

We need the dir/directory attribute in order for pages to indicate
that they can handle Directory objects, no matter where/how we expose
those Directory objects.

/ Jonas



Re: Directory Upload Proposal

2015-04-28 Thread Jonas Sicking
On Mon, Apr 27, 2015 at 9:45 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Apr 23, 2015 at 12:28 PM, Ali Alabbas a...@microsoft.com wrote:
 Hello WebApps Group,

 Hi Ali,

 Yay! This is great to see a formal proposal for! Definitely something
 that mozilla is very interested in working on.

 If there is sufficient interest, I would like to work on this within the 
 scope of the WebApps working group.

 I personally will stay out of WG politics. But I think the proposal
 will receive more of the needed attention and review in this WG than
 in the HTML WG. But I'm not sure if W3C policies dictate that this is
 done in the HTML WG.

 [4] Proposal: 
 http://internetexplorer.github.io/directory-upload/proposal.html

 So, some specific feedback on the proposal.

 First off, I don't think you can use the name dir for the new
 attribute since that's already used for setting rtl/ltr direction.
 Simply renaming the attribute to something else should fix this.

 Second, rather than adding a .directory attribute, I think that we
 should simply add any selected directories to the .files list. My
 experience is that having a direct mapping between what the user does,
 and what we expose to the webpage, generally results in less developer
 confusion and/or annoyance.

 My understanding is that the current proposal is mainly so that if we
 in the future add something like Directory.enumerateDeep(), that that
 would automatically enable deep enumeration through all user options.
 However that could always be solved by adding a
 HTMLInputElement.enumerateFilesDeep() function.

Oh, there's another thing missing that I missed. We also need some
function, similar to .click(), which allows a webpage to
programmatically bring up a directory picker. This is needed on
platforms like Windows and Linux which use separate platform widgets
for picking a directory and picking a file. Many websites hide the
default browser-provided <input type=file> UI and then call .click()
when the user clicks the website UI.

A tricky question is what to do on platforms that don't have a
separate directory picker (like OSX) or which doesn't have a concept
of directories (most mobile platforms). We could either make those UAs
on those platforms not have the separate .clickDirectoryPicker()
function (or whatever we'll call it), or make them have it but just do
the same as .click().
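The hidden-input pattern would then grow a directory variant along these lines. A sketch only: `chooseDirectory()` is the hypothetical method being discussed in this thread, and the fallback branch corresponds to the platforms that only have a unified (or file-only) picker.

```javascript
// Sketch: trigger the right native picker from a website's custom UI.
// chooseDirectory() is the hypothetical method discussed in this
// thread; where it is missing (unified-picker platforms, old
// browsers) we fall back to the plain file picker via click().
function openDirectoryPicker(input) {
  if (typeof input.chooseDirectory === 'function') {
    input.chooseDirectory();
    return 'directory-picker';
  }
  input.click();
  return 'file-picker';
}
```

The return value is only there so callers can tell which picker was shown, e.g. to adjust their own UI copy.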

/ Jonas



Re: Directory Upload Proposal

2015-04-28 Thread Jonas Sicking
On Tue, Apr 28, 2015 at 4:26 PM, Travis Leithead
travis.leith...@microsoft.com wrote:
 Second, rather than adding a .directory attribute, I think that we should 
 simply add any selected directories to the .files list. My experience is 
 that having a direct mapping between what the user does, and what we expose 
 to the webpage, generally results in less developer confusion and/or 
 annoyance.

 I like this consolidation, but Ali concern (and one I share) is that legacy 
 code using .files will not expect to encounter new Directory objects in the 
 list and will likely break unless the Directory object maintains a 
 backwards-compatible File-like appearance.

Legacy pages won't be setting the directory attribute.

In fact, this is the whole purpose of the directory attribute: to
enable pages to signal "I can handle the user picking directories".

 I have a concern about revealing the user's directory names to the server, 
 and suggested anonymizing the names, but it seems that having directory path 
 names flow through to the server intact is an important scenario for 
 file-syncing, which anonymizing might break.

I agree that this is a concern, though one separate from what API we use.

I do think it's fine to expose the name of the directory
that the user picks. It doesn't seem very different from the fact that
we expose the filenames of the files that the user picks.

/ Jonas



Re: Exposing structured clone as an API?

2015-04-27 Thread Jonas Sicking
On Thu, Apr 23, 2015 at 6:31 PM, Kyle Huey m...@kylehuey.com wrote:
 On Thu, Apr 23, 2015 at 6:06 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 4/23/15 6:34 PM, Elliott Sprehn wrote:

 Have you benchmarked this? I think you're better off just writing your
 own clone library.


 That requires having a list of all objects browsers consider clonable and
 having ways of cloning them all, right?  Maintaining such a library is
 likely to be a somewhat demanding undertaking as new clonable objects are
 added...

 -Boris


 Today it's not demanding, it's not even possible.  e.g. how do you
 duplicate a FileList object?

We should just fix [1] and get rid of the FileList interface. Are
there more interfaces this applies to?

[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=23682
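Pending a first-class API, the usual workaround at the time was to round-trip the value through a MessageChannel, which applies the structured clone algorithm. A sketch: it is asynchronous, and it is subject to exactly the per-type limitations (FileList etc.) discussed in this thread.

```javascript
// Sketch: clone a value by bouncing it through a MessageChannel.
// postMessage structured-clones its argument, so the echoed copy is
// a deep clone. Asynchronous, and it throws on non-clonable values
// (functions, DOM nodes) just like any other structured clone.
function structuredCloneAsync(value) {
  return new Promise(resolve => {
    const { port1, port2 } = new MessageChannel();
    port2.onmessage = event => {
      port1.close(); // closing either port shuts down the channel
      resolve(event.data);
    };
    port1.postMessage(value);
  });
}
```

The obvious downside, beyond being async, is that it gives no control over which types are cloned or how — which is the maintenance problem Boris describes with userland clone libraries.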

/ Jonas



Re: Directory Upload Proposal

2015-04-27 Thread Jonas Sicking
On Thu, Apr 23, 2015 at 12:28 PM, Ali Alabbas a...@microsoft.com wrote:
 Hello WebApps Group,

Hi Ali,

Yay! This is great to see a formal proposal for! Definitely something
that mozilla is very interested in working on.

 If there is sufficient interest, I would like to work on this within the 
 scope of the WebApps working group.

I personally will stay out of WG politics. But I think the proposal
will receive more of the needed attention and review in this WG than
in the HTML WG. But I'm not sure if W3C policies dictate that this is
done in the HTML WG.

 [4] Proposal: http://internetexplorer.github.io/directory-upload/proposal.html

So, some specific feedback on the proposal.

First off, I don't think you can use the name dir for the new
attribute since that's already used for setting rtl/ltr direction.
Simply renaming the attribute to something else should fix this.

Second, rather than adding a .directory attribute, I think that we
should simply add any selected directories to the .files list. My
experience is that having a direct mapping between what the user does,
and what we expose to the webpage, generally results in less developer
confusion and/or annoyance.

My understanding is that the current proposal is mainly so that if we
in the future add something like Directory.enumerateDeep(), that that
would automatically enable deep enumeration through all user options.
However that could always be solved by adding a
HTMLInputElement.enumerateFilesDeep() function.

/ Jonas



Re: Why is querySelector much slower?

2015-04-27 Thread Jonas Sicking
On Mon, Apr 27, 2015 at 1:57 AM, Glen Huang curvedm...@gmail.com wrote:
 Intuitively, querySelector('.class') only needs to find the first matching
 node, whereas getElementsByClassName('class')[0] needs to find all matching
 nodes and then return the first. The former should be a lot quicker than the
 latter. Why that's not the case?

I can't speak for other browsers, but Gecko-based browsers only search
the DOM until the first hit for getElementsByClassName('class')[0].
I'm not sure why you say that it must scan for all hits.
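The Gecko behaviour described here — stopping at the first hit when only `[0]` is requested — amounts to an ordinary pre-order search. An illustrative toy tree, not browser internals:

```javascript
// Toy illustration of the early exit: walk depth-first and return
// on the first node whose class list contains cls. Nothing forces a
// full scan just because the entry point is a "get all" API.
function firstByClass(node, cls) {
  if ((node.classes || []).includes(cls)) return node;
  for (const child of node.children || []) {
    const hit = firstByClass(child, cls);
    if (hit) return hit;
  }
  return null;
}
```

A live list like getElementsByClassName can additionally cache this result and invalidate it on DOM mutation, which is one reason it can beat querySelector in practice.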

/ Jonas



Re: Directory Upload Proposal

2015-04-27 Thread Jonas Sicking
On Thu, Apr 23, 2015 at 5:45 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Apr 23, 2015 at 12:28 PM, Ali Alabbas a...@microsoft.com wrote:
 If there is sufficient interest, I would like to work on this within the 
 scope of the WebApps working group.

 It seems somewhat better to just file a bug against the HTML Standard
 since this also affects the processing model of e.g. input.files.
 Which I think was the original proposal for how to address this...
 Just expose all the files in input.files and expose the relative
 paths, but I guess that might be a bit too synchronous...

Yeah. Recursively enumerating the selected directory (or directories)
can be a potentially very lengthy process, so the page might want to
display progress UI while it's happening. We looked at
various ways of doing this in [1], but ultimately all of them felt
clunky and not as flexible as allowing the page to enumerate the
directory tree itself. This way pages could even save time on
enumeration by displaying UI which allows the user to select which
sub-directories to traverse.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=846931

/ Jonas



Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Jonas Sicking
On Wed, Apr 1, 2015 at 4:30 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Wed, Apr 1, 2015 at 4:27 PM, Domenic Denicola d...@domenic.me wrote:
 I think it's OK for different browsers to experiment with different 
 non-interoperable conditions under which they fulfill or reject the 
 permissions promise. That's already true for most permissions grants today.

 It's true when UX is involved. When UX must not be involved, it's a
 very different situation. For those cases we do spell out what needs
 to happen.

I agree with Anne. What Domenic describes sounds like something
similar to CORS. I.e. a network protocol which lets a server indicate
that it trusts a given party.

Not saying that we can use CORS to solve this, or that we should
extend CORS to solve this. My point is that CORS works because it was
specified and implemented across browsers. If we'd do something like
what Domenic proposes, I think that would be true here too.

However, in my experience the use case for the TCPSocket and UDPSocket
APIs is to connect to existing hardware and software systems. Like
printers or mail servers. Server-side opt-in is generally not possible
for them.

/ Jonas



Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Jonas Sicking
On Wed, Apr 1, 2015 at 6:37 PM, Florian Bösch pya...@gmail.com wrote:
 On Wed, Apr 1, 2015 at 6:02 PM, Jonas Sicking jo...@sicking.cc wrote:

 Not saying that we can use CORS to solve this, or that we should
 extend CORS to solve this. My point is that CORS works because it was
 specified and implemented across browsers. If we'd do something like
 what Domenic proposes, I think that would be true here too.

 However, in my experience the use case for the TCPSocket and UDPSocket
 APIs is to connect to existing hardware and software systems. Like
 printers or mail servers. Server-side opt-in is generally not possible
 for them.

 Isn't the problem that these existing systems can't be changed (let's say an
 IRC server) to support say WebSockets, and thus it'd be convenient to be
 able to TCP to it. I think that is something CORS-like could actually solve.
 You could deploy (on the same origin) a webserver that handles the opt-in
 for that origin/port/protocol and then the webserver can open a connection
 to it. For example:

 var socket = new Socket(); socket.connect('example.com', 194);

 -

 RAW-SOCKET-OPTIONS HTTP/1.1
 port: 194
 host: example.com

 -

 HTTP/1.1 200 OK
 Access-Control-Allow-Origin: example.com

 - browser opens a TCP connection to example.com 194.

 So you don't need to upgrade the existing system for server authorization.
 You just need to deploy a (http compatible) authorative source on the same
 origin that can give a browser the answer it desires.

Again, the use case here is to enable someone to develop, for example,
a browser base mail client which has support for POP/IMAP/SMTP.

It's going to be very hard for that email client to get any
significant user base if their install steps are:

1. Go to awesomeEmail2000.com
2. Contact your mail provider and ask them to install a http server on
their mail server
3. There is no step three :)

/ Jonas



Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Jonas Sicking
On Wed, Apr 1, 2015 at 7:03 PM, Domenic Denicola d...@domenic.me wrote:
 From: Boris Zbarsky [mailto:bzbar...@mit.edu]

 This particular example sets off alarm bells for me because of virtual 
 hosting.

 Eek! Yeah, OK, I think it's best I refrain from trying to come up with 
 specific examples. Let's forget I said anything...

 As in, this seems like precisely the sort of thing that one browser might
 experiment with, another consider an XSS security bug, and then we have
 content that depends on a particular browser, no?

 My argument is that it's not materially different from existing permissions 
 APIs.

I think it is.

In cases like geolocation or notifications, the people writing the
spec, and the people implementing the spec, were able to envision a
reasonable permissions UI.

For the TCP/UDPSocket APIs, no one, to my knowledge, has been able to
describe a reasonable UI.

Basically the spec contains a big "magic happens here" section. That's
always bad in a spec. For example, it'd be bad if the CSS spec said
"figure out column sizes that would make the table look good". The fact
that we're talking about permissions doesn't make magic any more ok.

Magic is different from leaving UI details up to the browser.

/ Jonas



Re: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Jonas Sicking
Oh, I should add one thing.

I think that the TCPSocket and UDPSocket APIs are great. There is a
growing number of implementations of proprietary platforms which are
heavily based on web technologies. The most well known one is Cordova.
Platforms like those were the original audience for the TCP/UDPSocket
APIs and I think they would work great there.

But of course it requires that those platforms are interested in
implementing a cross-platform API, even if it's not a web API in the
traditional sense.

It might also mean that the API isn't well suited for discussion in
the WebApps WG. I'll leave that discussion to others.

/ Jonas

On Wed, Apr 1, 2015 at 8:47 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Apr 1, 2015 at 7:03 PM, Domenic Denicola d...@domenic.me wrote:
 From: Boris Zbarsky [mailto:bzbar...@mit.edu]

  This particular example sets off alarm bells for me because of virtual 
 hosting.

 Eek! Yeah, OK, I think it's best I refrain from trying to come up with 
 specific examples. Let's forget I said anything...

 As in, this seems like precisely the sort of thing that one browser might
 experiment with, another consider an XSS security bug, and then we have
 content that depends on a particular browser, no?

 My argument is that it's not materially different from existing permissions 
 APIs.

 I think it is.

 In cases like geolocation or notifications, the people writing the
 spec, and the people implementing the spec, were able to envision a
 reasonable permissions UI.

 For the TCP/UDPSocket APIs, no one, to my knowledge, has been able to
 describe a reasonable UI.

 Basically the spec contains a big "magic happens here" section. That's
 always bad in a spec. For example, it'd be bad if the CSS spec said
 "figure out column sizes that would make the table look good". The fact
 that we're talking about permissions doesn't make magic any more ok.

 Magic is different from leaving UI details up to the browser.

 / Jonas



Re: CfC: publish Proposed Recommendation of Web Messaging; deadline March 28

2015-03-21 Thread Jonas Sicking
On Sat, Mar 21, 2015 at 5:52 AM, Arthur Barstow art.bars...@gmail.com wrote:

 2. http://www.w3c-test.org/webmessaging/without-ports/025.html; this test
 failure (which passes on IE) is considered an implementation bug
 (MessageChannel and MessagePort are supposed to be exposed to Worker) that
 is expected to be fixed.

I'm not sure that we can really consider lack of support in Workers a
bug. Worker support is generally non-trivial since it requires making
an API work off the main thread.

That said, mozilla has patches for worker support in progress right
now, so hopefully Firefox can serve as second implementation here
soon.

/ Jonas



Re: template namespace attribute proposal

2015-03-13 Thread Jonas Sicking
On Fri, Mar 13, 2015 at 1:57 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Fri, Mar 13, 2015 at 1:48 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Mar 13, 2015 at 1:16 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Thu, Mar 12, 2015 at 3:07 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Mar 12, 2015 at 4:32 AM, Benjamin Lesh bl...@netflix.com wrote:
 What are your thoughts on this idea?

 I think it would be more natural (HTML-parser-wise) if we
 special-cased SVG elements, similar to how e.g. table elements are
 special-cased today. A lot of template-parsing logic is set up so
 that things work without special effort.

 Absolutely.  Forcing authors to write, or even *think* about,
 namespaces in HTML is a complete usability failure, and utterly
 unnecessary.  The only conflicts in the namespaces are font
 (deprecated in SVG2), script and style (harmonizing with HTML so
 there's no difference), and a (attempting to harmonize API surface).

 Note that the contents of an HTML <script> parse vastly differently from
 an SVG <script>. I don't recall if the same is true for <style>.

 So the parser sadly still needs to be able to tell an SVG <script>
 from an HTML one.

 I proposed aligning these so that parsing would be the same, but there
 was more opposition than interest back then.

 That's back then.  The SVGWG is more interested in pursuing
 convergence now, per our last few F2Fs.

The resistance came mainly from the HTML-parser camp since it required
changes to parsing HTML scripts too.

Unless the SVG WG is willing to drop support for
<script><![CDATA[...]]></script>. But that seems like it'd break a lot
of content.

/ Jonas



Re: template namespace attribute proposal

2015-03-13 Thread Jonas Sicking
On Fri, Mar 13, 2015 at 1:16 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Thu, Mar 12, 2015 at 3:07 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Mar 12, 2015 at 4:32 AM, Benjamin Lesh bl...@netflix.com wrote:
 What are your thoughts on this idea?

 I think it would be more natural (HTML-parser-wise) if we
 special-cased SVG elements, similar to how e.g. table elements are
 special-cased today. A lot of template-parsing logic is set up so
 that things work without special effort.

 Absolutely.  Forcing authors to write, or even *think* about,
 namespaces in HTML is a complete usability failure, and utterly
 unnecessary.  The only conflicts in the namespaces are font
 (deprecated in SVG2), script and style (harmonizing with HTML so
 there's no difference), and a (attempting to harmonize API surface).

Note that the contents of an HTML <script> parse vastly differently
from an SVG <script>'s. I don't recall if the same is true for <style>.

So the parser sadly still needs to be able to tell an SVG <script>
from an HTML one.

I proposed aligning these so that parsing would be the same, but there
was more opposition than interest back then.

/ Jonas



Re: CORS performance

2015-02-24 Thread Jonas Sicking
On Tue, Feb 24, 2015 at 3:25 AM, Anne van Kesteren ann...@annevk.nl wrote:
 If that's the case then I think we'd get most of the functionality,
 with essentially none of the risk, by only allowing server-wide
 cookie-less preflights.

 If we only do it for this, could we combine that feature with the
 existing preflight then? Support a Access-Control-Allow-Origin-Wide:
 true header or some such that's mutually exclusive with
 Access-Control-Allow-Credentials: true.

I don't have opinions on this.

/ Jonas



Re: CORS performance

2015-02-23 Thread Jonas Sicking
On Mon, Feb 23, 2015 at 7:15 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:
 On Tue, Feb 17, 2015 at 9:31 PM, Brad Hill hillb...@gmail.com wrote:
 I think it is at least worth discussing the relative merits of using a
 resource published under /.well-known for such use cases, vs. sending
 pinned headers with every single resource.

 FWIW, when CORS was designed, the Flash crossdomain.xml design (which
 uses a well-known URL though not under /.well-known) already existed
 and CORS deliberately opted for a different design.

 It's been a while, so I don't recall what the reasons against adopting
 crossdomain.xml or something very similar to it were, but considering
 that the crossdomain.xml design was knowingly rejected, it's probably
 worthwhile to pay attention to why.

A lot of websites accidentally enabled cross-origin requests with
cookies, not realizing that this let attackers make requests that had
side-effects, as well as read personal user data, without user
permission.

In short, it was very easy to misconfigure a server, and people did.

This is why I would feel dramatically more comfortable if we only
enabled server-wide opt-in for credential-less requests. Those are
many orders of magnitude easier to make secure.

/ Jonas



Re: CORS performance proposal

2015-02-23 Thread Jonas Sicking
On Sat, Feb 21, 2015 at 11:18 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Sat, Feb 21, 2015 at 10:17 AM, Martin Thomson
 martin.thom...@gmail.com wrote:
 On 21 February 2015 at 20:43, Anne van Kesteren ann...@annevk.nl wrote:
 High-byte of what? A URL is within ASCII range when it reaches the
 server. This is the first time I hear of this.

 Apparently, all sorts of muck floats around the Internet.  When we did
 HTTP/2 we were forced to accept that header field values (URLs in
 particular) were a sequence of octets.  Those are often interpreted as
 strings in various interesting ways.

 But in this particular case it must be the browser that generates said
 muck, no? Other than Internet Explorer (and that's a couple versions
 ago, so wouldn't support this protocol anyway), there's no browser
 that does this as far as I know.

All browsers support sending %xx escapes to the server. Decoding those
likely still happens in a server-specific way more often than not,
despite specs defining how it should be done.
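
The divergence is easy to demonstrate with an escaped path separator: the escape survives URL parsing intact, but a server that decodes before routing sees a very different path. A minimal sketch (the request path is made up):

```javascript
// Percent-escapes travel to the server as plain ASCII, but what they
// mean depends on when (and whether) the server decodes them.
const raw = "/api/user%2F..%2Fadmin";

// A server that routes on the raw path sees a single segment name:
const rawSegments = raw.split("/");         // ["", "api", "user%2F..%2Fadmin"]

// A server that decodes before routing sees extra path separators:
const decoded = decodeURIComponent(raw);    // "/api/user/../admin"
const decodedSegments = decoded.split("/"); // ["", "api", "user", "..", "admin"]

console.log(rawSegments.length, decodedSegments.length); // 3 5
```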

/ Jonas



Re: CORS performance proposal

2015-02-23 Thread Jonas Sicking
On Fri, Feb 20, 2015 at 11:43 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Fri, Feb 20, 2015 at 9:38 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Feb 20, 2015 at 1:05 AM, Anne van Kesteren ann...@annevk.nl wrote:
 An alternative is that we attempt to introduce
 Access-Control-Policy-Path again from 2008. The problems you raised
 https://lists.w3.org/Archives/Public/public-appformats/2008May/0037.html
 seem surmountable. URL parsing is defined in more detail these days
 and we could simply ban URLs containing escaped \ and /.

 I do remember that another issue that came up back then was that
 servers would treat more than just '\', or the escaped version
 thereof, as a /. But also any character whose low-byte was equal to
 the ascii code for '\' or '/'. I.e. the server would just cut the
 high-byte when doing some internal 2byte-string to 1byte-string
 conversion. Potentially this conversion is affected by what character
 encodings the server is configured for too, but i'm less sure about
 that.

 High-byte of what? A URL is within ASCII range when it reaches the
 server. This is the first time I hear of this.

I really don't remember the details. I'd recommend talking to
Microsoft since I believe they had done the most research into this at
the time.

Keep in mind though that just because URL parsing is defined a
particular way, doesn't mean that software implements it that way.
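
The high-byte truncation described above can be sketched directly; the narrowing function below is hypothetical, standing in for a server's lossy 2-byte-to-1-byte string conversion:

```javascript
// Simulate a server that narrows a 2-byte-per-char string to a
// 1-byte-per-char string by keeping only the low byte of each code unit.
function naiveNarrow(s) {
  return Array.from(s, ch => String.fromCharCode(ch.charCodeAt(0) & 0xff)).join("");
}

// U+212F (SCRIPT SMALL E) has low byte 0x2F -- the ASCII code for "/".
const tricky = "\u212F";
console.log(naiveNarrow(tricky)); // "/"

// So a URL containing no literal slash can grow one after narrowing:
const path = "/safe\u212Fetc";
console.log(naiveNarrow(path));   // "/safe/etc"
```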

/ Jonas



Re: CORS performance

2015-02-23 Thread Jonas Sicking
On Mon, Feb 23, 2015 at 11:06 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Mon, Feb 23, 2015 at 7:55 PM, Jonas Sicking jo...@sicking.cc wrote:
 A lot websites accidentally enabled cross-origin requests with
 cookies. Not realizing that that enabled attackers to make requests
 that had side-effects as well as read personal user data without user
 permission.

 In short, it was very easy to misconfigure a server, and people did.

 This is why I would feel dramatically more comfortable if we only
 enabled server-wide opt-in for credential-less requests. Those are
 many orders of magnitude easier to make secure.

 Why is that not served by requiring an additional header that
 explicitly opts into that case?

I don't think an extra header is that much harder to deploy than
crossdomain.xml is. I.e. I don't see strong reasons to think that
people won't misconfigure.

 That combined with requiring to list
 the explicit origin has worked well for CORS so far.

This could potentially help.

I don't remember the details of how/why people screwed up with
crossdomain.xml. But if the problem was that people hosted multiple
services on the same server and only thought of one of them when
writing a policy, then this won't really help very much.

Do we have any data on how common it is for people to use CORS with
credentials? My impression is that it's far less common than CORS
without credentials.

If that's the case then I think we'd get most of the functionality,
with essentially none of the risk, by only allowing server-wide
cookie-less preflights.

But data would help for sure.

/ Jonas



Re: CORS performance

2015-02-19 Thread Jonas Sicking
On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey d...@arandomurl.com wrote:
 so presumably it is OK to set the Content-Type to text/plain

 Thats not ok, but may explain my confusion, is Content-Type considered a
 Custom Header that will always trigger a preflight? if so then none of the
 caching will apply, CouchDB requires sending the appropriate content-type

We most likely can consider the content-type header as *not* custom.
I was one of the people way back when that pointed out that there's a
theoretical chance that allowing arbitrary content-type headers could
cause security issues. But it seems highly theoretical.

I suspect that the mozilla security team would be fine with allowing
arbitrary content-types to be POSTed though. Worth asking. I can't
speak for other browser vendors of course.
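
For reference, the CORS rules only exempt three content types from preflight; anything else, including application/json, makes Content-Type count as a custom header. A sketch of that check:

```javascript
// Content types a cross-origin request may send without a preflight,
// per the CORS "safelisted" rules. Anything else (including
// application/json) makes the Content-Type header custom and
// triggers a preflight.
const SAFELISTED = [
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
];

function contentTypeNeedsPreflight(contentType) {
  // Parameters such as "; charset=utf-8" don't affect the check.
  const essence = contentType.split(";")[0].trim().toLowerCase();
  return !SAFELISTED.includes(essence);
}

console.log(contentTypeNeedsPreflight("text/plain"));        // false
console.log(contentTypeNeedsPreflight("application/json"));  // true
```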

/ Jonas



Re: CORS performance proposal

2015-02-19 Thread Jonas Sicking
Would this be allowed for both requests with credentials and requests
without credentials? The security implications of the two are very
different.

/ Jonas

On Thu, Feb 19, 2015 at 5:29 AM, Anne van Kesteren ann...@annevk.nl wrote:
 When the user agent is about to make its first preflight to an origin
 (timeout up to the user agent), it first makes a preflight that looks
 like:

   OPTIONS *
   Access-Control-Request-Origin-Wide-Cache: [origin]
   Access-Control-Request-Method: *
   Access-Control-Request-Headers: *

 If the response is

   2xx XX
   Access-Control-Allow-Origin-Wide-Cache: [origin]
   Access-Control-Allow-Methods: *
   Access-Control-Allow-Headers: *
   Access-Control-Max-Age: [max-age]

 then no more preflights will be made for the duration of [max-age] (or
 shortened per user agent preference). If the response includes

   Access-Control-Allow-Credentials: true

 the cache scope is increased to requests that include credentials.

 I think this has a reasonable tradeoff between security and opening up
 all the power of the HTTP APIs on the server without the performance
 hit. It still makes the developer very conscious about the various
 features involved.

 The cache would be on a per requesting origin basis as per the headers
 above. The Origin and Access-Control-Allow-Origin would not take part
 in this exchange, to make it very clear what this is about.

 (This does not affect Access-Control-Expose-Headers or any of the
 other headers required as part of non-preflight responses.)


 --
 https://annevankesteren.nl/
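
A sketch of the cache behavior the proposal describes, with the origin-wide grant modeled as a simple expiring map (header parsing and network I/O omitted; the class is illustrative, not part of the proposal):

```javascript
// Origin-wide preflight cache: after one successful "OPTIONS *"
// exchange, skip preflights to that origin until max-age expires.
// (Simplified: real entries would also track whether credentialed
// requests are covered.)
class OriginWidePreflightCache {
  constructor(now = () => Date.now()) {
    this.now = now;
    this.entries = new Map(); // origin -> expiry timestamp (ms)
  }
  store(origin, maxAgeSeconds) {
    this.entries.set(origin, this.now() + maxAgeSeconds * 1000);
  }
  needsPreflight(origin) {
    const expiry = this.entries.get(origin);
    if (expiry === undefined || expiry <= this.now()) {
      this.entries.delete(origin);
      return true;
    }
    return false;
  }
}

// Usage with a fake clock:
let t = 0;
const cache = new OriginWidePreflightCache(() => t);
console.log(cache.needsPreflight("https://api.example")); // true
cache.store("https://api.example", 600);                  // max-age 600s
console.log(cache.needsPreflight("https://api.example")); // false
t = 601 * 1000;
console.log(cache.needsPreflight("https://api.example")); // true
```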




Re: CORS performance

2015-02-19 Thread Jonas Sicking
On Thu, Feb 19, 2015 at 3:30 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Feb 19, 2015 at 12:17 PM, Dale Harvey d...@arandomurl.com wrote:
 With Couch / PouchDB we are working with an existing REST API wherein every
 request is to a different url (which is unlikely to change), the performance
 impact is significant since most of the time is used up by latency, the CORS
 preflight request essentially double the time it takes to do anything

 Yeah, also, it should not be up to us how people design their HTTP
 APIs. Limiting HTTP in that way because it is hard to make CORS scale
 seems bad.


 I think we've been too conservative when introducing CORS. It's
 effectively protecting content behind a firewall,

...and content that uses user credentials like cookies.

/ Jonas



Re: CORS performance

2015-02-19 Thread Jonas Sicking
On Thu, Feb 19, 2015 at 12:38 PM, Brad Hill hillb...@gmail.com wrote:
 I think that POSTing JSON would probably expose to CSRF a lot of things that
 work over HTTP but don't expect to be interacted with by web browsers in
 that manner.  That's why the recent JSON encoding for forms mandates that it
 be same-origin only.

Note that you can already POST JSON cross-origin. Without any
preflight. The only thing you can't do is to set the Content-Type
header to the official JSON mimetype.

So the question is, does the server check that the Content-Type header
is set to application/json and if not abort any processing?
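
A sketch of the server-side check being asked about: refuse to process a body unless the Content-Type really is application/json (the request shape here is hypothetical):

```javascript
// Server-side guard: since a cross-origin page can POST a JSON-shaped
// body with Content-Type text/plain without any preflight, a server
// that wants preflight protection must verify the header itself.
function shouldProcessAsJson(headers) {
  const ct = (headers["content-type"] || "").split(";")[0].trim().toLowerCase();
  return ct === "application/json";
}

console.log(shouldProcessAsJson({ "content-type": "application/json" })); // true
console.log(shouldProcessAsJson({ "content-type": "text/plain" }));       // false
console.log(shouldProcessAsJson({}));                                     // false
```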

/ Jonas



Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Jonas Sicking
On Tue, Feb 10, 2015 at 12:43 PM, Michaela Merz
michaela.m...@hermetos.com wrote:
 Blobs are immutable but it would be cool to have blob
 'pipes' or FIFOs allowing us to stream from those pipes by feeding them
 via AJAX.

Since it sounds like you want to help with this, there's good news!
There's an API draft available. It would be very helpful to have web
developers such as yourself try it out and find any rough edges before
it ships in browsers.

There's a spec and a prototype implementation over at
https://github.com/whatwg/streams/

It would be very helpful if you try it out and report any problems
that you find or share any cool demos that you create.

/ Jonas



Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Jonas Sicking
On Tue, Feb 10, 2015 at 12:51 PM, Marc Fawzi marc.fa...@gmail.com wrote:
 i agree that it's not a democratic process and even though some W3C/TAG
 people will engage you every now and then the end result is the browser
 vendors and even companies like Akamai have more say than the users and
 developers

Developers actually have more say than any other party in this process.

Browsers are not interested in shipping any APIs that developers
aren't going to use. Likewise they are not going to remove any APIs
that developers are using (hence sync XHR is not going to go anywhere,
no matter what the spec says).

Sadly, W3C and the developer community have not yet figured out a good
way to communicate.

But here's where you can make a difference!

Please do suggest problems that you think need to be solved: with new
specifications that are still in the development phase, with existing
specifications that have problems, and with specifications that don't
exist yet but you think should.

Looking forward to your constructive contributions.

/ Jonas



Re: oldNode.replaceWith(...collection) edge case

2015-01-27 Thread Jonas Sicking
On Jan 27, 2015 4:51 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Jan 22, 2015 at 11:43 AM, Jonas Sicking jo...@sicking.cc wrote:
  In general I agree that it feels unintuitive that you can't replace a
node
  with a collection which includes the node itself. So the extra line or
two
  of code seems worth it.

 You don't think it's weird that before/after/replaceWith all end up
 doing the same for that scenario? Perhaps it's okay..

Yeah, I think that's okay.

/ Jonas


Re: oldNode.replaceWith(...collection) edge case

2015-01-22 Thread Jonas Sicking
On Jan 17, 2015 8:20 PM, Glen Huang curvedm...@gmail.com wrote:

 Oh crap. Just realized saving index won't work if context node's previous
siblings are passed as arguments. Looks like inserting transient node is
still the best way.

The simplest way to write this method would seem to me to be something like:

Node.prototype.replaceWith = function(collection) {
  if (collection instanceof Node)
    collection = [collection];

  var following = this.nextSibling;
  var parent = this.parentNode;
  parent.removeChild(this);
  for (var node of collection) {
    if (node == following) {
      following = following.nextSibling;
      continue;
    }

    var last = null;
    if (node.nodeType == Node.DOCUMENT_FRAGMENT_NODE) {
      last = node.lastChild;
    }

    parent.insertBefore(node, following);

    if (last) {
      following = last.nextSibling;
    }
  }
};

In general I agree that it feels unintuitive that you can't replace a node
with a collection which includes the node itself. So the extra line or two
of code seems worth it.

/ Jonas


Re: PSA: Indexed Database API is a W3C Recommendation

2015-01-08 Thread Jonas Sicking
\o/

On Thu, Jan 8, 2015 at 12:20 PM, Arthur Barstow art.bars...@gmail.com wrote:
 Congratulations All! This was a job very well done.

 On 1/8/15 2:37 PM, Coralie Mercier wrote:

 It is my pleasure to announce that Indexed Database API is published as
 a W3C Recommendation
 http://www.w3.org/TR/2015/REC-IndexedDB-20150108/

 This specification defines an API for a database of records holding
 simple values and hierarchical objects.

 All Members who responded to the Call for Review [1] of the Proposed
 Recommendation supported the publication of this specification as a
 W3C Recommendation, or abstained.

 Please join us in thanking the Web Applications Working Group [2]
 for their achievement.

 This announcement follows section 8.1.2 [3] of the W3C Process Document.

 For Tim Berners-Lee, Director,
 Philippe Le Hegaret, Interaction Domain Lead,
 Xiaoqian Wu, Team Contact,
 Yves Lafon, Team contact;
 Coralie Mercier, W3C Communications

 [1] https://www.w3.org/2002/09/wbs/33280/IndexedDB-PR/results
 [2] https://www.w3.org/2008/webapps/
 [3] https://www.w3.org/2014/Process-20140801/#ACReviewAfter





Re: Interoperability vs file: URLs

2014-12-04 Thread Jonas Sicking
On Thu, Dec 4, 2014 at 7:50 AM, Sam Ruby ru...@intertwingly.net wrote:
 On 12/02/2014 02:22 AM, Jonas Sicking wrote:
 To be clear, I'm proposing to remove any and all normative definition
 of file:// handling from the spec. Because I don't think there is
 interoperability, nor do I think that it's particularly high priority
 to achieve it.


 A bug has been file on your behalf:

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=27518

 In response, I suggest that your proposal is a bit too extreme, and I
 suggest dialing it back a bit.

That sounds ok to me.

/ Jonas



Re: URL Spec WorkMode (was: PSA: Sam Ruby is co-Editor of URL spec)

2014-12-01 Thread Jonas Sicking
Just in case I haven't formally said this elsewhere:

My personal feeling is that it's probably better to stay away from
speccing the behavior of file:// URLs.

There's very little incentive for browsers to align on how to handle
file:// URLs. The complexities of different file system behaviors
on different platforms and different file system backends makes doing
comprehensive regression testing painful. And the value is pretty low
because there's almost no browser content that uses absolute file://
URLs.

I'm not sure if non-browser URL consuming software has different
incentives. Most software that loads resources from the local file
system uses file paths, rather than file:// URLs. Though I'm sure there
are exceptions.

And it seems like file:// URLs add a significant chunk of complexity
to the spec. Complexity which might be for naught if implementations
don't implement them.

/ Jonas



On Mon, Dec 1, 2014 at 5:17 PM, Sam Ruby ru...@intertwingly.net wrote:
 On 11/18/2014 03:18 PM, Sam Ruby wrote:


 Meanwhile, I'm working to integrate the following first into the WHATWG
 version of the spec, and then through the WebApps process:

 http://intertwingly.net/projects/pegurl/url.html


 Integration is proceeding, current results can be seen here:

 https://specs.webplatform.org/url/webspecs/develop/

 It is no longer clear to me what through the WebApps process means. In an
 attempt to help define such, I'm making a proposal:

 https://github.com/webspecs/url/blob/develop/docs/workmode.md#preface

 At this point, I'm looking for general feedback.  I'm particularly
 interested in things I may have missed.  Pull requests welcome!

 Once discussion dies down, I'll try go get agreement between the URL
 editors, the WebApps co-chairs and W3C Legal.  If/when that is complete,
 this will go to W3C Management and whatever the WHATWG equivalent would be.

 - Sam Ruby




Re: URL Spec WorkMode (was: PSA: Sam Ruby is co-Editor of URL spec)

2014-12-01 Thread Jonas Sicking
On Mon, Dec 1, 2014 at 7:11 PM, Domenic Denicola d...@domenic.me wrote:
 What we really need to do is get some popular library or website to take a
 dependency on mobile Chrome or mobile Safari's file URL parsing. *Then* we'd
 get interoperability, and quite quickly I'd imagine.

To my knowledge, all browsers explicitly block websites from having
any interactions with file:// URLs. I.e. they don't allow loading an
<img> from a file:// URL, or even linking to a file:// HTML page with
an <a> whose href is a file:// URL, even though both are generally
allowed cross-origin.

So it's very difficult for webpages to depend on the behavior of
file:// parsing, even if they were to intentionally try.

/ Jonas

 
 From: Jonas Sicking
 Sent: 2014-12-01 22:07
 To: Sam Ruby
 Cc: Webapps WG
 Subject: Re: URL Spec WorkMode (was: PSA: Sam Ruby is co-Editor of URL spec)

 Just in case I haven't formally said this elsewhere:

 My personal feeling is that it's probably better to stay away from
 speccing the behavior of file:// URLs.

 There's very little incentive for browsers to align on how to handle
 file:// handling. The complexities of different file system behaviors
 on different platforms and different file system backends makes doing
 comprehensive regression testing painful. And the value is pretty low
 because there's almost no browser content that uses absolute file://
 URLs.

 I'm not sure if non-browser URL consuming software has different
 incentives. Most software that loads resources from the local file
 system use file paths, rather than file:// URLs. Though I'm sure there
 are exceptions.

 And it seems like file:// URLs add a significant chunk of complexity
 to the spec. Complexity which might be for naught if implementations
 don't implement them.

 / Jonas



 On Mon, Dec 1, 2014 at 5:17 PM, Sam Ruby ru...@intertwingly.net wrote:
 On 11/18/2014 03:18 PM, Sam Ruby wrote:


 Meanwhile, I'm working to integrate the following first into the WHATWG
 version of the spec, and then through the WebApps process:

 http://intertwingly.net/projects/pegurl/url.html


 Integration is proceeding, current results can be seen here:

 https://specs.webplatform.org/url/webspecs/develop/

 It is no longer clear to me what through the WebApps process means. In
 an
 attempt to help define such, I'm making a proposal:

 https://github.com/webspecs/url/blob/develop/docs/workmode.md#preface

 At this point, I'm looking for general feedback.  I'm particularly
 interested in things I may have missed.  Pull requests welcome!

 Once discussion dies down, I'll try go get agreement between the URL
 editors, the WebApps co-chairs and W3C Legal.  If/when that is complete,
 this will go to W3C Management and whatever the WHATWG equivalent would
 be.

 - Sam Ruby





Re: Interoperability vs file: URLs (was: URL Spec WorkMode)

2014-12-01 Thread Jonas Sicking
On Mon, Dec 1, 2014 at 7:58 PM, Sam Ruby ru...@intertwingly.net wrote:
 On 12/01/2014 10:22 PM, Jonas Sicking wrote:

 On Mon, Dec 1, 2014 at 7:11 PM, Domenic Denicola d...@domenic.me wrote:

 What we really need to do is get some popular library or website to take
 a
 dependency on mobile Chrome or mobile Safari's file URL parsing. *Then*
 we'd
 get interoperability, and quite quickly I'd imagine.


 To my knowledge, all browsers explicitly block websites from having
 any interactions with file:// URLs. I.e. they don't allow loading an
 img from file:// or even link to a file:// HTML page using a
 href=file:// Even though both those are generally allowed cross
 origin.

 So it's very difficult for webpages to depend on the behavior of
 file:// parsing, even if they were to intentionally try.

 Relevant related reading, look at the description that the current URL
 Living Standard provides for the origin for file: URLs:

 https://url.spec.whatwg.org/#origin

 I tend to agree with Jonas.  Ideally the spec would match existing browser
 behavior.  When that's not possible, getting agreements from browser vendors
 on the direction would suffice.

 When neither exist, a more accurate description (such as the one cited above
 in the Origin part of the URL Standard) is appropriate.

To be clear, I'm proposing to remove any and all normative definition
of file:// handling from the spec. Because I don't think there is
interoperability, nor do I think that it's particularly high priority
to achieve it.

/ Jonas



Re: What I am missing

2014-11-18 Thread Jonas Sicking
On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 11/18/14, 10:26 PM, Michaela Merz wrote:

 First: We need signed script code.

 For what it's worth, Gecko supported this for a while.  See
 http://www-archive.mozilla.org/projects/security/components/signed-scripts.html.
 In practice, people didn't really use it, and it made the security model a
 _lot_ more complicated and hard to reason about, so the feature was dropped.

 It would be good to understand how proposals along these lines differ from
 what's already been tried and failed.

The way we did script signing back then was nutty in several ways. The
signing we do in FirefoxOS is *much* simpler. Simple enough that no
one has complained about the complexity that it has added to Gecko.

Sadly, enhanced security models that use signing by a trusted party
inherently lose a lot of the advantages of the web. It means that
you can't publish a new version of your website by simply uploading
files to your webserver whenever you want. And it means that you can't
generate the script and markup that make up your website dynamically
on your webserver.

So I'm by no means arguing that FirefoxOS has the problem of signing solved.

Unfortunately no one has been able to solve the problem of how to
grant web content access to capabilities like raw TCP or UDP sockets
in order to access legacy hardware and protocols, or how to get
read/write access to your photo library in order to build a photo
manager, without relying on signing.

Which has meant that the web so far is unable to compete with native
in those areas.

/ Jonas



Re: What I am missing

2014-11-18 Thread Jonas Sicking
On Tue, Nov 18, 2014 at 9:38 PM, Florian Bösch pya...@gmail.com wrote:
 or direct file access

 http://www.html5rocks.com/en/tutorials/file/filesystem/

This is no more direct file access than IndexedDB is. IndexedDB also
allows you to store File objects, but likewise doesn't let you access
things like your photo or music library.

/ Jonas



Re: Push API change for permissions UX

2014-10-27 Thread Jonas Sicking
On Sat, Oct 25, 2014 at 11:42 PM, Jake Archibald jaffathec...@gmail.com wrote:
 This discussion is about how often push may be processed silently (without
 showing a notification), not if a push notification may *only* show a
 notification.

Ok.

I think this comes back to the old problem that different UAs have
different UIs and that it's hard to create a single spec to cover them
all.

The use case sounds reasonable to me, but I'm not sure if mozilla has
plans to implement anything similar. I'll defer to other people
working on push.

/ Jonas



Re: Push API change for permissions UX

2014-10-25 Thread Jonas Sicking
On Fri, Oct 24, 2014 at 9:39 AM, Owen Campbell-Moore owe...@google.com wrote:
 I think it might make sense to ask for permission to display
 notifications/UI at the same time as you ask for permission to run in the
 background.

 I hope the above explains why we believe that while some sites may want to
 ask for both permissions, they should be able to say to the user Hey, I
 want to send you notifications, without saying Hey, I want to run in the
 background whenever I want for any reason.

I suggest that if we attempt to solve this use case, that we do it by
adding the ability to send push messages that directly create a
notification, without waking up a SW.

There's recently been a separate thread about that.

/ Jonas



Re: Push API and Service Workers

2014-10-23 Thread Jonas Sicking
On Thu, Oct 23, 2014 at 2:27 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Oct 21, 2014 at 7:25 AM, Erik Corry erikco...@google.com wrote:
 * Push doesn't actually need SW's ability to intercept network
 communications on behalf of a web page.
 * You can imagine a push-handling SW that does all sorts of
 complicated processing of notifications, downloading things to a local
 database, but does not cache/intercept a web page.
 * This ties into the discussion of whether it should be possible to
 register a SW without giving it a network-intercept namespace

 As was discussed over in
 https://github.com/slightlyoff/ServiceWorker/issues/445#issuecomment-60304515
 earlier today, you need a scope for all uses of SW, because you need
 to *request permission* on a *page*, not within a SW (so the user has
 appropriate context on whether to grant the permission or not), and
 the scope maps the page to the SW that the registration is for.

 (The permission grant is actually per-origin, not per-scope/SW, but
 the registration itself is per-scope/SW, and it has to be done from
 within a page context because there *might* be a permission grant
 needed.)

Yes, you need to ask for permission within a page. But that page
doesn't have to have any particular relation to the scope of the SW
that it's asking for. It just needs to be same-origin with that SW.

As the API is structured, any page on a website can grab any SW
registration, and call registerPush or registerGeoFence on that SW
registration.

So I don't see how the scope of the SW matters.

/ Jonas



Re: Push API and Service Workers

2014-10-19 Thread Jonas Sicking
On Wed, Oct 15, 2014 at 3:07 PM, Shijun Sun shij...@microsoft.com wrote:
 My understanding here is that we want to leverage the push client in the 
 OS.  That will provide new capabilities without dependency on a direct 
 connection between the app and the app server.

Yes, this is how the spec is defined. The spec leaves it up to the
implementation how to transport the information from the push server
to the device. This part is entirely transparent to the webapp, even
once we specify the server side of the push feature, and so is up to
the implementation to do through whatever means it wants.

/ Jonas



Re: Push API and Service Workers

2014-10-15 Thread Jonas Sicking
The hard question is: what do you do if there's an incoming push
message for a given website, but the user doesn't have the website
currently open?

Service Workers provide the primitive needed to enable launching a
website in the background to handle the incoming push message.

Another solution would be to always open up a browser tab with the
website in it. But that's only correct behavior for some types of push
messages. I.e. some push messages should be handled without any UI
being opened. Others should be handled by launching an attention
window which is rendered even when the phone is locked. Others
simply want to create a toast notification.

We could add lots of different types of push messages. But that's a
lot of extra complexity. And we'd have to add similar types of
geofencing registrations, and types of alarm clock registrations,
etc.

The current design separates the trigger from what to do when the
trigger fires, which makes for both a smaller API and a more flexible
design.
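
That separation can be sketched as a single service-worker push handler that routes on the message payload instead of on distinct push "types"; the payload shape and action names below are made up for illustration, and the event wiring is only shown in comments:

```javascript
// One push trigger, many reactions: the payload, not the push
// mechanism, decides what happens. (Payload shape and action names
// are hypothetical.)
function routePushMessage(payload) {
  switch (payload.kind) {
    case "sync":    return { action: "update-local-db", showUI: false };
    case "call":    return { action: "open-attention-window", showUI: true };
    case "message": return { action: "show-notification", showUI: true };
    default:        return { action: "ignore", showUI: false };
  }
}

// Inside a service worker this would be driven by the push event:
//   self.addEventListener("push", e => {
//     const { action } = routePushMessage(e.data.json());
//     // ...dispatch on `action`...
//   });

console.log(routePushMessage({ kind: "sync" }).showUI);    // false
console.log(routePushMessage({ kind: "message" }).action); // "show-notification"
```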

/ Jonas



On Wed, Oct 15, 2014 at 2:42 PM, Shijun Sun shij...@microsoft.com wrote:
 Hi,



 I'm with the IE Platform team at Microsoft.  I joined the WebApps WG very
 recently.  I am looking into the Push API spec, and got some questions.
 Hope to get help from experts in the WG.



 The current API definition is based on an extension of the Service Workers.
 I'd like to understand whether the Service Workers is a must-have dependency
 for all scenarios.  It seems some basic scenarios can be enabled without the
 Service Worker if the website can directly create a PushRegistrationManager.
 I'm looking at Fig. 1 in the spec.  If the user agent can be the broker
 between the Push client and the webpage, it seems some generic actions can
 be defined without the Service Workers - for example immediately display a
 toast notification with the push message.



 It is very likely I have missed some basic design principle behind the
 current API design.  It'd be great if someone could share insights on the
 scenarios and the normative dependency on the Service Workers.



 All the Best, Shijun





Re: CfC: publish LCWD of Screen Orientation API; deadline September 18

2014-10-06 Thread Jonas Sicking
On Sun, Oct 5, 2014 at 2:28 PM,  cha...@yandex-team.ru wrote:
 So the question turns on whether the changes would invalidate a patent 
 review, and my quick guess is that the answer is yes ;(

Really? I would have drawn the opposite conclusion. Changing the event
source makes a very small difference in behavior. It would greatly
surprise me if it affected the applicability of a given patent.

That said, it is theoretically possible. But that seems to be true for
*any* normative change of a spec.

/ Jonas



Re: CfC: publish LCWD of Screen Orientation API; deadline September 18

2014-10-05 Thread Jonas Sicking
On Sun, Oct 5, 2014 at 7:05 AM, Arthur Barstow art.bars...@gmail.com wrote:
 On 10/2/14 2:44 PM, Jonas Sicking wrote:

 Though I also agree with Mounir. Changing the event source doesn't
 seem like a change that's substantial enough that we'd need to go back
 to WD/LCWD.

 Does any implementation actually feel that it would be?


 So, it appears you two recommend #2 below (publish the LC as is). What do
 you think about option #3 (publishing the LC as is but also adding a
 non-normative note that seeks feedback Issue-40)?

I'm fine with that too.

/ Jonas



Re: CfC: publish LCWD of Screen Orientation API; deadline September 18

2014-10-02 Thread Jonas Sicking
Though I also agree with Mounir. Changing the event source doesn't
seem like a change that's substantial enough that we'd need to go back
to WD/LCWD.

Does any implementation actually feel that it would be?

/ Jonas

On Thu, Oct 2, 2014 at 4:15 AM, Mounir Lamouri mou...@lamouri.fr wrote:
 Can we at least publish a new WD so people stop referring to the old
 TR/?

 -- Mounir

 On Wed, 1 Oct 2014, at 20:36, Arthur Barstow wrote:
 On 9/25/14 9:26 AM, Mounir Lamouri wrote:
  On Thu, 25 Sep 2014, at 21:52, Arthur Barstow wrote:
  On 9/25/14 6:36 AM, Anne van Kesteren wrote:
  It effectively comes down to the fact that the specification describes
  something, but Chrome implements it in another way per how I suggested
  it should work (using animation frame tasks).
  So this appears to be [Issue-40] and I think a one-line summary is the
  Editors consider this something that can be deferred to the next version
  and Anne considers it something that should be addressed before LC is
  published.
 
  Vis-a-vis this CfC, it seems the main options are:
 
  1. Continue to work on this issue with the goal of getting broader
  consensus on the resolution
 
  2. Publish the LC as is
 
  3. Publish the LC as is but explicitly highlight this Issue and ask
  for Implementer/Developer feedback
 
  4. Other options?
 
  Of course, I'd like to hear from others but I tend to think we should
  first try #1 (especially since Anne indicates the spec and at least one
  implementations are currently not aligned).
 
  Mounir, Marcos - would you please work with Anne on a mutually agreeable
  solution?
  Last I checked, animation frame task was still underdefined. This is
  what you can read in the WHATWG's fullscreen specification:
  Animation frame task is not really defined yet, including relative
  order within that task, see bug 26440.
 
  In my opinion, if the spec is changed to use animation frame task, it
  would not change much in the current state of things.

 Well, perhaps this would be true but the devil's in the details and
 the details do matter (see below).

  Also, I'm not entirely sure why Anne is so loudly complaining about that
  issue. The issue was not closed or waived but postponed until we can
  properly hook it to the thing. LC doesn't freeze the specification and we
  could definitely get this fixed before moving to CR.
 
  What I suggested to him on IRC and what I believe is the best approach
  to reconcile the two worlds (WHATWG live standards and W3C snapshots) is
  to take the current version of the spec to LC and update the ED to use
  animation frame task and mark it as a WIP feature. I opened issue 75
  last week as a reminder to do that.
 
  Arthur, what do you think of that solution?

 We can certainly publish a LC with open issues (as was explicitly noted
 in the original CfC [1]). However, I do want to emphasize that if any
 substantive issue is filed after the LC is published, and the group
 agrees to address any such issue(s), the group must publish another LC
 before the spec can move to CR. I mention this because LC-LC loops
 are time consuming for the group, implementers and developers and thus
 should be avoided if possible. As such, it seems like pursuing #1 should
 be the next step.

 -Thanks, AB






Re: Service worker popup (rich notification)

2014-10-02 Thread Jonas Sicking
On Thu, Oct 2, 2014 at 11:31 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Oct 2, 2014 at 8:27 PM, John Mellor joh...@google.com wrote:
 This seems to either require a somewhat stronger trust signal from the user,
 or a very easy mechanism for revoking the permission if the website does
 spam you; and probably in either case showing the url bar should be
 compulsory to prevent phishing. But this isn't something we've thought about
 deeply yet.

 Indeed. The Notifications API is nice, but it's not suitable for this.
 You need a browsing context of sorts so you can show images, video,
 buttons, etc.

Indeed. I wouldn't call these notifications at all. What's needed here
is to launch full browser windows so that we can display full-screen
or full-window UIs to the user. To make matters even more complicated,
generally speaking you want to be able to do this on a mobile device,
even if it's locked.

I.e. an alarm clock app wouldn't be terribly useful if it only worked
when the device was unlocked. And a skype app wouldn't be terribly
useful if you could only receive calls when the device was unlocked.

Fortunately, while this goes outside the browser window, it doesn't
break the same-origin boundary. So it should be quite possible to
solve this the same way we're planning on solving other such APIs,
like storage, indexedDB and notifications. I.e. make the API async and
then leave it up to UAs to implement policies.

/ Jonas



Re: [clipboard] Semi-Trusted Events Alternative

2014-09-15 Thread Jonas Sicking
On Sun, Sep 14, 2014 at 5:54 AM, Dale Harvey d...@arandomurl.com wrote:
 websites can already trivially build editors that use copy and paste within
 the site itself, the entire problem is that leads to confusing behaviour
 when the user copies and pastes outside the website, which is a huge use
 case of the clipboard in the first place

I'm not sure I fully follow your argument. So let me provide two
options that I think we have.

1. Forbid any attempts at reading directly from the clipboard, no
matter if the data there came from the current origin or not. Thereby
keeping the API consistent.
2. Allow reading data from the clipboard at any time if the data there
originated from the current origin. Thereby making the API as helpful
as possible for the case when data is copied within a website.

Are you saying that you think the consistency of option 1 is more
important than the ease-of-use of option 2?

The only thing that I'd worry about is that having websites handle
within-website copying themselves rather than go through the clipboard
API is that it can very easily create edgecases that are easy to miss.

For example, how does the website detect if the user overwrites the
contents of the clipboard by copying data from another location? Or if
the OS has features like a clipboard manager which allows you to keep
a history of clipboard contents and paste old values (I believe
Windows used to enable this).

/ Jonas



Re: [clipboard] Semi-Trusted Events Alternative

2014-09-15 Thread Jonas Sicking
On Mon, Sep 15, 2014 at 1:42 PM, Hallvord R. M. Steen
hst...@mozilla.com wrote:
 It's an interesting idea that partly fixes the main drawback with the current 
 proposal: that to read clipboard contents, paste must be triggered from the 
 browser's own UI, not the website's. The current proposal makes it possible 
 for a website to create a Copy button in its UI, and it will just work when 
 clicked, but to create an equivalent Paste button the site must be 
 white-listed and allowed to read all clipboard content.

I'm not suggesting any whitelists.

I'm just suggesting that all websites can, without any user gestures,
read data from the clipboard if that data originated from the website
itself.

/ Jonas



Re: CfC: publish LCWD of Screen Orientation API; deadline September 18

2014-09-15 Thread Jonas Sicking
On Fri, Sep 12, 2014 at 8:07 AM, Mounir Lamouri mou...@lamouri.fr wrote:
 On Fri, 12 Sep 2014, at 08:52, Jonas Sicking wrote:
 It's somewhat inconsistent that we use the term natural to indicate
 the most natural direction based on hardware, but we use the term
 primary when indicating the most natural portrait/landscape
 direction based on hardware.

 Why do we use primary for one and natural for the other?

 natural and primary have very different meaning. There can be only
 one natural orientation for a device, it's when angle = 0. However,
  portrait-primary and portrait-secondary would depend on the context. For
 example, I have two monitors in front of me. They are both in portrait
 orientation but they could both have different angles, if that was the
 case, one device would have angle = 90, one would have angle = 270 but I
  would expect both to be portrait-primary.

I still think landscape-primary effectively means the most natural
landscape orientation just like natural means the most natural
orientation.

I don't really know that it makes sense to talk about screen
orientation APIs for desktop monitors. In general the best solution
there is likely to not expose the API at all since any attempts to use
the API to accomplish any change would result in really poor UI.

But this is a bikeshedding topic, so I won't fight it.

 Second, I'm still very worried that people will interpret
 screen.orientation.angle=0 as portrait. I don't expect to be able to
 convince people here to remove the property. However I think it would
 be good to at least make it clear in the spec that the .angle property
 can not be used to detect portrait vs. landscape.

  An informative note in the description of the angle property saying
 something like:

 The value of this property is relative to the natural angle of the
 hardware. So for some devices angle will be 0 when the device is in
 landscape mode, and on other devices when the device is in portrait
 mode. Thus this property can not be used to detect landscape vs.
 portrait. The primary use case for this property is to enable doing
 conversions between coordinates relative to the screen and coordinates
 relative to the device (such as the ones returned from the
 DeviceOrientationEvent interface).

 In order to check if the device is in portrait or landscape mode,
 instead use the orientation.type property.

 Isn't Best Practice 1: orientation.angle and orientation.type
 relationship what you are looking for?

Ah, I hadn't seen that. For me the natural place to read about the
relationship between those properties is at the description of the
properties themselves.

 Also, I can't find any normative definition of whether orientation.angle
 should increase or decrease if the user rotates a device 90 degrees
 clockwise?

 I believe you found the definition in the specification according to
 your reply.

I think it's likely to result in many implementation bugs if we rely
on this being defined buried inside an algorithm rather than at least
mentioned at the definition of the property.

/ Jonas



Re: CfC: publish LCWD of Screen Orientation API; deadline September 18

2014-09-11 Thread Jonas Sicking
On Thu, Sep 11, 2014 at 2:19 PM, Arthur Barstow art.bars...@gmail.com wrote:
 Mounir and Marcos would like to publish a LCWD of The Screen Orientation API
 and this is a Call for Consensus to do using the latest ED (not yet in the
 LCWD template) as the basis:

   https://w3c.github.io/screen-orientation/

Sorry, my first comment is a naming bikeshed issue. Feel free to
ignore as it's coming in late, but I hadn't thought of it until just
now.

It's somewhat inconsistent that we use the term natural to indicate
the most natural direction based on hardware, but we use the term
primary when indicating the most natural portrait/landscape
direction based on hardware.

Why do we use primary for one and natural for the other?

Second, I'm still very worried that people will interpret
screen.orientation.angle=0 as portrait. I don't expect to be able to
convince people here to remove the property. However I think it would
be good to at least make it clear in the spec that the .angle property
can not be used to detect portrait vs. landscape.

An informative note in the description of the angle property saying
something like:

The value of this property is relative to the natural angle of the
hardware. So for some devices angle will be 0 when the device is in
landscape mode, and on other devices when the device is in portrait
mode. Thus this property can not be used to detect landscape vs.
portrait. The primary use case for this property is to enable doing
conversions between coordinates relative to the screen and coordinates
relative to the device (such as the ones returned from the
DeviceOrientationEvent interface).

In order to check if the device is in portrait or landscape mode,
instead use the orientation.type property.
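To make the coordinate-conversion use case concrete, here is a rough sketch (an invented helper, not part of any spec) of remapping device-relative tilt into screen-relative tilt using the angle property. The simple swap/negate approach below is only an approximation that behaves sensibly for small tilts, and the sign conventions are an assumption; a fully correct conversion would compose rotation matrices.

```javascript
// Hypothetical helper: remap device-relative tilt (beta, gamma) into
// screen-relative tilt, given screen.orientation.angle (0/90/180/270).
// Approximation only: the exact signs depend on conventions, and a
// fully correct version would compose rotation matrices.
function screenRelativeTilt(beta, gamma, screenAngle) {
  switch (screenAngle) {
    case 0:   return { beta: beta,   gamma: gamma  };
    case 90:  return { beta: gamma,  gamma: -beta  };
    case 180: return { beta: -beta,  gamma: -gamma };
    case 270: return { beta: -gamma, gamma: beta   };
    default:
      throw new Error("unexpected screen angle: " + screenAngle);
  }
}
```

A page would feed this the current screen angle together with the DeviceOrientationEvent fields. Note that, as the text above stresses, the angle alone says nothing about portrait vs. landscape; orientation.type still has to be consulted for that.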

Also, I can't find any normative definition of whether orientation.angle
should increase or decrease if the user rotates a device 90 degrees
clockwise?

/ Jonas



Re: CfC: publish LCWD of Screen Orientation API; deadline September 18

2014-09-11 Thread Jonas Sicking
On Thu, Sep 11, 2014 at 3:52 PM, Jonas Sicking jo...@sicking.cc wrote:
 Also, I can't find any normative definition of whether orientation.angle
 should increase or decrease if the user rotates a device 90 degrees
 clockwise?

My bad, I see it now. Given how easy this is to get wrong, it might be
worth adding this information more explicitly at the definition of the
'angle' property, or the 'current orientation angle' concept, rather
than just buried inside an algorithm.

/ Jonas



Re: {Spam?} Re: [xhr]

2014-09-03 Thread Jonas Sicking
On Wed, Sep 3, 2014 at 10:49 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Wed, Sep 3, 2014 at 7:07 PM, Ian Hickson i...@hixie.ch wrote:
 Hear hear. Indeed, a large part of moving to a living standard model is
 all about maintaining the agility to respond to changes to avoid having to
 make this very kind of assertion.

 See 
 http://lists.w3.org/Archives/Public/public-webapps/2014JanMar/thread.html#msg232
 for why we added a warning to the specification. It was thought that
 if we made a collective effort we can steer people away from depending
 on this. And I think from that perspective gradually phasing it out
 from the specification makes sense. With some other features we take
 the opposite approach, we never really defined them and are awaiting
 implementation experience to see whether they can be killed or need to
 be added (mutation events). I think it's fine to have several
 strategies for removing features. Hopefully over time we learn what is
 effective and what is not.

 Deprecation warnings have worked for browsers. They might well work
 better if specifications were aligned with them.

I generally agree with Anne here. As a browser developer it is
frustrating when attempts to remove old cruft from the web are met
with pushback from authors with the argument "you can't
remove/deprecate this feature because the spec says that this feature
must be there".

Obviously many authors are just using this as an argument when what
they really mean is "you can't remove/deprecate this feature because I
use it".

However I've also noticed that it's true that authors are much more
willing to deal with features being removed when they know it's not
happening on the whim of a single browser vendor, and that it might be
reverted in the future, but rather that it's an agreed upon change to
the web platform with an agreed upon other solution.

I also don't think that simply updating a spec once multiple browser
vendors have removed a feature helps. It's the process of removing the
feature in the first place which is harder if the spec doesn't back
you up.

But possibly this can be expressed better than what's currently in the
spec. I.e. we could say that the feature is deprecated because it leads
to bad UI, and that since the expectation is that implementations will
eventually remove support for the feature, it is already now considered
conformant to the spec to throw an exception. However many websites
still use the feature, so implementations that want to be compatible
with such websites need to keep the feature working.

/ Jonas



Re: {Spam?} Re: [xhr]

2014-09-03 Thread Jonas Sicking
On Wed, Sep 3, 2014 at 2:01 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Wed, Sep 3, 2014 at 12:45 PM, Glenn Maynard gl...@zewt.org wrote:
 My only issue is the wording: it doesn't make sense to have normative
 language saying you must not use this feature.  This should be a
 non-normative note warning that this shouldn't be used, not a normative
 requirement telling people that they must not use it.  (This is a more
 general problem--the use of normative language to describe authoring
 conformance criteria is generally confusing.)

 This is indeed just that general problem that some people have with
 normative requirements on authors.  I've got no problem with
 normatively requiring authors to do (or not do) things; the
 restrictions can then be checked in validators or linting tools, and
 give those tools a place to point to as justification.

Agreed. Making it a conformance requirement not to use sync XHR seems
like a good idea. That way we can also phrase it as "implementations
that want to be compatible with non-conformant websites need to still
support sync requests".

/ Jonas



Re: Proposal for a Permissions API

2014-09-02 Thread Jonas Sicking
On Tue, Sep 2, 2014 at 6:51 AM, Mounir Lamouri mou...@lamouri.fr wrote:
 # Straw man proposal #

 This proposal is on purpose minimalistic and only contains features that
 should have straight consensus and strong use cases, the linked document
 [1] contains ideas of optional additions and list of retired ideas.

 ```
 /* Note: WebIDL doesn't allow partial enums so we might need to use a
  * DOMString instead. The idea is that new API could extend the enum
  * and add their own entry.
  */
 enum PermissionName {
 };

 /* Note: the idea is that some APIs would extend this dictionary to add
  * some API-specific information, like a DOMString accuracy for a
  * hypothetical geolocation API that would have different accuracy
  * granularity.
  */
 dictionary Permission {
   required PermissionName name;
 };

 /* Note: the name doesn't matter, not exposed. */
 enum PermissionState {
   // If the capability can be used without having to ask the user a
   // yes/no question. In other words, if the developer can't ask the
   // user in advance whether he/she wants the web page to access the
   // feature. A feature using a chooser UI is always "granted".
   "granted",
   // If the capability can't be used at all and trying it will be a
   // no-op.
   "denied",
   // If trying to use the capability will be followed by a question to
   // the user asking whether he/she wants to allow the web page to be
   // granted the access, and that question will no longer appear for
   // subsequent calls if it was answered the first time.
   "prompt"
 };

 dictionary PermissionStatus : Permission {
   PermissionState status;
 };

 /* Note: the promises would be rejected iff the Permission isn't known
  * or misformatted.
  */
 interface PermissionManager {
   Promise<PermissionStatus> has(Permission permission);
 };

 [NoInterfaceObject, Exposed=(Window,Worker)]
 interface NavigatorPermissions {
   readonly attribute PermissionManager permissions;
 };

 Navigator implements NavigatorPermissions;
 WorkerNavigator implements NavigatorPermissions;
 ```

 The intent behind using dictionaries is to simplify the usage and
 increase flexibility. It would be easy for an API to inherit from
 Permission to define new properties. For example, a Bluetooth API could
 have different permissions for different devices or a Geolocation API
 could have different level of accuracies. See [1] for more details.

I'm generally supportive of this direction.

I'm not sure that that the PermissionStatus thing is needed. For
example in order to support bluetooth is might be better to make the
call look like:

permissions.has("bluetooth", "fitbit").then(...);

That said, I don't mind the PermissionStatus construct. It certainly
reduces the risk that we paint ourselves into a corner.
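To illustrate the straw man's semantics, here is a toy in-memory model of the proposed PermissionManager (the names follow the IDL in the email above, but this is not a browser implementation): has() resolves with a PermissionStatus-shaped object and rejects for unknown permissions, as the proposal's note says.

```javascript
// Toy model of the straw-man PermissionManager; not a real browser API.
class PermissionManager {
  // `states` maps a permission name to "granted", "denied" or "prompt".
  constructor(states) {
    this.states = states;
  }
  // Mirrors the proposal: resolves with a PermissionStatus-shaped
  // object, rejects iff the permission isn't known.
  has(permission) {
    const state = this.states[permission.name];
    return state
      ? Promise.resolve({ name: permission.name, status: state })
      : Promise.reject(new Error("unknown permission: " + permission.name));
  }
}

const permissions = new PermissionManager({ geolocation: "prompt" });
permissions.has({ name: "geolocation" })
  .then((status) => console.log(status.name, status.status));
```

Because `has()` takes a dictionary rather than a bare string, an API like Bluetooth could extend the argument with extra fields (e.g. a device identifier) without changing the method's shape.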

/ Jonas



Re: Screen Orientation Feedback

2014-08-14 Thread Jonas Sicking
Yup. Well... it sounds like people, including you, pointed out these
problems. No idea why that was ignored since I wasn't there.

/ Jonas

On Wed, Aug 13, 2014 at 11:28 PM, Lars Knudsen lar...@gmail.com wrote:
 If only someone had pointed out these problems earlier ;)

 On Aug 5, 2014 11:17 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 I think the current interaction between the screen orientation and
 device orientation specs is really unfortunate.

 Any time that you use the device orientation in order to render
 something on screen, you have to do non-obvious math in order to get
 coordinates which are usable. Same thing if you want to use the device
 orientation events as input for objects which are rendered on screen.

 I would argue that these are by far the most common use cases for

 I agree that the main problem here is that the deviceorientation spec
 defined that their events should be relative to the device rather than
 to the screen. However we can still fix the problem by simply adding

 partial interface DeviceOrientationEvent
 {
   readonly attribute double? screenAlpha;
   readonly attribute double? screenBeta;
   readonly attribute double? screenGamma;
 }

 No new events need to be defined.

 I guess we can argue that this should be added to the
 DeviceOrientation spec, but that seems unlikely to happen in practice
 anytime soon. I think we would do developers a disservice by blaming
 procedural issues rather than trying to solve the problem.

 I think mozilla would be happy to implement such an addition to the
 DeviceOrientation event (I'm currently checking to make sure). Are
 there other UAs that have opinions (positive or negative) to such an
 addition?

 / Jonas





Re: Screen Orientation Feedback

2014-08-13 Thread Jonas Sicking
On Wed, Aug 13, 2014 at 1:38 AM, Rich Tibbett ri...@opera.com wrote:

 Do you have any thoughts on providing screen-adjusted devicemotion
 event data also (i.e. acceleration, accelerationIncludingGravity,
 rotationRate)

It's not something I've thought about, but yeah, it sounds like that
would make sense.

/ Jonas



Re: Proposal for a credential management API.

2014-08-12 Thread Jonas Sicking
Hi Mike,

I'm very interested in improving the login experience on websites. In
particular I'd like to create a better flow when federated logins are
used, with at least the following goals:

* Make it easier for websites to use federated login as to discourage passwords.
* Ensure that the designed solution has support from most commonly
used federated login providers.
* Enable the user to manage their accounts in browser chrome rather
than have to go to specific websites to log out.
* Enable a login flow which is less jarring UX-wise than today's redirects.
* Don't increase the number of clicks needed to log in. Today two
clicks are usually enough, we shouldn't be worse than that since then
websites won't adopt it and users won't like it.
* Make it easier for websites to support multiple federated login
providers by ensuring that they all use a common API. I.e. adding
support for more login providers shouldn't need to require running
code specific to that provider.
* Enable the UA to track which login providers that the user has
accounts with so that the UA can render UI which only displays
providers that are relevant to the user.
* Enable the user to have multiple accounts with the same provider for
providers that allow this.

All of these goals are likely not required. But I definitely want to
make sure that whatever we build is attractive enough to users,
web developers, and federated-login providers that it actually gets
used.

/ Jonas




On Thu, Jul 31, 2014 at 12:48 AM, Mike West mk...@google.com wrote:
 TL;DR: Strawman spec and usecases at
 https://github.com/mikewest/credentialmanagement

 # Use Cases

 User agents' password managers are a fragile and proprietary hodgepodge of
 heuristics meant to detect and fill sign-in forms, password change forms,
 etc.
 We can do significantly better if we invite websites' explicit cooperation:

 * Federated identity providers are nigh undetectable; I don't know of any
 password managers that try to help users remember that they signed into
 Stack Overflow with Twitter, not Google.

 * Signing in without an explicit form submission (via XHR, WebSockets(!),
 etc) is good for user experience, but difficult to reliably detect.

 * Password change forms are less well-supported than they could be.

 * Users are on their own when creating new accounts, faced either with a
 list of identity providers they've mostly never heard of, or with the
 challenge of coming up with a clever new password.

 More background and exploration of native equivalents at
 http://projects.mikewest.org/credentialmanagement/usecases/.

 # Workarounds

 HTML defines a number of `autocomplete` attributes which help explain
 fields'
 purpose to user agents. These make the common case of form submission more
 reliably detectable, but are less helpful for XHR-based sign-in, and don't
 address federated identity providers at all.

 # Proposal:

 The API I'm outlining here is intentionally small and simple: it does not
 attempt to solve the general authentication problem in itself, but instead
 provides an interface to user agents' existing password managers. That
 functionality is valuable _now_, without significant effort on the part of
 either browser vendors or website authors.

 The API quite intentionally winks suggestively in the direction of an
 authentication API that would, for instance, do an OAuth dance on behalf of
 an
 application, but that's not the immediate goal.

 ```
 [NoInterfaceObject]
 interface Credential {
   readonly attribute DOMString id;
   readonly attribute DOMString name;
   readonly attribute DOMString avatarURL;
 };

 [Constructor(DOMString id, DOMString password, DOMString name, DOMString
 avatarURL)]
 interface LocalCredential : Credential {
   readonly attribute DOMString password;
 };

 [Constructor(DOMString id, DOMString federation, DOMString name, DOMString
 avatarURL)]
 interface FederatedCredential : Credential {
   readonly attribute DOMString federation;
 };

 partial interface Navigator {
   readonly attribute CredentialsContainer credentials;
 };

 interface CredentialsContainer {
   Promise<Credential?> request(optional CredentialRequestOptions options);
   Promise<any> notifySignedIn(optional Credential credential);
   Promise<any> notifyFailedSignIn(optional Credential credential);
   Promise<any> notifySignedOut();
   readonly attribute PendingCredential? pending;
 };
 ```

 A more detailed specification is up at
 http://projects.mikewest.org/credentialmanagement/spec/.

 # Example:

 ```
 navigator.credentials.request({
   'federations': [ 'https://federated-identity-provider.com/' ]
 }).then(function(credential) {
   if (!credential) {
     // The user had no credentials, or elected not to provide one to
     // this site. Fall back to an existing login form.
   }

   var xhr = new XMLHttpRequest();
   xhr.open("POST", "https://example.com/loginEndpoint");
   var formData = new FormData();
   formData.append("username", credential.id);
   

Re: Proposal for a credential management API.

2014-08-12 Thread Jonas Sicking
On Tue, Aug 12, 2014 at 9:33 AM, Mike West mk...@google.com wrote:
 * Enable a login flow which is less jarring UX-wise than today's
 redirects.
 * Don't increase the number of clicks needed to log in. Today two
 clicks are usually enough, we shouldn't be worse than that since then
 websites won't adopt it and user's won't like it.

 One-click sign-in (with a zero-click, Keep me logged in option) is a very
 reasonable goal, and one that I think is achievable.

 One- or two-click sign _up_, on the other hand, will likely be more
 difficult given the complexities of authorization (scopes, etc).

I'm not sure what you count as sign-up? Today, if I visit a new
website that I've never visited before, I can log in to that website
in two clicks using identity providers such as Facebook/Twitter/Google. I
don't think anything more than that is going to get the support we need.

/ Jonas



Re: Screen Orientation Feedback

2014-08-11 Thread Jonas Sicking
On Fri, Aug 8, 2014 at 6:44 AM, Mounir Lamouri mou...@lamouri.fr wrote:
 Maybe this feedback should be more for DeviceOrientation than Screen
 Orientation. There has been a few discussions there
 (public-geolocation).

This is the type of procedural issues that I'd really rather not get
caught in. I think it's fine to defer to the DeviceOrientation spec,
but only if we think there's any chance of it getting added there
anytime soon. Given that no drafts, to my knowledge, has been
published for a DeviceOrientation v2, that does not seem to be the
case.

 Anyway. I am not convinced that adding new properties will really fix
 how developers handle this. I asked around and it seems that native
 platforms do not expose Device Orientation relative to the screen. I am
 not sure why we should expose something different on the Web platform.

I don't think the fact that other platforms do not supply screen
relative orientation events is a strong technical argument for why we
shouldn't.

I'm definitely in favor of looking at what other platforms do, but not
with the mindset that what other platforms do is the right thing to
do, but rather to see if they have good solutions that we could learn
from. Surely other platforms will make design mistakes, just like we
do.

 I think we should work on providing developers the right tools in order
 for them to do the right thing.

I totally agree with this. For all the use cases that I can think of
for getting the coordinates relative to the screen is more important
than relative to the device. This includes:

* A navigation page which shows a map as well as how the device is
oriented relative to the map.
* A navigation page which shows a map oriented so that the on-screen
map matches the real world.
* A game where an in-game character is controlled by tilting the
device left and right to make the character walk left vs. right.

I'm sure there are use cases where you need to know the orientation
relative to the device rather than relative to the screen, they just
seem to be less common to me.

Given that, the right tool seems to be to provide the
DeviceOrientation events relative to the screen and allow them to be
compensated to be relative to the device if needed.

Sadly it's too late for that. Authors already have the wrong tool as a
default since the DeviceOrientation spec is written and implemented
the way it is.

However we can at least give authors the right tool as well, by
introducing screenAlpha etc.

 For example, without the Screen
 Orientation API, they do not know the relative angle between the device
 natural orientation and the screen. This API is not yet widely
 available. Some version of it ships in Firefox and IE but is prefixed.
 It should be in Chrome Beta soon.

I don't think the right tool to do the right thing in this case
means give them coordinates in a coordinate system that they don't
want, and then give them enough information to transform the
coordinate into the coordinate system that they do want.

I'm not arguing that we remove the relative angle that's in the spec
right now. I'm arguing that for device orientation events, we should
provide coordinates relative to the screen as well.

/ Jonas



Screen Orientation Feedback

2014-08-05 Thread Jonas Sicking
Hi All,

I think the current interaction between the screen orientation and
device orientation specs is really unfortunate.

Any time that you use the device orientation in order to render
something on screen, you have to do non-obvious math in order to get
coordinates which are usable. Same thing if you want to use the device
orientation events as input for objects which are rendered on screen.

I would argue that these are by far the most common use cases for
device orientation events.

I agree that the main problem here is that the deviceorientation spec
defined that their events should be relative to the device rather than
to the screen. However we can still fix the problem by simply adding

partial interface DeviceOrientationEvent
{
  readonly attribute double? screenAlpha;
  readonly attribute double? screenBeta;
  readonly attribute double? screenGamma;
};

No new events need to be defined.

I guess we can argue that this should be added to the
DeviceOrientation spec, but that seems unlikely to happen in practice
anytime soon. I think we would do developers a disservice by blaming
procedural issues rather than trying to solve the problem.

I think mozilla would be happy to implement such an addition to the
DeviceOrientation event (I'm currently checking to make sure). Are
there other UAs that have opinions (positive or negative) to such an
addition?

/ Jonas
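A sketch of how a page might consume the proposed attribute, falling back to the existing device-relative value when it is absent. Note that screenAlpha is the proposed, unshipped name from this message, and the helper name is mine:

```javascript
// Prefer the proposed screen-relative reading when the event carries
// one; otherwise fall back to the device-relative alpha.
function effectiveAlpha(event) {
  return event.screenAlpha != null ? event.screenAlpha : event.alpha;
}
```

A page would call this from a deviceorientation event handler; plain objects stand in for events here.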



Re: [push-api] Moving PushManager push onto ServiceWorkerRegistration

2014-07-11 Thread Jonas Sicking
On Fri, Jul 11, 2014 at 8:17 AM, Jake Archibald jaffathec...@gmail.com wrote:
 navigator.serviceWorker.ready.then(function(reg) {
   reg.push.register(...)
 });

I agree this looks good. Though maybe

reg.registerPush(...)

instead?

/ Jonas



Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Jonas Sicking
On Mon, Jun 23, 2014 at 9:59 AM, Joshua Bell jsb...@google.com wrote:
 On Sat, Jun 21, 2014 at 9:45 PM, ben turner bent.mozi...@gmail.com wrote:

 I think this sounds like a fine idea.

 -Ben Turner


 On Sat, Jun 21, 2014 at 5:39 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi all,

 I found an old email with notes about features that we might want to put
 in v2.

 Almost all of them were brought up in the recent threads about
 IDBv2. However there was one thing on the list that I haven't seen brought
 up.

 It might be a nice perf improvement to add support for a
 IDBObjectStore/IDBIndex.exists(key) function.

 This sounds redundant with count().

 Was count() added to the spec after that note was written? (count() seems to
 be a relatively late addition, given that it occurs last in the IDLs)

Hmm.. good point. It is indeed very possible that I wrote that note
before we had count().

There is a small performance difference between them though when
applied to indexes. Indexes could have multiple entries with the same
key (but different primaryKey), in which case count() would have to
find all such entries, whereas exists() would only need to find the
first.

But most of the time count() probably does well enough.

/ Jonas
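Until something like exists() ships, the count()-based equivalent discussed above can be wrapped like this. This is a sketch: source would be an IDBObjectStore or IDBIndex, and the promise wrapper and function name are my own:

```javascript
// Emulate the proposed exists(key) on top of count(key): count() with a
// key argument only counts records matching that key, so a non-zero
// result means the key exists.
function exists(source, key) {
  return new Promise((resolve, reject) => {
    const req = source.count(key);
    req.onsuccess = () => resolve(req.result > 0);
    req.onerror = () => reject(req.error);
  });
}
```

As discussed, this still pays the cost of counting every matching index entry, which a native exists() could avoid by stopping at the first match.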



Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Jonas Sicking
On Mon, Jun 23, 2014 at 1:03 PM, Marc Fawzi marc.fa...@gmail.com wrote:
 Having said that, and speaking naively here, a synchronous .exists() or
 .contains() would be useful, as existence checks shouldn't have to be
 exclusively asynchronous; that complicates how we'd write: "if this exists
 and that other thing doesn't exist, then do xyz"

Note that the .contains() discussion is entirely separate from the
.exists() discussion. I.e. your subject is entirely off-topic to this
thread.

The .exists() function I proposed lives on IDBObjectStore and IDBIndex
and is an asynchronous database operation.

The .contains() function that you are talking about lives on an
array-like object and just does some in-memory tests which means that
it's synchronous.

So the two are completely unrelated.

/ Jonas



Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Jonas Sicking
On Mon, Jun 23, 2014 at 1:38 PM, Marc Fawzi marc.fa...@gmail.com wrote:
 No, I was suggesting .exists() can be synchronous to make it useful

 I referred to it as .contains() too so sorry if that conflated them for you 
 but it has nothing to do with the .contains Joshua was talking about.

 In short, an asynchronous .exists() as you proposed does seem redundant

 But I was wondering what about a synchronous .exists() (the same proposal you 
 had but synchronous as opposed to asynchronous)

 Makes any sense?

We can't make any database operations synchronous, since that requires
synchronous IO, which is really bad for perf. It would also be different
from all other database operations. So if that's your request then the
answer is definitely no.

/ Jonas


 Sent from my iPhone

 On Jun 23, 2014, at 1:28 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Jun 23, 2014 at 1:03 PM, Marc Fawzi marc.fa...@gmail.com wrote:
  Having said that, and speaking naively here, a synchronous .exists() or
  .contains() would be useful, as existence checks shouldn't have to be
  exclusively asynchronous; that complicates how we'd write: "if this
  exists and that other thing doesn't exist, then do xyz"

 Note that the .contains() discussion is entirely separate from the
 .exists() discussion. I.e. your subject is entirely off-topic to this
 thread.

 The .exists() function I proposed lives on IDBObjectStore and IDBIndex
 and is an asynchronous database operation.

 The .contains() function that you are talking about lives on an
 array-like object and just does some in-memory tests which means that
 it's synchronous.

 So the two are completely unrelated.

 / Jonas



IDBObjectStore/IDBIndex.exists(key)

2014-06-21 Thread Jonas Sicking
Hi all,

I found an old email with notes about features that we might want to put in
v2.

Almost all of them were brought up in the recent threads about
IDBv2. However there was one thing on the list that I haven't seen brought
up.

It might be a nice perf improvement to add support for a
IDBObjectStore/IDBIndex.exists(key) function.

This would require less IO and less object creation than simply using
.get(). It is probably particularly useful when doing a filtering join
operation between two indexes/object stores. But it is probably useful
other times too.

Is this something that others think would be useful?

/ Jonas


Re: Indexed DB Transactions vs. Microtasks

2014-06-18 Thread Jonas Sicking
On Thu, Jun 19, 2014 at 8:44 AM, Adam Klein ad...@google.com wrote:
 While I agree that the original microtask intent would suggest we change
 this, and I concur that it seems unlikely to break content, I worry about
 the spec and implementation complexity that would be incurred by having to
 support the notion of "at the end of the current microtask". It suggests
 one of:

 1. A new task queue, which runs after microtasks (nanotasks?)
 2. The ability to put tasks at the start of the microtask queue rather than
 at the end

I was just thinking we'd hardcode this into the algorithm that runs at
the end of the microtask. Note that closing the transaction never runs
code, which means that very little implementation complexity is
needed.

I definitely agree that both of the above options are pretty unattractive.

/ Jonas
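For readers less familiar with the queues involved, this runnable sketch (mine, not from the thread) shows why "the end of the current microtask" is a meaningful boundary: microtasks drain completely before control returns to the event loop and ordinary tasks run.

```javascript
// Microtasks (promise reactions) run to completion before the event
// loop moves on to tasks such as timers.
const log = [];
log.push('sync');
Promise.resolve().then(() => log.push('microtask'));
setTimeout(() => log.push('task'), 0);
// At this point only 'sync' has run; 'microtask' will run before 'task'.
```

Closing an IDB transaction at the end of the current microtask would therefore happen after the promise reaction above, but before any timer fires.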



Re: Fetch API

2014-06-06 Thread Jonas Sicking
On Fri, Jun 6, 2014 at 12:33 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
 It seems to me that for both the HeaderMap constructor and any object-literal 
 processing, the best solution for now is to just do things in prose...

I think the first thing we should decide on is what syntax we want JS
authors to be able to write.

How to then write this in the spec can be debated after. The
capability set of WebIDL should not dictate how we define the API
(other than that we should be very sure about what we're doing if we
go outside of what WebIDL recommends).

I'm still arguing that we shouldn't have a HeaderMap class at all, and
instead just use normal Map objects. And in places where we take a map
as an argument, also allow plain JS objects that are then enumerated.

/ Jonas
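A sketch of the calling convention argued for here, where an API accepts either a real Map or a plain object that is simply enumerated. The function name and the lowercasing of header names are my own illustrative choices:

```javascript
// Accept a Map, or a plain object whose own enumerable properties are
// treated as header name/value pairs, and normalize to a Map.
function normalizeHeaders(input) {
  const entries = input instanceof Map
    ? input.entries()
    : Object.entries(input);
  const result = new Map();
  for (const [name, value] of entries) {
    result.set(name.toLowerCase(), String(value));
  }
  return result;
}
```

Both normalizeHeaders({ 'X-Foo': 'Bar' }) and normalizeHeaders(new Map([['X-Foo', 'Bar']])) produce the same Map.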



Re: Indexed DB Transactions vs. Microtasks

2014-06-06 Thread Jonas Sicking
On Thu, Jun 5, 2014 at 12:59 PM, Joshua Bell jsb...@google.com wrote:
 case 1:

   var tx;
   Promise.resolve().then(function() {
     tx = db.transaction(storeName);
     // tx should be active here...
   }).then(function() {
     // is tx active here?
   });

 For case 1, ISTM that yes matches the IDB spec, since control has not
 returned to the event loop while the microtasks are running. Implementations
 appear to agree.

Yes. I believe that this is what the spec currently calls for.

However I think it would actually be good to change the spec to say
that the transaction is closed at the end of the current microtask.

When we originally designed microtasks for Mutation Observers, one of
the goals was to ensure that there were not accidental
interdependencies between different event handlers to the same event.
By flushing end-of-microtask work after each event handler you ensure
that each event handler gets a clean slate.

I realize that this is a backwards-incompatible change. However it
seems pretty unlikely to break any existing content. So it'd be nice
to give it a try.

 case 2:

   var tx = db.transaction(storeName);
   var request = tx.objectStore(storeName).get(0);
   request.onsuccess = function() {
     // tx should be active here...
     Promise.resolve().then(function() {
       // is tx active here?
     });
   };

 For case 2, it looks like implementations differ on whether microtasks are
 run as part of the event dispatch. This seems to be outside the domain of
 the IDB spec itself, somewhere between DOM and ES. Anyone want to offer an
 interpretation?

I agree that this falls outside of the domain of the IDB spec. I admit
I would have preferred it if the .then handler never saw an active
transaction, rather than having that depend on whether resolving the
promise happens synchronously or asynchronously...

I don't know how to define that in a sane manner though.

And I have no idea how Firefox's current implementation appears to
manage to accomplish that.

/ Jonas



Re: Fetch API

2014-06-04 Thread Jonas Sicking
On Wed, Jun 4, 2014 at 2:46 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Wed, Jun 4, 2014 at 9:49 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jun 4, 2014 at 12:31 AM, Anne van Kesteren ann...@annevk.nl wrote:
 Does ES define the order of { x: a, y: b } btw?

 I believe so, but someone would need to check. Either way I think
 browsers effectively are forced to return a consistent order for
 web-compat reasons.

 I checked. It's not true, see
 http://esdiscuss.org/topic/for-in-evaluation-order#content-10

Ah. So the enumeration order for { x: a, y: b } is actually
consistent across browsers. But once you start doing more complex
things than that, like sticking enumerable properties on the prototype
chain, or deleting properties, then the order is not consistent.

I would still be ok with allowing a plain JS object being passed. If
differences in order for complex operations hasn't bitten people for
JS enumeration, then maybe it won't for header order either.

/ Jonas
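For what it's worth, the simple literal case can be checked directly: own string keys enumerate in insertion order in practice (and later editions of ECMAScript pinned down own-property order, though that postdates this thread):

```javascript
// For a simple object literal with string keys, enumeration follows
// insertion order.
const obj = { x: 'a', y: 'b' };
const keys = Object.keys(obj); // ['x', 'y']
```

The complications mentioned above (enumerable prototype properties, deletions, integer-like keys) are exactly the cases where enumeration order historically diverged between engines.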



Re: Fetch API

2014-06-03 Thread Jonas Sicking
On Tue, Jun 3, 2014 at 9:38 AM, Jake Archibald jaffathec...@gmail.com wrote:
 On 3 June 2014 16:50, Anne van Kesteren ann...@annevk.nl wrote:

 On Sun, Jun 1, 2014 at 8:06 AM, Domenic Denicola
 dome...@domenicdenicola.com wrote:

  - I like HeaderMap a lot, but for construction purposes, I wonder if a
  shorthand for the usual case could be provided. E.g. it would be nice to be
  able to do
 
  fetch("http://example.com", {
    headers: {
      "X-Foo": "Bar"
    }
  });

  instead of, assuming a constructor is added,

  fetch("http://example.com", {
    headers: new HeaderMap([
      ["X-Foo", "Bar"]
    ])
  });

 Yeah, it's not clear to me what is best here. An object whose keys are
 ByteString and values are either ByteString or a sequence of
 ByteString? I agree that we want this.

 I vote ByteString: ByteString. If you want something more complicated,
 provide a HeaderMap or mutate after construction.

One thing we should keep in mind is whether we actually need to support
100% of all the craziness that servers do. And especially if we need
to support it in a particularly convenient way.

Something like

headers: {
  "X-Foo": "Bar"
}

Does actually have a defined order between the name-value pairs, even
though it's not terribly explicit. And we could even support

headers: {
  "X-Foo": ["Bar", "Bar2"]
}

For supporting sending multiple X-Foo headers.

This wouldn't support interleaving headers such that we send two
"X-Foo" headers with an "X-Bar" header in between, but are there
actually use cases for that?

I feel fairly sure that simply doing:

headers: {
  "X-Foo": ["Bar", "Bar2"]
}

Will cover well over 99% of everything that people need to do. And
hopefully the remaining part of a percent could update their servers
to actually support HTTP semantics properly.

/ Jonas
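The shape proposed above can be sketched as a small serializer where a string value sends one header and an array value sends the same header repeatedly. The function name is illustrative, not from any spec:

```javascript
// Turn the proposed headers-object shape into individual header lines.
// A string value emits one header; an array emits one header per element.
function headerLines(headers) {
  const lines = [];
  for (const [name, value] of Object.entries(headers)) {
    for (const v of Array.isArray(value) ? value : [value]) {
      lines.push(name + ': ' + v);
    }
  }
  return lines;
}
```

So { "X-Foo": ["Bar", "Bar2"] } yields two X-Foo header lines, covering the repeated-header case without supporting interleaving.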



Re: WebApp installation via the browser

2014-06-02 Thread Jonas Sicking
On Fri, May 30, 2014 at 5:40 PM, Jeffrey Walton noloa...@gmail.com wrote:
 Are there any platforms providing the feature? Has the feature gained
 any traction among the platform vendors?

The webapps platform that we use in FirefoxOS and Firefox Desktop
allows any website to be an app store. I *think*, though I'm not 100%
sure, that this works in Firefox for Android as well.

I'm not sure what you mean by "side loaded", but we're definitely
trying to allow normal websites to provide the same experience as the
Firefox Marketplace. The user doesn't have to turn on any developer
mode or otherwise do anything otherwise special to use such a
marketplace. The user simply needs to browse to the website/webstore
and start using it.

The manifest spec that is being developed in this WG is the first step
towards standardizing the same capability set. It doesn't yet have the
concept of an app store, instead any website can self-host itself as
an app.

It's not clear to me if there's interest from other browser vendors
for allowing websites to act as app stores, for now we're focusing the
standard on simpler use cases.

/ Jonas



Re: Blob URL Origin

2014-05-29 Thread Jonas Sicking
On Thu, May 22, 2014 at 1:29 AM, Anne van Kesteren ann...@annevk.nl wrote:
 For blob URLs (and prolly filesystem and indexeddb) we put the origin
 in the URL and define a way to extract it again so new
 URL(blob).origin does the right thing.

Yup.

 For fetching blob URLs (and prolly filesystem and indexeddb) we
 effectively act as if the request's mode was same-origin. Allowing
 tainted cross-origin requests would complicate UUID (for the UA) and
 memory (for the page) management in a multiprocess environment.

Hmm.. I think that is effectively it, yes. I.e. even though <img> says
that it wants to permit cross-origin loads, we'd override that if the
fetch is for a blob: URL and only permit same-origin loads. Is that
what you mean?

/ Jonas
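The "put the origin in the URL" point can be seen directly with the URL API: per the URL Standard, the origin of a blob: URL is derived from the URL serialized inside it (the path segment here is a made-up placeholder):

```javascript
// The origin of a blob: URL comes from the inner URL, so it can be
// extracted again from the string alone, in implementations that
// follow the URL Standard's origin algorithm.
const u = new URL('blob:https://example.com/2d7b-4a4c-8f00');
console.log(u.origin); // 'https://example.com' per the URL Standard
```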



Data URL Origin (Was: Blob URL Origin)

2014-05-29 Thread Jonas Sicking
On Thu, May 22, 2014 at 1:29 AM, Anne van Kesteren ann...@annevk.nl wrote:
 How do we deal with data URLs? Obviously you can always get a resource
 out of them. But when should the response of fetching one be tainted
 and when should it not be? And there's a somewhat similar question for
 about URLs. Although only about:blank results in something per the
 specification at the moment.

My proposal is something like this:

* Add a new flag to the fetch algorithm allow inheriting origin
* The default for this new flag is false
* If the flag is set to false, the origin of the URL is a unique identifier.
* When the origin is a unique identifier, it would not match any other
origin and so responses would always be tainted.
* If the flag is true, then the origin of the URL is equal to that of
the page that initiated the load.
* When the origin of the URL is inherited, it would always match the
origin of the caller, and so responses would never be tainted.
* I don't know what URL(data).origin should return.
* Make APIs explicitly opt in to setting the allow inheriting origin
flag to true based on whatever policies that we decide.

So for example we could make <img> always set the allow inheriting
origin flag to true.

And for <iframe>s the flag would only be true if some
allowinheritingoriginfordataurlsplease attribute was set. And then it
would still only be set for the initial load. If the iframe navigated
(through a link or through setting window.location) the flag would be
set to false.

For `new Worker(...)` I'm not sure what would be web compatible. I'd
prefer if the flag was set to false by default, but that the page
could use some explicit syntax (similar to the iframe) to opt in to
allowing inheriting.

/ Jonas



Re: Data URL Origin (Was: Blob URL Origin)

2014-05-29 Thread Jonas Sicking
On Thu, May 29, 2014 at 9:21 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, May 29, 2014 at 9:06 AM, Jonas Sicking jo...@sicking.cc wrote:
 * The default for this new flag is false
 * If the flag is set to false, the origin of the URL is a unique identifier.
 * When the origin is a unique identifier, it would not match any other
 origin and so responses would always be tainted.
 * If the flag is true, then the origin of the URL is equal to that of
 the page that initiated the load.
 * When the origin of the URL is inherited, it would always match the
 origin of the caller, and so responses would never be tainted.

 This does not clarify what happens if you end up at a data URL as a
 result of a redirect. If the redirect is cross-origin you'll end up
 tainted. If it's CORS you get a network error. But if it's same-origin
 that's fair game?

For something like an iframe load I think the safe thing is to
always clear the flag when a redirect happens. I.e. if someone does

<iframe src="http://example.com/a" allowinheritingoriginfordataurlsplease>

and example.com redirects to a data URL, we would have all sorts of
messy questions if we allowed the flag to stay set and the origin to be
inherited:

* Should it be inherited from the owner of the iframe, who set the
allowinheritingoriginfordataurlsplease attribute, or from example.com,
who is the one that generated the data URL? We don't want example.com
to get XSSed either.
* What if the owner of the iframe hadn't thought about redirects to
data URLs and had just checked the src URL for "data:" and verified that
it didn't contain any bad stuff?

Redirecting to a data URL feels like a very edge-casey thing. So let's
keep it simple and safe rather than worry about cramming more features
in.

 * I don't know what URL(data).origin should return.

 Probably just null.

Given that the effective origin depends on which API you pass the
data URL to, I agree that trying to return a "real" origin here is
never going to be sensible. I don't know if returning null is the
way to go, or if returning undefined is. I guess I don't have a
strong opinion.

 * Make APIs explicitly opt in to setting the allow inheriting origin
 flag to true based on whatever policies that we decide.

 So for example we could make img always set the allow inheriting
 origin flag to true.

 And for XMLHttpRequest? We decided a while back we wanted data URLs to
 work there.

I don't feel strongly.

 For `new Worker(...)` I'm not sure what would be web compatible. I'd
 prefer if the flag was set to false by default, but that the page
 could use some explicit syntax (similar to the iframe) to opt in to
 allowing inheriting.

 Given that workers execute script in a fairly contained way, it might be okay?

Worker scripts aren't going to be very contained as we add more APIs
to workers. They can already read any data from the server (through
XHR) and much local data (through IDB).

I'd definitely want them not to inherit the origin, the question is if
that's web compatible at this point. Maybe we can allow them to
execute but as a sandboxed origin?

/ Jonas
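The rules proposed across these two messages can be condensed into a small decision function. All names here are mine, and 'unique' merely stands in for a unique opaque origin:

```javascript
// Proposed behavior for data: URLs: the "allow inheriting origin" flag
// defaults to false, and any redirect clears it; in every other case
// the URL gets a unique (always-tainted) origin.
function dataURLOrigin({ allowInherit = false, redirected = false, initiatorOrigin } = {}) {
  if (allowInherit && !redirected) return initiatorOrigin;
  return 'unique';
}
```

APIs such as <img> would opt in by passing allowInherit: true; navigations after a redirect would always end up on the 'unique' branch.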



Re: Last Call for CSS Font Loading Module Level 3

2014-05-27 Thread Jonas Sicking
I've provided this input through a few channels already, but I don't
think the use of [SetClass] here is good (and in fact I've been
arguing that SetClass should be removed from WebIDL).

First off you likely don't want to key the list of fonts on the
FontFace object instance like the spec currently does. What it looks
like you want here is a simple enumerable list of FontFace objects
which are currently available to the document.

Second, subclassing the ES6 Set class should mean that the following
two calls are equivalent:

Set.prototype.add.call(myFontFaceSet, someFontFace);
myFontFaceSet.add(someFontFace);

However I don't think the former would cause the rendering of the
document to change, whereas the latter would.

Hence I would strongly recommend coming up with a different solution
than using SetClass.

Separately, FontFace.loaded seems to fulfill the same purpose as
FontFaceSet.ready(). I.e. both indicate that the object is done
loading/parsing/applying its data. It seems more consistent if they
had the same name, and if both were either an attribute or both were a
function.

/ Jonas
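The Set.prototype.add.call concern can be demonstrated with a plain ES6 subclass: the generic call bypasses any override, so the two invocations cannot be made to behave differently without breaking Set semantics. ObservedSet here is my stand-in for a FontFaceSet that re-renders the document:

```javascript
// A Set subclass whose add() has a side effect, standing in for a
// FontFaceSet whose mutation is supposed to change rendering.
class ObservedSet extends Set {
  add(value) {
    this.changed = true; // the side effect the spec would need
    return super.add(value);
  }
}

const s = new ObservedSet();
Set.prototype.add.call(s, 1); // bypasses the override entirely
s.add(2);                     // runs the override
```

After this, s contains both values, but only the second call triggered the side effect, which is exactly the inconsistency being objected to.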



Re: [manifest] Fetching restriction, Re: [manifest] Update and call for review

2014-05-27 Thread Jonas Sicking
On Tue, May 27, 2014 at 9:11 AM, Marcos Caceres w...@marcosc.com wrote:

 On May 27, 2014 at 9:25:26 AM, Ben Francis (bfran...@mozilla.com) wrote:
  As per our conversation in IRC, something else I'd like to highlight
 is the fact that in the current version of the spec any web site
 can host an app manifest for any web app.

 I'm really sorry, seems I wasn't very coherent on IRC - it's not possible for 
 site A to use site B's manifest unless site B explicitly shares it (using 
 CORS).

 Unlike with stylesheets or other <link>'ed resources, the fetch mode for
 manifests is always CORS - meaning that the following would not work:

 <title>fakegmail.com</title>
 <link rel="manifest" href="http://mail.google.com/manifest.json">

 Even if the above manifest.json existed, the above would result in a 
 network error when the user agent tries to fetch manifest.json from 
 google.com. The error is because the request is not same origin, and because 
 google doesn't include the CORS header allowing the cross-origin request.

 The only way that gmail would allow my own app store to use its manifest 
 would be for Google to include the HTTP header:

 Access-Control-Allow-Origin: http://myownappstore.com

This is a bit of an abuse of CORS. Adding an
"Access-Control-Allow-Origin: *" header currently has the semantic
meaning of "any website can read the contents of this file". I.e. it
only means that the bits in the file are accessible from other
websites.

That means that for a webserver on the public internet it is currently
always safe to add the Access-Control-Allow-Origin: * header to any
file since all files can be read anyway by simply using a different
HTTP client than a browser, such as wget.

It does not currently mean, and I don't think it should mean, "I am ok
with acting as a manifest for any website".

I think restricting manifests to same-origin is the way to go. I would
not be surprised if manifests will eventually end up with similar
security properties as hosting HTML files currently does.

/ Jonas
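The same-origin restriction argued for here amounts to a simple origin comparison between the document and the manifest it links to. The function name is mine, for illustration:

```javascript
// A manifest is only usable by documents from the same origin as the
// manifest itself, under the restriction proposed above.
function manifestAllowed(documentURL, manifestURL) {
  return new URL(documentURL).origin === new URL(manifestURL).origin;
}
```

Under this rule, the fakegmail example earlier in the thread fails outright, with no CORS header able to opt it back in.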



Re: Last Call for CSS Font Loading Module Level 3

2014-05-27 Thread Jonas Sicking
On Tue, May 27, 2014 at 10:41 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Separately, FontFace.loaded seems to fulfill the same purpose as
 FontFaceSet.ready(). I.e. both indicate that the object is done
 loading/parsing/applying its data. It seems more consistent if they
 had the same name, and if both were either an attribute or both were a
 function.

 No, the two do completely different (but related) things.  Why do you
 think they're identical?  One fulfills when a *particular* FontFace
 object finishes loading, the other repeatedly fulfills whenever the
 set of loading fonts goes from non-zero to zero.

Semantically they both indicate that the async processing this object
was doing is done. Yes, in one instance it just signals that a given
FontFace instance is ready to be used, in the other that the full
FontFaceSet is ready. Putting the properties on different objects is
enough to indicate that; the difference in name doesn't seem
important?

In general it would be nice if we started establishing a pattern of a
.ready() method (or property) on various objects to indicate that they
are ready to be used, rather than authors having to know that they need
to listen to "load" events on images, "success" events on
IDBOpenRequests, the .loaded promise on FontFace objects and the
.ready() promise on FontFaceSets.

/ Jonas



Re: Last Call for CSS Font Loading Module Level 3

2014-05-27 Thread Jonas Sicking
On Tue, May 27, 2014 at 12:14 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, May 27, 2014 at 11:44 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, May 27, 2014 at 10:41 AM, Tab Atkins Jr. jackalm...@gmail.com 
 wrote:
 Separately, FontFace.loaded seems to fulfill the same purpose as
 FontFaceSet.ready(). I.e. both indicate that the object is done
 loading/parsing/applying its data. It seems more consistent if they
 had the same name, and if both were either an attribute or both were a
 function.

 No, the two do completely different (but related) things.  Why do you
 think they're identical?  One fulfills when a *particular* FontFace
 object finishes loading, the other repeatedly fulfills whenever the
 set of loading fonts goes from non-zero to zero.

 Semantically they both indicate the async processing that this object
 was doing is done. Yes, in one instance it just signals that a given
 FontFace instance is ready to be used, in the other that the full
 FontFaceSet is ready. Putting the properties on different objects is
 enough to indicate that, the difference in name doesn't seem
 important?

 The loaded/ready distinction exists elsewhere, too.  Using .loaded for
 FontFaceSet is incorrect, since in many cases not all of the fonts in
 the set will be loaded.

Sure, but would using .ready() for FontFace be wrong?

 In general it would be nice if we started establishing a pattern of a
 .ready() method (or property) on various objects to indicate that they
 are ready to be used. Rather than authors knowing that they need to
 listen to load events on images, success events on
 IDBOpenRequests, .loaded promise on FontFace objects and .ready()
 promise on FontFaceSets.

 Yes, I'm actively working with Anne, Domenic, and others to help
 figure out the right patterns for this that we can extend to the rest
 of the platform.

Great!

/ Jonas


