Re: Form submission participation (was Re: Goals for Shadow DOM review)

2014-02-20 Thread Charles Pritchard

> On Feb 20, 2014, at 2:09 PM, Edward O'Connor  wrote:
> 
> +public-webapps, -www-tag in replies to avoid cross-posting
> 
> Hi,
> 
> Domenic wrote, to www-tag:
> 
>> [C]an shadow DOM be used to explain existing elements, like  or
>> , in terms of a lower-level primitive?
>> 
>> As of now, it seems like it cannot, for two reasons:
>> 
>> 1. Native elements have extra capabilities which are not granted by
>> shadow DOM, or by custom elements. For example, they can participate
>> in form submission.
> 
> Authors need to be able to participate in form submission, but this is
> independent of Custom Elements.
> 
> Web applications often maintain state in JS objects that have no direct
> DOM representation. Such applications may want such state to be
> submittable.
> 
> Existing form elements map one field name to many values. People often
> build custom controls precisely because those controls hold more
> complex values that would be better represented as many names to many
> values. Subclassing existing form elements doesn't get you this.
> 
> And inheriting from HTMLInputElement is insane (not because inheriting
> is insane, but because HTMLInputElement is insane), so that's not really
> how we want author-defined objects to become submittable.
> 
> Given the above I don't think we should try to solve the "how authors
> can participate in form submission" problem by enabling the subclassing
> of existing form elements. Instead, we should define a protocol
> implementable by any JS object, which allows that JS object to expose
> names and values to the form validation and submission processes.
> 
> Something like this:
> 
>   function Point(x, y) {
>     this.x = x;
>     this.y = y;
>   }
>   Point.prototype.formData = function() {
>     return {
>       "x": this.x,
>       "y": this.y
>     };
>   };
> 
>  var theForm = document.querySelector("#my-form");
> 
>  var p = new Point(4,2);
> 
>  theForm.addParticipant(p);
>  theForm.submit();
> 
> This is obviously a super hand-wavy strawman and would need to be
> fleshed out. Thoughts?
> 
> 
> Ted
> 


Sounds like a great idea; can that kind of thinking also be applied to 
HTMLMediaElement interfaces  for custom video/audio handling with play/pause 
buttons?

There are some tricks with streams but it'd be really nice to be able to 
essentially use  with custom elements backed by media sources 
like SVG and Canvas.

Not to derail the conversation... I think you're absolutely on the right track.
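Ted's strawman above can be exercised end to end with a few lines. In the sketch below, only Point and formData() come from the strawman; collectFormData is hypothetical glue, standing in for whatever the form's submission step would do with its registered participants:

```javascript
// Hypothetical sketch of the submission side of the proposed protocol.
// collectFormData is invented for illustration; a real form would run
// something like it over everything passed to addParticipant().
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.formData = function() {
  return { "x": this.x, "y": this.y };
};

function collectFormData(participants) {
  var pairs = [];
  participants.forEach(function(p) {
    var data = p.formData();            // the proposed protocol hook
    Object.keys(data).forEach(function(name) {
      pairs.push(encodeURIComponent(name) + "=" +
                 encodeURIComponent(String(data[name])));
    });
  });
  return pairs.join("&");
}

console.log(collectFormData([new Point(4, 2)]));  // "x=4&y=2"
```

One nice property of the protocol shape: because formData() returns plain names and values, a single participant can contribute many fields, which is exactly the many-names-to-many-values case Ted describes.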


-Charles


Re: IndexedDB: Syntax for specifying persistent/temporary storage

2013-12-12 Thread Charles Pritchard
On Dec 11, 2013, at 10:53 PM, Kinuko Yasuda  wrote:

>> "persistent" storage would behave as in A. I.e. webpages can't use it
>> without there being a prompt involved at some point. Bookmarked
>> webapps can use it without prompt.
> 
> This sounds kinda reasonable to me, but I'd imagine some webapps (who are 
> serious about their storage) would probably want to know if the "default" is 
> either temporary or persistent in this case, and we may need a tiny API for 
> that.


In my experience, perm/persistent and temporary are misnomers. Temporary is 
closer to a "cache". And persistent is all-or-nothing. That is, if you write 
two files to it, you're guaranteed that you'll always be able to fetch both 
files or neither.

This is different from a pass-through to the host file system, something that 
Chrome supports retained pointers to in their packaged-app API.

I've lost persistent storage about four times with Chrome, following an update. 
Just the way of the world :-). In all of those cases, if the storage were 
pass-thru, I would not have lost that data. Though the consistency would be 
managed via OS, not UA.

-Charles



Re: Styling form control elements

2013-12-10 Thread Charles Pritchard
On Dec 6, 2013, at 4:59 AM, Scott González  wrote:
> 
>> On Fri, Dec 6, 2013 at 5:26 AM, Brian Di Palma  wrote:
>> If UA controls are not styleable in the manner I wish them to be and I
>> have access to custom elements + shadow DOM,
>> I think I would just create my own controls and use them instead of UA ones.
> 
> And you'll make the experience worse for many users because many users have 
> devices that you actually don't want to replace. Also, all the other problems 
> about validation, semantics, etc.


I've seen this theme pop up for years. There are standards of completeness-- as 
long as the browser supports a sufficient API, implementers can support 
"complete" widgets.

The issue is in enumerating and exposing those items which are necessary to 
construct a widget.

Take select: ARIA provides DOM semantics, pseudo-selectors may work for 
display containers, focus is and always has been mentioned, as well as 
pointer and keyboard input events within WCAG. We have input method editors 
for typeahead-style select boxes.

Now we can shoot all that down and say that the UA should be the only point of 
control for custom form elements, but that just doesn't create productive 
discourse.

Enumerating and exposing a means to provide a full (Level AAA) custom control 
is a productive effort. That said, yes, I acknowledge that many controls, UA 
provided or author created have been and will be substandard.

-Charles

Re: RE : Sync IO APIs in Shared Workers

2013-12-04 Thread Charles Pritchard


On 12/4/13, 2:43 AM, Ke-Fong Lin wrote:

> IMHO, we should make sync APIs available in both dedicated and shared workers.
> In order of importance:
> 
> 1) Sync APIs are inherently easier to use than async ones, and they are much
> less error prone. JS developers are not C++ developers. Whenever possible,
> it's just better to make things simpler and more convenient.


This argument is not particularly helpful. Apart from that, many JS APIs 
use callbacks; all developers are, or have to be, aware of them.


> 2) Sync APIs do the job. We are talking about web apps, not heavy-load
> servers. High-performance applications will use async APIs anyway. I'd
> rather think there are a lot of use cases where the dedicated or shared
> worker would do a lot of small and short-duration work, suitable for sync
> coding. Why force the complication of async on developers? If easy things
> can be done easily, then let it be.


Promises seem to have solved quite a bit of the syntactic cruft/issues.
Devs are already in an async world when doing JS.
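As a minimal sketch of that point: any Node-style callback API can be lifted into a promise in a few lines. The names promisify and delayedDouble are invented for illustration, not a real API:

```javascript
// Sketch: wrapping a callback-style API in a Promise -- the pattern that
// removes most of the syntactic cruft mentioned above.
function promisify(fn) {
  return function() {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function(resolve, reject) {
      fn.apply(null, args.concat(function(err, result) {
        if (err) reject(err); else resolve(result);
      }));
    });
  };
}

// A stand-in async API using the (err, result) callback convention.
function delayedDouble(n, cb) {
  setTimeout(function() { cb(null, n * 2); }, 0);
}

var doubleP = promisify(delayedDouble);
doubleP(21).then(function(v) { console.log(v); });  // logs 42
```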


> 3) It does no harm.


It's not particularly fun re-writing async methods from the webpage to 
be sync for workers, or otherwise using shims to avoid redundancy. The 
extra semantic load on the namespaces (docs and otherwise) isn't all 
that pleasing either. There is a cost.






RTC in Web workers

2013-11-22 Thread Charles Pritchard

Should RTC data channels be available in Workers and/or SharedWorker?

Mainly:
self.RTCPeerConnection
self.RTCSessionDescription

WebSocket and XHR are available, seems like RTC ought to be but isn't.

-Charles




Re: Simple API proposal for writing file in local file system

2013-10-18 Thread Charles Pritchard

On 10/18/2013 11:11 AM, Jeremie Patonnier wrote:


2013/10/18 Ehsan Akhgari wrote:


Have you seen this proposal?


 It's not clear to me if your proposal here is trying to solve
problems that the above proposal doesn't already solve...


Yes, I saw that proposal, but no, I am not trying to solve the same 
problem.


I found that proposal for an abstract file system very good when the 
application has to work in its own world.
Unfortunately there are a few cases where it is very convenient to 
allow actually writing to the OS file system.


Here are a few concrete examples:

  * When the user wants to export a file to external storage (hard
    drive, USB storage, SD card, etc.). That is where web devs use
    the data URI workaround to "download" the file.
  * When the user wants to use a resource available in the local file
    system and wants to update that resource with any changes he makes,
    so that other applications can use it as well (mostly native apps on
    the desktop). For example, editing an image and putting it back on
    the file system so another native application can access it (say,
    Adobe Lightroom). Being able to open a file and perform (silent)
    saves on the original file allows building a smoother workflow when
    using a native app and a web app at the same time.





Chrome has a secured extension for maintaining access to the local file 
system:

http://developer.chrome.com/apps/fileSystem.html

They only allow this for "packaged applications". This, with the 
FileSystem API, allows for -most- of what you'd like. While it can be a 
little clumsy to work with, so can IDB; it just takes some wrapping 
code. As discussed, an easier promise-based API is being developed by 
Mozilla, and while it's questionable whether Google or other vendors 
will follow, wrapping existing callback APIs into promises is not a big 
deal on the JS side.


Unfortunately, nobody has stepped forward with concrete code for file 
watchers. The File API itself is a little "broken", as File objects were 
supposed to be immutable, but we have undefined behavior when a File is 
changed on the local file system.

http://lists.w3.org/Archives/Public/public-webapps/2012AprJun/0846.html
http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0431.html

-Charles


Re: File constructor

2013-07-16 Thread Charles Pritchard

On 7/16/2013 5:40 AM, Anne van Kesteren wrote:

On Mon, Jul 15, 2013 at 11:37 PM, Arun Ranganathan  wrote:

On Jul 15, 2013, at 8:01 PM, Anne van Kesteren wrote:

So exposing FormData's contents came up again.

Can you send me a pointer to that discussion, so I can plug into it?

My bad: https://www.w3.org/Bugs/Public/show_bug.cgi?id=22680



https://www.w3.org/Bugs/Public/show_bug.cgi?id=20887

Sounds like mapping FormData to a File hasn't come up yet.

That's the third thread referenced in that bug. Seems like there aren't
many actual objections. I thought implementers had concerns.


There wasn't any uptake on alternatives for FormData... So, at least 
exposing it as a File is a help.


I'd asked about using the FormData class for "x-www-form-urlencoded" 
requests via XHR.

http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1143

-Charles



[FileAPI] Revisiting Deflate/Compression

2013-07-13 Thread Charles Pritchard
We've had a few conversations pop up about exposing deflate/inflate to 
the webapps environment.

Years of them (more recently May 2013).

Packaging a zip file is very simple in JS; it's just the inflate/deflate 
code that's a trudge.

We all know the benefits of compressing JSON and XML over the pipe.

I'd like to see deflate exposed through FileReader.
For example: reader.readAsArrayBuffer(blob, {deflate: true});

Inflate semantics could be similar:
reader.readAsArrayBuffer(blob, {inflate: true});

Given that the Blob constructor is synchronous, it seems like extending it 
would only be reasonable in the context of a worker:

new Blob(["my easily compressed string"], {deflate: true});

Jonas already outlined some of the reasons not to pursue this: 
inflate/deflate can be performed in JS, JS is reasonably fast...


In practice, JS is significantly slower than the browser's own native 
code, native code is already available in existing browsers, there are 
very few deflate/inflate JS libraries available, and including them has 
costs in size, loading time and licensing. As a consequence, web app 
authors are simply not using deflate when appropriate. We can easily 
remedy that by exposing deflate and inflate through these existing APIs.


If there is push-back on extending Blob, I'm content with simply getting 
FileReader to support inflate/deflate.


-Charles







Re: Clipboard API: Default content in copy/cut handlers

2013-07-12 Thread Charles Pritchard
The issue I've come up against is in keyboard access to the DnD model.

Clipboard semantics exist in their own world, with OS quirks.

DnD presupposed some kind of security-consent from the user. It's one of the 
most powerful gestures, allowing a user to grant read access to an entire 
directory tree.


Unfortunately, we don't have that semantic easily accessed from the keyboard.


On the desktop, copy and paste hot keys are often used for keyboard access to 
drag and drop, even though you can't copy a folder into a text document.

That's the state of things. I do think browser vendors should be responsible 
for implementing DnD via keyboard. onclick works via keyboard but even with 
input file, WebKit has not extended the same support as a DnD onto an input 
element (vs click and file selection).

Progress has stalled; there's a stalemate between vendors.


-Charles

On Jul 12, 2013, at 12:56 AM, Paul Libbrecht  wrote:

> Daniel,
> 
> I personally think it is not at all a good idea to populate the "clipboard" 
> when starting the drag!
> It makes sense when a "copy" operation is triggered, as the application may 
> be vanishing.
> Most desktop DnDs I have observed only operate the transformation when the 
> drop has occurred (hence a flavour is identified). A good way to test this is 
> to take a heavy piece in a graphics programme and drop it outside.
> 
> Those two specs have evolved independently and I always saw clipops as a 
> more refined version of html5's DnD, but there you have spotted a 
> non-extension point.
> 
> Is there any reason to justify the requirement to populate before the 
> dragstart?
> 
> Paul
> 
> 
> On 12 juil. 2013, at 00:22, Daniel Cheng wrote:
> 
>> I've noticed that the way that drag-and-drop processing model is written, 
>> the default content that would be in the drag data store is available in the 
>> dragstart event. 
>> http://www.whatwg.org/specs/web-apps/current-work/multipage/dnd.html#drag-and-drop-processing-model
>>  specifies that the drag data store is populated before dispatching the 
>> dragstart event.
>> 
>> Would it make sense to do something similar for 
>> http://dev.w3.org/2006/webapi/clipops/#processing-model? Right now, as I 
>> read it, we don't populate the clipboard until after the copy/cut event has 
>> been cancelled. It'd be nice to make it consistent with drags... the main 
>> problem is I'm not sure if this will break compatibility with sites that 
>> didn't expect this.
>> 
>> Daniel
> 


Considering an onincompleteload event

2013-06-11 Thread Charles Pritchard
Thought I'd at least open a discussion on the possibility of an onabort 
+ onincompleteload event pair for webpages which would otherwise 
crash/take down the web browser.

We've made it this far without adding such an event.

An example case would be the one-page HTML5 spec; there are some setups 
that just can't load the whole page. There are some other cases where a 
web app simply does not have adequate pagination for results.
It's frustrating to have a page load in front of me, but remain 
inaccessible because the browser is trying to load the whole thing when 
I only need the first screen's height of data.


This conversation continues a little on the concept of an onlowmemory 
event. There seemed to be consensus against supporting such an event. In 
that case, I'd made it clear that such an event would be useful for web 
apps on mobile phones such as the iPhone, where Safari / iOS will 
terminate a running application once it consumes too much RAM. Some 
media heavy web apps (typically using HTML5 Canvas) would be able to 
release some assets upon a low memory event.


-Charles




Re: Filesystem events

2013-05-29 Thread Charles Pritchard

We didn't come to much of a resolution.
It was suggested that the current behavior in browsers was incorrect; 
that the File should become inaccessible if/when it changes.


There still seems to be some hang-up on the Directory entry concept. For 
example, WebKit allows the drag and drop (and selection via input file 
-webkit-directory) of a directory,

creating directory entry hooks, and Mozilla does not seem to support that.

All vendors (I believe) are supportive of stashing File objects into 
IndexedDB.


I don't know that we've made it any further on the concept of mount 
points and/or simple input file directory standardization, and as such, 
there's not been progress on

file / directory watchers.

-Charles


On 5/29/2013 11:21 AM, pira...@gmail.com wrote:


Interesting conversation, thanks for pointing to it :-)

So it seems FileList should be live objects that show updates and 
removals of files, and this would be fetched by polling over it (not 
perfect, but does the job). Ok, I have seen this already, and in fact I'm 
doing a "hack": checking when modifiedTime and length are null on Chrome 
to detect when a file has been (re)moved. But what happens when a file 
is added? How can I detect it? I'm especially interested in 
this use case.


On 29/05/2013 19:51, "Charles Pritchard" wrote:


On 5/29/2013 10:26 AM, pira...@gmail.com
<mailto:pira...@gmail.com> wrote:


Currently there's no way to fetch real-time filesystem
modifications inside webapps, either on FileLists or DirEntries.
I propose to add filesystem monitoring events to the
DirEntries objects, so when a file or directory is added,
removed or modified, a corresponding event would bubble up the
filesystem hierarchy to the DirEntry object where an event
listener is registered to catch it, so the webapp would act in
correspondence (hashing the files, uploading them...).

This would also be applied to the FileSystem API and the
DeviceStorage API (so for example files dropped on that folder
would be uploaded and later removed, in a mail box or printer
queue style) and up to some degree on FileList objects, too.


See also:
http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0087.html





Re: Filesystem events

2013-05-29 Thread Charles Pritchard

On 5/29/2013 10:26 AM, pira...@gmail.com wrote:


Currently there's no way to fetch real-time filesystem modifications 
inside webapps, either on FileLists or DirEntries. I propose to add 
filesystem monitoring events to the DirEntries objects, so when a file 
or directory is added, removed or modified, a corresponding event would 
bubble up the filesystem hierarchy to the DirEntry object where an 
event listener is registered to catch it, so the webapp would act in 
correspondence (hashing the files, uploading them...).


This would also be applied to the FileSystem API and the DeviceStorage 
API (so for example files dropped on that folder would be uploaded and 
later removed, in a mail box or printer queue style) and up to some 
degree on FileList objects, too.




See also:
http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0087.html




Re: ZIP archive API?

2013-05-03 Thread Charles Pritchard
On May 3, 2013, at 3:18 PM, Jonas Sicking  wrote:

>> platforms, but it matters a great deal
>> on underpowered devices such as mobiles.
> 
> Show me some numbers to back this up and you'll have me convinced :)
> 
> Remember that on underpowered devices native code is proportionally slower 
> too.


The word around town is that asm.js takes about 2x longer than native compiled 
code.

So if we're looking at inflate and deflate, you've got that metric, and you 
have the memory and bandwidth overhead of including a js-based inflate deflate 
library and maintaining whatever licensing it comes with.

Basic zip construction without compression is trivial to support in JS, and 
in some cases it's desirable, e.g. when targeting file formats that use 
specific magic numbers.
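To make the "trivial" claim concrete, here is a sketch of a store-only (method 0, no compression) single-entry ZIP; the header constants are from the ZIP format spec, and the only mildly fiddly piece, CRC-32, is a few lines. Deflate is exactly the part that still needs a native codec or a library:

```javascript
// Sketch: a single-entry ZIP archive with no compression ("stored").
// Everything here is header bookkeeping; only deflate itself is hard.
function crc32(bytes) {
  var crc = 0xFFFFFFFF;
  for (var i = 0; i < bytes.length; i++) {
    crc ^= bytes[i];
    for (var k = 0; k < 8; k++)
      crc = (crc & 1) ? (crc >>> 1) ^ 0xEDB88320 : crc >>> 1;
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}

function u16(n) { return [n & 0xFF, (n >>> 8) & 0xFF]; }
function u32(n) {
  return [n & 0xFF, (n >>> 8) & 0xFF, (n >>> 16) & 0xFF, (n >>> 24) & 0xFF];
}

function storedZip(name, data) {  // name: ASCII string, data: byte array
  var nameBytes = name.split("").map(function(c) { return c.charCodeAt(0); });
  var crc = crc32(data);
  var concat = Array.prototype.concat;
  // Local file header: signature, version 2.0, no flags, method 0 (store),
  // zeroed DOS time/date, CRC, sizes (equal when stored), name length.
  var local = concat.call([],
    u32(0x04034b50), u16(20), u16(0), u16(0), u16(0), u16(0),
    u32(crc), u32(data.length), u32(data.length),
    u16(nameBytes.length), u16(0), nameBytes);
  var body = local.concat(data);
  // Central directory entry mirrors the local header, plus attributes and
  // the offset of the local header (0: first and only entry).
  var central = concat.call([],
    u32(0x02014b50), u16(20), u16(20), u16(0), u16(0), u16(0), u16(0),
    u32(crc), u32(data.length), u32(data.length),
    u16(nameBytes.length), u16(0), u16(0), u16(0), u16(0),
    u32(0), u32(0), nameBytes);
  // End of central directory record.
  var end = concat.call([],
    u32(0x06054b50), u16(0), u16(0), u16(1), u16(1),
    u32(central.length), u32(body.length), u16(0));
  return body.concat(central, end);
}

var zip = storedZip("hi.txt", [104, 105]);  // begins with "PK\x03\x04"
```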

-Charles


Sandbox with intents

2013-03-12 Thread Charles Pritchard
The web intents push has stalled a little, but in the same direction, I've hit 
a snag on iframe sandbox.

Chrome has picked up the sandbox semantics for extensions, allowing us to 
declare a page sandboxed as we might with HTTP headers. Within that context, 
I'd like to create an iframe; I don't want any popups, but I do want that 
frame to access localStorage, because it's across origins, and I want it to 
postMessage back. I'd also like to be able to create a blank iframe via 
about:blank. At present, I'm blocked on both of these.

Should we extend sandbox semantics? I really had a hard time reading the spec.

I just want safe means of running code, and getting data across origins.

With chrome extensions, you now need sandbox pages to use inline script and 
eval, unless it's hosted on the web. Web is now "more" trusted than some 
extension pages in the chrome security model; because sandbox is the primary 
means of making an untrusted extension page.


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-12 Thread Charles Pritchard
I agree with Scott; notraverse, something along those lines.

Glad to hear about the wide consensus on the overall effort.


On Mar 12, 2013, at 4:11 PM, Scott González  wrote:

> It's been a while since I looked at this spec, what are the ways in which you 
> can get access? It seems like a name such as traversable could work well.
> 
> 
> On Tue, Mar 12, 2013 at 6:47 PM, Daniel Buchner  wrote:
>> What about obscured, opaque, invisible, or restricted?
>> 
>> 
>> 
>> On Tue, Mar 12, 2013 at 3:34 PM, Alan Stearns  wrote:
>>> On 3/12/13 2:41 PM, "Boris Zbarsky"  wrote:
>>> 
>>> >On 3/12/13 5:19 PM, Dimitri Glazkov wrote:
>>> >> However, to allow developers a degree of enforcing integrity of their
>>> >> shadow trees, we are going add a new mode, an equivalent of a "KEEP OUT"
>>> >> sign, if you will, which will makes a shadow tree non-traversable,
>>> >> effectively skipping over it in an element's shadow tree stack.
>>> >
>>> >To be clear, what this mode does is turn off the simple way of getting
>>> >the shadow tree.  It does not promise that someone can't get at the
>>> >shadow tree via various non-obvious methods, because in practice such
>>> >promises are empty as long as script inside the component runs against
>>> >the web page global.
>>> >
>>> >The question is how to name this.  "Hidden" seems to promise too much to
>>> >me.  Perhaps "obfuscated"?  "Veiled"?
>>> >
>>> >-Boris
>>> >
>>> >P.S.  Tempting as it is, "RedWithGreenPolkadots" is probably not an OK
>>> >name for this bikeshed.
>>> 
>>> Apologies in advance for adding to the bikeshedding
>>> 
>>> protected (mostly private, but you can get around it)
>>> shielded (the shield can be lowered)
>>> gated (the gate can be opened)
>>> fenced (most fences have an opening)
>>> 
>>> Or bleenish-grue, if we're going with color names.
>>> 
>>> Alan
> 


Re: exposing CANVAS or something like it to Web Workers

2013-02-20 Thread Charles Pritchard
Chrome extensions have a background.html capability with a full dom; I've used 
those to prototype the concept. I've also used webkit's CSS fill 
(-webkit-canvas) in the mix.

A good deal of experimentation can be done there, prior to hacking up C++.



On Feb 20, 2013, at 11:43 AM, Kenneth Russell  wrote:

> On Fri, Feb 8, 2013 at 9:14 AM, Travis Leithead
>  wrote:
>>>> What would be the advantage? If you wanted to keep dom elements in sync
>>>> with the canvas you'd still have to post something from the worker back to
>>>> the main thread so the main thread would know to pop.
>> 
>> 
>> 
>> Well, it's not a fleshed out proposal by any stretch, but you could imagine
>> that an event could be used to signal that new frames were ready from the
>> producer—then the main thread would know to pop.
> 
> It sounds like this approach would require a bunch of new concepts --
> a remote worker context, events signalled through it, etc. It would
> still be necessary to postMessage from the main thread back to the
> worker for flow control. Flow control is definitely necessary -- the
> producer can't just produce frames without any feedback about when
> they're actually consumed. We've had problems in Chrome's graphics
> pipeline in the past where lack of flow control led to slow and
> inconsistent frame rates.
> 
> I'm excited about Gregg's proposal because it solves a lot of use
> cases that aren't currently addressed by CanvasProxy, using a couple
> of simple primitives that build upon others already in the platform
> (cross-document messaging and Transferables).
> 
> How can we move forward so that user agents can experimentally
> implement these APIs? Ideally we'd prototype and experiment with them
> in some medium-sized applications before finalizing any specs in this
> area.
> 
> Thanks,
> 
> -Ken
> 
> 
>> From: Gregg Tavares [mailto:g...@google.com]
>> Sent: Friday, February 8, 2013 3:14 AM
>> To: Travis Leithead
>> Cc: Ian Hickson; Charles Pritchard; Web Applications Working Group WG
>> 
>> 
>> Subject: Re: exposing CANVAS or something like it to Web Workers
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> On Thu, Feb 7, 2013 at 10:46 PM, Travis Leithead
>>  wrote:
>> 
>> Having thought about this before, I wonder why we don’t use a
>> producer/consumer model rather than a transfer of canvas ownership model?
>> 
>> 
>> 
>> A completely orthogonal idea (just my rough 2c after reading Gregg’s
>> proposal), is to have an internal frame buffer accessible via a WorkerCanvas
>> API which supports some set of canvas 2d/3d APIs as appropriate, and can
>> “push” a completed frame onto a stack in the internal frame buffer. Thus the
>> worker can produce frames as fast as desired.
>> 
>> 
>> 
>> On the document side, canvas gets a 3rd kind of context—a
>> WorkerRemoteContext, which just offers the “pop” API to pop a frame from the
>> internal frame buffer into the canvas.
>> 
>> 
>> 
>> Then you just add some basic signaling events on both ends of the frame
>> buffer and you’re good (as far as synchronizing the worker with the
>> document). The producer (in the worker) is free to produce multiple frames
>> in advance (if desired), while the consumer is able to pop frames when
>> available. You could even have the framebuffer depth configurable.
>> 
>> 
>> 
>> What would be the advantage? If you wanted to keep dom elements in sync with
>> the canvas you'd still have to post something from the worker back to the
>> main thread so the main thread would know to pop.
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> From: Gregg Tavares [mailto:g...@google.com]
>> Sent: Thursday, February 7, 2013 2:25 PM
>> To: Ian Hickson
>> Cc: Charles Pritchard; Web Applications Working Group WG
>> Subject: Re: exposing CANVAS or something like it to Web Workers
>> 
>> 
>> 
>> I put up a new proposal for canvas in workers
>> 
>> 
>> 
>> http://wiki.whatwg.org/wiki/CanvasInWorkers
>> 
>> 
>> 
>> Please take a look.
>> 
>> 
>> 
>> This proposal comes from offline discussions with representatives from the
>> various browsers as well as input from the Google Maps team. I can post a
>> summary here if you'd like but it might be easier to read the wiki
>> 
>> 
>> 
>> Looking forward to feedback.
>> 
>> 
>> 
>> 
>> 
>> 
>> 

Re: Allow ... centralized dialog up front

2013-02-05 Thread Charles Pritchard
Placing permissions in the site info expansion in Chrome feels like the 
right direction. That's the spot where they show whether an SSL cert is 
valid/expired.

Now I can easily see cookies and flip various settings in one click as I look 
at site info.

I've been working on a web app where I don't need any upfront permissions, but 
the user can elect to elevate to clipboard, XSS and a high disk quota. I've 
certainly felt the cost of multiple dialogs vs a one-time "grant everything" 
prompt.



On Feb 5, 2013, at 5:09 PM, "Charles McCathie Nevile"  
wrote:

> TL;DR: Being able to declare the permissions that an app asks for might be 
> useful. User agents are and should continue to be free to innovate in ways 
> they present the requests to the user, because a block dialogue isn't a 
> universal improvement on current practice (which in turn isn't the same 
> everywhere).
> 
> On Mon, 04 Feb 2013 01:35:43 +0100, Florian Bösch  wrote:
> 
> So how exactly do you imagine this going down when an application that uses 
> half a dozen such capabilities starts? Clicking trough half a dozen allow -> 
> allow -> allow -> allow -> allow -> allow, you really think the user's gonna 
> bother what the 5th or sixth allow is about?
> 
> Where there are multiple permissions required the way to ensure user 
> attention isn't as simple as "a list that doesn't get read, witha single 
> button clicked by reflex", or "multiple buttons to be clicked by reflex 
> without reading".
> 
> At least that seems to be what the research shows.
> 
> You'll end up annoying the user, the developer and scaring people off a page. 
> Somehow I can't see that as the function of new capabilities you can offer on 
> a page. Furthermore, some capabilities (like pointerlock) actively interfere 
> with the idea that when you need it you can click it (such as the concept of 
> pointer-lock-drag which requests pointerlock on mousedown and releases it on 
> mouseup) where your "click it when you need it" idea will always fail the 
> first usage.
> 
> This may be true. But pointer-lock is an example of something that needs the 
> entire UX to be thought through. simply switching from one to the other 
> without the user knowing is also poor UX, since it risks making the user 
> think their system is broken. Add to this a user working with e.g. mousekeys, 
> or a magnifier at a few hundred percent plus high-contrast.
> 
> The problems are not simple, and it is unlikely the solutions will be either. 
> Ian's claim that everything can be done seamlessly without making it seem 
> like a security dialog may be over-confident, and as Robin points out the 
> first UI developed (well, the second actually) might not be the best approach 
> in the long run, but it is certainly the direction we should be aiming.
> 
> So where are we? The "single up front dialogue" doesn't work. We know that. 
> Multiple contextual requests go from being effective to being 
> counter-productive at some magic tipping point that is hard to predict.
> 
> To take an example, let's say I have a chat application that can use web-cam 
> and geolocation. Some user agents might decide to put the permissions up 
> front when you first load the app. And some users will be fine with that. 
> Some will be happy to let it use geolocation when it wants, but will want to 
> turn the camera on and off explicitly (note that Skype - one of the 
> best-known video chat apps there is - allows this as a matter of course. I 
> don't know of anyone who has ever complained).
> 
> Some "app stores" might refuse to offer the service unless you have already 
> accepted that you will let any app from the store use geolocation and camera. 
> Others will be quite happy with a user agent that (like skype - or Opera) 
> puts the permissions interface in front of the user to modify at will. And 
> there are various other possible configurations.
> 
> At any rate, having a way of declaring the things that will be requested (as 
> I mentioned a zillion messages ago, most platforms have implemented this 
> somewhere, sometimes several times) would at least simplify the task for 
> implementors of deciding which approach to use, or how to blend the various 
> different possiblities.
> 
> cheers
> 
> Chaals
> 
> Not exactly confidence inspiring either, as a UX.
> 
> 
> On Mon, Feb 4, 2013 at 1:28 AM, Tobie Langel  wrote:
> On 2/2/13 12:16 PM, "Florian Bösch"  wrote:
> 
> >Usually games (especially 3D applications) would like to get capabilities
> >that they can use out of the way up front so they don't have to care
> >about it later on.
> 
> This is not an either / or problem.
> 
> First, lets clarify that the granting of a permission (and for how long it
> is granted) can be dependent on a variety of factors defined by the user
> and or the user agent and is out of control of the developer and of any
> spec body to standardize.
> 
> Different User Agents will behave differently depending on what m

Re: Reading image bytes to a PNG in a typed array

2013-01-15 Thread Charles Pritchard

On 1/14/2013 7:04 AM, Florian Bösch wrote:
On Mon, Jan 14, 2013 at 4:00 PM, Glenn Maynard wrote:


You want toBlob, not toDataURL.

So how would I stick a blob into an arraybuffer?


http://code.google.com/p/chromium/issues/detail?id=67587


Re: exposing CANVAS or something like it to Web Workers

2012-11-16 Thread Charles Pritchard
On Nov 16, 2012, at 1:50 PM, Ian Hickson  wrote:

> On Mon, 14 May 2012, Gregg Tavares (勤) wrote:
>> 
>> I'd like to work on exposing something like CANVAS to web workers.
>> 
>> Ideally, however it works, I'd like to be able to
>> 
>> *) get a 2d context in a web worker
>> *) get a WebGL context in a web worker
>> *) download images in a web worker and use the images with both 2d contexts and
>> WebGL contexts
> 
> I've now specced something like this; for details, see:
> 
>   http://lists.w3.org/Archives/Public/public-whatwg-archive/2012Nov/0199.html

Seems like we might use requestAnimationFrame in the main thread to postMessage 
to the worker as an alternative to using setInterval in workers for repaints.

-Charles


FileSystem API: Cross origin sharing

2012-08-14 Thread Charles Pritchard
Is there a long term plan for transferring access to Files from the FileSystem?

Currently, we can use FileEntry.file semantics, then post the resulting Blob 
to another frame. From that point, the receiver can create a Blob URL. I'm 
concerned about all the extra work with Blobs which may get stuck in RAM.

Sure would be nice to pass strings instead.


-Charles




Re: Drag & Drop Web Apps

2012-08-10 Thread Charles Pritchard
On Aug 10, 2012, at 12:25 PM, Joran Greef  wrote:

> 1. Drag files and folders into a web app?

Chrome has this working now for files and folders.

> 2. Drag files and folders out of a web app?

Files may be dragged out; folders may not be-- however, using a container, such 
as zip, is trivial.


> 3. Drag a spreadsheet out of a web app onto the icon of Excel in the dock and 
> have it open in Excel?

Should work fine.

> 4. Monitor that same spreadsheet's content (originally provided by the web 
> app) for changes when the user edits it and presses CTRL+S?


There was some pushback on this issue. Presently one may drag a spreadsheet 
into the browser and continue editing it, polling the file to detect 
changes. There is currently neither a mount mechanism nor a file-watcher mechanism.

I suspect we will see both mechanisms appear some time next year.


Re: Drawing Tablets

2012-08-03 Thread Charles Pritchard

On 8/3/2012 10:33 AM, Florian Bösch wrote:
On Fri, Aug 3, 2012 at 7:21 PM, Charles Pritchard <ch...@jumis.com> wrote:


What kind of correlated events are you thinking of?

For instance most tablet drivers deliver X/Y events separately. If you 
processed those individually, fast brushstrokes would become 
staircases. To avoid that, developers filter the queue and whenever 
they see an X event (which always arrives before the Y event) they wait 
for the Y event before correlating them to a "X+Y" event.
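A minimal sketch of that correlation step, purely illustrative (the `axis`/`value` event shape and the `AxisCorrelator` name are assumptions, not any real driver API):

```javascript
// Buffers an X axis event until the matching Y event arrives, then
// emits one combined {x, y} sample -- avoiding the "staircase" effect.
function AxisCorrelator(onSample) {
  var pendingX = null;
  return function push(ev) {
    if (ev.axis === "x") {
      pendingX = ev.value;                 // X always arrives first
    } else if (ev.axis === "y" && pendingX !== null) {
      onSample({ x: pendingX, y: ev.value });
      pendingX = null;
    }
  };
}

// Feed an interleaved raw stream; collect correlated samples.
var samples = [];
var push = AxisCorrelator(function (s) { samples.push(s); });
[{ axis: "x", value: 10 }, { axis: "y", value: 20 },
 { axis: "x", value: 11 }, { axis: "y", value: 21 }].forEach(push);
// samples: [{x:10, y:20}, {x:11, y:21}]
```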


I was not aware of that. My experience comes mostly from the Wacom 
plugins which handle that work before delivering.
It seems like this is something the implementer should handle; 
delivering the full event to the developer.


http://www.wacomeng.com/web/WebPluginReleaseNotes.htm#_Toc324315286

They've done some touch event work as well:
http://www.wacomeng.com/web/WebPluginTouchAPI.htm



Some of this was discussed as part of the Sensor API proposal. It
seems that work is being shuffled into a web intents mechanism.
I've not yet experimented with high volume/precision data over
postMessage.

No idea what web intents is, but usually the "intent" in processing 
events is implemented by the developer who processes the events, so 
other than his application code, no further complication is required.


Web Intents is a mechanism for posting/grabbing data as well as discovery.
http://webintents.org/

In intents, you might have an <iframe> which then runs the <object> 
attaching to Wacom's plugin.
All websites would simply trigger an intent, such as asking for pen 
data; they would not need to worry about opening an iframe or object tag.


I've not gotten to it, but I'll get a demo posted one of these days.

 Item 2 is fun stuff, but at present, only the touch API has touched 
on the concept of multiple pointers.
This product line http://www.wacom.com/en/Products/Intuos.aspx is both 
a multitouch surface and pen input (although not both at the same 
time).


I suspect however that the "not at the same time" aspect isn't a 
hardware or even driver limitation, but rather results from the 
compulsive need of applications/OSes to pretend touches and pens are 
core pointers, and since there can't be two pointers... If a suitable 
capture mode were implemented, I don't think simultaneous touches and 
pen strokes would present a problem (and they'd certainly be fun to use).


I agree; but our event infrastructure for the web isn't quite ready for 
two mouse pointers to run at the same time over a webpage.

It's something I'd like to see addressed.


Item 1 we can do that with pointer lock.

You do bring up a good point, if the web platform did
concurrent/multiple pointer devices, it'd be nice if the
pointer lock API were aware of that situation.
As I understand it, the new release of Windows does have mature
support for multiple pointers. Support has been available for some
time.
The web platform is falling a bit behind in this area. Of course,
they haven't caught up with pen events yet and those have been
around for decades.
I suspect the affine transform is something that the author ought
to be processing from nice raw data.
They can use something like the CSSMatrix() object or other maths
to do transforms.

With a complex Canvas drawing surface, I've had to do about 3
levels of transforms anyway.

onmightypenevent = function (e) { coordsForNextStep =
myMatrix.transform(e.arbitraryX, e.arbitraryY); };
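A sketch of what that transform step might look like without CSSMatrix, using plain arithmetic (all names and dimensions are illustrative, not from any real device):

```javascript
// Maps raw tablet coordinates into a drawing-surface coordinate space
// (scale + translate only; a full affine transform would add rotation/skew).
function makeMapper(tablet, surface) {
  var sx = surface.width / tablet.width;
  var sy = surface.height / tablet.height;
  return function (x, y) {
    return { x: (x - tablet.left) * sx + surface.left,
             y: (y - tablet.top) * sy + surface.top };
  };
}

// A tablet reporting 3840x2160 counts mapped onto a 1920x1080 surface.
var map = makeMapper({ left: 0, top: 0, width: 3840, height: 2160 },
                     { left: 0, top: 0, width: 1920, height: 1080 });
var p = map(2000, 1200); // -> {x: 1000, y: 600}
```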


Correct, you can leave it up to the developer and just engage 
pointerlock. However there's a snag with that.

- You need to have fullscreen to get pointerlock
- You might desire to get pointerlock on the drawing surface, but not 
engage fullscreen
- You'll want to engage pointerlock for more direct interaction, yet 
disengage it again for interaction with interface elements in the DOM. 
Everytime you engage/disengage it there's dialog boxes and whatnot.


All good points.


-Charles


Re: Drawing Tablets

2012-08-03 Thread Charles Pritchard

On 8/3/2012 10:09 AM, Florian Bösch wrote:
On Fri, Aug 3, 2012 at 6:54 PM, Charles Pritchard <ch...@jumis.com> wrote:


As I understand it, the browsers have mature event queues; and
everything comes with a timestamp.
We've got requestAnimationFrame as our primary loop for processing
the queue.

To clear a queue (so to speak), I believe one simply removes any
associated handlers.

Yeah but that's not how it should work.
- Assuming requestAnimationFrame might be wrong in case that other 
events are used to refresh either logic or simulation (such as when 
redraws only happen upon input)


It's a complex wait mechanism:
onload -> onmousemove -> requestAnimationFrame [until idle] -> onmousemove.

Until we worked with high resolution events, we simply hooked into 
onmousemove. At some point, it became clear that was inefficient, 
and we tested out setTimeout loops (rAF was not available at the time).

- Dispatching events individually makes it difficult to work out 
correlated events, and since the device landscape changes constantly, 
it'd be easier for developers to adopt new devices if they could do 
their own correlation according to their needs. For that they need 
the buffer of queued events to work through them.


What kind of correlated events are you thinking of?

- Depending on the use there are differing "granularities" that a 
developer might want to implement. This is usually done by filtering 
the queue according to the application's needs; receiving events 
individually would just lead to the developer re-implementing the 
queue so he can filter it at the appropriate time.


Some of this was discussed as part of the Sensor API proposal. It seems 
that work is being shuffled into a web intents mechanism.

I've not yet experimented with high volume/precision data over postMessage.


Are there any times when mouse emulation is not executed with a pen?
As much as I've used my pen, it always "moves the mouse".
Capture to area is something that's being handled for pointer
events. I recently read that support for mouse capture is being
contributed to webkit; it is, at least, being actively authored.
The capture model is based on the full screen request model.

The mouse emulation mode is appropriate when: 1) the tablet is used to 
interact with a larger area containing interface elements AND 2) it is 
used singularly AND 3) the screen dimensions match the tablet dimensions.


It is not appropriate when 1) the tablet is used to interact with a 
limited area (such as a drawing surface) exclusively OR 2) the tablet 
is used ambidextrously in conjunction with other pointing devices 
and/or multiple pens OR 3) the screen dimensions do not match the 
tablet dimensions


Item 3 seems a matter of configuration, I don't think we have anything 
to do on that one.


Item 2 is fun stuff, but at present, only the touch API has touched on 
the concept of multiple pointers.


Item 1 we can do that with pointer lock.

You do bring up a good point: if the web platform did 
concurrent/multiple pointer devices, it'd be nice if the pointer lock 
API were aware of that situation.
As I understand it, the new release of Windows does have mature support 
for multiple pointers. Support has been available for some time.
The web platform is falling a bit behind in this area. Of course, they 
haven't caught up with pen events yet and those have been around for 
decades.




Regarding #1, this situation arises naturally when drawing is engaged 
with the limited area, but the area on screen is considerably smaller 
than the screen, which would result in the artist having to rely on a 
tiny area on his large tablet to engage in drawing, while most of the 
tablets surface area lies bare unused. GIMP implements capturing modes 
to drawing area, although the implementation is lacking and it cannot 
be switched seamlessly. Photoshop implements capture to area which can 
be switched seamlessly between mouse control and drawing utensil on 
surface control.


The analogy to pointer lock is fitting, however the conflation of 
fullscreen+pointerlock is not appropriate and it is lacking the affine 
transform aspect of a capture to area mode.


I suspect the affine transform is something that the author ought to be 
processing from nice raw data.
They can use something like the CSSMatrix() object or other maths to do 
transforms.


With a complex Canvas drawing surface, I've had to do about 3 levels of 
transforms anyway.


onmightypenevent = function (e) { coordsForNextStep = 
myMatrix.transform(e.arbitraryX, e.arbitraryY); };





Re: Drawing Tablets

2012-08-03 Thread Charles Pritchard

On 8/3/2012 9:46 AM, Florian Bösch wrote:


My main issue with plugins and ink, and serial polling, is that I
lose data when I render: I can either have high quality input, and
poor immediate rendering,
or high quality rendering, but lose a lot of accuracy in the data
stream.

It's a well known problem for game applications, I'm a bit surprised 
this wasn't done right from the start.


1) The host OS APIs support a hodgepodge of 
polling/queuing/selecting/events. Usually polling APIs require a very 
high frequency of polling.
2) For this reason game related frameworks like SDL, pygame, pyglet 
etc. abstract the underlying mechanism and fill event queues.
3) At user-chosen intervals the queued events are dispatched, either 
a) by letting the user call a "get all events" function which 
clears the queue, leaving the user responsible for working through that queue 
(SDL/pygame model), or b) by the framework dispatching events 
individually at regular intervals (pyglet)


In working with framework dispatched events I hit some snags (such as 
when events that belong together are not correlated by the framework) 
which renders some devices the framework hasn't been coded for 
unusable (tablets for instance). I'd definitely prefer the flavor that 
lets the user explicitly clear the queue and work through the events, 
which also makes it easier to synchronize event processing with rendering 
and behavioral logic and enables the ecosystem of api users to come up 
with a solution to correlate events for their favorite devices.
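The "explicitly clear the queue" flavor can be sketched in a few lines (`EventQueue` and `getAllEvents` are hypothetical names echoing the SDL/pygame model, not a proposed API):

```javascript
// SDL/pygame-style model: events accumulate in a buffer and the
// application drains the entire buffer at a moment of its choosing,
// so it can do its own correlation/filtering before rendering.
function EventQueue() {
  var buf = [];
  return {
    push: function (ev) { buf.push(ev); },
    getAllEvents: function () {  // returns all pending events, clears queue
      var out = buf;
      buf = [];
      return out;
    }
  };
}

var q = EventQueue();
q.push({ axis: "x", value: 1 });
q.push({ axis: "y", value: 2 });
var batch = q.getAllEvents();  // two events, in arrival order
var later = q.getAllEvents();  // empty: the queue was cleared
```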


As I understand it, the browsers have mature event queues; and 
everything comes with a timestamp.
We've got requestAnimationFrame as our primary loop for processing the 
queue.


To clear a queue (so to speak), I believe one simply removes any 
associated handlers.



So I believe that when the gamepad API is moving to this model (and it 
does by the sound of it) we're good on that front. There's still the 
question of how to switch "modes" such as mouse emulation or capture 
to area.


Are there any times when mouse emulation is not executed with a pen?
As much as I've used my pen, it always "moves the mouse".

Capture to area is something that's being handled for pointer events. I 
recently read that support for mouse capture is being contributed to 
webkit; it is, at least, being actively authored.


The capture model is based on the full screen request model.


-Charles



Re: Drawing Tablets

2012-08-03 Thread Charles Pritchard

On 8/3/2012 3:54 AM, Florian Bösch wrote:
On Fri, Aug 3, 2012 at 8:09 AM, Charles Pritchard <ch...@jumis.com> wrote:


Touch events v2 has some properties, such as pressure 


Although lacking most other properties (Z, tilt, rotation etc.)


Yes: so far I've not been able to garner interest in Pen events.
The touch events v2 properties are really just a stub, a reminder that 
something ought to be done.

I was also able to get a stub put into the SVG2 planning documents.

That's the best I've been able to get, as vendors are rather luke-warm 
on pen events.




InkML covers the full serialization of captured data.

It looked fairly complete, though I was missing the pen type/ID 
property. Also it's not clear to me how this would map to JS events 
required when using the pen input for things like WebGL.


http://www.w3.org/TR/InkML/#inkSource
As well as the simpler brush xml:id.

WebGL vectors map well to brush traces.
One would process a trace group into an int or float array then upload 
that to webgl for rendering.

Or, one might use that array to render via Canvas 2d.

The issue is that we have no mechanism to actually capture high 
resolution ink data.



 The Gamepad API is the closest implementation in browsers (Chrome)
There's a few extra challenges like correlating inputs (X/Y) and 
usually you'll want some capture mode (exclusive to area or mouse 
emulation) switch, which the gamepad API does not cover afaik.


They've also hit sampling issues: initial drafts and implementations 
worked with polling.

It seems that event callbacks are more appropriate.

My main issue with plugins and ink, and serial polling, is that I lose 
data when I render: I can either have high quality input, and poor 
immediate rendering,

or high quality rendering, but lose a lot of accuracy in the data stream.



and Wacom's Air implementation one of the closest in an HTML
environment.

You mean Adobe Air?


Wacom made some plugin items which run on Adobe's Air. They have 
onpenpressure events, for instance.



-Charles


Re: Drawing Tablets

2012-08-02 Thread Charles Pritchard
Touch events v2 has some properties, such as pressure. InkML covers the full 
serialization of captured data.

The Gamepad API is the closest implementation in browsers (Chrome) and Wacom's 
Air implementation one of the closest in an HTML environment.


-Charles


On Jul 31, 2012, at 5:00 PM, Florian Bösch  wrote:

> I'm interested in drawing tablets and I wonder how that might appear in 
> browsers.
> 
> Typically drawing tablets have these properties:
> 
> - PenID: The current pen ID being used
> - Tool type: the classification of the pen
> - Proximity: in range of the magnet-resonance sensors
> - Distance: distance over the surface
> - X Position: absolute position from the left
> - Y Position: absolute position from the top
> - Z rotation: rotation of the pen around its axis (roll)
> - Pressure: when the pen contacts the tablet surface, amount of pressure 
> excerted
> - Tilt X: the tilting of the pen relative to the X axis of the tablet
> - Tilt Y: the tilting of the pen relative to the Y axis of the tablet
> - Wheel: the wheel on the pen mouse
> - Throttle: the throttle lever on airbrush pens
> - Pen buttons (stylus, stylus2)
> - Tablet buttons (left, middle, right)
> - Touch: touch sensitive slide bars on the tablet
> 
> The tablets do sport fairly good resolutions of 5080 LPI in the case of Wacom 
> Intuos tablets, which translates to a precision of +- 0.02mm. If mapped to a 
> full HD monitor's height of 1080 pixels it would correspond to more than 5 
> steps in between each pixel.
> 
> It's therefore of paramount importance to be able to setup a desired 
> transform (in whatever fashion) when entering drawing mode rather than 
> mapping and clamping a pen to its mouse emulated cursor position (for 
> application of this principle please try out photoshop and related products 
> that support tablets).
> 
> Another aspect that might be troublesome is that drivers for these devices 
> typically deliver correlated events separately (such as X/Y axes events) 
> which would be unusable to an application that way (it would result in 
> drawing stairs). It's therefore important to correlate some individual events 
> before passing them on to the application.
> 
> More modern tablets do also support multiple simultaneous pens and some also 
> have multitouch support.
> 
> Is there any specification that would be suitable to serve these devices?
> 
> 
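The resolution figures in the message above can be sanity-checked; the 8-inch active-area height used here is an assumption for illustration (actual Intuos sizes vary):

```javascript
// 5080 LPI: how many tablet steps land between adjacent screen pixels
// when the tablet is mapped onto a 1080-pixel-high display?
var lpi = 5080;                    // tablet resolution, lines per inch
var activeHeightInches = 8;        // assumed active-area height
var screenPixels = 1080;           // full-HD monitor height

var totalSteps = activeHeightInches * lpi;      // 40640 steps
var stepsPerPixel = totalSteps / screenPixels;  // ~37.6 steps per pixel
// Even a ~1.1-inch-high tablet would still exceed 5 steps per pixel,
// which is why clamping pen input to cursor pixels throws data away.
```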



Re: Lazy Blob

2012-08-01 Thread Charles Pritchard
On Aug 1, 2012, at 8:44 AM, Glenn Maynard  wrote:

> On Wed, Aug 1, 2012 at 9:59 AM, Robin Berjon  wrote:
> var bb = new BlobBuilder()
> ,   blob = bb.getBlobFromURL("http://specifiction.com/kitten.png", "GET", { 
> Authorization: "Basic DEADBEEF" });
> 
> Everything is the same as the previous version but the method and some 
> headers can be set by enumerating the Object. I *think* that those are all 
> that would ever be needed.
> 
> We already have an API to allow scripts to make network requests: XHR.  
> Please don't create a new API that will end up duplicating all of that.  
> However this might be done, it should hang off of XHR.
> 
> Ideally, a new responseType could be added to XHR which operates like "blob", 
> except instead of reading the whole resource, it just performs a HEAD to 
> retrieve the response length and immediately returns the Blob, which can be 
> read to perform further Content-Range reads.  This sits relatively cleanly 
> within the XHR API.  It also has nice security properties, as if you send 
> that Blob somewhere else--possibly to a different origin--it can do nothing 
> with the Blob but read it as it was given.
> 
> The trouble is chunked responses, where the length of the resource isn't 
> known in advance, since Blobs have a static length and aren't designed for 
> streaming.


The "HEAD" request seems a little like a FileEntry for use with FileReader.

Something similar (very similar) is done for http+fuse on several platforms.

It's quite useful for working with large archives such as PDF or ZIP.
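A rough sketch of the lazy-blob idea under discussion. `LazyBlob` and `fetchRange` are hypothetical names; a real implementation would learn the size via a HEAD request and satisfy slices with Content-Range reads:

```javascript
// The size is known up front (as a HEAD request would provide), and
// bytes are pulled only when a slice is actually read.
function LazyBlob(size, fetchRange) {
  return {
    size: size,
    slice: function (start, end) {
      var stop = Math.min(end, size);
      return fetchRange(start, stop - start);  // ranged read on demand
    }
  };
}

// In-memory stand-in for a remote resource, for illustration only.
var remote = [10, 20, 30, 40, 50];
var blob = LazyBlob(remote.length, function (off, len) {
  return remote.slice(off, off + len);
});

var chunk = blob.slice(1, 3); // reads only bytes 1..2: [20, 30]
```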


-Charles

Re: Push API draft uploaded

2012-05-24 Thread Charles Pritchard

On 5/24/2012 7:08 AM, SULLIVAN, BRYAN L wrote:

OK, I corrected the [NoInterfaceObject] (I hope), and referenced HTML5 for 
"resolving a URL".

The numeric readyState was borrowed from EventSource. I will look at the 
thread, but I think this is something that I will just align with the consensus 
in the group once determined. I don't have a strong opinion either way.

Latest version is at http://dvcs.w3.org/hg/push/raw-file/default/index.html


Does the following mean that "http/https" are interpreted as EventSource 
and ws/wss as WebSockets?

Does checkRemotePermission trigger CORS checks for those protocols?

"If the url parameter is present and the user agent recognizes the url 
value as a particular type of Push service that it supports, the user 
agent must activate the service."


My read on that is yes and yes, but I wanted to double-check.
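That reading could be sketched as a scheme-based dispatch (illustrative only; the Push API draft leaves recognition of the URL up to the user agent):

```javascript
// Map a push service URL's scheme onto the transport the UA would use.
function chooseTransport(urlString) {
  var scheme = new URL(urlString).protocol;   // e.g. "https:"
  if (scheme === "http:" || scheme === "https:") return "EventSource";
  if (scheme === "ws:" || scheme === "wss:") return "WebSocket";
  return null;  // unrecognized: the UA would not activate the service
}

var a = chooseTransport("https://push.example.com/feed");  // "EventSource"
var b = chooseTransport("wss://push.example.com/socket");  // "WebSocket"
```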

-Charles



Re: [File API] File behavior under modification

2012-05-23 Thread Charles Pritchard

On 5/23/2012 3:32 PM, Glenn Maynard wrote:
On Wed, May 23, 2012 at 12:57 PM, Charles Pritchard <ch...@jumis.com> wrote:


I think that's where the spec writing for this is challenging. I'd
lean toward documenting what's really out there instead of
mandating snapshot capabilities in the file system.


It doesn't mandate snapshot capabilities.  If the file is changed, 
reading the File doesn't give you the old data; it fails with an 
error.  That's easy for the browser to check: compare the mtime of 
the file (probably both before and after the read, to avoid races).  
Native applications could fool this if they want to, but this isn't a 
problem in practice.  Also, implementations are free to use other 
mechanisms to implement the "snapshot state" concept (eg. file change 
notification APIs).


OK, I agree with this method. It'll fix both the Mozilla and WebKit 
families with respect to their current behavior. It may already be 
Mozilla's behavior (I'm a bit behind on tracking it).


-Charles


Re: [File API] File behavior under modification

2012-05-23 Thread Charles Pritchard
On May 23, 2012, at 6:58 AM, Glenn Maynard  wrote:

> On Wed, May 23, 2012 at 3:03 AM, Kinuko Yasuda  wrote:
> Just to make sure, I assume 'the underlying storage' includes memory.
> 
> Right.  For simple Blobs without a mutable backing store, all of this 
> essentially optimizes away.
> 
> We should also make it clear whether .size and .lastModifiedDate should 
> return live state or should just returning the same constant values.  (I 
> assume the latter)
> 
> It would be the values at the time of the snapshot state.  (I doubt it was 
> ever actually intended that lastModifiedDate always return the file's latest 
> mtime.  We'll find out when one of the editors gets around to this thread...)


I can't imagine that Mozilla's behavior here is intended. Only by returning a 
live mtime can authors be aware of whether or not the file has changed from a 
previous state.

We did go through this discussion quite awhile ago when I recommended file 
watcher hooks (which are available on engines like node.js).

It seems like there's a split between the ideal (take an immutable snapshot of 
the file) and the real world, where the File object represents an entry, such 
as FileEntry, not a static blob of data.

I think that's where the spec writing for this is challenging. I'd lean toward 
documenting what's really out there instead of mandating snapshot capabilities 
in the file system.

Re: IndexedDB: Binary Keys

2012-05-21 Thread Charles Pritchard
On May 21, 2012, at 5:03 PM, Jonas Sicking  wrote:

> On Mon, May 21, 2012 at 10:09 AM, Joran Greef  wrote:
>> IndexedDB supports binary values as per the structured clone algorithm
>> as implemented in Chrome and Firefox.
>> 
>> IndexedDB needs to support binary keys (ArrayBuffer, TypedArrays).
>> 
>> Many popular KV stores accept binary keys (BDB, Tokyo, LevelDB). The
>> Chrome implementation of IDB is already serializing keys to binary.
>> 
>> JS is moving more and more towards binary data across the board
>> (WebSockets, TypedArrays, FileSystemAPI). IDB is not quite there if it
>> does not support binary keys.
>> 
>> Binary keys are more efficient than Base 64 encoded keys, e.g. a 128
>> bit key in base 256 is 16 bytes, but 22 bytes in base 64.
>> 
>> Am working on a production system storing 3 million keys in IndexedDB.
>> In about 6 months it will be storing 60 million keys in IndexedDB.
>> 
>> Without support for binary keys, that's 330mb wasted storage
>> (60,000,000 * (22 - 16)) not to mention the wasted CPU overhead spent
>> Base64 encoding and decoding keys.
> 
> I agree that we should introduce this, but I think it's too late to
> add for version one (which is about to go to last call any day now).
> 
> I'd be happy to add it to version 2 though. However the current
> situation regarding binary data in Javascript is still pretty chaotic.
> See for example the resent change to switch a bunch of APIs over from
> ArrayBuffer to ArrayBufferViews.

Seems to me at this scale (the tens of millions of rows the author is 
examining), something like the filesystem API or an otherwise low-level 
solution is more appropriate.
> 

After all, we're still just talking about sqlite.

I agree on revisiting this issue.
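The storage arithmetic from the quoted message can be reproduced (Node's Buffer is used here only to illustrate; the result is in the ballpark of the "330mb" quoted above):

```javascript
// A 128-bit key costs 16 bytes raw, but 22 characters in unpadded base64.
var key = Buffer.alloc(16, 0xab);        // a 128-bit binary key
var b64 = key.toString("base64");        // 24 chars with "==" padding
var unpadded = b64.replace(/=+$/, "");   // 22 chars without padding

var keyCount = 60000000;
var wastedBytes = keyCount * (unpadded.length - key.length);
// 60,000,000 * (22 - 16) = 360,000,000 bytes of pure encoding overhead
```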


Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard

On 5/14/2012 7:24 PM, Boris Zbarsky wrote:

On 5/14/12 10:18 PM, Charles Pritchard wrote:

On 5/14/2012 7:09 PM, Boris Zbarsky wrote:

On 5/14/12 10:00 PM, Charles Pritchard wrote:

What would web fonts do in this situation, in Mozilla?


Probably cry. ;)


If I've confirmed that a font is loaded in the main thread, would it
be available to a
worker for use in rendering?


Not without some pretty serious reworking. Which might need to happen.

Of course basic text layout would also not be available without some
serious reworking (e.g. making the textrun cache threadsafe or
creating per-thread textrun caches or something), so the question of
web fonts is somewhat academic at the moment.



I meant solely for Canvas 2d.


Yes, I understand that.  Canvas 2d text still needs to be able to do 
things like font fallback, shaping, bidi, etc, etc, etc. last I checked.


Oh, the rendering isn't thread safe either? Yes, Canvas 2d text does [is 
supposed to] use all of those items.


Well, I'll give up strokeText/fillText entirely in workers if it'll get 
me the goods faster.
For a11y, I'm going to need to track my text in the main thread anyway. 
I can pre-render there if need be.


-Charles



Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard

On 5/14/2012 7:09 PM, Boris Zbarsky wrote:

On 5/14/12 10:00 PM, Charles Pritchard wrote:

What would web fonts do in this situation, in Mozilla?


Probably cry.  ;)

If I've confirmed that a font is loaded in the main thread, would it 
be available to a

worker for use in rendering?


Not without some pretty serious reworking.  Which might need to happen.

Of course basic text layout would also not be available without some 
serious reworking (e.g. making the textrun cache threadsafe or 
creating per-thread textrun caches or something), so the question of 
web fonts is somewhat academic at the moment.




I meant solely for Canvas 2d.

I can live with staying away from fillText/strokeText on a worker thread 
if I'm loading fonts.
It's been broken on the main thread anyway, requiring intermediate 
Canvas surfaces for some operations.


...

SVG image and drawImage is mixed anyway; we can't transfer the data 
between threads as drawImage SVG will usually flag the Canvas as dirty 
in implementations.


We could just use Canvas 2d to handle pattern uploads for WebGL. Seems 
like that'd work without requiring fancy footwork to gain Picture/Image 
support in the worker.

SVG images would get fixed some other day.


-Charles









Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard

On 5/14/2012 6:07 PM, Boris Zbarsky wrote:

On 5/14/12 8:55 PM, Gregg Tavares (勤) wrote:

1)  Various canvas 2d context methods depend on the styles of the
canvas to define how they actually behave.  Clearly this would need
some sort of changes for Workers anyway; the question is what those
changes would need to be.

Which methods are these?


Anything involving setting color (e.g. the strokeStyle setter, the 
fillStyle setter), due to "currentColor".  Anything involving text 
because font styles come from the element or document.


Those are the ones that come to mind offhand, but I haven't looked at 
the various recent additions to the 2d context closely.


What would web fonts do in this situation, in Mozilla? If I've confirmed 
that a font is loaded in the main thread, would it be available to a 
worker for use in rendering?


Some implementations of Canvas font methods have been buggy, but 
dropping fillText/strokeText altogether would be a loss and may look 
strange in the specs.
I've seen font not working correctly with other canvas state variables 
in WebKit + Chrome; so it's not as though font is fully supported in the 
main thread, currently.


It may be easier to postpone Image / Picture semantics in workers. I 
think patterns could still be salvaged from createPattern.

It sounds like there are practical issues in current implementations.

When authoring,
With a picture, I can still xhr request it in the worker, then I send it 
to the main thread, and load it in the main thread, then send it back to 
the worker.
As an author, if I'm loading images I expect them to be a normal part of 
the page's load time and responsiveness I can still off-load my 
js-heavy work onto workers.


In Canvas 2d anyway, I can easily get by without picture in workers: (a 
= new Picture()).src = url; a.onload = function () { ctx.drawImage(a, 0, 0); };  if I 
can use fillRect with pattern.
Otherwise, I'd have to do the extra steps of pushing pixel arrays back 
and forth.


All of the extra work is still worth it to get work done in workers, in 
cases where this level of work is needed. It's just a few extra lines of 
JS code (every time).


-Charles






Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard

On 5/14/2012 6:23 PM, Jonas Sicking wrote:

>>>  This would also require some
>>>  equivalent to requestAnimationFrame in the worker thread.

>>
>>  Agreed!
>>
>>  / Jonas

>
>
>
>  I'm a bit lost-- wouldn't we just postMessage from the document over to the 
web worker when we want a refresh?
>
>  I agree that we ought to be transferring pixel data not Canvas contexts; 
with the possible exception of CSS context.
>
>  We could just create new canvas instances inside the worker thread. I'd 
still prefer to clone CanvasPattern as a means of transferring paint over to the 
worker, though sending pixel data would work too.
>
>  I heard Picture come up-- it seems like that object might have additional 
semantics for high resolution alternatives that may need to be considered.

I was saying that I think we should transfer the context from the main
thread to the worker thread. The worker thread should then be able to
use the context to draw directly to the screen without interacting
with the main thread.

You should generally not need to transfer pixel data. But I think it
should be possible to grab pixel data on whichever thread is currently
owning a context. This will implicitly make it possible to transfer
pixel data if anyone wants to do it.


OK, that's the same concept as I was hoping for with 
document.getCSSCanvasContext.


Mozilla is a bit ahead of webkit, going with  "element(any-element)" -- 
webkit has "-webkit-canvas(css-canvas-id)".


It's a very different approach. I'd still go ahead and just have 
requestAnimationFrame pump events from the main frame.
There are so many things that could block the main frame to where an rAF 
repaint just isn't necessary from the worker-end.


It may also keep some animations running. I'm not positive, but 
something like window.prompt blocks execution on the main thread; if 
rAF were running on the worker, its repaints would still happen.
Generally, while the main thread is blocked, repainting on it is not 
going to be that useful; we're shuffling the repaints off so they aren't 
the cause of blocking.


Those are the two sides of it that I see... but it's still a very 
different proposal than the idea of just having non-transferable canvas 
contexts.


...

Boris brought up some good points about possible blocking in 
implementations when loading SVG images in the worker thread via Picture 
(or a chopped-down Image). At present, those sound more like 
implementation issues, not issues with the validity of a spec.

http://lists.w3.org/Archives/Public/public-webapps/2012AprJun/0744.html

They do sound a little unfortunate, but I'd sooner trade some possible 
blocking between a Worker and the main thread than have to carry a 
full SVG parser written in JS in my worker threads.
As an author, I've always had to take precautions with SVG and decide 
whether I want to render it myself or give myself up to the browser 
implementation.


-Charles



Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard

On 5/14/2012 6:14 PM, Gregg Tavares (勤) wrote:



On Mon, May 14, 2012 at 6:07 PM, Boris Zbarsky wrote:


On 5/14/12 8:55 PM, Gregg Tavares (勤) wrote:

   1)  Various canvas 2d context methods depend on the styles of the canvas to define how they actually behave.  Clearly this would need some sort of changes for Workers anyway; the question is what those changes would need to be.

Which methods are these?


Anything involving setting color (e.g. the strokeStyle setter, the
fillStyle setter), due to "currentColor".  Anything involving text
because font styles come from the element or document.


Good to know.

That doesn't sound like a showstopper though. If a 
canvas/CanvasSurface is available in workers, the simplest solution 
would just be that "currentColor" defaults to something ("black"?) or 
to nothing (""). Pick one.


Font is still a little tricky because fonts can be loaded via CSS. Font 
is tricky anyway, though, so it wouldn't be that much of a step backward.


Would we assume that if a font is available from the parent context it's 
going to be available to the worker?


currentColor would just default to black, as we're not talking about a 
color inherited from the DOM.




Those are the ones that come to mind offhand, but I haven't looked
at the various recent additions to the 2d context closely.



The recent additions are more proposals than additions. They're 
proposals from Tab and Ian and not yet implemented.


They revolve around a Path object, which we've not yet discussed.
Otherwise, they include lightweight nodes and DOM fallback content, 
which isn't relevant to workers.


My thinking is the same as yours: fillStyle/strokeStyle and font are the 
ones that come to mind.
Pattern and Gradient are items that could conceivably be cloned and/or 
shared.

They can both be as efficient as sending Blob via postMessage.

I think Gregg was just settling on not-sending Canvas over postMessage 
but rather creating the instance inside of each Worker.









Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard

On 5/14/2012 6:08 PM, Boris Zbarsky wrote:

On 5/14/12 8:58 PM, Charles Pritchard wrote:

I agree... Can we get this off the main thread?


"Maybe".

It would be pretty nontrivial in Gecko; last I looked it would be 
pretty painful in WebKit too.  Can't speak for other UAs.


Can it be pumped through what's essentially an iframe on a null origin?

I don't know enough about browser internals to help on this one.
Yes, loading an SVG image is a heavy call.

For 90% of the SVG content out there, it'd probably be faster to parse 
and draw the SVG via Canvas and JS.

Still, it's a hell of a lot nicer to load it via an img tag.

I'd just assumed that  was loaded off-thread / 
async much like  calls.


There's nothing that gets carried from the document to the  other 
than the width/height, which I believe is carried through.

Which is a good thing, of course.


-Charles



Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard
On May 14, 2012, at 5:50 PM, Boris Zbarsky  wrote:

> On 5/14/12 8:21 PM, Charles Pritchard wrote:
>> SVG and animated gif would render the same as it would in an
>> Image that has not been added to the dom or is otherwise display: none.
> 
> I'm not sure that would be workable in a worker for SVG, for the same reasons 
> that responseXML is not available in workers: it involves DOM elements.  
> Unless the SVG rasterization happens on the main thread under the hood and 
> the raster data is then sent over to the worker.  This might have ... 
> surprising performance characteristics.

I agree... Can we get this off the main thread? Svg via image is not quite the 
same as svg via HTMLDocument (I guess I mean, embedded).

Afaik, svg via image does not have any script controls but it does have xsl 
things. 

I've never tried to abuse the distinction via embedded blob Uris and such.

Put in other words:  may use an entirely different 
implementation than  in an HTML document.


Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard
On May 14, 2012, at 5:03 PM, Gregg Tavares (勤)  wrote:

> 
> 
> On Mon, May 14, 2012 at 4:42 PM, Jonas Sicking  wrote:
> On Mon, May 14, 2012 at 3:28 PM, Glenn Maynard  wrote:
> > On Mon, May 14, 2012 at 3:01 PM, Gregg Tavares (勤)  wrote:
> >
> >> I'd like to work on exposing something like CANVAS to web workers.
> >>
> >> Ideally, however it works, I'd like to be able to
> >>
> >> *) get a 2d context in a web worker
> >
> >
> > I'd recommend not trying to tackle 2d and 3d contexts at once, and only
> > worrying about WebGL to start.
> >
> > Another issue: rendering in a worker thread onto a canvas which is displayed
> > in the main thread.  This needs to be solved in a way that doesn't cause the
> > asynchronous nature of what's happening to be visible to scripts.  toDataURL
> > and toBlob would probably need to be prohibited on the canvas element.  I'm
> > not sure what the actual API would look like.
> 
> If/when we do this, I think it should be done in such a way that the
> main window can't access the canvas object at all. Similar to what
> happens when an ArrayBuffer is transferred to a Worker using
> structured cloning. Once a canvas is transferred to a Worker, any
> access to it should throw or return null/0/"". If you want to transfer
> pixel data to the main thread, it seems less racy to do that by
> getting the pixel data in the Worker which owns the canvas and then
> transfer that to the main thread using postMessage.
> 
> How about separating the canvasy parts of canvas from canvas and the imagy 
> parts of image from image.
> 
> In other words, Imagine canvas is implemented like this
> 
> class Canvas : public HTMLElement {
>   private:
> CanvasSurface* m_surface;  // everything about canvas that is not 
> HTMLElement
> };
> 
> And that Image is similarly implemented as
> 
> class Image : public HTMLElement {
>   private:
> Picture* m_picture;  // everything about Image that is not HTMLElement
> }
> 
> now imagine you can instantiate inner implementation of these things. The 
> parts that are not HTMLElement
> 
> var canvasSurface = new CanvasSurface();
> var ctx = canvasSurface.getContext("2d");
> var pic = new Picture;
> pic.src = "http://someplace.com/someimage.jpg";
> pic.onload = function() {
>    ctx.drawImage(pic, 0, 0);
> }
> 
> Let's assume you can instantiate these things in either the page or a worker. 
> Neither can be transferred.
> 
> Would that work? What problems would that have?
> 

Seems fine. If it works with XHR I'd imagine it'll work with Picture.

Blob uri behavior may come up as an issue. I believe they should "just work". 
SVG and animated gif would render the same as it would in an Image that has not 
been added to the dom or is otherwise display: none.

I suspect we may have blob Uris corresponding to live video streams. If those 
come about and work with onload I'd expect them to work here (simply grabbing 
the last/first available frame).






>  
> 
> > This would also require some
> > equivalent to requestAnimationFrame in the worker thread.
> 
> Agreed!
> 
> / Jonas
> 


Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard
On May 14, 2012, at 4:42 PM, Jonas Sicking  wrote:

> On Mon, May 14, 2012 at 3:28 PM, Glenn Maynard  wrote:
>> On Mon, May 14, 2012 at 3:01 PM, Gregg Tavares (勤)  wrote:
>> 
>>> I'd like to work on exposing something like CANVAS to web workers.
>>> 
>>> Ideally, however it works, I'd like to be able to
>>> 
>>> *) get a 2d context in a web worker
>> 
>> 
>> I'd recommend not trying to tackle 2d and 3d contexts at once, and only
>> worrying about WebGL to start.
>> 
>> Another issue: rendering in a worker thread onto a canvas which is displayed
>> in the main thread.  This needs to be solved in a way that doesn't cause the
>> asynchronous nature of what's happening to be visible to scripts.  toDataURL
>> and toBlob would probably need to be prohibited on the canvas element.  I'm
>> not sure what the actual API would look like.
> 
> If/when we do this, I think it should be done in such a way that the
> main window can't access the canvas object at all. Similar to what
> happens when an ArrayBuffer is transferred to a Worker using
> structured cloning. Once a canvas is transferred to a Worker, any
> access to it should throw or return null/0/"". If you want to transfer
> pixel data to the main thread, it seems less racy to do that by
> getting the pixel data in the Worker which owns the canvas and then
> transfer that to the main thread using postMessage.
> 
>> This would also require some
>> equivalent to requestAnimationFrame in the worker thread.
> 
> Agreed!
> 
> / Jonas



I'm a bit lost-- wouldn't we just postMessage from the document over to the web 
worker when we want a refresh?

I agree that we ought to be transferring pixel data not Canvas contexts; with 
the possible exception of CSS context.

We could just create new canvas instances inside the worker thread. I'd still 
prefer to clone CanvasPattern as a means of transferring paint over to the 
worker, though sending pixel data would work too.

I heard Picture come up-- it seems like that object might have additional 
semantics for high resolution alternatives that may need to be considered.

-Charles


Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard

On 5/14/2012 1:16 PM, Anne van Kesteren wrote:

On Mon, May 14, 2012 at 10:01 PM, Gregg Tavares (勤)  wrote:

I'd like to work on exposing something like CANVAS to web workers.

Ideally, however it works, I'd like to be able to

*) get a 2d context in a web worker
*) get a WebGL context in a web worker
*) download images in a web worker and use the images with both 2d contexts and
WebGL contexts

Any thoughts?

Have we gotten any further with use cases? See
http://lists.w3.org/Archives/Public/public-whatwg-archive/2010Mar/thread.html#msg144
for an old use case thread that went nowhere. Or


1. Speeding up "onmousemove"-based drawing:
In my drawing projects (based on mouse/pen input), we lose mouse events 
/ pen pressure information when the main thread is busy rendering what 
the user is drawing.

Processing the drawing commands off-thread would lighten the load.

2. Avoiding blocking during redrawing of complex scenes or pre-rendering 
of animations.


With complex scenes, where we're repainting, we don't particularly want 
to block the main thread while a scene is loading.
But, we'd also like to use as much horsepower as the user's machine can 
lend.


A complex scene may block for a few seconds -- we can of course use 
green threading approaches, but that adds quite a bit of extra 
guess-work and does not fully exploit the user's machine for speed.





Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread Charles Pritchard

On 5/14/2012 1:01 PM, Gregg Tavares (勤) wrote:

I'd like to work on exposing something like CANVAS to web workers.

Ideally, however it works, I'd like to be able to

*) get a 2d context in a web worker
*) get a WebGL context in a web worker
*) download images in a web worker and use the images with both 2d 
contexts and WebGL contexts


Any thoughts?


As far as implementation, I'd love to be able to pass webkit's 
document.getCSSCanvasContext('2d') around.

It seems like a safe place to experiment.

I can get a lot done with CanvasPattern as a transferable, without 
needing to add Image (or video) into the worker context.


Notes:
1. getCSSCanvasContext is non-standard. It works with CSS image 
-webkit-canvas.
2. I heard that a more generic "-moz-element()" paint server is supposed 
to replace -webkit-canvas in time.
3. Passing the CSS canvas context would let me render off-frame and 
update a canvas visible on the document automatically.


Canvas -should- have toBlob and a typed array buffer for ImageData.
They are both useful for passing image data back to the main frame.


From my experience with WebGL, I think it should be considered with 
added care and lower priority.
There are stability, speed and memory issues. WebGL in workers seems to 
augment 2d; are there other big benefits?



-Charles







Re: Adding a paint event to HTMLElement to support Web Components / Shadow DOM

2012-05-08 Thread Charles Pritchard
He's talking about running a clip() method on the component.

Effective for Canvas but relatively minor in the scope of all that goes on in a 
page.

I think the bigger issue is about sizing components and meeting the scale. 
Clipping is a nice area for high efficiency but less of an issue in practice.

Look at the audio tag as its controls are given a smaller and smaller width: 
WebKit neglects to hide the volume bar, though it does shrink the progress 
bar fairly well. That's the big issue.





On May 8, 2012, at 10:42 PM, Boris Zbarsky  wrote:

> On 5/9/12 1:20 AM, Gregg Tavares (勤) wrote:
>> I don't think I understand how requestAnimationFrame would work here.
>> Maybe my example was poor. I'm not suggesting a live constantly updating
>> audio wave. Instead I'm suggesting a static WebComponent that is heavy
>> to render. For example the wave display in an audio editing tool. Let's
>> say I had an app that displayed several audio tracks, some of those
>> tracks are scrolled off the page. (like this
>> http://www.newfreedownloads.com/imgs/17189-w520.jpg) The tracks
>> components are set to 100% width so that sizing the window ends up
>> re-sizing all the components
> 
> Then their resize handlers do requestAnimationFrame(rerender_me, 
> this_element).  If the element is visible, rerender_me is called.  If not, 
> it's queued up by the browser until the element becomes visible, no?
> 
> That's assuming support for the element argument to requestAnimationFrame, of 
> course.
> 
> -Boris
> 



Re: Adding a paint event to HTMLElement to support Web Components / Shadow DOM

2012-05-08 Thread Charles Pritchard
On May 8, 2012, at 5:56 PM, Boris Zbarsky  wrote:

> On 5/8/12 8:30 PM, Gregg Tavares (勤) wrote:
>> AFAICT the resize event only fires on window.
> 
> There have been proposals over the years to change that.


CSS Transform and zoom semantics are the steps I still don't catch in my Canvas 
apps. I will, once I port to web components.


> 
>> Imagine a relatively heavy to repaint WebComponent like one that draws
>> a representation of an audio wave. If that component is hidden behind
>> some other component it would be nice if it didn't re-draw itself.
> 
> This seems like a use case for requestAnimationFrame's second argument, and 
> browser quality-of-implementation issues.
> 
>> Does a 'paint' event make sense?
> 
> Possibly, depending on how it's defined; actually running arbitrary script 
> during painting is pretty undesirable (e.g. ideally painting happens on a 
> totally separate thread from script, and having to synchronize them is not 
> particularly ideal).

We've got transition end and rAF. I think we're good on this one.
> 

In fact, might just hook into the transition events with web components.


Re: Adding a paint event to HTMLElement to support Web Components / Shadow DOM

2012-05-08 Thread Charles Pritchard
I think we've got request animation frame for the invisible content scenario.

For the height/width/resolution, CSS selectors and SVG are probably the easier 
case.

Canvas however, that's not so easy.

-Charles



On May 8, 2012, at 5:30 PM, Gregg Tavares (勤)  wrote:

> Imagine you want to make a Web Component that draws a graph
> Imagine that you'd like the graph to show more data if it's larger and less 
> if it's smaller. In other words, you don't want it to scale the data.
> Imagine you set that component's size to width: 100%; height: 100% to scale 
> to its container
> 
> Question: How does the Web Component know when to repaint its content?
> 
> AFAICT the resize event only fires on window. If the point of Web Components 
> is to allow people to make self contained components it seems like there 
> might be the need for some way of letting the component know it needs to 
> repaint.
> 
> Resize is just one example. 
> 
> Imagine a relatively heavy to repaint WebComponent like one that draws a 
> representation of an audio wave. If that component is hidden behind some 
> other component it would be nice if it didn't re-draw itself. When it becomes 
> visible, though, it needs to know it should redraw itself.
> 
> Does a 'paint' event make sense? 
> 



Re: [websockets] Moving Web Sockets back to LCWD; is 15210 a showstopper?

2012-05-08 Thread Charles Pritchard
The setTimeout comment in the w3 tracker is a pretty good reason. I 
strongly agree with Olli Pettay's comment.


onemptybuffer would bring sockets in line with the server-side "ondrain" 
event that we see in node.js and other socket APIs.
I disagree with Hixie's rationale that we need to give vendors time to 
catch-up before asking them to implement that event.


For a counter-point, AFAIK, we're not doing heavy multiplexing or other 
such exotic items. That's fine. This feature is basic.


-Charles

On 5/8/2012 12:56 AM, Maciej Stachowiak wrote:

I think it would be reasonable to defer the feature requested in 15210 to a 
future version of Web Sockets API. It would also be reasonable to include it if 
anyone feels strongly. Was a reason cited for why 15210 should be considered 
critical? I could not find one in the minutes.

Cheers,
Maciej


On May 3, 2012, at 3:41 PM, Arthur Barstow  wrote:


During WebApps' May 2 discussion about the Web Sockets API CR, four Sockets API 
bugs were identified as high priority to fix: 16157, 16708, 16703 and 15210. 
Immediately after that discussion, Hixie checked in fixes for 16157, 16708 and 
16703, and these changes will require the spec going back to LC.

Since 15210 remains open, before I start a CfC for a new LC, I would like some 
feedback on whether the new LC should be blocked until 15210 is fixed, or if we 
should move toward a new LC without the fix (and thus consider 15210 for the 
next version of the spec). If you have any comments, please send them by May 10.

-AB

[Mins] http://www.w3.org/2012/05/02-webapps-minutes.html#item08
[CR] http://www.w3.org/TR/2011/CR-websockets-20111208/
[Bugz] http://tinyurl.com/Bugz-Web-Socket-API
[15210] https://www.w3.org/Bugs/Public/show_bug.cgi?id=15210








Inking out a PenObserver, was Re: GamepadObserver (ie. MutationObserver + Gamepad)

2012-05-04 Thread Charles Pritchard
Glenn, all of your points apply well to pressure sensitive pen input. 
Pen vendors are not on the list, but their devices are quite similar.
I've changed the subject line and my responses are to your gamepad notes 
in respect to their application to pen input.


Pen input in this case is pressure sensitive input with high sensitivity 
to time, pressure and coordinates, with a sample rate faster than 60hz. 
Pen devices are commonly connected via USB.
The Gamepad API is an experimental API which deals with similar device 
characteristics. Pen vendors are not active with the W3C though the 
InkML specification did make it into recommendation status.


On 5/3/2012 9:14 PM, Glenn Maynard wrote:
Here are some piecemeal thoughts on the subject of gamepads and the 
"gamepad" spec.  I haven't closely followed earlier discussions (and 
there don't seem to have been any in a while), so much of this may 
have been covered before.


- It should be possible to tell what's changed, not just the current 
state of the device.  Needing to compare each piece of input to the 
previous state is cumbersome.


There are a variety of values that may not ever change. Some devices may 
simply not support them. With pen, we've got items like azimuth 
and altitude. Few pens support them.
As authors, we just ignore that input or do feature tests (if pressure 
exists, and those metrics are still 0, then they're likely not supported).
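The feature test described above can be sketched as follows (supportsAxis is a hypothetical helper, not part of any API; `samples` stands in for input records collected from move events):

```javascript
// Sketch of the feature test above: if a field is absent, or present but
// never leaves zero across the collected samples, treat it as unsupported.
function supportsAxis(samples, key) {
  return samples.some(s => key in s && s[key] !== 0);
}
```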



- Native deadzone handling (by the OS or internally to the device) is 
the only way to correctly handle deadzones when you don't have 
intimate knowledge of the hardware.  It should be explicitly (if 
non-normatively) noted that native deadzone handling should be used 
when available.
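When native deadzone handling isn't available, a common software fallback is a radial deadzone that rescales the remaining range so the output stays continuous at the threshold. A sketch (not part of the Gamepad spec):

```javascript
// Radial deadzone fallback: inputs inside `threshold` map to zero, and the
// remaining magnitude is rescaled to [0, 1] so there is no jump at the edge.
function applyDeadzone(x, y, threshold) {
  const mag = Math.hypot(x, y);
  if (mag < threshold) return { x: 0, y: 0 };
  const scale = (mag - threshold) / (1 - threshold) / mag;
  return { x: x * scale, y: y * scale };
}
```

This is still a guess at the hardware's real noise floor, which is the argument for doing it natively.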


- It's very important that it's possible to receive gamepad data 
without polling.  We don't need more web pages running setTimeout(0) 
loops as fast as browsers will let them, which is what it encourages.  
Not all pages are based on a requestAnimationFrame loop.


Recently, I found a setTimeout(20ms, then rAF) to work reasonably well 
on my little laptop.  Any faster and I lose more data points; any 
slower and the lag [drawing to the screen] is disorienting.
That's just a subjective observation on a light-duty laptop with one 
vendor, just using a track pad.



- An API that can only return the current state loses button presses 
if they're released too quickly.  It's common to press a button while 
the UI thread is held up for one reason or another--and you also can't 
assume that users can't press and release a button in under 16ms (they 
can).


The rAF loop can in some circumstances "slow" the polling or otherwise 
result in data loss.
It's a real pain. We get our best fidelity if the screen is empty and 
we're not drawing. Once we start drawing, we're bound to lose some data.




- APIs like that also lose the *order* of button presses, when they're 
pressed too quickly.  (I've encountered this problem in my own 
experience; it's definitely possible--it's not even particularly 
hard--for a user to press two buttons in a specific order in under 16ms.)


My pen has two buttons in addition to two high sensitivity pressure 
inputs. Wacom introduced "onpressurechange" events in their Flash SDK. 
In some sense, it's an experimental plugin for WebKit as it runs in Air, 
but it's really closer to the Flash side of things.




I'd suggest a halfway point between polling and events: a function to 
retrieve a list of device changes since the last call, and an event 
fired on the object the first time a new change is made.  For example,


var gamepad = window.openGamepad(0);
gamepad.addEventListener("input", function(e) {
var changes = gamepad.readChanges();
}, false);

with changes being an array of objects, each object describing a 
timestamped change of state, eg:


changes = [
{
button: 0,
state: 1.0,
lastState: 0.85,
timestamp: 1336102719319
}
]


At what point would we consider entries to be stale? Is that something 
that should be a concern?

It'd take quite a lot of changes to be a problem.
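The proposed drain model is easy to prototype in plain JS. This sketch (ChangeBuffer and recordChange are invented names, standing in for the browser's internal recording) shows why queued changes preserve both sub-frame presses and their order:

```javascript
// Every state change is queued with its previous value and a timestamp;
// readChanges() drains the queue. Presses shorter than a frame, and their
// ordering, survive until the page asks for them.
class ChangeBuffer {
  constructor() {
    this.pending = [];
    this.last = {};        // last seen value per button
  }
  recordChange(button, state, timestamp) {
    this.pending.push({
      button,
      state,
      lastState: this.last[button] ?? 0,
      timestamp,
    });
    this.last[button] = state;
  }
  readChanges() {          // drain everything since the previous call
    const out = this.pending;
    this.pending = [];
    return out;
  }
}
```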

As a security issue -- do we need to point out that two windows should 
not have concurrent access to the same gamepad (such as applies to 
keyboard and mouse)?

That's another issue I noticed with pen tablet plugins.

-Charles



Re: Clipboard API spec should specify beforecopy, beforecut, and beforepaste events

2012-05-01 Thread Charles Pritchard
On May 1, 2012, at 4:07 PM, Scott González  wrote:

> On Tue, May 1, 2012 at 5:08 PM, Boris Zbarsky  wrote:
> If we _do_ decide to specify them then their interaction with script running 
> inside the events that changes the focus needs to be very carefully 
> specified, since changing focus will change what cut/copy/paste behavior.  I 
> would also need to see some _really_ convincing use cases.
> 
> I recall moving focus for paste events in order to figure out what is being 
> pasted. I believe this is common in WYSIWYG editors; a new element is created 
> and focus is moved to that element, then the paste occurs, then the element 
> is inspected for the content and the editor does whatever it needs to (like 
> cleaning up junk from pasted Word documents). Obviously if there was a 
> cleaner way to get the contents, like Microsoft APIs for accessing the 
> clipboard, then this wouldn't be needed.

I can't recall whether or not onbeforecopy gives us a chance to set the 
clipboard data; I know it mostly just acts as an oncontextmenu hook.

I think that onbeforepaste does help with avoiding the need to keep a form 
control in focus.

The difficulty with current APIs is probably why cloud9 has a small width 
textarea to both handle caret position and clipboard events.

Re: [webcomponents] HTML Parsing and the <template> element

2012-04-23 Thread Charles Pritchard

On 4/18/2012 2:54 PM, Dimitri Glazkov wrote:

>  I am also pretty scared of tokenising stuff like it is markup but then
>  sticking it into a different document. It seems like very surprising
>  behaviour. Have you considered (and this may be a very bad idea) exposing
>  the markup inside the template as a text node, but exposing the
>  corresponding DOM as an IDL attribute on the HTMLTemplateElement (or
>  whatever it's called) interface?

This seems like a neat idea -- though I haven't thought about this in depth yet.


What about: My  and 
template!

template.shadowRoot.firstChild.nextSibling.tagName == 'MYSPECIALTAGS';

Given how much time templates would save me in development, I am 
completely fine with the extra  tag and .shadowRoot accessor.


-Charles





Re: Should send() be able to take an ArrayBufferView?

2012-04-11 Thread Charles Pritchard

On 4/11/2012 2:50 PM, Boris Zbarsky wrote:

On 4/11/12 5:47 PM, Charles Pritchard wrote:

On 4/11/2012 2:41 PM, Kenneth Russell wrote:
On Wed, Apr 11, 2012 at 10:04 AM, Boris Zbarsky 
wrote:

> Seems like right now passing a typed array to send() requires a bit of extra
> hoop-jumping to pass the .buffer instead, right? Is that desirable?

It may be convenient to add an overload to send() (presumably on both
XHR and WebSocket? Any others?) accepting ArrayBufferView. As pointed


It's convenient.

xhr.send(view); // shorthand
xhr.send(view.buffer.slice(view.byteOffset,
view.byteOffset+view.byteLength)); // longhand.


Note that those have different performance characteristics, too; the 
latter involves a buffer copy.


Are we stuck with a buffer copy (or copy on write) mechanism anyway?

What is the spec on changing the buffer after xhr.send?

example:
xhr.send(bigView.buffer); bigView[0] = 255; bigView[1000] = 255;




Re: Should send() be able to take an ArrayBufferView?

2012-04-11 Thread Charles Pritchard

On 4/11/2012 2:41 PM, Kenneth Russell wrote:

On Wed, Apr 11, 2012 at 10:04 AM, Boris Zbarsky  wrote:

>  Seems like right now passing a typed array to send() requires a bit of extra
>  hoop-jumping to pass the .buffer instead, right?  Is that desirable?

It may be convenient to add an overload to send() (presumably on both
XHR and WebSocket? Any others?) accepting ArrayBufferView. As pointed


It's convenient.

xhr.send(view); // shorthand
xhr.send(view.buffer.slice(view.byteOffset, 
view.byteOffset+view.byteLength)); // longhand.


Kenneth,

Can we get a voice from MS? They've been supporting typed arrays in IE10 
for awhile.

If we're going to have this method, I'd really like to see it in IE10.


Boris, How do we feature test for support of the shorthand method?


-Charles




Re: Should send() be able to take an ArrayBufferView?

2012-04-11 Thread Charles Pritchard

On 4/11/2012 1:16 PM, Boris Zbarsky wrote:

On 4/11/12 4:06 PM, Glenn Maynard wrote:

It's a bit worse than that, actually: if you want to send only part of a
buffer, you have to create a whole new ArrayBuffer and copy the data
over.  If you just pass "view.buffer", you'll send the *whole*
underlying buffer, not just the slice represented by the view.


Oh, that's just broken.

That argues for the removal of the ArrayBuffer overload, indeed, and 
just leaving the ArrayBufferView version.




Adding .send(ArrayBufferView) doesn't seem like it'd hurt anything; it 
would help in the case of subarray.

Removing .send(ArrayBuffer) would hurt things.

I'd imagine that we saw send(ArrayBuffer) first because that's just how 
the semantics for ArrayBuffer have been used.


It may cut down on constructors/header code within cpp implementations.


As an author, I'm going to be stuck to .send(ArrayBuffer) for awhile, 
given the distance between this proposal and pick-up by MS and Apple.


Again, if there's anything happening with TC39 that would make 
.send(BinaryData) available, I'm up for waiting on it, instead of jumping 
in early with ArrayBufferView.


-Charles



Re: Should send() be able to take an ArrayBufferView?

2012-04-11 Thread Charles Pritchard

On 4/11/2012 1:49 PM, Boris Zbarsky wrote:

On 4/11/12 4:40 PM, Charles Pritchard wrote:

That argues for the removal of the ArrayBuffer overload, indeed, and
just leaving the ArrayBufferView version.


I've got no idea where TC39 is taking things.


ArrayBuffer and ArrayBufferView and such are not specced in TC39 at 
the moment, and I'm not aware of plans to change that (though there 
are other plans like Binary Data that are related).



I think that's the bigger issue here.


How so?


If ArrayBufferView becomes a JS semantic, some of this is moot


Which part?


When/if we start using the "Binary Data" instead of re-purposing typed 
arrays.



All of the postMessage semantics use ArrayBuffer AFAIK.

postMessage does arbitrary object graphs, which can include either 
ArrayBuffers or ArrayBufferViews.  If you try to postMessage a typed 
array, the receiver will get a typed array, as expected.


My understanding is/was that an ArrayBufferView will be copied whereas 
an ArrayBuffer will be neutered/transferred.
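For reference, the semantics that eventually shipped: an ArrayBuffer is only neutered (detached) when it is listed as a transferable; otherwise it, or a typed array wrapping it, is deep-copied. In a modern engine this can be observed with structuredClone, a later API that uses the same algorithm as postMessage:

```javascript
// Clone vs. transfer, using structuredClone (same algorithm as postMessage).
const a = new ArrayBuffer(8);
const copied = structuredClone(a);                   // deep copy; `a` untouched
const b = new ArrayBuffer(8);
const moved = structuredClone(b, { transfer: [b] }); // `b` is detached

console.log(a.byteLength);  // 8
console.log(b.byteLength);  // 0 -- detached after transfer
```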





I know you've been circling this issue for awhile, so I'll put it out
there again: yes, using typed arrays is difficult.


Sure, but we shouldn't make it _more_ difficult if we can avoid it.


These practices are already established. You're exploring ways to make 
using them easier.


Typed Arrays have matured. They're available in all major 
implementations and as far as I'm aware, compatible across the 
implementations.


I know they can be a pain in the ass, but they are functioning.

-Charles



Re: Should send() be able to take an ArrayBufferView?

2012-04-11 Thread Charles Pritchard

On 4/11/2012 1:16 PM, Boris Zbarsky wrote:

On 4/11/12 4:06 PM, Glenn Maynard wrote:

It's a bit worse than that, actually: if you want to send only part of a
buffer, you have to create a whole new ArrayBuffer and copy the data
over.  If you just pass "view.buffer", you'll send the *whole*
underlying buffer, not just the slice represented by the view.


Oh, that's just broken.

That argues for the removal of the ArrayBuffer overload, indeed, and 
just leaving the ArrayBufferView version.


I've got no idea where TC39 is taking things. I think that's the bigger 
issue here.
Yes, I've been bitten by trying to use .subarray  instead of .slice (as 
Glenn points out).
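The subarray/slice trap is easy to demonstrate: subarray returns a view over the same storage, while slice copies, which is exactly why passing view.buffer to send() would transmit the whole underlying allocation. (Typed-array slice postdates this thread; at the time, the copying longhand went through ArrayBuffer's slice, as shown earlier.)

```javascript
// subarray() shares storage; slice() copies. Sending view.buffer would
// transmit the whole 8-byte buffer even though the view covers only 4 bytes.
const buf = new Uint8Array([1, 2, 3, 4, 5, 6, 7, 8]);
const sub = buf.subarray(2, 6);  // view onto bytes 2..5, same buffer
const sli = buf.slice(2, 6);     // independent 4-byte copy

sub[0] = 99;  // writes through to buf[2]
sli[0] = 42;  // leaves buf untouched
```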


If ArrayBufferView becomes a JS semantic, some of this is moot; and 
ArrayBuffer is still very necessary for compatibility.


All of the postMessage semantics use ArrayBuffer AFAIK.


I know you've been circling this issue for a while, so I'll put it out 
there again: yes, using typed arrays is difficult.



-Charles




Re: Should send() be able to take an ArrayBufferView?

2012-04-11 Thread Charles Pritchard
Yes; .buffer has stable semantics across many apis.

It does feel awkward when first using it, but the design makes sense after some 
experience.
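The hoop-jumping Boris asks about can be sketched with a hypothetical send() that, like the then-current spec, accepts only an ArrayBuffer — callers holding a typed array must reach for .buffer themselves:

```javascript
// Hypothetical send(): accepts ArrayBuffer only, as the spec then required.
function send(data) {
  if (!(data instanceof ArrayBuffer)) throw new TypeError("ArrayBuffer required");
  return data.byteLength; // stand-in for "bytes queued for sending"
}

const view = new Uint16Array([1, 2, 3]); // 3 elements x 2 bytes = 6 bytes
// send(view)  — would throw: a view is not an ArrayBuffer
console.log(send(view.buffer)); // 6
```

Note that `send(view.buffer)` sends the *entire* underlying buffer, which is exactly the trap discussed elsewhere in this thread.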


On Apr 11, 2012, at 10:04 AM, Boris Zbarsky  wrote:

> Seems like right now passing a typed array to send() requires a bit of extra 
> hoop-jumping to pass the .buffer instead, right?  Is that desirable?
> 
> -Boris
> 



Re: Speech API Community Group

2012-04-03 Thread Charles Pritchard
I'd like to encourage everyone interested in the Speech API to join the 
mailing list:

http://lists.w3.org/Archives/Public/public-speech-api/

For those interested in more hands-on interaction, there's the CG:
http://www.w3.org/community/speech-api/

For some archived mailing list discussion, browse the old XG list:
http://lists.w3.org/Archives/Public/public-xg-htmlspeech/

It seems like we can move this chatter over to public-speech-api and off 
of the webapps list.


-Charles


On 4/3/2012 1:08 PM, Michael Bodell wrote:


A little bit of historical context and resource references might be 
helpful for some on the email thread.


While this is still an early stage for a community group, if one will 
happen, it actually isn’t early for the community as a group to talk 
about this.  In many ways we’ve already done the initial incubation 
and community discussion and investigation for this space in the HTML 
Speech XG.  This led to the XG's use case and requirements document:


http://www.w3.org/2005/Incubator/htmlspeech/live/requirements.html

which were then refined to a prioritized requirement list after 
soliciting community input:


http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/#prioritized

As I read it, the requirements Milan, Jim, and Raj discussed are part 
of FPR7 [Web apps should be able to request speech service different 
from default] and FPR12 [Speech services that can be specified by web 
apps must include network speech services], both of which were voted 
to have “Strong Interest” by the community.


Further work from these requirements led to the community coming up 
with a proposal, which is ready now to be taken to a standards track 
process, that was published in the XG final report:


http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

Hopefully we can all properly leverage the work the community has 
already done.


Michael Bodell

Co-chair HTML Speech XG

*From:*Jerry Carter [mailto:je...@jerrycarter.org]
*Sent:* Tuesday, April 03, 2012 12:50 PM
*To:* Raj (Openstream); Milan Young; Jim
*Cc:* Glen Shires; public-xg-htmlspe...@w3.org; public-webapps@w3.org
*Subject:* Re: Speech API Community Group

We can discuss this in terms of generalities without any resolution, 
so let me offer two more concrete use cases:


My friend Jóse is working on a personal site to track teams and
player statistics at the Brazil 2014 World Cup.  He recognizes
that the browser will define a default language through the HTTP
Accept-Language header, but knows that speakers may code switch in
their requests (e.g. Spanish + English or Portuguese + English or
) or be better served by using native pronunciations (Jesus =
/heːzus/ vs. /ˈdʒiːzəs/).  Hence, he requires a resource that can
provide support for Spanish, English, and Portuguese and that can
also support multiple simultaneous languages.

These are two solid requirements.  A browser encountering the page 
might (1) be able to satisfy these requirements, (2) require user 
permission before accessing such a resource, or (3) be unable to meet 
the request.


My colleague Jim has another application for which hundreds of
hours have been invested to optimize the performance for a specific
recognition resource.  Security considerations further restrict
the physical location of conforming resources.  His page requires
a very specific resource.

These are two solid requirements.  A browser encountering the page 
might (1) be able to satisfy these requirements, (2) require user 
permission before accessing such a resource, or (3) be unable to meet 
the request.


There are indeed commercial requirements around the capabilities of 
resources.  We are in full agreement.  It is important to be able to 
list requirements for conforming resources and to ensure that the 
browser is enforcing those requirements.  That stated, the application 
author does not care where such a conforming resource resides so long 
as it is available to the targeted user population.  The user does not 
care where the resource resides so long as it works well and does not 
cost too much to use.


The trick within a Speech JavaScript API is to define what 
characteristics may be specified for resource selection or, 
alternatively, to determine that such definition is external to the 
immediate API: for instance,  there might be a separate spec which is 
referenced by the Speech JavaScript API.  It is too early to tell what 
direction the group might go.  It is already clear that there are 
strong opinions as to what criteria may be necessary for resource 
selection. *Refusing to participate unless one's specific criteria are 
addressed strikes me as quite inappropriate at this early stage.*


-=- Jerry

On Apr 3, 2012, at 3:15 PM, Raj (Openstream) wrote:




Perhaps true for users of the applications. But authors would need 
resource specification (location),
hence clearly specifying how network/loc

Re: File API "oneTimeOnly" is too poorly defined

2012-03-29 Thread Charles Pritchard
Any feedback on what exists in modern implementations? MS seems to have the 
most hard-line stance when talking about this API.

When it comes to it, we ought to look at what shipped in the latest 
harvest: IE10, Opera 12, Chrome 19, and so forth.



On Mar 28, 2012, at 6:12 PM, Glenn Maynard  wrote:

> On Wed, Mar 28, 2012 at 7:49 PM, Jonas Sicking  wrote:
> > This would still require work in each URL-consuming spec, to define taking a
> > reference to the underlying blob's data when it receives an object URL.  I
> > think this is inherent to the feature.
> 
> This is an interesting idea for sure. It doesn't solve any of the
> issues I brought up, so we still need to define when dereferencing
> happens. But it does solve the problem of the URL leaking if it never
> gets dereferenced, which is nice.
> 
> Right, that's what I meant above.  The "dereferencing" step needs to be 
> defined no matter what you do.  This just makes it easier to define 
> (eliminating task ordering problems as a source of problems).
> 
> Also, I still think that all APIs should consistently do that as soon as it 
> first sees the URL.  For example, XHR should do it in open(), not in send().  
> That makes it easy for developers to understand when the dereferencing 
> actually happens (in the general case, for all APIs).
> 
> One other thing: "dereferencing" should take a reference to the underlying 
> data of the Blob, not the Blob itself, so it's unaffected by neutering 
> (transfers and Blob.close).  That avoids a whole category of problems.
> 
> -- 
> Glenn Maynard
> 
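Glenn's last point — dereference the Blob's underlying *data*, not the Blob object — can be modeled with toy stand-ins (hypothetical names, not a real API): if open() snapshots the data, a later close()/neuter cannot break the send.

```javascript
// Toy model: MiniBlob/MiniXHR are illustrative only.
class MiniBlob {
  constructor(bytes) { this.bytes = bytes; }
  close() { this.bytes = null; } // stand-in for neutering
}
class MiniXHR {
  open(blob) { this.snapshot = blob.bytes.slice(); } // dereference data here
  send() { return this.snapshot.length; }            // unaffected by close()
}

const b = new MiniBlob([1, 2, 3]);
const x = new MiniXHR();
x.open(b);
b.close();             // neutered between open() and send()
console.log(x.send()); // 3 — send still has the data
```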


Re: HELP

2012-03-27 Thread Charles Pritchard
Wrong list. But for the sake of web crawlers: send may require an argument in 
some implementations. Use null.

Such as: xhr.send(null);

And for other readers, this message I'm replying to may be spam. Intelligent 
noise.

-Charles



On Mar 27, 2012, at 1:15 AM, joseph godwin  wrote:

> l cannot access google apps business, l try to check my domain but not 
> responding and it bring out all these 
>  Can't connect to server:
> [Exception... "Not enough arguments [nsIXMLHttpRequest.send]"  nsresult: 
> "0x80570001 (NS_ERROR_XPC_NOT_ENOUGH_ARGS)"  location: "JS frame :: 
> https://www.google.com/a/cpanel/resources/js/signup_bin.js :: _checkDomain :: 
> line 33"  data: no]
> plesae help l want to sign up for google collaboration for business


Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Charles Pritchard

On 3/7/12 3:56 PM, Feras Moussa wrote:

Then let's try this again.

var a = new Image();
a.onerror = function() { console.log("Oh no, my parent was neutered!"); };
a.src = URL.createObjectURL(blob);
blob.close();

Is that error going to hit?

until it has been revoked, so in your example onerror would not be hit
due to calling close.

var a = new Worker('#');
a.postMessage(blob);
blob.close();

The above would work as expected.


Well that all makes sense; so speaking for myself, I'm still confused 
about this one thing:



 xhr.send(blob);
 blob.close(); // method name TBD



 In our implementation, this case would fail. We think this is reasonable 
because the



So you want this to be a situation where we monitor progress events of 
XHR before releasing the blob?
It seems feasible to monitor the upload progress, but it is a little 
awkward.


-Charles



Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Charles Pritchard

On 3/7/12 12:34 PM, Kenneth Russell wrote:

On Wed, Mar 7, 2012 at 12:02 PM, Charles Pritchard  wrote:

On Mar 7, 2012, at 11:38 AM, Kenneth Russell  wrote:


I believe that we should fix the immediate problem and add a close()
method to Blob. I'm not in favor of adding a similar method to
ArrayBuffer at this time and therefore not to Transferable. There is a
high-level goal to keep the typed array specification as minimal as
possible, and having Transferable support leak in to the public
methods of the interfaces contradicts that goal.

I think there's broad enough consensus amongst vendors to table the discussion 
about adding close to Transferable.

Would you please let me know why you believe ArrayBuffer should not have a 
close method?

I would like some clarity here. The Typed Array spec would not be cluttered by 
the addition of a simple close method.

It's certainly a matter of opinion -- but while it's only the addition
of one method, it changes typed arrays' semantics to be much closer to
manual memory allocation than they currently are. It would be a
further divergence in behavior from ordinary ECMAScript arrays.

The TC39 working group, I have heard, is incorporating typed arrays
into the language specification, and for this reason I believe extreme
care is warranted when adding more functionality to the typed array
spec. The spec can certainly move forward, but personally I'd like to
check with TC39 on semantic changes like this one. That's the
rationale behind my statement above about preferring not to add this
method at this time.


Searching through the net tells me that this has been a rumor for years.

I agree with taking extreme care -- so let's isolate one more bit of 
information:


Is ArrayBuffer being proposed for TC39 incorporation, or is it only the 
Typed Arrays? The idea here is to alter ArrayBuffer, an object which can 
be neutered via transfer map. It seems a waste to have to create a 
Worker to close down buffer views.


Will TC39 have anything to say about the "neuter" concept and/or Web 
Messaging?



Again, I'm bringing this up for the same practical experience that 
Blob.close() was brought up. I do appreciate that read/write allocation 
is a separate semantic from write-once/read-many allocation.


I certainly don't want to derail the introduction of Typed Array into 
TC39. I don't want to sit back for two years either, while the 
ArrayBuffer object is in limbo.


If necessary, I'll do some of the nasty test work of creating a worker 
simply to destroy buffers, and report back on it.

var worker = new Worker('trash.js');
worker.postMessage(null, [bufferToClose]); // the transfer list neuters the buffer
worker.terminate(); // from the parent it's terminate(), not close()
vs.
bufferToClose.close();



-Charles



Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Charles Pritchard

On Mar 7, 2012, at 11:38 AM, Kenneth Russell  wrote:

> On Tue, Mar 6, 2012 at 6:29 PM, Glenn Maynard  wrote:
>> On Tue, Mar 6, 2012 at 4:24 PM, Michael Nordman  wrote:
>>> 
 You can always call close() yourself, but Blob.close() should use the
 "neuter" mechanism already there, not make up a new one.
>>> 
>>> Blobs aren't transferable, there is no existing mechanism that applies
>>> to them. Adding a blob.close() method is independent of making blob's
>>> transferable, the former is not prerequisite on the latter.
>> 
>> 
>> There is an existing mechanism for closing objects.  It's called
>> "neutering".  Blob.close should use the same terminology, whether or not the
>> object is a Transferable.
>> 
>> On Tue, Mar 6, 2012 at 4:25 PM, Kenneth Russell  wrote:
>>> 
>>> I would be hesitant to impose a close() method on all future
>>> Transferable types.
>> 
>> 
>> Why?  All Transferable types must define how to neuter objects; all close()
>> does is trigger it.
>> 
>>> I don't think adding one to ArrayBuffer would be a
>>> bad idea but I think that ideally it wouldn't be necessary. On memory
>>> constrained devices, it would still be more efficient to re-use large
>>> ArrayBuffers rather than close them and allocate new ones.
>> 
>> 
>> That's often not possible, when the ArrayBuffer is returned to you from an
>> API (eg. XHR2).
>> 
>>> This sounds like a good idea. As you pointed out offline, a key
>>> difference between Blobs and ArrayBuffers is that Blobs are always
>>> immutable. It isn't necessary to define Transferable semantics for
>>> Blobs in order to post them efficiently, but it was essential for
>>> ArrayBuffers.
>> 
>> 
>> No new semantics need to be defined; the semantics of Transferable are
>> defined by postMessage and are the same for all transferable objects.
>> That's already done.  The only thing that needs to be defined is how to
>> neuter an object, which is what Blob.close() has to define anyway.
>> 
>> Using Transferable for Blob will allow Blobs, ArrayBuffers, and any future
>> large, structured clonable objects to all be released with the same
>> mechanisms: either pass them in the "transfer" argument to a postMessage
>> call, or use the consistent, identical close() method inherited from
>> Transferable.  This allows developers to think of the transfer list as a
>> list of objects which won't be needed after the postMessage call.  It
>> doesn't matter that the underlying optimizations are different; the visible
>> side-effects are identical (the object can no longer be accessed).
> 
> Closing an object, and neutering it because it was transferred to a
> different owner, are different concepts. It's already been
> demonstrated that Blobs, being read-only, do not need to be
> transferred in order to send them efficiently from one owner to
> another. It's also been demonstrated that Blobs can be resource
> intensive and that an explicit closing mechanism is needed.
> 
> I believe that we should fix the immediate problem and add a close()
> method to Blob. I'm not in favor of adding a similar method to
> ArrayBuffer at this time and therefore not to Transferable. There is a
> high-level goal to keep the typed array specification as minimal as
> possible, and having Transferable support leak in to the public
> methods of the interfaces contradicts that goal.

I think there's broad enough consensus amongst vendors to table the discussion 
about adding close to Transferable.

Would you please let me know why you believe ArrayBuffer should not have a 
close method?

I would like some clarity here. The Typed Array spec would not be cluttered by 
the addition of a simple close method.

I work much more with ArrayBuffer than Blob. I suspect others will too as they 
progress with more advanced and resource intensive applications.

What is the use-case distinction between close of immutable blob and close of a 
mutable buffer?

-Charles


Re: [FileAPI] Deterministic release of Blob proposal

2012-03-06 Thread Charles Pritchard

On 3/6/12 5:12 PM, Feras Moussa wrote:

>
>  frameRef.src = URL.createObjectURL(blob);
>  blob.close() // method name TBD
>
>  In my opinion, the first (using xhr) should succeed.  In the second, 
frameRef.src works,
>  but subsequent attempts to mint a Blob URI for the same 'blob' resource 
fail.  Does this
>  hold true for you?

We agree that subsequent attempts to mint a blob URI for a blob that has been 
closed should fail, which is what I tried to clarify in my comments in 
'section 6'.
As an aside, the above example shows navigation to a Blob URI - this is not 
something we currently support or intend to support.



Then let's try this again.

var a = new Image();
a.onerror = function() { console.log("Oh no, my parent was neutered!"); };
a.src = URL.createObjectURL(blob);
blob.close();

Is that error going to hit?

var a = new Worker('#');
a.postMessage(blob);
blob.close();

Is that blob going to make it to the worker?


-Charles
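The semantics under debate in this exchange can be modeled with a toy class (hypothetical names, not a real API): operations on a closed blob fail with a ClosedError, while data captured *before* close() — e.g. a postMessage already sent — remains usable.

```javascript
// Toy model of the proposed close()/ClosedError behavior.
class ToyBlob {
  constructor(text) { this._data = text; this._closed = false; }
  close() { this._data = null; this._closed = true; } // "neuter"
  slice(start, end) {
    if (this._closed) throw new Error("ClosedError"); // proposed failure mode
    return new ToyBlob(this._data.slice(start, end));
  }
}

const blob = new ToyBlob("my big blob");
const kept = blob.slice(0, 2); // captured before close — stays usable
blob.close();
console.log(kept._data);       // "my"
// blob.slice(0, 1) would now throw: ClosedError
```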



Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-06 Thread Charles Pritchard
On Mar 6, 2012, at 2:25 PM, Kenneth Russell  wrote:

> On Tue, Mar 6, 2012 at 1:31 PM, Arun Ranganathan
>  wrote:
>> Ken,
>> 
>>> I'm not sure that adding close() to Transferable is a good idea. Not
>>> all Transferable types may want to support that explicit operation.
>>> What about adding close() to Blob, and having the neutering operation
>>> on Blob be defined to call close() on it?
>> 
>> 
>> Specifically, you think this is not something ArrayBuffer should inherit?  
>> If it's also a bad idea for MessagePort, then those are really our only two 
>> use cases of Transferable right now.  I'm happy to create something like a 
>> close() on Blob.
> 
> MessagePort already defines a close() operation, so there's really no
> question of whether its presence is a good or bad idea there. A
> close() operation seems necessary in networking style APIs.
> 
> I would be hesitant to impose a close() method on all future
> Transferable types. I don't think adding one to ArrayBuffer would be a
> bad idea but I think that ideally it wouldn't be necessary. On memory
> constrained devices, it would still be more efficient to re-use large
> ArrayBuffers rather than close them and allocate new ones.


By definition, Transferable objects can be neutered; we're talking about an 
explicit method for it. After that, it's up to implementers.

I prefer .close to .release. 

An ArrayBuffer using 12megs of ram is something I want to release ASAP on 
mobile.

.close would still allow for the optimization you're implying in memory mapping.





> 
> 
> On Tue, Mar 6, 2012 at 1:34 PM, Michael Nordman  wrote:
>> Sounds like there's a good case for an explicit blob.close() method
>> independent of 'transferable'. Separately defining blobs to be
>> transferrable feels like an unneeded complexity. A caller wishing to
>> neuter after sending can explicit call .close() rather than relying on
>> more obscure artifacts of having also put the 'blob' in a
>> 'transferrable' array.
> 
> This sounds like a good idea. As you pointed out offline, a key
> difference between Blobs and ArrayBuffers is that Blobs are always
> immutable. It isn't necessary to define Transferable semantics for
> Blobs in order to post them efficiently, but it was essential for
> ArrayBuffers.
> 

Making Blob a Transferable may simplify the structured clone algorithm; Blob and 
File would no longer be explicitly listed. Adding close to Transferable would 
simplify three different objects -- ArrayBuffer, MessagePort and Blob. In 
theory anyway.

While Blob doesn't need the transfer map optimization, it may be helpful in the 
context of web intents postMessage as it would release the Blob references from 
one window, possibly making GC a little easier. That's just a guess... But this 
thread is about enhancing the process. Seems reasonable that this would be a 
side effect.

File.close() may have implementation side effects, such as releasing read locks 
on underlying files.

-Charles


Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Charles Pritchard

On 3/5/2012 5:56 PM, Glenn Maynard wrote:
On Mon, Mar 5, 2012 at 7:04 PM, Charles Pritchard wrote:


Do you see old behavior working something like the following?


var blob = new Blob(["my new big blob"]);
var keepBlob = blob.slice();
destination.postMessage(blob, '*', [blob]); // is try/catch needed here?


You don't need to do that.  If you don't want postMessage to transfer 
the blob, then simply don't include it in the transfer parameter, and 
it'll perform a normal structured clone.  postMessage behaves this way 
in part for backwards-compatibility: so exactly in cases like this, we 
can make Blob implement Transferable without breaking existing code.


See http://dev.w3.org/html5/postmsg/#posting-messages and similar 
postMessage APIs.
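The backwards-compatible behavior Glenn describes — clone by default, neuter only when transferred — is visible directly with structuredClone, which uses the same algorithm as postMessage (modern browsers and Node >= 17):

```javascript
const buf = new ArrayBuffer(8);

const clone = structuredClone(buf); // no transfer list: plain copy
console.log(buf.byteLength);        // 8 — source still usable

const moved = structuredClone(buf, { transfer: [buf] });
console.log(buf.byteLength);   // 0 — source neutered
console.log(moved.byteLength); // 8
```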


Web Intents won't have a transfer map argument.
http://dvcs.w3.org/hg/web-intents/raw-file/tip/spec/Overview.html#widl-Intent-data

For the Web Intents structured cloning algorithm, Web Intents would be 
inserting into step 3:

If input is a Transferable object, add it to the transfer map.
http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#internal-structured-cloning-algorithm

Then Web Intents would move the first section of the structured cloning 
algorithm to follow the internal cloning algorithm section, swapping 
their order.

http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#safe-passing-of-structured-data

That's my understanding.

Something like this may be necessary if Blob were a Transferable:
var keepBlob = blob.slice();
var intent = new Intent("-x-my-intent", blob);
navigator.startActivity(intent, callback);


And we might have an error on postMessage stashing it in the
transfer array if it's not a Transferable on an older browser.





An example of how easily the neutered concept applies to Transferable:

var blob = new Blob(["my big blob"]);
blob.close();


I like the idea of having Blob implement Transferable and adding close 
to the Transferable interface.
File.close could have a better relationship with the cache and/or locks 
on data.



Some history on Transferable and structured clones:

Note: MessagePort does have a close method and is currently the only 
Transferable mentioned in WHATWG:

http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#transferable-objects

ArrayBuffer is widely implemented. It was the second item to implement 
Transferable:

http://www.khronos.org/registry/typedarray/specs/latest/#9

Subsequently, ImageData adopted Uint8ClampedArray for one of its 
properties, adopting TypedArrays:

http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#imagedata

This has led to some instability in the structured clone algorithm for 
ImageData as the typed array object for ImageData is read-only.

https://www.w3.org/Bugs/Public/show_bug.cgi?id=13800

ArrayBuffer is still in a strawman state.

-Charles




Re: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Charles Pritchard

On 3/5/2012 3:59 PM, Glenn Maynard wrote:
On Fri, Mar 2, 2012 at 6:54 PM, Feras Moussa wrote:


To address this issue, we propose that a close method be added to
the Blob

interface.

When called, the close method should release the underlying
resource of the

Blob, and future operations on the Blob will return a new error, a
ClosedError.

This allows an application to signal when it's finished using the
Blob.


This is exactly like the "neuter" concept, defined at 
http://dev.w3.org/html5/spec/common-dom-interfaces.html#transferable-objects.  
I recommend using it.  Make Blob a Transferable, and have close() 
neuter the object.  The rest of this wouldn't change much, except 
you'd say "if the object has been neutered" (or "has the neutered flag 
set", or however it's defined) instead of "if the close method has 
been called".


Originally, I think it was assumed that Blobs don't need to be 
Transferable, because they're immutable, which means you don't 
(necessarily) need to make a copy when transferring them between 
threads.  That was only considering the cost of copying the Blob, 
though, not the costs of delayed GC that you're talking about here, so 
I think transferable Blobs do make sense.


Also, the close() method should probably go on Transferable (with a 
name less likely to clash, eg. "neuter"), instead of as a one-off on 
Blob.  If it's useful for Blob, it's probably useful for ArrayBuffer 
and all other future Transferables as well.




Glenn,

Do you see old behavior working something like the following?

var blob = new Blob(["my new big blob"]);
var keepBlob = blob.slice();
destination.postMessage(blob, '*', [blob]); // is try/catch needed here?
blob = keepBlob; // keeping a copy of my blob still in thread.

Sorry to cover too many angles: if Blob is Transferable, then it'll 
neuter; so if we do want a local copy, we'd use slice ahead of time to 
keep it.
And we might have an error on postMessage stashing it in the transfer 
array if it's not a Transferable on an older browser.

The new behavior is pretty easy.
var blob = new Blob(["my big blob"]);
blob.close(); // My blob has been neutered before it could procreate.

-Charles


Re: [FileAPI] Deterministic release of Blob proposal

2012-03-02 Thread Charles Pritchard

On 3/2/2012 4:54 PM, Feras Moussa wrote:


At TPAC we discussed the ability to deterministically close blobs with 
a few others.

...

To address this issue, we propose that a close method be added to the 
Blob interface.

When called, the close method should release the underlying resource of 
the Blob, and future operations on the Blob will return a new error, a 
ClosedError.

This allows an application to signal when it's finished using the Blob.




I suppose the theory of Blob is that it can be written to disk. The 
other theory is that reference counting can somehow work magic.

I'm not sure, but it came up before.

I brought up a close mechanism for ArrayBuffer, which is (I believe) 
supposed to be in memory, always.

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-January/029741.html

Also referenced:
http://www.khronos.org/webgl/public-mailing-list/archives/1009/msg00229.html

ArrayBuffer can now be closed out of the current thread via Transferable 
semantics.

I don't know if it disappears into space with an empty postMessage target.

I don't think that works for Blob, though.

Yes, I'd like to see immediate mechanisms to cleanup Blob and ArrayBuffer.
In practical use, it wasn't an issue on the desktop and the iPhone 
hadn't picked up the semantics yet.


But gosh, it's no fun trying to navigate memory management on a mobile 
device. Thus my low-memory event thread.


-Charles


Re: FileSystem API: Adding file size field to Metadata?

2012-02-28 Thread Charles Pritchard
On Feb 28, 2012, at 1:52 PM, Darin Fisher  wrote:

> On Tue, Feb 28, 2012 at 10:47 AM, Kinuko Yasuda  wrote:
> Hi,
> 
> While looking at the FileSystem API draft I noticed that we only expose 
> 'modificationTime' in 'Metadata' object.  Since FileEntry itself doesn't have 
> 'size' field unlike File object maybe it's reasonable to add 'size' field to 
> Metadata?
> 
> http://www.w3.org/TR/file-system-api/#the-metadata-interface
> 
> Without adding this we can indirectly get the file size by creating 'File' 
> object via FileEntry.file() method and accessing File.size, but when an app 
> wants to display the modificationTime and file size at once (and it sounds 
> very plausible use case) the app needs to call two different async methods-- 
> which doesn't sound very nice.  WDYT?
> 
> Thanks,
> Kinuko
> 
> 
> 
> 
> I think this is a nice improvement.  File size is very obviously something 
> one might expect to be included in meta data for a file :-)

Content type may be nice as well. Whatever the OS reports.

-Charles
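The awkwardness Kinuko describes — two async calls to show one row of modification time plus size — can be sketched with toy stand-ins (names follow the draft API, behavior is faked for illustration):

```javascript
// Toy stand-ins for FileEntry.getMetadata() and FileEntry.file():
// Metadata carries modificationTime but not size, so the app must make a
// second async hop just to learn the size (and, per the suggestion above,
// possibly the content type).
const fileEntry = {
  getMetadata: cb => cb({ modificationTime: new Date(0) }),
  file: cb => cb({ size: 1024, type: "text/plain" }),
};

let row;
fileEntry.getMetadata(meta => {
  fileEntry.file(file => {      // second round-trip just for size
    row = { mtime: meta.modificationTime, size: file.size };
  });
});
console.log(row.size); // 1024 — two calls for what could be one Metadata read
```

Adding a size field to Metadata would collapse this into a single getMetadata() call.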

[webapps-component] Web and Canvas Components, select, audio controls and ARIA

2012-02-23 Thread Charles Pritchard

Regarding UI components in web applications

For me it's been an uphill battle to have Canvas surfaces recognized as 
a legitimate surface for interactive UI controls. I know XBL advocates 
have had quite a battle in developing a markup syntax for packaging UI 
components. I suppose I should be grateful. Without these battles, I'd 
have nothing to talk about...


What say you all about recognizing this situation as a class of problems 
and addressing it as such?


I've yet to reach a shared reality with Dimitri Glazkov, as he authors 
web components, I've yet to reach a compromise with Tab Atkins and Ian 
Hickson, the editor and assistant person-thing of HTML5, and I can 
barely hold a consensus with Rich Schwerdtfeger, a previous editor and 
prime mover of ARIA. Yet, these all live in the same realm of "Web Apps 
UI". They are somewhat separable from Web Apps APIs.


While I seem to have connected with Richard on the concept that ARIA 
should be applied to UI components (not that difficult, since that 
matches his world view), I'm struggling with others to gain consensus on 
the importance of UI APIs.


Tab and Ian have gone with a "pure" markup model, where the UA 
could/should do all the work declaratively. Dimitri is in between, 
providing a markup language and shadow DOM, but not yet addressing other 
imperative APIs.



That's my situation with spec contributors. And now onto element cases:

<select> has been around a long long time. It's a unique element in 
that it may display outside of the browser window. When I tap on a 
select element, the options list may go beyond the bounds of the client 
window. Neat stuff. It's one of the untouchable elements when it comes 
to CSS. Everyone and their grandmother has programmed some form of 
select via . That's a big part of the motivation behind ARIA's 
creation.


<select multiple> hasn't changed much. We can now style the font and 
background, and maybe the scroll bars if we use some vendor-prefixed 
CSS. That's about it. <select multiple> is a prime example of 
CSS+HTML Forms failing to work together.


<audio>, now that's an interesting case. This media element was 
designed with flexibility in mind. The flexibility being that, you use 
@controls, or you roll your own UI and we'll give you everything in the 
API you need to do it.


It's great, but the built-in UI suffers from the all-or-nothing 
phenomenon. And that sucks, because I have made CSS hacks with the 
audio tag. I re-discovered that the CSS zoom property is important for 
web components. I've used CSS transforms and had as much fun with audio 
@controls as I have with repainting the canvas tag to meet my visibility 
needs. That's all great, but I came up against an obstacle in 
implementations. I can't push and pull the <audio> tag out of the DOM 
without causing a stutter in the audio playing in Chrome. I filed a bug, 
sure, but I lost my ability to really use the audio tag in a flexible 
manner with @controls. I'm forced to implement my own @controls UI, even 
though I really would prefer to use the one packaged with the browser. 
All or nothing, and when something minor breaks, I'm back to nothing. 
I'd really like to use the progress meter and the play and volume 
buttons that are implemented for <audio controls>. But I can't, because 
it's all or nothing.


Then there's ARIA. There are some great techniques to expose data with 
ARIA. Now that CSS selectors are working with ARIA, I tell you my world 
is getting easier each day. But even ARIA can't help me with audio. I 
can't mark up my custom play, pause, and "audio is playing right now" 
elements in ARIA. So the moment I remove my <audio> tag from the DOM, 
because it fails on me, I also remove the opportunity for any standard 
method of pausing or detecting that audio is playing in my window.


While we can argue that these are bugs, or just shortcomings of browser 
vendors who haven't yet reached the ideal situation, that "final 
convergence" the HTML5 editor is hoping for, I'd prefer to have a 
focused discussion on how we can meet 
obligations of programmatic access and of re-usability. I'd like to find 
a means of testing, discussing, and solving issues with these UI 
components. ARIA has done a lot for that. I can prove that an AT is 
getting sufficient access to a component. I can prove that the scripting 
environment also has sufficient access. I want to carry that work forward.


To do that, I need to have a consensus of some sort in webapps that we 
are talking about the same thing.


Can I provide details to an <audio controls> UI that allows the progress 
meter to monitor progress? I've got everything but a means to say: yeah 
dude, you're at minute 1 and 20 seconds. Can I provide details to the UA 
and AT that an audio element is currently playing and putting out some 
volume? Can I write a <select> element in canvas where I have indeed 
applied style to individual <option> elements, and have it work as well 
as a native element, browser zoom and all? If I can do that,
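The progress-meter question above is mostly a data-mapping problem; here is a minimal sketch of the values a custom player would need to hand over to a built-in meter (all names are illustrative, not a proposed API):

```javascript
// Sketch: map a custom player's state onto what a progress meter needs.
// progressState and its fields are illustrative names only.
function progressState(currentTimeSec, durationSec) {
  const clamped = Math.max(0, Math.min(currentTimeSec, durationSec));
  const minutes = Math.floor(clamped / 60);
  const seconds = Math.floor(clamped % 60);
  return {
    // meter position, 0..1
    fraction: durationSec > 0 ? clamped / durationSec : 0,
    // "you're at minute 1 and 20 seconds" -> "1:20"
    label: `${minutes}:${String(seconds).padStart(2, "0")}`,
  };
}

const s = progressState(80, 240); // 1:20 into a 4-minute clip
// s.label === "1:20"
```

The point is how little data is missing: given just a current time and a duration, the browser's own meter UI could be driven by a custom source.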

Re: CG for Speech JavaScript API

2012-02-14 Thread Charles Pritchard
AFAIK, the grammar part of the spec, along with speech strings for TTS, ought to 
be included. If it's a discussion about speech, it ought to be a full 
discussion.

It takes only minutes to route speech through translation and TTS services 
online.

Let's take a look at it.





On Feb 14, 2012, at 3:09 PM, "b...@pettay.fi"  wrote:

> On 02/15/2012 12:58 AM, Charles Pritchard wrote:
>> CG sounds good, but I agree that the technical aspects of speech are
>> not a good match for the webapps mailing list.
>> 
>> This topic is heavy in linguistics; other work in web apps is not.
>> 
>> I'd like to feel free to explore IPA, code switching, grammar;
>> voicexml, a whole group of topics not particularly relevant to the
>> webapps mailing list.
> CG won't be anything about VoiceXML or similar.
> 
> 
>> 
>> I will certainly post threads to webapps when appropriate. The API
>> hooks for webkitspeech;
> The CG will be about this. API for speech. Certainly it is very
> different from the current webkitspeech stuff, but still an API for speech.
> 
> 
>> the applicability of speech to the IME API--
>> those I will post to webapps when they are mature.
>> 
>> But most of the issues around speech are not issues that I'd bring up
>> in webapps.
>> 
>> -Charles
>> 
>> 
>> 
>> On Feb 14, 2012, at 2:45 PM, Olli Pettay
>> wrote:
>> 
>>> So, if I haven't made it clear before, doing the initial
>>> standardization work in CG sounds ok to me. I do expect that there
>>> will be a WG eventually, but perhaps CG is a faster and more
>>> lightweight way to start - well continue from what XG did.
>>> 
>>> -Olli
>>> 
>>> 
>>> On 01/31/2012 06:01 PM, Glen Shires wrote:
>>>> We at Google propose the formation of a new Community Group to
>>>> pursue a JavaScript Speech API. Specifically, we are proposing
>>>> this Javascript API [1], which enables web developers to
>>>> incorporate speech recognition and synthesis into their web
>>>> pages, and supports the majority of use-cases in the Speech
>>>> Incubator Group's Final Report [2]. This API enables developers
>>>> to use scripting to generate text-to-speech output and to use
>>>> speech recognition as an input for forms, continuous dictation
>>>> and control. For this first specification, we believe this
>>>> simplified subset API will accelerate implementation,
>>>> interoperability testing, standardization and ultimately
>>>> developer adoption. However, in the spirit of consensus, we are
>>>> willing to broaden this subset API to include additional
>>>> Javascript API features in the Speech Incubator Final Report.
>>>> 
>>>> We believe that forming a Community Group has the following
>>>> advantages:
>>>> 
>>>> - It’s quick, efficient and minimizes unnecessary process
>>>> overhead.
>>>> 
>>>> - We believe it will allow us, as a group, to reach consensus in
>>>> an efficient manner.
>>>> 
>>>> - We hope it will expedite interoperable implementations in
>>>> multiple browsers. (A good example is the Web Media Text Tracks
>>>> CG, where multiple implementations are happening quickly.)
>>>> 
>>>> - We propose the CG will use the public-webapps@w3.org
>>>> as its mailing list to provide
>>>> visibility to a wider audience, with a balanced web-centric view
>>>> for new JavaScript APIs.  This arrangement has worked well for
>>>> the HTML Editing API CG [3]. Contributions to the specification
>>>> produced by the Speech API CG will be governed by the Community
>>>> Group CLA and the CG is responsible for ensuring that all
>>>> Contributions come from participants that have agreed to the CG
>>>> CLA.  We believe the response to the CfC [4] has shown
>>>> substantial interest and support by WebApps members.
>>>> 
>>>> - A CG provides an IPR environment that simplifies future
>>>> transition to standards track.
>>>> 
>>>> Google plans to supply an implementation and a test suite for
>>>> this specification, and will commit to serve as editor.  We hope
>>>> that others will support this CG as they had stated support for
>>>> the similar WebApps CfC. [4]
>>>> 
>>>> Bjorn Bringert Satish Sampath Glen Shires
>>>> 
>>>> [1] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
>>>> [2] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
>>>> [3] http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1402.html
>>>> [4] http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0315.html
>>> 
>>> 
>> 
> 



Re: CG for Speech JavaScript API

2012-02-14 Thread Charles Pritchard
CG sounds good, but I agree that the technical aspects of speech are not a good 
match for the webapps mailing list.

This topic is heavy in linguistics; other work in web apps is not.

I'd like to feel free to explore IPA, code switching, grammar; voicexml, a 
whole group of topics not particularly relevant to the webapps mailing list.

I will certainly post threads to webapps when appropriate. The API hooks for 
webkitspeech; the applicability of speech to the IME API-- those I will post to 
webapps when they are mature.

But most of the issues around speech are not issues that I'd bring up in 
webapps.

-Charles



On Feb 14, 2012, at 2:45 PM, Olli Pettay  wrote:

> So, if I haven't made it clear before,
> doing the initial standardization work in CG sounds ok to me.
> I do expect that there will be a WG eventually, but perhaps
> CG is a faster and more lightweight way to start - well continue from
> what XG did.
> 
> -Olli
> 
> 
> On 01/31/2012 06:01 PM, Glen Shires wrote:
>> We at Google propose the formation of a new Community Group to pursue a
>> JavaScript Speech API. Specifically, we are proposing this Javascript
>> API [1], which enables web developers to incorporate speech recognition
>> and synthesis into their web pages, and supports the majority of
>> use-cases in the Speech Incubator Group's Final Report [2]. This API
>> enables developers to use scripting to generate text-to-speech output
>> and to use speech recognition as an input for forms, continuous
>> dictation and control. For this first specification, we believe
>> this simplified subset API will accelerate implementation,
>> interoperability testing, standardization and ultimately developer
>> adoption. However, in the spirit of consensus, we are willing to broaden
>> this subset API to include additional Javascript API features in the
>> Speech Incubator Final Report.
>> 
>> We believe that forming a Community Group has the following advantages:
>> 
>> - It’s quick, efficient and minimizes unnecessary process overhead.
>> 
>> - We believe it will allow us, as a group, to reach consensus in an
>> efficient manner.
>> 
>> - We hope it will expedite interoperable implementations in multiple
>> browsers. (A good example is the Web Media Text Tracks CG, where
>> multiple implementations are happening quickly.)
>> 
>> - We propose the CG will use the public-webapps@w3.org
>>  as its mailing list to provide visibility
>> to a wider audience, with a balanced web-centric view for new JavaScript
>> APIs.  This arrangement has worked well for the HTML Editing API CG [3].
>> Contributions to the specification produced by the Speech API CG will be
>> governed by the Community Group CLA and the CG is responsible for
>> ensuring that all Contributions come from participants that have agreed
>> to the CG CLA.  We believe the response to the CfC [4] has shown
>> substantial interest and support by WebApps members.
>> 
>> - A CG provides an IPR environment that simplifies future transition to
>> standards track.
>> 
>> Google plans to supply an implementation and a test suite for this
>> specification, and will commit to serve as editor.  We hope that others
>> will support this CG as they had stated support for the similar WebApps
>> CfC. [4]
>> 
>> Bjorn Bringert
>> Satish Sampath
>> Glen Shires
>> 
>> [1]
>> http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
>> [2] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
>> [3] http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1402.html
>> [4] http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0315.html
> 
> 



Re: Elements and Blob

2012-02-14 Thread Charles Pritchard
This was covered and dismissed in earlier form with element.saveData semantics 
in IE.

It's really unnecessary. We can cover the desired cases without this level of 
change.



On Feb 14, 2012, at 12:08 PM, Bronislav Klučka  
wrote:

> Hi,
> regarding current discussion about Blobs, URL, etc. I'd like to have 
> following proposition:
> every element would have following additional methods/fields:
> 
> Blob function saveToBlob();
> that would return a blob containing element data  (e.g. for img element that 
> would be image data, for p element that would be basically innerHTML, for 
> input it would be current value, for script it would be either script element 
> content [if exists], or it would be empty blob [if src is used]). With CORS 
> applied here. There are progress events needed.
> 
> void function loadFromBlob(Blob blob);
> that would load the blob content as element content (e.g. for img element it 
> would display image data in that blob [no changes to src attribute], for p 
> element that would be basically innerHTML, for input it serve as value, for 
> script this would load data as script (and element content) and execute it 
> [no changes to src attribute]). Function should create no reference between 
> element and blob, just load blob data. There are progress events needed.
> 
> attribute Blob blob;
> that would do the same as loadFromBlob, but it would also create reference 
> between element and blob
> 
> 
> and why that:
> 1/ saveToBlob - would create easy access to any element data, we are already 
> talking about media elements (canvas, image), I see no point of limiting it. 
> Do you want blob from image or textarea? Just one function.
> 
> 2/ loadFromBlob, blob - could solve current issue with createObjectUrl (that 
> functionality would remain as it is): no reference issues, intuitive usage
> 
> 
> 
> Brona
> 
> 
> 
> 
> 
> 
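For concreteness, the saveToBlob() dispatch Bronislav proposes might look like the following sketch, which uses plain descriptor objects in place of live DOM elements and Node's global Blob in place of the browser's; all names and the descriptor shape are illustrative, not a proposed API:

```javascript
// Sketch of the proposed per-element saveToBlob() semantics.
// el is a plain descriptor, e.g. { tag: "input", value: "abc" }.
function saveToBlob(el) {
  switch (el.tag) {
    case "img":
      // image data bytes, in the element's own format
      return new Blob([el.imageData ?? new Uint8Array(0)], { type: el.mimeType ?? "" });
    case "input":
      // current value
      return new Blob([el.value ?? ""], { type: "text/plain" });
    case "script":
      // inline content, or empty when @src is used
      return new Blob([el.src ? "" : (el.text ?? "")], { type: "text/javascript" });
    default:
      // basically innerHTML
      return new Blob([el.innerHTML ?? ""], { type: "text/html" });
  }
}
```

A real implementation would also need the CORS checks and progress events the proposal mentions; this only shows the dispatch.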



Re: Synchronous postMessage for Workers?

2012-02-14 Thread Charles Pritchard

On 2/14/2012 5:31 AM, Arthur Barstow wrote:

On 2/14/12 2:02 AM, ext David Bruant wrote:

Le 13/02/2012 20:44, Ian Hickson a écrit :

Should we just tell authors to get used to the async style?

I think we should. More constructs are coming in ECMAScript. Things
related to language concurrency should probably be left to the core
language unless there is an extreme need (which there isn't as far as I
know).


David - if you have some recommended reading(s) re concurrency 
constructs that are relevant to this discussion and are coming to 
ECMAScript, please let me know.


(I tend to agree if there isn't some real urgency here, we should be 
careful [with our specs]).


We could still use some kind of synchronous semantic for passing 
gestures between frames.


This issue has popped up in various ways with Google Chrome extensions.

We can't have a user click on frame (a) and have frame (b) treat it as a 
gesture, for things like popup windows and the like.


This is of course intentional, for the purpose of security. But if both 
frames want it to happen, it's a side effect of the asynchronous nature 
of postMessage.






Re: [FileAPI] createObjectURL isReusable proposal

2012-02-14 Thread Charles Pritchard

On 2/14/2012 5:35 AM, Bronislav Klučka wrote:



On 14.2.2012 5:56, Jonas Sicking wrote:

On Thu, Feb 2, 2012 at 4:40 PM, Ian Hickson  wrote:

On Thu, 2 Feb 2012, Arun Ranganathan wrote:

2. Could we modify things so that img.src = blob is a reality? Mainly,
if we modify things for the *most common* use case, that could be 
useful

in mitigating some of our fears. Hixie, is this possible?

Anything's possible, but I think the pain here would far outweigh the
benefits. There would be some really hard questions to answer, too 
(e.g.

what would innerHTML return? If you copied such an image from a
contentEditable section and pasted it lower down the same section, 
would

it still have the image?).

We could define that it returns an empty src attribute, which would
break the copy/paste example. That's the same behavior you'd get with
someone revoking the URL upon load anyway.

/ Jonas



The point of a reusable Blob URL is compatibility with regular URLs; 
not having reusable URLs would create an unpleasant dichotomy in data 
manipulation...


What do you think of a global release mechanism? Such as 
URL.revokeAllObjectUrls();
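A global release mechanism like the suggested URL.revokeAllObjectUrls() could be approximated today with a tracking wrapper. The following is a hedged sketch with injectable create/revoke functions so it does not assume a browser environment; makeObjectURLPool and its method names are illustrative:

```javascript
// Sketch: track every object URL handed out so all of them can be
// released in one call. In a browser, createImpl/revokeImpl would be
// URL.createObjectURL and URL.revokeObjectURL.
function makeObjectURLPool(createImpl, revokeImpl) {
  const live = new Set();
  return {
    createURL(obj) {
      const url = createImpl(obj);
      live.add(url);
      return url;
    },
    revokeAll() {
      for (const url of live) revokeImpl(url);
      const count = live.size;
      live.clear();
      return count; // number of URLs released
    },
  };
}
```

This is only a userland stand-in; a native revokeAllObjectUrls() would also cover URLs created outside the wrapper.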





Re: CfC: Proposal to add web packaging / asset compression

2012-02-14 Thread Charles Pritchard

On 2/14/2012 1:24 AM, Paul Bakaus wrote:

window.loadPackage('package.webpf', function() {
 var img = new Image();
 img.src = "package.webpf/myImage.png";
})

Or alternatively, with a local storage system (I prefer option one):

window.loadPackage('package.webpf', function(files) {
 files[0].saveTo('myImage.png');
 var img = new Image();
 img.src = "local:///myImage.png";
})



How about picking up FileSystem API semantics?

var img = new Image(); img.onload = myImageHandler;
var package = 'package.webpf';
var sprite = 'myImage.png';
window.requestFileSystem(package, 0, function(fs) {
  myPackagePolyfill(function(root) {
    img.src = root.getFile(sprite).toURL();
  }, fs.root, package);
});

Packages would be counted against the "temporary" file system quota.

I'm fine with it being a different method name, but re-using toURL makes 
a lot of sense.


I've heard more than once about how we shouldn't be requiring more 
formats. So I'd leave the mount format undefined, and just throw an 
error if it's not supported.
This still brings the package into the browser cache, it can be 
re-requested via XHR + ArrayBuffer (or blob), and manually parsed by JS 
polyfill.


From a practical perspective:
Use uncompressed zip for packaging, it's trivial to support in JS these 
days.
Content is already compressed, typically, and deflate can be used in the 
request layer if the client-server relationship supports it.


Use the polyfill method to uncompress the files into the temporary file 
system if it's supported.
And if it's not supported, your polyfill can still use createObjectURL 
and file slice to handle business efficiently.


Just try to cleanup somewhere with a myPackagePolyfill.dispose(package);

-Charles
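One way the polyfill mount Charles sketches could hand back per-entry URLs, with a Map standing in for the FileSystem API (mountPackage, entryURL, and dispose are illustrative names, not a proposed API):

```javascript
// Sketch of the polyfill side: given an unpacked package's entries,
// hand back a URL per entry and a dispose() for cleanup, matching the
// myPackagePolyfill.dispose(package) idea above.
function mountPackage(packageName, entries) {
  // entries: { "myImage.png": "<object URL or bytes>", ... }
  const mount = new Map(Object.entries(entries));
  return {
    entryURL(name) {
      if (!mount.has(name)) {
        throw new Error(`${packageName}: no entry named ${name}`);
      }
      // in a real polyfill this would be createObjectURL(blob) or
      // FileEntry.toURL() from the temporary file system
      return mount.get(name);
    },
    dispose() {
      mount.clear();
    },
  };
}
```

The interesting design point is the same one the email raises: re-using the toURL() shape keeps package entries addressable by anything that consumes a URL (img.src, CSS, XHR).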


Re: connection ceremony for iframe postMessage communications

2012-02-10 Thread Charles Pritchard

On 2/10/2012 3:44 PM, John J Barton wrote:

Thanks. As a hint for the next person, it seems like the asymmetric
messages (parent 'ping', iframe 'load') is easier than symmetric
('hello'/'ack')

I think there are two more cases. Because the messages are all async,
the 'it gets missed case" can be "it gets delayed". That causes
additional messages.


I've been pursuing something like this for transferring content between 
frames. That's part of why I was excited to see Web Intents taking off.

I also went with a ping mechanism for messaging.

I recommend taking this thread over to the Web Intents list and 
examining it there. I see every reason to want a loose idea of what Web 
Intents over postMessage would look like.
They have a shim mechanism for demo purposes and backwards 
compatibility, but they don't have the same focus you do (or I do), on 
iframe postMessage.



-Charles
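The asymmetric ping handshake John describes can be sketched with plain queues standing in for postMessage: an early ping is merely dropped, and a retry completes the connection. All names here are illustrative, and real code would be event-driven rather than pumped:

```javascript
// Sketch of a parent/iframe handshake. toChild/toParent queues stand
// in for the async postMessage channel; pump() delivers queued messages.
function makeEndpoints() {
  const toChild = [], toParent = [];
  let connected = false, pings = 0;
  const parent = {
    ping() { pings += 1; toChild.push("ping"); },       // parent retries this
    pump() {
      while (toParent.length) {
        if (toParent.shift() === "ack") connected = true;
      }
    },
  };
  const child = {
    listening: false,                                    // handler installed?
    pump() {
      while (toChild.length) {
        const msg = toChild.shift();
        if (!this.listening) continue;                   // arrived too early: lost
        if (msg === "ping") toParent.push("ack");
      }
    },
  };
  return { parent, child, isConnected: () => connected, pingCount: () => pings };
}
```

Because the parent keeps pinging until it sees an ack, the "it gets missed" case collapses into the "it gets delayed" case, at the cost of a few extra messages.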



Re: Enabling a Web app to override auto rotation?

2012-02-07 Thread Charles Pritchard
In case it's needed; use case:

User is drawing a sketch on their mobile phone and their rotation is 
intentional as if they are working with a physical piece of paper.

-Charles



On Feb 7, 2012, at 11:17 PM, Tobie Langel  wrote:

> There's no current spec for this, but it's on our plate:
> 
> http://www.w3.org/2008/webapps/wiki/CharterChanges#Additions_Agreed
> 
> 
> --tobie
> 
> On 2/8/12 3:06 AM, "Michael[tm] Smith"  wrote:
> 
>> About portrait-landscape auto rotation on current mobile/tablet
>> browsers/platforms: If a user has auto rotation set on their mobile or
>> tablet, I know it's possible for a particular native application to
>> override that setting and stay in whatever screen orientation it wants.
>> 
>> My question is if it is currently possible for a Web application to do the
>> same thing; that is, to prevent the browser on the device from
>> auto-rotating into a different mode.
>> 
>> --Mike
>> 
>> -- 
>> Michael[tm] Smith
>> http://people.w3.org/mike/+
>> 
> 
> 



Re: [FileAPI] No streaming semantics?

2012-02-05 Thread Charles Pritchard
Use slice; webkitSlice.

They just put this together themselves on the media APIs as well. So that's cool. 
There's an append-stream semantic.

-Charles



On Feb 5, 2012, at 5:18 PM, Justin Summerlin  wrote:

> I've been playing around with the FileAPI in both Chrome and Firefox.  The 
> API presented works very well for small files consistent with perhaps circa 
> 1990s web usage.  Previewing small images (~50K), reading and processing 
> small files ("a few" MB) have been cases where uniformly strong results are 
> seen and the proposed API appears to greatly enhance the capability of 
> client-oriented JS-enabled websites.
> 
> Upon considering the implications for local file processing, it became 
> apparent to me that the repercussions for client-side filtering and 
> aggregation can be a potentially huge thing for the internet.  One simple 
> case study demonstrates two-fold inadequacies in the presented File API for 
> very commonly used semantics.
> 
> Consider a web application designed to process user-submitted log files to 
> perform analytics and diagnose problems.  Perhaps these log files can 
> typically be 50GB in size.  Two cases are interesting:
> 
> 1. The application scans through the log file looking for errors up to some 
> maximum number, then reports those to a server-side script.
> 2. The application watches the log file and actively collects information on 
> errors to recommend diagnostics (in this case, no round-trip may be 
> necessary).
> 
> The reason the first case cannot be implemented with the present API is that 
> readAs* in FileReader reads the *entire* file into memory, firing progress 
> events along the way.  It is consistent that both Chrome and Firefox 
> implementations attempt to do this and then fail due to insufficient memory.  
> The reason the second case is impractical is that one must re-read the entire 
> file into memory each time to see any changes in a file, which is problematic 
> at best.
> 
> Unless I'm missing something (I don't believe that I am), the capability of 
> streaming which would solve both of these problems in a very effective way, 
> is not present in the FileAPI.  Perhaps in addition to readAs*, both seek and 
> read[Text|BinaryString|ArrayBuffer](, [, ]).  
> Additionally, in an asynchronous manner, the result is presented in an event:
> 
> function processFile(file, reader) {
>   reader.onread = function (ev) {
> if (has more...) {
>   reader.readText(file, 4096);
> }  else {
>   reader.onread = null;
> }
> // Process chunk...
>   }
>   reader.readText(file, 4096);
> }
> 
> And in the case of reading more data from a file as it's written to, one 
> would simply keep attempting a read and if the read returns no data, do 
> nothing.
> 
> Is this intended and if so, is any streaming semantic to be considered in 
> future JavaScript API considerations?
> 
> Thanks,
> 
> Justin



Re: Concerns regarding cross-origin copy/paste security

2012-02-04 Thread Charles Pritchard

On 2/2/2012 10:48 PM, Ryosuke Niwa wrote:
On Thu, Feb 2, 2012 at 10:43 PM, Charles Pritchard <ch...@jumis.com> wrote:


On 2/2/12 10:27 PM, Ryosuke Niwa wrote:

On Thu, Feb 2, 2012 at 10:20 PM, Charles Pritchard
<ch...@jumis.com> wrote:

Seems like a very minor risk for high security sites, e.g.
banking, in identifying form elements.
In the spirit of giving it some thought:


But even for those websites, what could input / textarea elements
reveal more than what the user sees?

Many sites use <input type=image> elements with what are essentially
image maps for entering a PIN.


But any element with display:none will be removed, so <input type=hidden> 
should be removed.


It's becoming more common that top level domains are being
restricted or redirected to country codes. It seems plausible that
domains may further be restricted to HTTPS (SSL) signatures. Going
further, sites may be restricted to those which serve appropriate
security headers against XSS attacks. Disabling the "copy"
mechanism for any portion of a site does risk censorship. But, we
are only examining high security portions of high security sites,
such as password and PIN entry fields.


input[type=password] is a good one. We should probably get rid of the 
value in that case?


Yes, I think so. I'm working on an application in which I do a lot of 
copy and paste work. I'll let you know if I come across anything I think 
should change.


-Charles
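The kind of copy sanitization discussed in this thread, stripping plugin elements and password values before data ever reaches the clipboard, might be sketched over flat node descriptors; the descriptor shape and function name are illustrative, not a proposed API:

```javascript
// Sketch: sanitize a copied fragment before it is exposed for
// cross-origin paste. Plugin elements go away entirely; password
// inputs keep their structure but lose their value.
function sanitizeForCrossOriginPaste(nodes) {
  return nodes
    .filter((n) => !["object", "embed"].includes(n.tag))   // plugin content
    .map((n) =>
      n.tag === "input" && n.type === "password"
        ? { ...n, value: "" }                              // strip the secret
        : n
    );
}
```

A real implementation would work on the serialized DOM fragment, and, per the thread, might only apply these extra constraints when the page opts in via security headers.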


Re: [FileAPI] createObjectURL isReusable proposal

2012-02-02 Thread Charles Pritchard

On 2/2/12 11:08 PM, Bronislav Klučka wrote:

On 3.2.2012 7:51, Charles Pritchard wrote:
I see no reason why an author should expect to stash 100MB of objects 
into createObjectURL, nor any reason why a UA could not manage 100MB 
for the application lifetime; the user can certainly be informed, as 
they are with other APIs, if the limit has gone beyond what the UA is 
comfortable with. That's always useful for debugging/development -- 
when infinite loops are a normal part of the web experience.

Well how about 1GB video :D or 100 high resolution images?
But as you say, there's no reason why a desktop application 
(browser) should have any problem with it...
And yes... there may be some disk space problems on rare cases, but 
such problems can appear in any regular usage of any program (well, 
there's simply no space left)...


We're going to be treating that video as a media stream via Blob URL.
Which is a good point -- it's going to be a Blob url of some sort but it 
is a special case.


That 1GB video is not getting loaded into RAM all at once.

A video blob is going to look like some cross of:
http://www.w3.org/TR/streamproc/#media-element-extensions
http://dev.w3.org/2011/webrtc/editor/webrtc.html
http://dev.w3.org/2011/webrtc/editor/webrtc.html#blobcallback

It's gonna be messy!

Remember, we've got createObjectURL(File) as well, and the mess that can 
cause, as recently discussed.

The underlying File data could disappear from the file system at any time

That in mind, my focus is on <img>.
And with 100 high res images, believe me: I'm using thumbnails.

Not only that, I'm packing 8 thumbnails into each blob/file.
So with 100 high res images, we're talking about 13 blobs, less than 60 
megs.

A lot less now that JPEG is widely supported for Canvas.


-Charles



Re: [FileAPI] createObjectURL isReusable proposal

2012-02-02 Thread Charles Pritchard

On 2/2/12 10:40 PM, Bronislav Klučka wrote:



On 3.2.2012 7:34, Bronislav Klučka wrote:



On 28.1.2012 8:47, Ian Hickson wrote:

On Sat, 28 Jan 2012, Kyle Huey wrote:
On Sat, Jan 28, 2012 at 7:10 AM, Darin Fisher  
wrote:

I'm not sure what a concrete proposal would look like.  Maybe
Element.URL.createObjectURL or just Element.createObjectURL?

Wouldn't returning an object (which can be GCd) be a better solution?
The whole point of the API is that we have an object but need a 
string (a URL).




We could always leave the revokeObjectUrl call to simply delete 
content from cache...


Brona



BTW, I know it sounds a lot like something that can be done with the 
FileSystem API, but I hope the ease and benefits for developers are 
clear here; the FileSystem API would be overkill for this (and this is 
easier to implement for vendors without the FileSystem API implemented yet)


Brona


Yes, revokeObjectURL would simply mark the content as "dirty". Some 
other cache / trash cleanup takes care of the garbage.

There's nothing in createObjectURL that says a Blob must be in RAM.

For all intents, createObjectURL could put items into the Temporary 
FileSystem for a domain. The Chromium team has already lifted file size 
limitations on the temporary file system. That's with the understanding 
that items can be evicted at any time.


However, createObjectURL specifies that an object may not be evicted 
until the page is navigated away from, or until revokeObjectURL is called.
Persistence is typically limited to 5MB per domain. Or something like 
10MB when localStorage is added to the mix.


It seems like this is an interesting middle-ground.

The battle is in the CSS-realm.

With the <img> tag, we can simply hook into onerror. Within CSS, we are 
unaware as to whether our contested resource has been loaded.

There are of course, iframe tricks, which are used for font loading.

Lots of subtleties in this area.

I see no reason why an author should expect to stash 100MB of objects 
into createObjectURL, nor any reason why a UA could not manage 100MB for 
the application lifetime; the user can certainly be informed, as they 
are with other APIs, if the limit has gone beyond what the UA is 
comfortable with. That's always useful for debugging/development -- when 
infinite loops are a normal part of the web experience.



-Charles



Re: Concerns regarding cross-origin copy/paste security

2012-02-02 Thread Charles Pritchard

On 2/2/12 10:27 PM, Ryosuke Niwa wrote:
On Thu, Feb 2, 2012 at 10:20 PM, Charles Pritchard <ch...@jumis.com> wrote:


Seems like a very minor risk for high security sites, e.g.
banking, in identifying form elements.
In the spirit of giving it some thought:


But even for those websites, what could input / textarea elements 
reveal more than what the user sees?


Many sites use <input type=image> elements with what are essentially image 
maps for entering a PIN.


In that case, a user does not see the PIN, though they do see an image 
map which has been obscured through various means.


I doubt there are security risks in this area.



There are various XSS headers that signal enhanced security for
websites, even to browser extensions.
Perhaps some of them ought to be used in the "copy" mechanism.
That way the data never reaches the clipboard for paste.


That's also an option and may need to be spec'ed to some extent.


It's the best I have to offer, in hypothesizing how we may address the 
concern.
High security sites use high security headers. If they've opted into 
those headers, we can do a lot to limit data exposure.


There are many sorts of XSS attacks for sites that do not implement 
security headers. We can't help those, there are just too many leaks. 
So, I'd focus on specifying additional clipboard constraints for high 
security sites.


I would put out one word of caution: such restrictions could be used for 
censorship. I don't think we have an option there.


It's becoming more common that top level domains are being restricted or 
redirected to country codes. It seems plausible that domains may further 
be restricted to HTTPS (SSL) signatures. Going further, sites may be 
restricted to those which serve appropriate security headers against XSS 
attacks. Disabling the "copy" mechanism for any portion of a site does 
risk censorship. But, we are only examining high security portions of 
high security sites, such as password and PIN entry fields.


We're examining those elements for the sake of consumer protection for 
users doing online banking and otherwise cooperating in a secure 
environment for private data. That's a good thing.


-Charles


Re: Concerns regarding cross-origin copy/paste security

2012-02-02 Thread Charles Pritchard

On 2/2/12 10:14 PM, Ryosuke Niwa wrote:
Sorry for the extremely slow reply. It slipped through hundreds of 
emails :(


On Mon, May 16, 2011 at 8:41 PM, Hallvord R. M. Steen 
<hallv...@opera.com> wrote:


To me, it doesn't make sense to remove the other elements:
- OBJECT: Could be used for SVG as I understand.


OBJECT is considered a form element, so it might have hidden data
associated with it. It can also contain plugin content that could
inject scripts and be used for XSS attacks. It may be too
far-fetched or draconian to remove it though. (SVG is rich enough
to be its own can of worms by the way..)


Given the improved support for inline SVG and MathML, it's probably 
okay to strip it. However, we should add EMBED to the list since it's 
a plugin element.


- INPUT (non-hidden, non-password): Content is already
available via
text/plain.


An input's @name attribute is basically hidden data the user will
not be aware of pasting. I'm not sure how much of a threat this
is, but we should give it some thought.


You mean the @name attribute? I don't think that'll expose much 
information. I'd prefer not removing these attributes as I've seen 
bugs filed against WebKit for "form control" editors; apparently some 
people would like to create form control editors using contenteditable.




Seems like a very minor risk for high security sites, e.g. banking, in 
identifying form elements.

In the spirit of giving it some thought:

There are various XSS headers that signal enhanced security for 
websites, even to browser extensions.
Perhaps some of them ought to be used in the "copy" mechanism. That way 
the data never reaches the clipboard for paste.


-Charles


Re: Image.toBlob()

2012-02-02 Thread Charles Pritchard

On 2/2/12 9:03 PM, Glenn Maynard wrote:


With that scheme, though, if you were really referencing an image,
then your toDataURL and toBlob output, given no optional
parameters were specified, and the file format were the same, well
it could be a copy of the binary data.


That's the whole idea.  You can transparently bypass the process of 
blitting to a backbuffer for this case (modulo the zero alpha issue). 
 It's just an optimization.


Oh, that's not a use case I need. There's no real overhead.

I need the use case of  Image.toBlob() returning a binary copy of the 
resource, metadata and all, in its original and pure form. That way I 
don't have to XHR after or before hand, and I can still get progressive 
rendering and whatever else comes along with the browser implementation 
of the Image object.


It ought to throw a state error if the image has not loaded yet, and a 
security error if CORS isn't met; I'm going to be using <img crossorigin>, 'cause that's what I do.


I actually don't need Image.toBlob() to be used for image format 
conversion, though that'd be up to the implementer.

Canvas.toBlob() is a PNG by default.
Image.toBlob() is format agnostic by default.


-Charles
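To illustrate why returning the original bytes matters: the drawImage/toBlob round-trip re-encodes (typically to PNG), so a JPEG source comes back as a different container, which simple magic-number sniffing would expose. A small sketch, with an illustrative function name:

```javascript
// Sketch: identify an image container from its leading magic bytes.
// A true Image.toBlob() would preserve the original container and
// metadata; a canvas round-trip would not.
function sniffImageType(bytes) {
  const b = bytes instanceof Uint8Array ? bytes : new Uint8Array(bytes);
  if (b[0] === 0x89 && b[1] === 0x50 && b[2] === 0x4e && b[3] === 0x47) {
    return "image/png";   // \x89PNG
  }
  if (b[0] === 0xff && b[1] === 0xd8 && b[2] === 0xff) {
    return "image/jpeg";  // JPEG SOI marker
  }
  return "application/octet-stream";
}
```

Run over the output of a canvas-based conversion, this would report "image/png" for a JPEG input, which is exactly the lossy re-encode Charles wants to avoid.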


Re: Image.toBlob()

2012-02-02 Thread Charles Pritchard

On 2/2/12 8:14 PM, Glenn Maynard wrote:

(You're blue today.)

I can't stop myself from responding on my mobile phone. Bad habit :-).



On Thu, Feb 2, 2012 at 4:34 PM, Charles Pritchard <ch...@jumis.com> wrote:


They are a pain, and lossy.

You don't want to do drawImage then a toBlob png on an image
that's a jpeg. We've had to use followup XHR calls which may or
may not do another http fetch. We went the route of doing an XHR
call first, but then we couldn't display nice progressive loading.
Canvas also inflates the size of PNG files, typically.


Not if the browser optimizes for this and avoids the copy entirely 
(eg. as long as you do a 1:1 copy, don't draw anything else on top of 
it, etc).  That seems like a useful optimization anyway.


It's a pretty extreme optimization. I get what you're saying. You could, 
in theory, make one optimization

// inside drawImage: alias the source image's buffer when the canvas
// is untouched and the image maps 1:1 onto it
if (cleanCanvas && x == 0 && y == 0 &&
    width == canvas.width && height == canvas.height)
  referenceImageBuffer();


There's nothing like it in current implementations. I'd think it rare to 
come across an instance where an author has only made a clone image, and 
done nothing else. They'd just use the  element in such a case.


I'm not sure if alpha complicates this.  If you save a transparent PNG 
back to a PNG, you want the original RGBA, not premultiplied and 
un-premultiplied results.  I don't know if the spec allows bypassing 
those lossy steps in this case.  I don't think 2d canvas specifies the 
color depth of the backbuffer.  If you think of the backbuffer as 
having a high color depth, you'd get the same result.  That doesn't 
work for alpha values of zero, though, where the result would be 
undefined (and is defined as black) rather than the original color.


It's worth trying to make this work before adding new API.


We're talking about jpeg, as well as PNG, and items like WebM. Canvas is 
specified as RGBA. What happens when an indexed or an RGB PNG file is 
put in? You're taking some real liberties with the scheme I mentioned 
above, and there seems to be very few use cases for it.


With that scheme, though, if you were really referencing an image, then 
your toDataURL and toBlob output, given no optional parameters were 
specified, and the file format were the same, well it could be a copy of 
the binary data.


I think that's where perhaps we're at a bit of a disconnect. My concern 
is getting a copy of the binary data of an image, not getting a copy of 
the display pixels. I was hoping Image.toBlob would handle that. There's 
all sorts of yummy information in that binary data. And I dislike the 
extra XHR calls it requires to fish out.


-Charles



Re: Image.toBlob()

2012-02-02 Thread Charles Pritchard
On Feb 2, 2012, at 2:24 PM, "Anne van Kesteren"  wrote:

> On Sat, 28 Jan 2012 09:27:08 +0100, Bronislav Klučka 
>  wrote:
>> would it be possible to have Image.toBlob() function? We are introducing 
>> Canvas.toBlob(), image (and maybe video, audio) would be nice addition
> 
> Are the additional resources required to draw the image on <canvas> first a 
> bottleneck?
> 


They are a pain, and lossy.

You don't want to do drawImage then a toBlob png on an image that's a jpeg. 
We've had to use followup XHR calls which may or may not do another http fetch. 
We went the route of doing an XHR call first, but then we couldn't display nice 
progressive loading. Canvas also inflates the size of PNG files, typically.

Audio and video are a different thing; they will be using chunked streams.


-Charles

Re: [FileAPI] createObjectURL isReusable proposal

2012-02-02 Thread Charles Pritchard




On Feb 2, 2012, at 1:40 PM, Ian Hickson  wrote:

> On Thu, 2 Feb 2012, Arun Ranganathan wrote:
>> 
>> 2. Could we modify things so that img.src = blob is a reality? Mainly, 
>> if we modify things for the *most common* use case, that could be useful 
>> in mitigating some of our fears. Hixie, is this possible?
> 
> Anything's possible, but I think the pain here would far outweigh the 
> benefits. There would be some really hard questions to answer, too (e.g. 
> what would innerHTML return? If you copied such an image from a 
> contentEditable section and pasted it lower down the same section, would 
> it still have the image?).

How about just a convenience method.

Blob.prototype.toString=function(){ return URL.createObjectURL(this);
};
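
The idea being that img.src = blob would then work through ordinary string 
coercion. A minimal sketch of that coercion behavior, using a stand-in object 
(FakeBlob is hypothetical) since Blob and URL.createObjectURL are browser APIs:

```javascript
// Stand-in for Blob; in a browser, toString would instead do:
//   return URL.createObjectURL(this);
function FakeBlob(id) {
  this.id = id;
}
FakeBlob.prototype.toString = function () {
  return "blob:fake/" + this.id;
};

var blob = new FakeBlob("1234");
// Assigning to img.src coerces the value to a string, which invokes
// toString() -- so img.src = blob would "just work".
var src = String(blob);
console.log(src); // "blob:fake/1234"
```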




Re: Installing web apps

2012-02-01 Thread Charles Pritchard

I precisely *didn't* want to get into a detail about whether everyone should use
widgets or will use widgets -- I want to argue for XMLHTTPRequest
being designed to be able to be used not only in an untrusted web page,
but e.g. from an installed widget, or node.js for that matter,
which means returning a defined error response when the privilege is
insufficient, instead of faking a network error.
I've been trying to write code which will work in any of these.


I've been there; even with  I hit some snags (which 
have since been fixed).

You're going to want an async privilege check ahead of time.

chrome.permissions.getAll is a good example if checking permissions via 
async, early on in the script.


It's reasonable, at this point, to assume that permissions will not be 
restricted once they are granted in the lifetime of the script.
I don't know if that's a reasonable assumption for the future. In the 
future case, you may need to do a permissions check before

any XHR call.

Probably best to do that, too.

When doing polyfill work, async booting is necessary.

-Charles


Granting permissions, was Re: Installing web apps

2012-02-01 Thread Charles Pritchard

On 2/1/12 11:57 AM, Boris Zbarsky wrote:

On 2/1/12 2:39 PM, Charles Pritchard wrote:

Mozilla said they were getting rid of their enable privilege API. I
don't know that they have.


It's being removed, slowly.  For example, cross-site XHR (modulo 
whatever CORS allows) is no longer possible even if you 
enablePrivilege in current Gecko.


There may be a different privilege setup eventually, but 
enablePrivilege in its existing form is not a good API, especially for 
the web.


So, in Gecko, is cross-site XHR something now specified explicitly in 
the extension manifest?


Chrome went ahead with specifying "optional permissions" in the 
manifest, and when those are present in the manifest, they can be 
requested via chrome.permissions.request. It's async, which is nice, and 
is quite different than the synchronous Gecko model.


http://code.google.com/chrome/extensions/trunk/permissions.html

The special status of "file://" is still a special thing. 
enablePrivilege in Gecko had semantics for such items, whereas Chrome 
does not. It seems like that part of Gecko is also being put aside, 
which is probably for the best. I've considered running a page from 
file:// a special form of installation.


It still is, with Chrome, as one can do things like iframe local 
content. Not something you can really do otherwise.


-Charles



Re: Installing web apps

2012-02-01 Thread Charles Pritchard

On 2/1/12 11:03 AM, Ian Hickson wrote:

On Wed, 1 Feb 2012, Tim Berners-Lee wrote:

These apps have got to be able to completely
act as agents trusted by the user, like for example

- a web browser

You want to write a Web browser in a Web browser?


Ian, at present, you're the one defining what a web browser means. We've 
had some of this discussion on the "Attitudes and Direction" thread.


I've written an SVG implementation inside of a web browser. I've also 
produced HTML authoring tools inside the browser.


The browser is a great place to do it; it's got backup implementations, 
a safe scripting environment, CSS and DOM parts; it's well tested, and 
modern browsers are already installed on many machines. Many people do 
special kiosk implementations inside the browser. Some people even write 
ATs inside the browser.


Yes, a browser in a browser. Two browser vendors are pushing 
browser-only operating systems.



- an IMAP client

There are lots of mail clients written on the Web today.


As I understand it, gmail is a 300k LOC beast, and drove many early 
innovations in the Chrome web browser.


An IMAP client requires TCP access; this is something that I can see 
the Chrome browser is looking to open up through a websockets proxy.




and so on, none of these can you currently write as a web app, because
of CORS.

to work around by just wrapping IMAP in WebSockets. Nothing to do with
CORS (or even the same-origin policy).


It has something to do with CORS if the CORS policy on the remote server 
doesn't allow it.


Yes, I can waste bandwidth and use a trusted proxy server. But I don't 
want to, and I'd like to use my own cookies.

That's the CORS issue.




As a user when I install an app, I want to be able to give it access to
a selection of:

Providing access to these things when the app is installed is IMHO a net
worse security model than granting access to these things implicitly when
the feature is needed.


Mozilla said they were getting rid of their enable privilege API. I 
don't know that they have. Chrome certainly went for the privilege 
model, with the chrome.permissions namespace.


Providing granular access is a nice thing. There are times, such as 
kiosk setups and corporate intranets, where the access should be granted 
all at once. For trusted applications.


I've found that even for applicationCache, it helps to have an "install" 
button. Cause I hate showing off a project to someone and seeing a 
permissions prompt on load.

I've not figured out how to evict the app cache, though.



Of the things you list, the following are already possible without an
up-front permission grant, in a manner more secure than an up-front grant:


- Whether it is permanently available or downloaded or cached for a while


We do not have long running shared workers. There's no means to tell the 
browser that a page should run in the background at system start-up (or 
browser start-up) and continue running. We're close, on that end, but 
there is no BackgroundWindow spec at the moment.


Chrome just hacked on window.open('url', '#background') or #bg, or 
something like that.




- Ability to access anything on the web

What's the use case for this?



Authoring tools, assistive technologies and everything in between. And 
proxies and traffic shaping.


Google did a good job in their latest releases with "Copy Image". That 
feature actually works now to copy an image from one frame to another.
Copy and paste works fairly well (I'm working on the bug reports); 
allowing a user to copy and paste sections of a page from one site to 
another.


But they're only partial solutions, and they certainly don't match the 
ability granted in the chrome.webRequest namespace.





I want to be able to see where all my resources (including CPU, RAM,
'disk')  on my laptop or tablet or phone are being used up, just like I
do with music and movies.

You can do that today without an up-front permission grant. (q.v. Chrome's
task manager, for example.)


And in private namespaces, I'm sure you can access the data 
programatically. Or will be able to at some point.


Perhaps there's confusion here as to what an "application" means to you, 
Ian.





If I can't give power to apps, then the web app platform cannot compete
with native apps.

There's plenty of things we can do to make the Web platform more
compelling and a better competitor to native apps, but adding "installing"
isn't one of them. That would in fact take one of the Web's current
significant advantages over native apps and kill it.


What does "installing" mean to you?

For me, applicationCache is installing. Using web storage is installing.

Hell even using Cookies can be argued as installing. Thus the backlash 
against advertisers installing trackers on a person's computer.


On the iPhone, adding a bookmark to the home screen is installing. On 
the chrome store, it's often the case that people notice "hey, this is 
just a bookmark to a webp

Re: Fwd: Data compression APIs?

2012-01-30 Thread Charles Pritchard

On 1/30/12 6:11 AM, timeless wrote:

There have been some requests for zip support [1], and probably less
relevant for xhr [2]. Note that the use case I'm forwarding [3]
requires support for both compression and decompression.



I'm solely looking for inflate / deflate support. It provides a decent 
balance for compression and is a critical part of several document formats.


I've used inflate for things like opening zip files, docx, pdf content, 
and deflate for creating PNG (other than the supported 32bit per pixel).


We could get compress working before multiplex support gets worked into 
WebSockets, and not have to worry that much about multiplex and 
streaming, since it can be added into binary websocket packet with 
relative ease.


Inflate and deflate really are critical components for web apps. Their 
absence is a barrier for authors like me to support common file formats.



-Charles





Re: [File API] Draft for Review

2012-01-27 Thread Charles Pritchard

On 1/27/12 3:58 PM, Arun Ranganathan wrote:

On 1/26/12 1:21 PM, Arun Ranganathan wrote:



Yes, this is nicer. There's no way to create a new File object: I
can't
set name and lastModifiedDate.

These can't be done, but there is FileSaver/FileWriter).


It's a minor thing, but there are items like FormData, where the File 
object has more semantics than Blob:  "If you specify a Blob as the data 
to append to the FormData object, the filename that will be reported to 
the server in the 'Content-Disposition' header will vary from browser to 
browser"


FileEntry.file( callback ) will return a new file object, but is quite a 
bit of extra work.




Please review the editor's draft at:
http://dev.w3.org/2006/webapi/FileAPI/



I want to see the FileSaver (and a new, FileSaverStream) interface
moved
into the File API.


You mean, you find difficulty in the way they're set up now?  By "moved" you 
mean you want them part of the same specification?  Why?


The File API Blob constructor obsoletes File Writer BlobBuilder.

The FileWriter and FileWriterSync methods are noted in the File Writer 
API as something that ought to be moved to the File System API.


That only leaves one method, FileSaver. FileSaver only requires the File 
API.


Yes, I'm suggesting that FileSaver should be part of the File API 
specification, and FileWriter should be part of the FileSystem 
specification, then the File Writer API document can be retired.


Microsoft has experimented with window.navigator.msSaveBlob and 
msSaveOrOpenBlob in IE10, as mentioned here:

http://msdn.microsoft.com/en-us/library/ie/hh673542(v=vs.85).aspx
Note that they've included it in the File API section examples.

Google hasn't implemented FileSaver yet, they seem to be waiting for 
consensus:

http://code.google.com/p/chromium/issues/detail?id=65615

With FileSaver and FileSaverStream in the main File API document, I'd 
imagine we could develop quicker and cleaner consensus, and it will be 
easier for developers to understand its use and availability.






Re: [File API] Draft for Review

2012-01-27 Thread Charles Pritchard

On 1/26/12 1:21 PM, Arun Ranganathan wrote:

Greetings public-webapps,

I'd like to encourage some review of File API:

http://dev.w3.org/2006/webapi/FileAPI/

You can send comments to this listserv, or file a bug, since this spec. now has 
a Bugzilla component.

Here are some notable changes:

1. Blob is now constructable, following discussions on the listserv [Blob].  
We're not using rest params and ES6 syntactic sugar *yet* (arrays for now), but 
I still think it one-ups BlobBuilder.


Yes, this is nicer. There's no way to create a new File object: I can't 
set name and lastModifiedDate.


There's an error in the blob constructor code. It uses object notation 
for the first argument instead of array notation.

var c = new Blob({b, arraybuffer});
Should be
var c = new Blob([b, arraybuffer]);

I don't understand the following example:
// Simply create a new Blob object
var x = new Blob(12);


Please review the editor's draft at: http://dev.w3.org/2006/webapi/FileAPI/
I want to see the FileSaver (and a new, FileSaverStream) interface moved 
into the File API.


I'd like the file writer API proposal dissolved, with FileWriter moved 
to the FileSystem API.

http://www.w3.org/TR/file-writer-api/

Here's an addition to the FileSaver API, with the intended use case of 
streaming data to disk.
This allows files to be transferred over websockets and to be opened by 
the user and external programs while being written.


http://www.w3.org/TR/file-writer-api/#the-filesaver-interface
interface FileSaverStream : FileSaver {
readonly attribute unsigned long long position;
readonly attribute unsigned long long length;
void append (Blob data) raises (FileException);
void close();
};
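
A rough in-memory sketch of the semantics that interface implies -- purely 
illustrative bookkeeping, not a real file writer: position and length track 
bytes appended, and close() ends the stream:

```javascript
// Toy model of the proposed FileSaverStream semantics.
function FileSaverStreamSketch() {
  this.position = 0;
  this.length = 0;
  this.closed = false;
  this._chunks = [];
}
FileSaverStreamSketch.prototype.append = function (data) {
  if (this.closed) throw new Error("InvalidStateError: stream closed");
  this._chunks.push(data);
  this.position += data.length; // stand-in for Blob.size
  this.length = this.position;
};
FileSaverStreamSketch.prototype.close = function () {
  this.closed = true;
};

// Usage: stream websocket chunks to "disk" as they arrive.
var saver = new FileSaverStreamSketch();
saver.append("chunk-one");
saver.append("chunk-two");
saver.close();
console.log(saver.length); // 18
```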


That rounds out my desires for the final File API.


-Charles





Re: [File API] Draft for Review

2012-01-26 Thread Charles Pritchard

On 1/26/12 4:39 PM, Jonas Sicking wrote:

On Thu, Jan 26, 2012 at 4:25 PM, Tab Atkins Jr.  wrote:

On Thu, Jan 26, 2012 at 1:21 PM, Arun Ranganathan
  wrote:

2. URL.createObjectURL now takes an optional boolean, following discussions on 
the listserv [oneTimeOnly].

As I argued 
in,
we should absolutely *not* be adding more boolean arguments to the
platform.  They should be exposed as boolean properties in an
dictionary.  Naked bools are impossible to decipher without memorizing
the call signature of every function.

I would be ok with that. We'll possibly want to add other optional
arguments anyway, like headers


I could see: {expires: }, {name: } and {disposition: } as useful.

I could see Allow Origin CORS as something useful too, but that's 
starting to go off the range.


-Charles



Re: Obsolescence notices on old specifications, again

2012-01-23 Thread Charles Pritchard
We have the same issue on the other side of it-- new specs that we're cautioned 
against using.

Let's allow for co-existence. Silly folk like me can work with new Web Apis, 
and be berated for it, and other folk can use old standards, and be berated for 
it. Everybody wins!

-Charles



On Jan 23, 2012, at 1:03 PM, "Tab Atkins Jr."  wrote:

> On Mon, Jan 23, 2012 at 12:38 PM, Glenn Adams  wrote:
>> I work in an industry where devices are certified against final
>> specifications, some of which are mandated by laws and regulations. The
>> current DOM-2 specs are still relevant with respect to these certification
>> processes and regulations.
>> 
>> I do not object to adding an informative, warning notice to the status
>> sections of these docs that new work is underway to replace, and eventually
>> formally obsolete older DOM RECs. However, until replacement specs exist
>> that have achieved sufficient maturity (namely, REC status), it would not be
>> appropriate to formally obsolete the existing DOM specs.
> 
> We have repeated evidence that pretending these specs aren't obsolete
> and useless hurts web implementors and authors.  We're targeting the
> web with our specs, so that's extremely relevant for us, more so than
> non-web industries dealing with personal regulatory issues.
> 
> Ignoring the regulatory issues for a moment, the non-web industries
> harm themselves (or rather, the down-level authors writing content for
> the things those industries are producing) by attempting to use these
> obsolete specs as well, since they'll be producing things that don't
> match the public web.
> 
> But really, the important thing is just that these specs are hurting
> the web, and our primary focus is the web.
> 
> ~TJ
> 



Re: File modification

2012-01-13 Thread Charles Pritchard

On 1/13/12 11:13 AM, Arun Ranganathan wrote:

On 1/12/12 12:53 PM, Arun Ranganathan wrote:

Oh I'm glad to see this one! Is it Blob and File that can be
put into IDB? How do I create a new File (with a name field)
from a Blob?
Charles: see the thread on making Blobs constructable --
follow
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0439.html


http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0918.html

Seems like the consensus was to stay away from Blob to File
methods. FileEntry and [a download] being the heirs apparent.


Well, the consensus was to introduce a constructor to Blob, which I'm 
about to do :)


Sorry, I misread that thread; and misremembered it. I saw it as 
appending multiple strings.

I'm happy to see the Blob constructor happening!



The File API specification should do a better job describing
what happens to a File if the underlying resource changes.  We
use the word immutable, but I think we have to make this
substantially clearer.


My take on a File object that's been modified is that the file no
longer exists.
"User agents MUST process reads on files that no longer exist at
the time of read as errors"
"A file may change on disk since the original file selection, thus
resulting in an invalid read."


Just to be clear, "disappearing" a file from FileList might be the same 
as item(index) returning null for the index'th item.  Maybe this should be 
enforced as well for any kind of modification?  This isn't what Fx 
does today.


I'm fine with that being the defined behavior for FileList when 
implementations have elected to remove modified files.
It's still optional: implementations MAY remove modified files from the 
FileList; those that do MUST return null for the index'th item.
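
Under that reading, iterating code would need to guard against null entries. A 
sketch with a stand-in array-like FileList (fakeFileList and collectLiveFiles 
are illustrative names, not spec'd API):

```javascript
// Stand-in FileList: index 1 was "removed" because the underlying
// file changed, so item(1) returns null per the behavior above.
var fakeFileList = {
  length: 3,
  _items: ["a.txt", null, "c.txt"],
  item: function (i) { return this._items[i]; }
};

function collectLiveFiles(fileList) {
  var live = [];
  for (var i = 0; i < fileList.length; i++) {
    var f = fileList.item(i);
    if (f !== null) live.push(f); // skip files invalidated on disk
  }
  return live;
}

console.log(collectLiveFiles(fakeFileList).join(",")); // "a.txt,c.txt"
```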


I'm hoping they don't go down that route. It may mean a disconnect 
between the results of submitting a post and the files available to the 
scripting environment.

Again, I don't imagine that <input type=file> and a subsequent 
.submit() would result in an empty POST when the user modifies the 
underlying file.
Though it should do so if the user deletes or renames the underlying 
file. I suppose things could get nasty, though, if the user modifies the 
file while the post is happening. I don't want to think about those kind 
of race conditions. Hopefully the UA can put a temporary write lock on 
those files.


Well... I'm satisfied on this topic. I think we've incrementally 
improved what the File API will specify.


-Charles


Re: String to ArrayBuffer

2012-01-12 Thread Charles Pritchard

On 1/12/2012 10:03 AM, Tab Atkins Jr. wrote:

On Thu, Jan 12, 2012 at 9:54 AM, Charles Pritchard  wrote:

I don't see it being a particularly bad thing if vendors expose more
translation encodings. I've only come across one project that would use
them. Binary and utf8 handle everything else I've come across, and I can use
them to build character maps for the rest, if I ever hit another strange
project that needs them.

As always, the problem is that if one browser supports an encoding
that no one else does, then content will be written that depends on
that encoding, and thus is locked into that browser.  Other browsers
will then feel competitive pressure to support the encoding, so that
the content works on them as well.  Repeat this for the union of
encodings that every browser supports.

It's not necessarily true that this will happen for every single
encoding.  History shows us that it will probably happen with at least
*several* encodings, if nothing is done to prevent it.  But there's no
reason to risk it, when we can legislate against it and even test for
common things that browsers *might* support.



Count me as agnostic. I'm fine with simple. I'd like to see MS and Apple 
chime in on this issue.


Here's the "worst case" as I understand it being presented:
http://www.php.net/manual/en/mbstring.supported-encodings.php
http://php.net/manual/en/function.mb-convert-encoding.php


-Charles




Re: File modification

2012-01-12 Thread Charles Pritchard

On 1/12/12 12:53 PM, Arun Ranganathan wrote:
On Jan 12, 2012, at 6:58 AM, Kyle Huey wrote:



On Thu, Jan 12, 2012 at 3:45 PM, Glenn Maynard <gl...@zewt.org> wrote:

FYI, I don't think this is clear for File from the spec. 
It's even more important if File objects are stored in

History or IndexedDB; that it should be a *shallow* copy,
with enough information stored to invalidate it if the
underlying file changes, doesn't seem to be specified. 
(As far as I know, nobody implements that yet; being able

to eg. retain open files in History states would be
extremely useful.)


Gecko nightlies are capable of storing File objects in
IndexedDB.  We are doing "deep" copies (what is retrieved from
the database is always a copy of the file as it was when it
was placed in the database).


Oh I'm glad to see this one! Is it Blob and File that can be put
into IDB? How do I create a new File (with a name field) from a Blob?


Charles: see the thread on making Blobs constructable -- follow 
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0439.html


http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0918.html

Seems like the consensus was to stay away from Blob to File methods. 
FileEntry and [a download] being the heirs apparent.


The File API specification should do a better job describing what 
happens to a File if the underlying resource changes.  We use the word 
immutable, but I think we have to make this substantially clearer.


My take on a File object that's been modified is that the file no longer 
exists.
"User agents MUST process reads on files that no longer exist at the 
time of read as errors"
"A file may change on disk since the original file selection, thus 
resulting in an invalid read."


FileList is live; and UAs can go several ways with it. They can return 
new File objects, with updated information (yes, please); or they can 
otherwise cut out modified files or later, throw the security error as 
specified:  "Post-selection file modifications occur when a file changes 
on disk after it has been selected. In such cases, user agents MAY throw 
a SecurityError for synchronous read methods, or return a SecurityError 
DOMError for asynchronous reads.".


It's a consideration, anyway. I don't think people expect that, if they 
select a file for uploading, then modify the file before hitting submit, 
that they'll no longer upload a file. It would, I suppose, "disappear" 
from the Chosen file option.


There is a significant difference between selecting files and mounting a 
directory for persistent access. In one of them, the files are listed 
and new files are not added. So I may select a single file, and it'll 
show up in input type=file, or I'll see a count of the files I selected. 
With mounting a directory, I'm given continued access to poll for new 
files. I think that's enough of a distinction to keep the current 
behavior of allowing persistent access to selected files via FileList.


If it's not enough of a distinction, I'd like more semantics added to 
requestFileSystem to re-establish the persistent selection of files.


-Charles


Re: String to ArrayBuffer

2012-01-12 Thread Charles Pritchard
On Jan 12, 2012, at 9:17 AM, Glenn Adams  wrote:

> 
> 
> On Thu, Jan 12, 2012 at 10:10 AM, Tab Atkins Jr.  wrote:
> On Thu, Jan 12, 2012 at 8:59 AM, Glenn Adams  wrote:
> > On Thu, Jan 12, 2012 at 3:49 AM, Henri Sivonen  wrote:
> >> On Thu, Jan 12, 2012 at 1:12 AM, Kenneth Russell  wrote:
> >> > The StringEncoding proposal is the best path forward because it
> >> > provides correct behavior in all cases.
> >>
> >> Do you mean this one? http://wiki.whatwg.org/wiki/StringEncoding
> >>
> >> I see the following problems after a cursory glance:
> >>  4) It says "Browsers MAY support additional encodings." This is a
> >> huge non-interoperability loophole. The spec should have a small and
> >> fixed set of supported encodings that everyone MUST support and
> >> supporting other encodings should be a "MUST NOT".
> >
> >
> > In practice, it will be impractical if not impossible to enforce such a
> > dictum "MUST NOT support other encodings". Implementers will support
> > whatever they like when it comes to character encodings, both for
> > interchange, runtime storage, and persistent storage.
> 
> Actually, such requirements often work relatively well.  Many
> implementors recognize the pain caused by race-to-the-bottom support
> for random encodings.
> 
> I speak of enforcement. Will there be test cases to check for support of a 
> random encoding not included in a blessed list? I suspect not.

If it is an issue, an array of strings from a getSupportedEncodings method 
would solve that one.
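
A sketch of the kind of feature detection that would enable -- both 
getSupportedEncodings and the encoder object it hangs off are hypothetical 
here, not part of any shipped API:

```javascript
// Hypothetical encoder exposing a discoverable encoding list.
var encoder = {
  getSupportedEncodings: function () {
    return ["utf-8", "utf-16le", "binary"];
  }
};

function canEncode(encoding) {
  return encoder.getSupportedEncodings().indexOf(encoding.toLowerCase()) !== -1;
}

console.log(canEncode("UTF-8"));     // true
console.log(canEncode("shift_jis")); // false -- fall back to a JS charmap
```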

I don't see it being a particularly bad thing if vendors expose more 
translation encodings. I've only come across one project that would use them. 
Binary and utf8 handle everything else I've come across, and I can use them to 
build character maps for the rest, if I ever hit another strange project that 
needs them.

-Charles

Re: File modification

2012-01-12 Thread Charles Pritchard
You're at the mercy of many types of latency with web apps.

For the high-importance notepad case, you're talking about one file. I can poll 
every second from my web app and back off when the screen is idle, with current 
code in Chrome. Hooking into OS level file notification would give better 
results, and help me back off the polling. And if it's on NFS, and there's no 
update signal, looping through FileList would still get me my event.

NFS and thousands of files are an extreme case. But they can be managed and an 
update event would improve that management.
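
The poll-and-back-off pattern described is simple enough to sketch; the 
interval computation below is pure (the setInterval wiring and the actual 
file check are assumed, and the 1s/30s bounds are illustrative):

```javascript
// Next polling interval: 1s while the user is active, backing off
// toward a 30s ceiling once the screen has gone idle.
function nextPollInterval(currentMs, screenIdle) {
  if (!screenIdle) return 1000;          // active: poll every second
  return Math.min(currentMs * 2, 30000); // idle: exponential backoff
}

var interval = 1000;
interval = nextPollInterval(interval, true);  // 2000
interval = nextPollInterval(interval, true);  // 4000
interval = nextPollInterval(interval, false); // back to 1000
console.log(interval); // 1000
```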

-Charles



On Jan 12, 2012, at 10:20 AM, Glenn Maynard  wrote:

> That's not good enough for many use cases.  For example, a notepad app that 
> saves to disk wants to update the display if another program modifies the 
> file.  You don't want that to be delayed until you scan the directory; you 
> want the event pushed at you immediately  when it happens.  This is how I 
> imagine most use cases looking.
> 
> On Jan 12, 2012 10:16 AM, "Charles Pritchard"  wrote:
> On 1/12/2012 6:34 AM, Glenn Maynard wrote:
> 
> Side-effects of event registration are outside of the DOM event model.  UAs 
> can do whatever transparent optimizations they want, of course, but  APIs 
> shouldn't *depend* on that for efficient implementations.
> 
> Occasional polling definitely has significant overhead (directories may have 
> tens of thousands of files, be on network shares, etc), and should be widely 
> avoided.
> 
> I also wonder whether change notifications work over eg. NFS.  It would be 
> bad if this feature only sometimes worked (especially if it breaks on major 
> but less used configurations like NFS), since once deployed, apps will be 
> designed around it.
> 
> 
> On NFS and directories where directory notification is not available: send an 
> onchanged event to FileList when an underlying File object changes upon 
> access of its entry in FileList.
> 
> That is... as I'm looping through FileList grabbing files, it may invalidate 
> a File object. FileList has now "changed".
> I won't know until the next event loop. That's good enough.
> 
> 
> 
> 


Re: File modification

2012-01-12 Thread Charles Pritchard

On 1/12/2012 6:34 AM, Glenn Maynard wrote:


Side-effects of event registration are outside of the DOM event 
model.  UAs can do whatever transparent optimizations they want, of 
course, but  APIs shouldn't *depend* on that for efficient 
implementations.


Occasional polling definitely has significant overhead (directories 
may have tens of thousands of files, be on network shares, etc), and 
should be widely avoided.


I also wonder whether change notifications work over eg. NFS.  It 
would be bad if this feature only sometimes worked (especially if it 
breaks on major but less used configurations like NFS), since once 
deployed, apps will be designed around it.




On NFS and directories where directory notification is not available: 
send an onchanged event to FileList when an underlying File object 
changes upon access of its entry in FileList.


That is... as I'm looping through FileList grabbing files, it may 
invalidate a File object. FileList has now "changed".

I won't know until the next event loop. That's good enough.






