[Bug 16206] Editing spec should clarify normative and non-normative sections

2012-03-05 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16206

Aryeh Gregor a...@aryeh.name changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED

--- Comment #1 from Aryeh Gregor a...@aryeh.name 2012-03-05 18:05:08 UTC ---
Well, if you mention lawyers . . .

http://dvcs.w3.org/hg/editing/rev/1b07a32aae70

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



[Bug 16207] Editing states such as style with css flag should be reset when the document is replaced

2012-03-05 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16207

Aryeh Gregor a...@aryeh.name changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED

--- Comment #1 from Aryeh Gregor a...@aryeh.name 2012-03-05 18:06:25 UTC ---
IIUC, the HTML spec says that when you load a new document, the Document object
is replaced entirely.  Therefore, any state associated with it is replaced.  I
added a note to this effect.  I also added that it gets reset on
document.open(), which was not explicitly required anywhere before AFAICT.

http://dvcs.w3.org/hg/editing/rev/3859b3153f0c

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



RE: [IndexedDB] Multientry and duplicate elements

2012-03-05 Thread Israel Hilerio
The approach you described makes sense to us.
Thanks for clarifying.

Israel

On Saturday, March 03, 2012 5:07 PM, Jonas Sicking wrote:
 On Fri, Mar 2, 2012 at 8:49 PM, Israel Hilerio isra...@microsoft.com wrote:
  We would like some clarification on this scenario.  When you say that
  FF will result in 1 index entry for each index that implies that the
  duplicates are automatically removed.  That implies that the
  multiEntry flag doesn't take unique into consideration.  Is this correct?
 
 Not quite.
 
 In Firefox, multiEntry indexes still honor the 'unique' constraint.
 However, whenever a multiEntry index adds an Array of entries to an index, it
 first removes any duplicate values from the Array. Only after that do we start
 inserting entries into the index. But if such an insertion does cause a
 'unique' constraint violation, then we still abort with a ConstraintError.
 
 Let me show some examples:
 
 store = db.createObjectStore("store");
 index = store.createIndex("index", "a", { multiEntry: true });
 store.add({ x: 10 }, 1);           // Operation succeeds, store contains one entry
 store.add({ a: 10 }, 2);           // Operation succeeds, store contains two entries
                                    // index contains one entry: 10 -> 2
 store.add({ a: [10, 20, 20] }, 3); // Operation succeeds, store contains three entries
                                    // index contains three entries: 10 -> 2, 10 -> 3, 20 -> 3
 store.add({ a: [30, 30, 30] }, 4); // Operation succeeds, store contains four entries
                                    // index contains four entries: 10 -> 2, 10 -> 3, 20 -> 3, 30 -> 4
 store.put({ a: [20, 20] }, 3);     // Operation succeeds, store contains four entries
                                    // index contains three entries: 10 -> 2, 20 -> 3, 30 -> 4
 
 
 Similar things happen for unique indexes (assume that the transaction has an
 error handler which calls preventDefault() on all error events so that the
 transaction doesn't get aborted by the failed inserts):
 
 store = db.createObjectStore("store");
 index = store.createIndex("index", "a", { multiEntry: true, unique: true });
 store.add({ x: 10 }, 1);           // Operation succeeds, store contains one entry
 store.add({ a: 10 }, 2);           // Operation succeeds, store contains two entries
                                    // index contains one entry: 10 -> 2
 store.add({ a: [10] }, 3);         // Operation fails: the 10 key already exists in the index
 store.add({ a: [20, 20, 30] }, 4); // Operation succeeds, store contains three entries
                                    // index contains three entries: 10 -> 2, 20 -> 4, 30 -> 4
 store.add({ a: [20, 40, 40] }, 5); // Operation fails: the 20 key already exists in the index
 store.add({ a: [40, 40] }, 6);     // Operation succeeds, store contains four entries
                                    // index contains four entries: 10 -> 2, 20 -> 4, 30 -> 4, 40 -> 6
 store.put({ a: [10] }, 4);         // Operation fails: the 10 key already exists in the index
                                    // store still contains four entries
                                    // index still contains four entries: 10 -> 2, 20 -> 4, 30 -> 4, 40 -> 6
 store.put({ a: [10, 50] }, 2);     // Operation succeeds, store still contains four entries
                                    // index contains five entries: 10 -> 2, 20 -> 4, 30 -> 4, 40 -> 6, 50 -> 2
 
 
 To put it in spec terms:
 One way to fix this would be to add the following to Object Store Storage
 Operation step 7.4:
 Also remove any duplicate elements from /index key/ such that only one
 instance of the duplicate value exists in /index key/.
 
 Maybe also add a note which says:
 For example, the following value of /index key/ [10, 20, null, 30, 20] is
 converted to [10, 20, 30]
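 
The conversion in that note can be sketched directly. Note that dedupeIndexKey is an illustrative helper name, not spec terminology, and real IndexedDB key comparison is more involved than Set equality (it also covers dates, strings, and arrays):

```javascript
// Sketch of the proposed Object Store Storage Operation step 7.4 for
// multiEntry indexes: drop subelements that are not valid keys, then
// drop duplicates, keeping the first occurrence of each value.
// (dedupeIndexKey is an illustrative name, not spec terminology.)
function dedupeIndexKey(indexKey) {
  const seen = new Set();
  const result = [];
  for (const value of indexKey) {
    if (value === null || value === undefined) continue; // not a valid key
    if (!seen.has(value)) {
      seen.add(value);
      result.push(value);
    }
  }
  return result;
}

// The example from the note: [10, 20, null, 30, 20] converts to [10, 20, 30].
console.log(dedupeIndexKey([10, 20, null, 30, 20])); // → [ 10, 20, 30 ]
```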
 
 
 For what it's worth, we haven't implemented this in Firefox by preprocessing
 the array to remove duplicate entries. Instead, for non-unique indexes we keep
 a btree keyed on indexKey + primaryKey. When inserting into this btree we
 simply ignore any collisions, since they must be due to multiple identical
 entries in a multiEntry array.
 
 For unique indexes we keep a btree keyed on indexKey. If we hit a collision
 when doing an insertion, and we're inserting into a multiEntry index, we do a
 lookup to see what primaryKey the indexKey maps to. If it maps to the
 primaryKey we're currently inserting for, we know that it was due to a
 duplicate entry in the array and we just move on with no error. If it maps to
 another primaryKey, we roll back the operation and fire a ConstraintError.
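 
The unique-index strategy described above can be sketched with a Map standing in for the btree keyed on indexKey. The name insertUnique is illustrative, and unlike a real implementation this sketch does not roll back keys already inserted when it throws:

```javascript
// Sketch of unique multiEntry insertion: on collision, a mapping to the
// same primaryKey means a duplicate within one array (ignored); a
// mapping to a different primaryKey is a constraint violation.
// (insertUnique is an illustrative name, not a Firefox internal.)
function insertUnique(index, indexKey, primaryKey) {
  for (const key of indexKey) {
    if (index.has(key)) {
      if (index.get(key) === primaryKey) continue; // duplicate in same array: ignore
      throw new Error("ConstraintError"); // maps to another record: abort
    }
    index.set(key, primaryKey);
  }
}

const index = new Map();
insertUnique(index, [20, 20, 30], 4); // succeeds: the duplicate 20s collapse
insertUnique(index, [40, 40], 6);     // succeeds
// insertUnique(index, [20, 50], 5);  // would throw: 20 already maps to record 4
```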
 
 Let me know if there are still any scenarios that are unclear.
 
 / Jonas





CfC: LCWD of Cross-Origin Resource Sharing (CORS); deadline March 9

2012-03-05 Thread Arthur Barstow
All - WebAppSec has agreed to publish an LCWD of CORS. Since this spec is 
a joint deliverable with WebApps, we are now having a short CfC to 
publish this LC.


If you have any comments or concerns about this CfC, please send them to 
public-webapps@w3.org by March 9 at the latest. Positive response is 
preferred and encouraged, and silence will be assumed to be agreement 
with the proposal.


-Thanks, AB

 Original Message 
Subject: 	Re: Transition Request: Cross-Origin Resource Sharing (CORS) 
to Last Call

Resent-Date:Mon, 5 Mar 2012 20:19:03 +
Resent-From:public-webapp...@w3.org
Date:   Mon, 5 Mar 2012 15:17:57 -0500
From:   ext Arthur Barstow art.bars...@nokia.com
To: ext Thomas Roessler t...@w3.org, Hill, Brad bh...@paypal-inc.com
CC: 	cha...@w3.org, w3t-c...@w3.org Team w3t-c...@w3.org, 
public-webapp...@w3.org, Eric Rescorla e...@rtfm.com, Anne van 
Kesteren (ann...@opera.com) ann...@opera.com




Since last December's CfC was indeed sent to both WGs, I agree with
Thomas that a short CfC for WebApps would be appropriate.

Brad - how about I start a CfC on public-webapps now and end it on
March 9? If all goes well, that would enable an LC publication on
March 13. Can you live with that?

-Art

On 3/5/12 2:56 PM, ext Thomas Roessler wrote:

 hi Brad, thanks.

 Note that a Last Call isn't actually a transition, but instead a
 decision made by the WG that is announced to a number of lists.  Given
 that this is a joint deliverable with webapps, you'll want to make
 sure that the web applications WG concurs with the last call.

 Art?

 Thanks,
 --
 Thomas Roessler, W3C  t...@w3.org  (@roessler  https://twitter.com/roessler)







 On 2012-03-05, at 20:47 +0100, Hill, Brad wrote:


 Thomas,
 On behalf of the Web Application Security WG, we request that the
 Cross-Origin Resource Sharing specification transition to Last Call
 in the following location:
 http://www.w3.org/TR/2012/LC-cors-20120308/
 This can be published effective Thursday, March 8.
 The WG has documented its agreement to advance this specification by
 issuing a Call for Consensus on December 19, 2011, responding to
 objections raised, and resolving to proceed during our call on
 February 28th.
 Thank you,

 Brad Hill
 Co-chair, WebAppSec WG







Re: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Arthur Barstow
Feras - this seems kinda' late, especially since the two-week pre-LC 
comment period for File API ended Feb 24.


Is this a feature that can be postponed to v.next?

On 3/2/12 7:54 PM, ext Feras Moussa wrote:


At TPAC we discussed the ability to deterministically close blobs with a few
others.

As we've discussed in the createObjectURL thread[1], a Blob may represent
an expensive resource (eg. expensive in terms of memory, battery, or disk
space). At present there is no way for an application to deterministically
release the resource backing the Blob. Instead, an application must rely on
the resource being cleaned up through a non-deterministic garbage collector
once all references have been released. We have found that not having a way
to deterministically release the resource causes a performance impact for a
certain class of applications, and is especially important for mobile
applications or devices with more limited resources.

In particular, we've seen this become a problem for media intensive
applications which interact with a large number of expensive blobs. For
example, a gallery application may want to cycle through displaying many
large images downloaded through websockets, and without a deterministic way
to immediately release the reference to each image Blob, can easily begin to
consume vast amounts of resources before the garbage collector is executed.

To address this issue, we propose that a close method be added to the Blob
interface. When called, the close method should release the underlying
resource of the Blob, and future operations on the Blob will return a new
error, a ClosedError. This allows an application to signal when it's
finished using the Blob.

To support this change, the following changes in the File API spec are
needed:

* In section 6 (The Blob Interface)
- Addition of a close method. When called, the close method releases the
underlying resource of the Blob. Close renders the blob invalid, and further
operations such as URL.createObjectURL or the FileReader read methods on the
closed blob will fail and return a ClosedError. If there are any non-revoked
URLs to the Blob, these URLs will continue to resolve until they have been
revoked.
- For the slice method, state that the returned Blob is a new Blob with its
own lifetime semantics - calling close on the new Blob is independent of
calling close on the original Blob.

* In section 8 (The FileReader Interface)
- State that the FileReader reads directly over the given Blob, and not a
copy with an independent lifetime.

* In section 10 (Errors and Exceptions)
- Addition of a ClosedError. If the File or Blob has had the close method
called, then for asynchronous read methods the error attribute MUST return a
"ClosedError" DOMError and synchronous read methods MUST throw a ClosedError
exception.

* In section 11.8 (Creating and Revoking a Blob URI)
- For createObjectURL - If this method is called with a closed Blob
argument, then user agents must throw a ClosedError exception.

Similarly to how slice() clones the initial Blob to return one with its own
independent lifetime, the same notion will be needed in other APIs which
conceptually clone the data - namely FormData, any place the Structured
Clone Algorithm is used, and BlobBuilder.

Similarly to how FileReader must act directly on the Blob's data, the same
notion will be needed in other APIs which must act on the data - namely
XHR.send and WebSocket. These APIs will need to throw an error if called on
a Blob that was closed and the resources are released.

We've recently implemented this in experimental builds and have seen
measurable performance improvements.

The feedback we heard from our discussions with others at TPAC regarding our
proposal to add a close() method to the Blob interface was that objects in
the web platform potentially backed by expensive resources should have a
deterministic way to be released.

Thanks,

Feras

[1] 
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1499.html
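
A minimal sketch of the proposed lifetime rules, using a stand-in class (FakeBlob and the local ClosedError class here are illustrative, not the platform Blob or a real DOMError):

```javascript
// Simulation of the proposed close() semantics.
class ClosedError extends Error {}

class FakeBlob {
  constructor(data) { this.data = data; this.closed = false; }
  close() { this.closed = true; this.data = null; } // release the backing resource
  slice() {
    // Per the proposal, slice returns a new Blob with its own lifetime.
    if (this.closed) throw new ClosedError("blob is closed");
    return new FakeBlob(this.data);
  }
  read() {
    // Stand-in for FileReader reads / createObjectURL on this blob.
    if (this.closed) throw new ClosedError("blob is closed");
    return this.data;
  }
}

const blob = new FakeBlob("big image bytes");
const copy = blob.slice(); // independent lifetime
blob.close();
// copy is unaffected; any operation on the closed blob fails with ClosedError.
console.log(copy.read()); // → big image bytes
```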






Re: FileReader abort, again

2012-03-05 Thread Eric U
On Thu, Mar 1, 2012 at 11:20 AM, Arun Ranganathan
aranganat...@mozilla.com wrote:
 Eric,

   So we could:
   1. Say not to fire a loadend if onloadend or onabort

 Do you mean if onload, onerror, or onabort...?


 No, actually.  I'm looking for the right sequence of steps that results in 
 abort's loadend not firing if terminated by another read*.  Since abort will 
 fire an abort event and a loadend event as spec'd 
 (http://dev.w3.org/2006/webapi/FileAPI/#dfn-abort), if *those* event handlers 
 initiate a readAs*, we could then suppress abort's loadend.  This seems messy.

Ah, right--so a new read initiated from onload or onerror would NOT
suppress the loadend of the first read.  And I believe that this
matches XHR2, so we're good.  Nevermind.



 Actually, if we really want to match XHR2, we should qualify all the
 places that we fire loadend.  If the user calls XHR2's open in
 onerror
 or onload, that cancels its loadend.  However, a simple check on
 readyState at step 6 won't do it.  Because the user could call
 readAsText in onerror, then call abort in the second read's
 onloadstart, and we'd see readyState as DONE and fire loadend twice.

 To emulate XHR2 entirely, we'd need to have read methods dequeue any
 leftover tasks for previous read methods AND terminate the abort
 algorithm AND terminate the error algorithm of any previous read
 method.  What a mess.


 This may be the way to do it.

 The problem with emulating XHR2 is that open() and send() are distinct 
 concepts in XHR2, but in FileAPI, they are the same.  So in XHR2 an open() 
 canceling abort does make sense; abort() cancels a send(), and thus an open() 
 should cancel an abort().  But in FileAPI, our readAs* methods are equivalent 
 to *both* open() and send().  In FileAPI, an abort() cancels a readAs*; we 
 now have a scenario where a readAs* may cancel an abort().  How to make that 
 clear?

I'm not sure why it's any more confusing that read* is open+send.
read* can cancel abort, and abort can cancel read*.  OK.


 Perhaps there's a simpler way to say "successfully calling a read
 method inhibits any previous read's loadend"?

 I'm in favor of any shorthand :)  But this may not do justice to each readAs* 
 algorithm being better defined.

Hack 1: Don't call loadend synchronously.  Enqueue it, and let read*
methods clear the queues when they start up.  This differs from XHR,
though, and is a little odd.
Hack 2: Add a virtual generation counter/timestamp, not exposed to
script.  Increment it in read*, check it in abort before sending
loadend.  This is kind of complex, but works [and might be how I end
up implementing this in Chrome].

But really, I don't think either of those is better than just saying,
in read*, something like "terminate the algorithm for any abort
sequence being processed".
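
Hack 2's generation counter can be sketched like this; FakeReader and its method names are illustrative, and event delivery is synchronous here for brevity:

```javascript
// Sketch of Hack 2: an internal generation counter, not exposed to
// script, lets a new read* call suppress the loadend queued by a
// pending abort. (FakeReader is illustrative, not the FileReader API.)
class FakeReader {
  constructor() { this.generation = 0; this.events = []; }
  readAsText() {
    this.generation++; // invalidates any pending abort's loadend
    this.events.push("loadstart");
  }
  abort() {
    const gen = this.generation; // remember which read we are aborting
    this.events.push("abort");
    // loadend fires later, only if no read* started in between.
    this.fireLoadEnd = () => {
      if (this.generation === gen) this.events.push("loadend");
    };
  }
}

const r = new FakeReader();
r.readAsText();
r.abort();
r.readAsText();  // a second read starts before abort's loadend is delivered
r.fireLoadEnd(); // suppressed: the generation has changed
console.log(r.events); // → [ 'loadstart', 'abort', 'loadstart' ]
```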

Eric



[webcomponents] Progress Update

2012-03-05 Thread Dimitri Glazkov
Hello, public-webapps!

There's a lot of work happening in Web Components land, and for those
not following closely, here is a summary. I hope to start sending this
out regularly.

As already mentioned, there's
https://plus.google.com/b/103330502635338602217/ where I post more
granular updates. Other ways to follow the bug traffic are to
subscribe to Mercurial RSS feed
(http://dvcs.w3.org/hg/webcomponents/rss-log), or follow the meta bugs
for each section.

SHADOW DOM (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14978)

* The spec is an Editor's Draft, and I've been concentrating on
tightening it up based on implementer feedback. Here are some
interesting changes:
   - every HTML element has an implied shadow DOM subtree:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=15300
   - selection property added to ShadowRoot:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16097
   - activeElement property added to ShadowRoot:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=15969
   - mutation events are now disallowed in shadow DOM subtrees:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16096
   - a few encapsulation tweaks:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=15968,
https://www.w3.org/Bugs/Public/show_bug.cgi?id=15970
   - insertion points behave as HTMLUnknownElement outside of the
shadow DOM subtrees:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16011

* There's an active implementation effort in WebKit, with the
experimental support for Shadow DOM currently in Chrome Canary:
https://plus.google.com/b/103330502635338602217/103330502635338602217/posts/NpjkdUjHtWe

* Dominic Cooney is actively working on a test suite for the spec.
Nothing to show yet, but it's coming.

HTML TEMPLATES (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=15476):

* The first draft hasn't been finished yet. My intent is to have
something readable in a couple of weeks.

* Following discussion on parsing
(http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/thread.html#msg595),
I studied the WebKit parser to determine the extent of changes. You can
see the patch here: https://bugs.webkit.org/show_bug.cgi?id=78734

CODE SAMPLES (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14956):

* A few recipes/examples have been written, aiming to explain possible
applications of the spec:
http://dvcs.w3.org/hg/webcomponents/raw-file/tip/samples/index.html,
https://github.com/dglazkov/Tabs

* There's a Web Components Polyfill, which relies on the experimental
Shadow DOM implementation. It allows trying out the feel of the
declarative syntax and APIs:
https://github.com/dglazkov/Web-Components-Polyfill. The intent is to
eventually provide limited emulation of shadow DOM, as well. Han Dijk
has been doing some work on that front.

:DG



RE: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Feras Moussa
The feedback is implementation feedback that we have refined in the past few
weeks as we've updated our implementation.
We're happy for it to be treated as an LC comment, but we'd also give this
feedback in CR, since in recent weeks we've found it to be a problem in apps
which make extensive use of the APIs.

 -Original Message-
 From: Arthur Barstow [mailto:art.bars...@nokia.com]
 Sent: Monday, March 05, 2012 12:52 PM
 To: Feras Moussa; Arun Ranganathan; Jonas Sicking
 Cc: public-webapps@w3.org; Adrian Bateman
 Subject: Re: [FileAPI] Deterministic release of Blob proposal
 
 Feras - this seems kinda' late, especially since the two-week pre-LC comment
 period for File API ended Feb 24.
 
 Is this a feature that can be postponed to v.next?
 
 On 3/2/12 7:54 PM, ext Feras Moussa wrote:
 
   [original proposal quoted in full; snipped]
 





Re: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Glenn Maynard
On Fri, Mar 2, 2012 at 6:54 PM, Feras Moussa fer...@microsoft.com wrote:

 To address this issue, we propose that a close method be added to the Blob
 interface.
 When called, the close method should release the underlying resource of the
 Blob, and future operations on the Blob will return a new error, a
 ClosedError.
 This allows an application to signal when it's finished using the Blob.

This is exactly like the neuter concept, defined at
http://dev.w3.org/html5/spec/common-dom-interfaces.html#transferable-objects.
I recommend using it.  Make Blob a Transferable, and have close() neuter
the object.  The rest of this wouldn't change much, except you'd say if
the object has been neutered (or has the neutered flag set, or however
it's defined) instead of if the close method has been called.

Originally, I think it was assumed that Blobs don't need to be
Transferable, because they're immutable, which means you don't
(necessarily) need to make a copy when transferring them between threads.
That was only considering the cost of copying the Blob, though, not the
costs of delayed GC that you're talking about here, so I think transferable
Blobs do make sense.

Also, the close() method should probably go on Transferable (with a name
less likely to clash, eg. neuter), instead of as a one-off on Blob.  If
it's useful for Blob, it's probably useful for ArrayBuffer and all other
future Transferables as well.

-- 
Glenn Maynard


Re: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Charles Pritchard

On 3/5/2012 3:59 PM, Glenn Maynard wrote:
On Fri, Mar 2, 2012 at 6:54 PM, Feras Moussa fer...@microsoft.com wrote:


[quoted text snipped]




Glenn,

Do you see old behavior working something like the following?

var blob = new Blob("my new big blob");
var keepBlob = blob.slice();
destination.postMessage(blob, '*', [blob]); // is try/catch needed here?
blob = keepBlob; // keeping a copy of my blob still in thread.

Sorry to cover too many angles: if Blob is Transferable, then it'll 
neuter; so if we do want a local copy, we'd use slice ahead of time to 
keep it.
And we might have an error on postMessage stashing it in the transfer 
array if it's not a Transferable on an older browser.

The new behavior is pretty easy.
var blob = new Blob("my big blob");
blob.close(); // My blob has been neutered before it could procreate.

-Charles


Re: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Glenn Maynard
On Mon, Mar 5, 2012 at 7:04 PM, Charles Pritchard ch...@jumis.com wrote:

  Do you see old behavior working something like the following?


  var blob = new Blob("my new big blob");
 var keepBlob = blob.slice();
 destination.postMessage(blob, '*', [blob]); // is try/catch needed here?
 blob = keepBlob; // keeping a copy of my blob still in thread.

Sorry to cover too many angles: if Blob is Transferable, then it'll neuter;
 so if we do want a local copy, we'd use slice ahead of time to keep it.


You don't need to do that.  If you don't want postMessage to transfer the
blob, then simply don't include it in the transfer parameter, and it'll
perform a normal structured clone.  postMessage behaves this way in part
for backwards-compatibility: so exactly in cases like this, we can make
Blob implement Transferable without breaking existing code.

See http://dev.w3.org/html5/postmsg/#posting-messages and similar
postMessage APIs.
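
The clone-vs-transfer distinction can be simulated with plain objects; fakePostMessage is an illustrative stand-in, and the JSON round-trip is a crude substitute for the real structured clone algorithm:

```javascript
// Simulation of the transfer-list rule: an object is neutered only
// when it appears in the transfer array; otherwise it is cloned and
// the sender's copy stays usable.
function fakePostMessage(message, transfer = []) {
  for (const obj of transfer) obj.neutered = true; // transfer neuters the source
  return JSON.parse(JSON.stringify(message));      // stand-in for structured clone
}

const blob = { bytes: "big blob", neutered: false };

// Cloned, not transferred: the sender's copy is untouched.
fakePostMessage(blob);
console.log(blob.neutered); // → false

// Transferred: the sender's copy is neutered.
fakePostMessage(blob, [blob]);
console.log(blob.neutered); // → true
```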


 And we might have an error on postMessage stashing it in the transfer
 array if it's not a Transferable on an older browser.


It'll throw TypeError, which you'll need to handle if you need to support
older browsers.

The new behavior is pretty easy.
  var blob = new Blob("my big blob");
 blob.close(); // My blob has been neutered before it could procreate.


Sorry, I'm not really sure what you're trying to say.  This still works
when using the neutered concept; it just uses an existing mechanism, and
allows transfers.

-- 
Glenn Maynard


[Bug 16233] New: Wrong description for lowerBound() and upperBound() methods

2012-03-05 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16233

   Summary: Wrong description for lowerBound() and upperBound()
methods
   Product: WebAppsWG
   Version: unspecified
  Platform: PC
OS/Version: Windows NT
Status: NEW
  Severity: normal
  Priority: P2
 Component: Indexed Database API
AssignedTo: dave.n...@w3.org
ReportedBy: isra...@microsoft.com
 QAContact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org


The spec currently says:

* lowerBound - Creates and returns a new key range with lower set to lower, 
lowerOpen set to open, upper set to undefined and and upperOpen set to true.
* upperBound - Creates and returns a new key range with lower set to undefined,
lowerOpen set to true, upper set to value and and upperOpen set to open.

It should say the following in order to match the interface definition (change
lower to bound and value to bound, respectively):

* lowerBound - Creates and returns a new key range with lower set to bound,
lowerOpen set to open, upper set to undefined and and upperOpen set to true.
* upperBound - Creates and returns a new key range with lower set to undefined,
lowerOpen set to true, upper set to bound and and upperOpen set to open.
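
Under the corrected wording, both factory methods take a `bound` argument; this sketch uses plain objects standing in for IDBKeyRange instances:

```javascript
// Sketch of the corrected descriptions, matching the interface
// definition: the first parameter of each factory is named `bound`.
function lowerBound(bound, open = false) {
  // lower set to bound, lowerOpen set to open,
  // upper set to undefined, upperOpen set to true
  return { lower: bound, lowerOpen: open, upper: undefined, upperOpen: true };
}

function upperBound(bound, open = false) {
  // lower set to undefined, lowerOpen set to true,
  // upper set to bound, upperOpen set to open
  return { lower: undefined, lowerOpen: true, upper: bound, upperOpen: open };
}

const atLeastTen = lowerBound(10);       // keys >= 10
const belowTwenty = upperBound(20, true); // keys < 20 (open upper bound)
```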

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



Re: [IndexedDB] Multientry and duplicate elements

2012-03-05 Thread Jonas Sicking
Awesome, I've updated the spec to hopefully be clear on this.

/ Jonas

On Mon, Mar 5, 2012 at 12:09 PM, Israel Hilerio isra...@microsoft.com wrote:
 I was originally referring to the second scenario.  However, I agree with you 
 that we shouldn't support this scenario.  I just wanted to confirm this.
 Thanks,

 Israel

 On Saturday, March 03, 2012 6:24 PM, Jonas Sicking wrote:
 On Fri, Mar 2, 2012 at 8:49 PM, Israel Hilerio isra...@microsoft.com wrote:
  There seems to be some cases where it might be useful to be able to
  get a count of all the duplicates contained in a multiEntry index.  Do
  you guys see this as an important scenario?

 Not exactly sure what you mean here.

 Do you mean duplicates in the form of the same value existing for multiple
 entries, i.e. (assuming there's an index on the 'a' property)

 store.add({ a: [10, 20], ... }, 1);
 store.add({ a: [10, 30], ... }, 2);

 here there's a duplicate of the value 10. I.e. here you can count the 
 duplicates
 by doing:

 index.count(10).onsuccess = ...;

 This will give 2 as result.

 Or do you mean duplicates in the form of the same value existing for the same
 entry, i.e.:

 store.add({ a: [30, 30] }, 3);

 Currently in firefox this won't produce a duplicate entry in the index. I.e.

 index.count(30).onsuccess = ...;

 will give 1 as result.

 It seems to me that it would introduce a lot of complexities if we were to 
 insert
 two rows here to allow tracking duplicates. First of all there would be no 
 way
 to tell the two entries apart using the API that we have now which seems 
 like it
 could be problematic for pages.
 Second, cursor's iteration is currently defined in terms of going to the next
 entry which has a higher key than the one the cursor is currently located 
 at. So
 if two entries have the exact same key, the cursor would still skip over the
 second one of them. In other words, we would have to redefine cursor
 iteration.

 This is all certainly doable, but it seems non-trivial.

 It would also complicate the datamodel in the implementation since the index
 would no longer be simply a btree of indexKey + primaryKey. An additional key
 would need to be added in order to tell duplicate entries apart.

 / Jonas





Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Charles Pritchard

On 3/5/2012 5:56 PM, Glenn Maynard wrote:
On Mon, Mar 5, 2012 at 7:04 PM, Charles Pritchard ch...@jumis.com wrote:

Do you see old behavior working something like the following?

var blob = new Blob("my new big blob");
var keepBlob = blob.slice();
destination.postMessage(blob, '*', [blob]); // is try/catch needed here?


You don't need to do that.  If you don't want postMessage to transfer 
the blob, then simply don't include it in the transfer parameter, and 
it'll perform a normal structured clone.  postMessage behaves this way 
in part for backwards-compatibility: so exactly in cases like this, we 
can make Blob implement Transferable without breaking existing code.

See http://dev.w3.org/html5/postmsg/#posting-messages and similar 
postMessage APIs.


Web Intents won't have a transfer map argument.
http://dvcs.w3.org/hg/web-intents/raw-file/tip/spec/Overview.html#widl-Intent-data

For the Web Intents structured cloning algorithm, Web Intents would be 
inserting into step 3:

If input is a Transferable object, add it to the transfer map.
http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#internal-structured-cloning-algorithm

Then Web Intents would move the first section of the structured cloning 
algorithm to follow the internal cloning algorithm section, swapping 
their order.

http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#safe-passing-of-structured-data

That's my understanding.

Something like this may be necessary if Blob were a Transferable:
var keepBlob = blob.slice();
var intent = new Intent("-x-my-intent", blob);
navigator.startActivity(intent, callback);


And we might have an error on postMessage stashing it in the
transfer array if it's not a Transferable on an older browser.





Example of how easy the neutered concept applies to Transferrable:

var blob = new Blob("my big blob");
blob.close();


I like the idea of having Blob implement Transferable and adding close 
to the Transferable interface.
File.close could have a better relationship with the cache and/or locks 
on data.



Some history on Transferable and structured clones:

Note: MessagePort does have a close method and is currently the only 
Transferable mentioned in WHATWG:

http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#transferable-objects

ArrayBuffer is widely implemented. It was the second item to implement 
Transferable:

http://www.khronos.org/registry/typedarray/specs/latest/#9

Subsequently, ImageData adopted Uint8ClampedArray for one of its 
properties, adopting TypedArrays:

http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#imagedata

This has led to some instability in the structured clone algorithm for 
ImageData, as the typed array object for ImageData is read-only.

https://www.w3.org/Bugs/Public/show_bug.cgi?id=13800

ArrayBuffer is still in a strawman state.

-Charles




Re: [IndexedDB] Multientry and duplicate elements

2012-03-05 Thread Odin Hørthe Omdal

On Tue, 06 Mar 2012 03:44:57 +0100, Jonas Sicking jo...@sicking.cc wrote:

Awesome, I've updated the spec to hopefully be clear on this.


I've been following along and silently +1'ing the outcomes of the  
different emails, just without the added email noise which wasn't really  
required from my side as I had nothing to add. :-)


--
Odin Hørthe Omdal · Core QA, Opera Software · http://opera.com /