[IndexDB] Collation Algorithm?

2010-06-09 Thread Mikeal Rogers
One of the things I noticed that seems to be missing from the IndexedDB
specification is the collation algorithm used for sorting the index
keys.

There are lots of collation differences between databases; if left
unspecified, I'm afraid this would negatively affect interoperability
between IndexedDB implementations.

CouchDB has a good collation specification for rich keys (any JSON
type) and defers to the Unicode Collation Algorithm once it hits
string comparisons. This might be a good starting point.

http://wiki.apache.org/couchdb/View_collation#Collation_Specification

http://www.unicode.org/reports/tr10/
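
For illustration, here's a rough sketch (mine, not normative) of what
CouchDB-style collation looks like in JavaScript. Type order comes first
(null < booleans < numbers < strings < arrays < objects, per the wiki
page above); localeCompare stands in for a real UCA comparison, and
object comparison is omitted:

function typeRank(v) {
  // CouchDB compares by type first, then within the type
  if (v === null) return 0;
  if (typeof v === "boolean") return 1;
  if (typeof v === "number") return 2;
  if (typeof v === "string") return 3;
  if (Object.prototype.toString.call(v) === "[object Array]") return 4;
  return 5; // objects
}

function collate(a, b) {
  var ra = typeRank(a), rb = typeRank(b);
  if (ra !== rb) return ra - rb;       // different types: type order wins
  switch (ra) {
    case 1: return (a === b) ? 0 : (a ? 1 : -1); // false < true
    case 2: return a - b;              // numeric order
    case 3: return a.localeCompare(b); // a real UCA comparison in CouchDB
    case 4:                            // arrays compare element by element
      for (var i = 0; i < Math.min(a.length, b.length); i++) {
        var c = collate(a[i], b[i]);
        if (c !== 0) return c;
      }
      return a.length - b.length;      // shorter array sorts first
    default: return 0;                 // nulls equal; objects omitted here
  }
}

Whatever the group picks, pinning down something this precise seems
necessary for indexes to sort the same way in every implementation.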

-Mikeal




Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Mikeal Rogers
I've been looking through the current spec and all the proposed changes.

Great work. I'm going to be building a CouchDB-compatible API on top
of IndexedDB that can support peer-to-peer replication without
requiring other CouchDB instances.

One of the things that will entail is a by-sequence index for all the
changes in a given "database" (in my case a database will be scoped to
more than one ObjectStore). In order to accomplish this I'll need to
keep the last known sequence around so that each new write can create
a new entry in the by-sequence index. The problem is that if another
tab/window writes to the database, it will increment that sequence
without notifying me, so I would have to start every transaction with
a check on the sequence index for the last sequence, which seems like
a lot of extra cursor calls.
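
(To make the workaround concrete, here's a rough sketch against the
proposal-style async API used elsewhere in this thread; "bySequence" is
my hypothetical store keyed on sequence number, and I'm assuming the
draft's optional direction argument to openCursor:)

var lastSeq = 0;
// Walk the by-sequence store backwards so the first row is the newest.
db.objectStore("bySequence").openCursor(null, IDBCursor.PREV).onsuccess =
  function(e) {
    var cursor = e.result;
    if (cursor) lastSeq = cursor.key;
    // Writes then record seq = ++lastSeq, but another tab/window may
    // already have moved the sequence forward underneath me.
  };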

What I really need is an event listener on an ObjectStore that fires
after a transaction is committed to the store, but before the next
transaction is run, and that gives me information about the commits to
the ObjectStore.
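
Something along these lines, say (names made up entirely, just to show
the shape of what I'm after):

db.objectStore("documents").oncommit = function(e) {
  // hypothetical: e.writes lists the keys written by the committed
  // transaction, letting me maintain my by-sequence index cheaply
  updateBySequenceIndex(e.writes);
};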

Thoughts?

-Mikeal

On Wed, Jun 9, 2010 at 11:40 AM, Jeremy Orlow  wrote:
> On Wed, Jun 9, 2010 at 7:25 PM, Jonas Sicking  wrote:
>>
>> On Wed, Jun 9, 2010 at 7:42 AM, Jeremy Orlow  wrote:
>> > On Tue, May 18, 2010 at 8:34 PM, Jonas Sicking  wrote:
>> >>
>> >> On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow 
>> >> wrote:
>> >> > I'm not sure I like the idea of offering sync cursors either since
>> >> > the
>> >> > UA
>> >> > will either need to load everything into memory before starting or
>> >> > risk
>> >> > blocking on disk IO for large data sets.  Thus I'm not sure I support
>> >> > the
>> >> > idea of synchronous cursors.  But, at the same time, I'm concerned
>> >> > about
>> >> > the
>> >> > overhead of firing one event per value with async cursors.  Which is
>> >> > why I
>> >> > was suggesting an interface where the common case (the data is in
>> >> > memory) is
>> >> > done synchronously but the uncommon case (we'd block if we had to
>> >> > respond
>> >> > synchronously) has to be handled since we guarantee that the first
>> >> > time
>> >> > will
>> >> > be forced to be asynchronous.
>> >> > Like I said, I'm not super happy with what I proposed, but I think
>> >> > some
>> >> > hybrid async/sync interface is really what we need.  Have you guys
>> >> > spent
>> >> > any
>> >> > time thinking about something like this?  How dead-set are you on
>> >> > synchronous cursors?
>> >>
>> >> The idea is that synchronous cursors load all the required data into
>> >> memory, yes. I think it would help authors a lot to be able to load
>> >> small chunks of data into memory and read and write to it
>> >> synchronously. Dealing with asynchronous operations constantly is
>> >> certainly possible, but a bit of a pain for authors.
>> >>
>> >> I don't think we should obsess too much about not keeping things in
>> >> memory, we already have things like canvas and the DOM which add up
>> >> to non-trivial amounts of memory.
>> >>
>> >> Just because data is loaded from a database doesn't mean it's huge.
>> >>
>> >> I do note that you're not as concerned about getAll(), which actually
>> >> has worse memory characteristics than synchronous cursors since you
>> >> need to create the full JS object graph in memory.
>> >
>> > I've been thinking about this off and on since the original proposal was
>> > made, and I just don't feel right about getAll() or synchronous cursors.
>> >  You make some good points about there already being many ways to
>> > overwhelm
>> > RAM with webAPIs, but is there any place we make it so easy?  You're
>> > right
>> > that just because it's a database doesn't mean it needs to be huge, but
>> > oftentimes they can get quite big.  And if a developer doesn't spend
>> > time
>> > making sure they test their app with the upper ends of what users may
>> > possibly see, it just seems like this is a recipe for problems.
>> > Here's a concrete example: structured clone allows you to store image
>> > data.
>> >  Let's say I'm building an image hosting site and that I cache all the
>> > images
>> > along with their thumbnails locally in an IndexedDB entity store.  Let's
>> > say
>> > each thumbnail is a trivial amount, but each image is 1MB.  I have an
>> > album
>> > with 1000 images.  I do |var photos =
>> > albumIndex.getAllObjects(albumName);|
>> > and then iterate over that to get the thumbnails.  But I've just loaded
>> > over
>> > 1GB of stuff into RAM (assuming no additional inefficiency/blowup).  I
>> > suppose it's possible JavaScript engines could build mechanisms to fetch
>> > this stuff lazily (like you could even with a synchronous cursor) but
>> > that
>> > will take time/effort and introduce lag in the page (while fetching
>> > additional info from disk).
>> >
>> > I'm not completely against the idea of getAll/sync cursors, but I do
>> > think
>> > they should be de-coupled from this proposed API.  I would also suggest
>> > that
>> > we re-consider them only after at least one implementation has normal
>> > cursors working and there's been some experimentation with it.  Until
>> > then, we're basing most of our arguments on intuition and assumptions.

Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Shawn Wilsher

On 6/9/2010 3:36 PM, Tab Atkins Jr. wrote:

> At the very least, explicitly loading things into an honest-to-god
> array can make it more obvious that you're eating memory in the form
> of a big array, as opposed to just a "magically transform my blob of
> data into something more convenient".
I'm sorry, but if a developer can't figure out that the big array they
were given (a proper Array in JavaScript) is the cause of large amounts
of memory usage, I don't see how populating it themselves is going to
raise any additional flags.


Cheers,

Shawn





Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Jonas Sicking
On Wed, Jun 9, 2010 at 3:36 PM, Tab Atkins Jr.  wrote:
> On Wed, Jun 9, 2010 at 3:27 PM, Jonas Sicking  wrote:
>> I'm well aware of this. My argument is that I think we'll see people
>> write code like this:
>>
>> results = [];
>> db.objectStore("foo").openCursor(range).onsuccess = function(e) {
>>  var cursor = e.result;
>>  if (!cursor) {
>>    weAreDone(results);
>>    return; // without this we'd fall through and hit a null cursor
>>  }
>>  results.push(cursor.value);
>>  cursor.continue();
>> }
>>
>> While the indexedDB implementation doesn't hold much data in memory at
>> a time, the webpage will hold just as much as if we had had a getAll
>> function. Thus we haven't actually improved anything, only forced the
>> author to write more code.
>>
>>
>> Put it another way: The raised concern is that people won't think
>> about the fact that getAll can load a lot of data into memory. And the
>> proposed solution is to remove the getAll function and tell people to
>> use openCursor. However if they weren't thinking about the fact that
>> a lot of data will be in memory at one time, then why wouldn't they
>> write code like the above? Which results in just as much data being
>> in memory?
>
> At the very least, explicitly loading things into an honest-to-god
> array can make it more obvious that you're eating memory in the form
> of a big array, as opposed to just a "magically transform my blob of
> data into something more convenient".

I don't fully understand this. getAll also returns an honest-to-god array.

> (That said, I dislike cursors and explicitly avoid them in my own
> code.  In the PHP db abstraction layer I wrote for myself, every query
> slurps the results into an array and just returns that - I don't give
> myself any access to the cursor at all.  I probably like this better
> simply because I can easily foreach through an array, while I can't do
> the same with a cursor unless I write some moderately more complex
> code.  I hate using while loops when foreach is beckoning to me.)

This is what I'd expect many/most people to do.

/ Jonas



Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Shawn Wilsher

On 6/9/2010 3:48 PM, Kris Zyp wrote:

> Another option would be to have cursors essentially implement a JS
> array-like API:
>
> db.objectStore("foo").openCursor(range).forEach(function(object){
>    // do something with each object
> }).onsuccess = function(){
>   // all done
> };
>
> (Or perhaps the cursor with a forEach would be nested inside a
> callback, not sure).
>
> The standard "some" function is also useful if you know you probably
> won't need to iterate through everything:
>
> db.objectStore("foo").openCursor(range).some(function(object){
>    return object.name == "John";
> }).onsuccess = function(johnIsInDatabase){
>   if(johnIsInDatabase){
>     ...
>   }
> };
>
> This allows us to have an async interface (the callbacks can be called
> at any time) and still follows normal JS array patterns, for
> programmer convenience (so programmers wouldn't need to iterate over a
> cursor and push the results into another array). I don't think anyone
> would miss getAll() with this design, since cursors would already be
> array-like.
To me, this feels like we are basically doing what we expect a library 
to do: make the syntactic sugar work.  I don't see why a library 
couldn't provide a some or forEach method with the currently proposed API.
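
For instance, a handful of lines on top of the proposed cursor API gets
you forEach (a rough sketch; the helper names are mine):

function forEach(store, range, callback, done) {
  // drive the async cursor, handing each value to the callback
  store.openCursor(range).onsuccess = function(e) {
    var cursor = e.result;
    if (!cursor) { if (done) done(); return; }
    callback(cursor.value);
    cursor.continue();
  };
}

forEach(db.objectStore("foo"), range, function(object) {
  // do something with each object
}, function() {
  // all done
});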


Cheers,

Shawn





IETF BoF @IETF-78 Maastricht: HASMAT - HTTP Application Security Minus Authentication and Transport

2010-06-09 Thread =JeffH

Hi,

We will be hosting the "HTTP Application Security Minus Authentication and
Transport (HASMAT)" Birds-of-a-Feather (BoF) session at IETF-78 in Maastricht
NL during the week of July 25-30, 2010  (see [0] for mailing list).

The purpose of IETF BoFs is to determine whether there is a problem worth
solving, and whether the IETF is the right group to solve it. To that end, the
problem statement is summarized below in the Draft HASMAT Working Group
Charter, and is drawn from this paper [1].

Various facets of this work are already underway, as outlined below in the
draft WG charter, e.g. Strict Transport Security (STS) [2].

Of course the scope of "HTTP application security" is quite broad (as outlined
in [1]); thus the intent is to coordinate this work closely with related work
likely to land in the W3C (and possibly other orgs), e.g. Content Security
Policy (CSP) [3].

We have created a public mailing list [0] for pre-BoF discussion --
has...@ietf.org -- to which you can freely subscribe here:

  https://www.ietf.org/mailman/listinfo/hasmat
We encourage all interested parties to join the hasmat@ mailing list and engage
in the on-going discussion there.

thanks,

=JeffH (current IETF HTTPstate WG chair)
Peter Saint-Andre  (IETF Applications Area Director)
Hannes Tschofenig  (IAB, IETF WG chair)


[0] HASMAT mailing list.
https://www.ietf.org/mailman/listinfo/hasmat

[1] Hodges and Steingruebl, "The Need for a Coherent Web Security Policy
Framework", W2SP position paper, 2010.
http://w2spconf.com/2010/papers/p11.pdf

[2] Hodges, Jackson, and Barth, "Strict Transport Security (STS)",
revision -06.
http://lists.w3.org/Archives/Public/www-archive/2009Dec/att-0048/draft-hodges-strict-transport-sec-06.plain.html 



see also: http://en.wikipedia.org/wiki/Strict_Transport_Security


[3] Sterne and Stamm, "Content Security Policy (CSP)".
https://wiki.mozilla.org/Security/CSP/Specification
see also: http://people.mozilla.org/~bsterne/content-security-policy/
  https://wiki.mozilla.org/Security/CSP/Design_Considerations


###

Proposed HASMAT BoF agenda
--

Chairs: Hannes Tschofenig and Jeff Hodges

5 min   Agenda bashing (Chairs)

10 min  Description of the problem space (TBD)

20 min  Motivation for standardizing (TBD)
draft-abarth-mime-sniff
draft-abarth-origin
draft-hodges-stricttransportsec (to-be-submitted)

15 min  Presentation of charter text (TBD)

60 min  Discussion of charter text and choice of the initial
specifications (All)

10 min  Conclusion (Chairs/ADs)



###

Draft Charter for HASMAT:

   HTTP Application Security Minus Authentication and Transport WG


Problem Statement

Although modern Web applications are built on top of HTTP, they provide
rich functionality and have requirements beyond the original vision of
static web pages.  HTTP, and the applications built on it, have evolved
organically.  Over the past few years, we have seen a proliferation of
AJAX-based web applications (AJAX being shorthand for asynchronous
JavaScript and XML), as well as Rich Internet Applications (RIAs), based
on so-called Web 2.0 technologies.  These applications bring both
luscious eye-candy and convenient functionality, e.g. social networking,
to their users, making them quite compelling.  At the same time, we are
seeing an increase in attacks against these applications and their
underlying technologies.

The list of attacks is long and includes Cross-Site-Request Forgery
(CSRF)-based attacks, content-sniffing cross-site-scripting (XSS)
attacks, attacks against browsers supporting anti-XSS policies,
clickjacking attacks, malvertising attacks, as well as man-in-the-middle
(MITM) attacks against "secure" (e.g. Transport Layer Security
(TLS/SSL)-based) web sites along with distribution of the tools to carry
out such attacks (e.g. sslstrip).


Objectives and Scope

With the arrival of new attacks, new web security indicators, security
techniques, and policy communication mechanisms have been sprinkled
throughout the various layers of the Web and HTTP.

The goal of this working group is to standardize a small number of
selected specifications that have proven to improve security of Internet
Web applications. The requirements guiding the work will be taken from
the Web application and Web security communities.  Initial work will be
limited to the following topics:

   - Same origin policy, as discussed in draft-abarth-origin

   - Strict transport security, as discussed in
 draft-hodges-stricttransportsec (to be submitted shortly); see the
 example header after this list

   - Media type sniffing, as discussed in draft-abarth-mime-sniff
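
(For a concrete sense of the strict transport security item: STS boils
down to a single HTTP response header along the following lines; the
max-age value is illustrative, not prescribed:)

  Strict-Transport-Security: max-age=15768000; includeSubDomains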

In addition, this working group will consider the overall topic of HTTP
application security and compose a "problem statement and requirements"
document that can be used to guide further work.

This working group will work closely with IETF Apps Area WGs (such as
HYBI, HTTPstate, and HTTPbis).

Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Kris Zyp

On 6/9/2010 4:27 PM, Jonas Sicking wrote:
> On Wed, Jun 9, 2010 at 11:39 AM, Laxmi Narsimha Rao Oruganti
>  wrote:
>> Inline...
>>
>> -Original Message-
>> From: public-webapps-requ...@w3.org
[mailto:public-webapps-requ...@w3.org] On Behalf Of Jonas Sicking
>> Sent: Wednesday, June 09, 2010 11:55 PM
>> To: Jeremy Orlow
>> Cc: Shawn Wilsher; Webapps WG
>> Subject: Re: [IndexDB] Proposal for async API changes
>>
>> On Wed, Jun 9, 2010 at 7:42 AM, Jeremy Orlow  wrote:
>>> On Tue, May 18, 2010 at 8:34 PM, Jonas Sicking  wrote:

>>>> On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow 
>>>> wrote:
>>>> > I'm not sure I like the idea of offering sync cursors either since the
>>>> > UA
>>>> > will either need to load everything into memory before starting or risk
>>>> > blocking on disk IO for large data sets.  Thus I'm not sure I support
>>>> > the
>>>> > idea of synchronous cursors.  But, at the same time, I'm concerned about
>>>> > the
>>>> > overhead of firing one event per value with async cursors.  Which is
>>>> > why I
>>>> > was suggesting an interface where the common case (the data is in
>>>> > memory) is
>>>> > done synchronously but the uncommon case (we'd block if we had to
>>>> > respond
>>>> > synchronously) has to be handled since we guarantee that the first time
>>>> > will
>>>> > be forced to be asynchronous.
>>>> > Like I said, I'm not super happy with what I proposed, but I think some
>>>> > hybrid async/sync interface is really what we need.  Have you guys spent
>>>> > any
>>>> > time thinking about something like this?  How dead-set are you on
>>>> > synchronous cursors?
>>>>
>>>> The idea is that synchronous cursors load all the required data into
>>>> memory, yes. I think it would help authors a lot to be able to load
>>>> small chunks of data into memory and read and write to it
>>>> synchronously. Dealing with asynchronous operations constantly is
>>>> certainly possible, but a bit of a pain for authors.
>>>>
>>>> I don't think we should obsess too much about not keeping things in
>>>> memory, we already have things like canvas and the DOM which add up
>>>> to non-trivial amounts of memory.
>>>>
>>>> Just because data is loaded from a database doesn't mean it's huge.
>>>>
>>>> I do note that you're not as concerned about getAll(), which actually
>>>> has worse memory characteristics than synchronous cursors since you
>>>> need to create the full JS object graph in memory.
>>>
>>> I've been thinking about this off and on since the original proposal was
>>> made, and I just don't feel right about getAll() or synchronous cursors.
>>>  You make some good points about there already being many ways to
>>> overwhelm RAM with webAPIs, but is there any place we make it so easy?
>>>  You're right that just because it's a database doesn't mean it needs to
>>> be huge, but oftentimes they can get quite big.  And if a developer
>>> doesn't spend time making sure they test their app with the upper ends
>>> of what users may possibly see, it just seems like this is a recipe for
>>> problems.
>>> Here's a concrete example: structured clone allows you to store image
>>> data.  Let's say I'm building an image hosting site and that I cache all
>>> the images along with their thumbnails locally in an IndexedDB entity
>>> store.  Let's say each thumbnail is a trivial amount, but each image is
>>> 1MB.  I have an album with 1000 images.  I do |var photos =
>>> albumIndex.getAllObjects(albumName);| and then iterate over that to get
>>> the thumbnails.  But I've just loaded over 1GB of stuff into RAM
>>> (assuming no additional inefficiency/blowup).  I suppose it's possible
>>> JavaScript engines could build mechanisms to fetch this stuff lazily
>>> (like you could even with a synchronous cursor) but that will take
>>> time/effort and introduce lag in the page (while fetching additional
>>> info from disk).
>>>
>>> I'm not completely against the idea of getAll/sync cursors, but I do
>>> think they should be de-coupled from this proposed API.  I would also
>>> suggest that we re-consider them only after at least one implementation
>>> has normal cursors working and there's been some experimentation with
>>> it.  Until then, we're basing most of our arguments on intuition and
>>> assumptions.
>>
>> I'm not married to the concept of sync cursors. However I pretty
>> strongly feel that getAll is something we need. If we just allow
>> cursors for getting multiple results I think we'll see an extremely
>> common pattern of people using a cursor to loop through a result set
>> and put values into an array.
>>
>> Yes, it can be misused, but I don't see a reason why people wouldn't
>> misuse a cursor just as much. If they don't think about the fact that
>> a range contains lots of data when using getAll, why would they think
>> about it when using cursors?
>>
>> [Laxmi] Cursor is a streaming operator, which means only the current row
>> or page is available in memory and the rest sits on the disk.  As

Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Jonas Sicking
On Wed, Jun 9, 2010 at 11:40 AM, Jeremy Orlow  wrote:
> On Wed, Jun 9, 2010 at 7:25 PM, Jonas Sicking  wrote:
>>
>> On Wed, Jun 9, 2010 at 7:42 AM, Jeremy Orlow  wrote:
>> > On Tue, May 18, 2010 at 8:34 PM, Jonas Sicking  wrote:
>> >>
>> >> On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow 
>> >> wrote:
>> >> > I'm not sure I like the idea of offering sync cursors either since
>> >> > the
>> >> > UA
>> >> > will either need to load everything into memory before starting or
>> >> > risk
>> >> > blocking on disk IO for large data sets.  Thus I'm not sure I support
>> >> > the
>> >> > idea of synchronous cursors.  But, at the same time, I'm concerned
>> >> > about
>> >> > the
>> >> > overhead of firing one event per value with async cursors.  Which is
>> >> > why I
>> >> > was suggesting an interface where the common case (the data is in
>> >> > memory) is
>> >> > done synchronously but the uncommon case (we'd block if we had to
>> >> > respond
>> >> > synchronously) has to be handled since we guarantee that the first
>> >> > time
>> >> > will
>> >> > be forced to be asynchronous.
>> >> > Like I said, I'm not super happy with what I proposed, but I think
>> >> > some
>> >> > hybrid async/sync interface is really what we need.  Have you guys
>> >> > spent
>> >> > any
>> >> > time thinking about something like this?  How dead-set are you on
>> >> > synchronous cursors?
>> >>
>> >> The idea is that synchronous cursors load all the required data into
>> >> memory, yes. I think it would help authors a lot to be able to load
>> >> small chunks of data into memory and read and write to it
>> >> synchronously. Dealing with asynchronous operations constantly is
>> >> certainly possible, but a bit of a pain for authors.
>> >>
>> >> I don't think we should obsess too much about not keeping things in
>> >> memory, we already have things like canvas and the DOM which add up
>> >> to non-trivial amounts of memory.
>> >>
>> >> Just because data is loaded from a database doesn't mean it's huge.
>> >>
>> >> I do note that you're not as concerned about getAll(), which actually
>> >> has worse memory characteristics than synchronous cursors since you
>> >> need to create the full JS object graph in memory.
>> >
>> > I've been thinking about this off and on since the original proposal was
>> > made, and I just don't feel right about getAll() or synchronous cursors.
>> >  You make some good points about there already being many ways to
>> > overwhelm
>> > RAM with webAPIs, but is there any place we make it so easy?  You're
>> > right
>> > that just because it's a database doesn't mean it needs to be huge, but
>> > oftentimes they can get quite big.  And if a developer doesn't spend
>> > time
>> > making sure they test their app with the upper ends of what users may
>> > possibly see, it just seems like this is a recipe for problems.
>> > Here's a concrete example: structured clone allows you to store image
>> > data.
>> >  Let's say I'm building an image hosting site and that I cache all the
>> > images
>> > along with their thumbnails locally in an IndexedDB entity store.  Let's
>> > say
>> > each thumbnail is a trivial amount, but each image is 1MB.  I have an
>> > album
>> > with 1000 images.  I do |var photos =
>> > albumIndex.getAllObjects(albumName);|
>> > and then iterate over that to get the thumbnails.  But I've just loaded
>> > over
>> > 1GB of stuff into RAM (assuming no additional inefficiency/blowup).  I
>> > suppose it's possible JavaScript engines could build mechanisms to fetch
>> > this stuff lazily (like you could even with a synchronous cursor) but
>> > that
>> > will take time/effort and introduce lag in the page (while fetching
>> > additional info from disk).
>> >
>> > I'm not completely against the idea of getAll/sync cursors, but I do
>> > think
>> > they should be de-coupled from this proposed API.  I would also suggest
>> > that
>> > we re-consider them only after at least one implementation has normal
>> > cursors working and there's been some experimentation with it.  Until
>> > then,
>> > we're basing most of our arguments on intuition and assumptions.
>>
>> I'm not married to the concept of sync cursors. However I pretty
>> strongly feel that getAll is something we need. If we just allow
>> cursors for getting multiple results I think we'll see an extremely
>> common pattern of people using a cursor to loop through a result set
>> and put values into an array.
>>
>> Yes, it can be misused, but I don't see a reason why people wouldn't
>> misuse a cursor just as much. If they don't think about the fact that
>> a range contains lots of data when using getAll, why would they think
>> about it when using cursors?
>
> Once again, I feel like there is a lot of speculation (more than normal)
> happening here.  I'd prefer we take the Async API without the sync cursors
> or getAll and give the rest of the API some time to bake before considering
> it again.  Ideally by then we'd have at least one or two early adopters
> that can give their perspective on the issue.

Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Tab Atkins Jr.
On Wed, Jun 9, 2010 at 3:27 PM, Jonas Sicking  wrote:
> I'm well aware of this. My argument is that I think we'll see people
> write code like this:
>
> results = [];
> db.objectStore("foo").openCursor(range).onsuccess = function(e) {
>  var cursor = e.result;
>  if (!cursor) {
>    weAreDone(results);
>    return; // without this we'd fall through and hit a null cursor
>  }
>  results.push(cursor.value);
>  cursor.continue();
> }
>
> While the indexedDB implementation doesn't hold much data in memory at
> a time, the webpage will hold just as much as if we had had a getAll
> function. Thus we haven't actually improved anything, only forced the
> author to write more code.
>
>
> Put it another way: The raised concern is that people won't think
> about the fact that getAll can load a lot of data into memory. And the
> proposed solution is to remove the getAll function and tell people to
> use openCursor. However if they weren't thinking about the fact that
> a lot of data will be in memory at one time, then why wouldn't they
> write code like the above? Which results in just as much data being
> in memory?

At the very least, explicitly loading things into an honest-to-god
array can make it more obvious that you're eating memory in the form
of a big array, as opposed to just a "magically transform my blob of
data into something more convenient".

(That said, I dislike cursors and explicitly avoid them in my own
code.  In the PHP db abstraction layer I wrote for myself, every query
slurps the results into an array and just returns that - I don't give
myself any access to the cursor at all.  I probably like this better
simply because I can easily foreach through an array, while I can't do
the same with a cursor unless I write some moderately more complex
code.  I hate using while loops when foreach is beckoning to me.)

~TJ



Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Jonas Sicking
On Wed, Jun 9, 2010 at 11:39 AM, Laxmi Narsimha Rao Oruganti
 wrote:
> Inline...
>
> -Original Message-
> From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
> Behalf Of Jonas Sicking
> Sent: Wednesday, June 09, 2010 11:55 PM
> To: Jeremy Orlow
> Cc: Shawn Wilsher; Webapps WG
> Subject: Re: [IndexDB] Proposal for async API changes
>
> On Wed, Jun 9, 2010 at 7:42 AM, Jeremy Orlow  wrote:
>> On Tue, May 18, 2010 at 8:34 PM, Jonas Sicking  wrote:
>>>
>>> On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow 
>>> wrote:
>>> > I'm not sure I like the idea of offering sync cursors either since the
>>> > UA
>>> > will either need to load everything into memory before starting or risk
>>> > blocking on disk IO for large data sets.  Thus I'm not sure I support
>>> > the
>>> > idea of synchronous cursors.  But, at the same time, I'm concerned about
>>> > the
>>> > overhead of firing one event per value with async cursors.  Which is
>>> > why I
>>> > was suggesting an interface where the common case (the data is in
>>> > memory) is
>>> > done synchronously but the uncommon case (we'd block if we had to
>>> > respond
>>> > synchronously) has to be handled since we guarantee that the first time
>>> > will
>>> > be forced to be asynchronous.
>>> > Like I said, I'm not super happy with what I proposed, but I think some
>>> > hybrid async/sync interface is really what we need.  Have you guys spent
>>> > any
>>> > time thinking about something like this?  How dead-set are you on
>>> > synchronous cursors?
>>>
>>> The idea is that synchronous cursors load all the required data into
>>> memory, yes. I think it would help authors a lot to be able to load
>>> small chunks of data into memory and read and write to it
>>> synchronously. Dealing with asynchronous operations constantly is
>>> certainly possible, but a bit of a pain for authors.
>>>
>>> I don't think we should obsess too much about not keeping things in
>>> memory, we already have things like canvas and the DOM which add up
>>> to non-trivial amounts of memory.
>>>
>>> Just because data is loaded from a database doesn't mean it's huge.
>>>
>>> I do note that you're not as concerned about getAll(), which actually
>>> has worse memory characteristics than synchronous cursors since you
>>> need to create the full JS object graph in memory.
>>
>> I've been thinking about this off and on since the original proposal was
>> made, and I just don't feel right about getAll() or synchronous cursors.
>>  You make some good points about there already being many ways to overwhelm
>> RAM with webAPIs, but is there any place we make it so easy?  You're right
>> that just because it's a database doesn't mean it needs to be huge, but
>> oftentimes they can get quite big.  And if a developer doesn't spend time
>> making sure they test their app with the upper ends of what users may
>> possibly see, it just seems like this is a recipe for problems.
>> Here's a concrete example: structured clone allows you to store image data.
>>  Let's say I'm building an image hosting site and that I cache all the images
>> along with their thumbnails locally in an IndexedDB entity store.  Let's say
>> each thumbnail is a trivial amount, but each image is 1MB.  I have an album
>> with 1000 images.  I do |var photos = albumIndex.getAllObjects(albumName);|
>> and then iterate over that to get the thumbnails.  But I've just loaded over
>> 1GB of stuff into RAM (assuming no additional inefficiency/blowup).  I
>> suppose it's possible JavaScript engines could build mechanisms to fetch
>> this stuff lazily (like you could even with a synchronous cursor) but that
>> will take time/effort and introduce lag in the page (while fetching
>> additional info from disk).
>>
>> I'm not completely against the idea of getAll/sync cursors, but I do think
>> they should be de-coupled from this proposed API.  I would also suggest that
>> we re-consider them only after at least one implementation has normal
>> cursors working and there's been some experimentation with it.  Until then,
>> we're basing most of our arguments on intuition and assumptions.
>
> I'm not married to the concept of sync cursors. However I pretty
> strongly feel that getAll is something we need. If we just allow
> cursors for getting multiple results I think we'll see an extremely
> common pattern of people using a cursor to loop through a result set
> and put values into an array.
>
> Yes, it can be misused, but I don't see a reason why people wouldn't
> misuse a cursor just as much. If they don't think about the fact that
> a range contains lots of data when using getAll, why would they think
> about it when using cursors?
>
> [Laxmi] Cursor is a streaming operator, which means only the current row or
> page is available in memory and the rest sits on the disk.  As the program
> moves the cursor through the result, old pages are thrown away and new pages
> are loaded from the result set.  Whereas with getAll, everything has to come
> to memory before returning to the caller.

Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Jeremy Orlow
On Wed, Jun 9, 2010 at 7:25 PM, Jonas Sicking  wrote:

> On Wed, Jun 9, 2010 at 7:42 AM, Jeremy Orlow  wrote:
> > On Tue, May 18, 2010 at 8:34 PM, Jonas Sicking  wrote:
> >>
> >> On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow 
> >> wrote:
> >> > I'm not sure I like the idea of offering sync cursors either since the
> >> > UA
> >> > will either need to load everything into memory before starting or
> risk
> >> > blocking on disk IO for large data sets.  Thus I'm not sure I support
> >> > the
> >> > idea of synchronous cursors.  But, at the same time, I'm concerned
> about
> >> > the
> >> > overhead of firing one event per value with async cursors.  Which is
> >> > why I
> >> > was suggesting an interface where the common case (the data is in
> >> > memory) is
> >> > done synchronously but the uncommon case (we'd block if we had to
> >> > respond
> >> > synchronously) has to be handled since we guarantee that the first
> time
> >> > will
> >> > be forced to be asynchronous.
> >> > Like I said, I'm not super happy with what I proposed, but I think
> some
> >> > hybrid async/sync interface is really what we need.  Have you guys
> spent
> >> > any
> >> > time thinking about something like this?  How dead-set are you on
> >> > synchronous cursors?
> >>
> >> The idea is that synchronous cursors load all the required data into
> >> memory, yes. I think it would help authors a lot to be able to load
> >> small chunks of data into memory and read and write to it
> >> synchronously. Dealing with asynchronous operations constantly is
> >> certainly possible, but a bit of a pain for authors.
> >>
> >> I don't think we should obsess too much about not keeping things in
> >> memory, we already have things like canvas and the DOM which add up
> >> to non-trivial amounts of memory.
> >>
> >> Just because data is loaded from a database doesn't mean it's huge.
> >>
> >> I do note that you're not as concerned about getAll(), which actually
> >> has worse memory characteristics than synchronous cursors since you
> >> need to create the full JS object graph in memory.
> >
> > I've been thinking about this off and on since the original proposal was
> > made, and I just don't feel right about getAll() or synchronous cursors.
> >  You make some good points about there already being many ways to
> overwhelm
> > RAM with webAPIs, but is there any place we make it so easy?  You're
> right
> > that just because it's a database doesn't mean it needs to be huge, but
> > oftentimes they can get quite big.  And if a developer doesn't spend
> time
> > making sure they test their app with the upper ends of what users may
> > possibly see, it just seems like this is a recipe for problems.
> > Here's a concrete example: structured clone allows you to store image
> data.
> >  Let's say I'm building an image hosting site and that I cache all the
> images
> > along with their thumbnails locally in an IndexedDB entity store.  Let's
> say
> > each thumbnail is a trivial amount, but each image is 1MB.  I have an
> album
> > with 1000 images.  I do |var photos =
> albumIndex.getAllObjects(albumName);|
> > and then iterate over that to get the thumbnails.  But I've just loaded
> over
> > 1GB of stuff into RAM (assuming no additional inefficiency/blowup).  I
> > suppose it's possible JavaScript engines could build mechanisms to fetch
> > this stuff lazily (like you could even with a synchronous cursor) but
> that
> > will take time/effort and introduce lag in the page (while fetching
> > additional info from disk).
> >
> > I'm not completely against the idea of getAll/sync cursors, but I do
> think
> > they should be de-coupled from this proposed API.  I would also suggest
> that
> > we re-consider them only after at least one implementation has normal
> > cursors working and there's been some experimentation with it.  Until
> then,
> > we're basing most of our arguments on intuition and assumptions.
>
> I'm not married to the concept of sync cursors. However I pretty
> strongly feel that getAll is something we need. If we just allow
> cursors for getting multiple results I think we'll see an extremely
> common pattern of people using a cursor to loop through a result set
> and put values into an array.
>
> Yes, it can be misused, but I don't see a reason why people wouldn't
> misuse a cursor just as much. If they don't think about the fact that
> a range contains lots of data when using getAll, why would they think
> about it when using cursors?
>

Once again, I feel like there is a lot of speculation (more than normal)
happening here.  I'd prefer we take the Async API without the sync cursors
or getAll and give the rest of the API some time to bake before considering
it again.  Ideally by then we'd have at least one or two early adopters that
can give their perspective on the issue.

J


RE: [IndexDB] Proposal for async API changes

2010-06-09 Thread Laxmi Narsimha Rao Oruganti
Inline...

-Original Message-
From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
Behalf Of Jonas Sicking
Sent: Wednesday, June 09, 2010 11:55 PM
To: Jeremy Orlow
Cc: Shawn Wilsher; Webapps WG
Subject: Re: [IndexDB] Proposal for async API changes

On Wed, Jun 9, 2010 at 7:42 AM, Jeremy Orlow  wrote:
> On Tue, May 18, 2010 at 8:34 PM, Jonas Sicking  wrote:
>>
>> On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow 
>> wrote:
>> > I'm not sure I like the idea of offering sync cursors either since the
>> > UA
>> > will either need to load everything into memory before starting or risk
>> > blocking on disk IO for large data sets.  Thus I'm not sure I support
>> > the
>> > idea of synchronous cursors.  But, at the same time, I'm concerned about
>> > the
>> > overhead of firing one event per value with async cursors.  Which is
>> > why I
>> > was suggesting an interface where the common case (the data is in
>> > memory) is
>> > done synchronously but the uncommon case (we'd block if we had to
>> > respond
>> > synchronously) has to be handled since we guarantee that the first time
>> > will
>> > be forced to be asynchronous.
>> > Like I said, I'm not super happy with what I proposed, but I think some
>> > hybrid async/sync interface is really what we need.  Have you guys spent
>> > any
>> > time thinking about something like this?  How dead-set are you on
>> > synchronous cursors?
>>
>> The idea is that synchronous cursors load all the required data into
>> memory, yes. I think it would help authors a lot to be able to load
>> small chunks of data into memory and read and write to it
>> synchronously. Dealing with asynchronous operations constantly is
>> certainly possible, but a bit of a pain for authors.
>>
>> I don't think we should obsess too much about not keeping things in
>> memory, we already have things like canvas and the DOM which add up
>> to non-trivial amounts of memory.
>>
>> Just because data is loaded from a database doesn't mean it's huge.
>>
>> I do note that you're not as concerned about getAll(), which actually
>> has worse memory characteristics than synchronous cursors since you
>> need to create the full JS object graph in memory.
>
> I've been thinking about this off and on since the original proposal was
> made, and I just don't feel right about getAll() or synchronous cursors.
>  You make some good points about there already being many ways to overwhelm
> RAM with webAPIs, but is there any place we make it so easy?  You're right
> that just because it's a database doesn't mean it needs to be huge, but
> oftentimes they can get quite big.  And if a developer doesn't spend time
> making sure they test their app with the upper ends of what users may
> possibly see, it just seems like this is a recipe for problems.
> Here's a concrete example: structured clone allows you to store image data.
>  Let's say I'm building an image hosting site and that I cache all the images
> along with their thumbnails locally in an IndexedDB entity store.  Let's say
> each thumbnail is a trivial amount, but each image is 1MB.  I have an album
> with 1000 images.  I do |var photos = albumIndex.getAllObjects(albumName);|
> and then iterate over that to get the thumbnails.  But I've just loaded over
> 1GB of stuff into RAM (assuming no additional inefficiency/blowup).  I
> suppose it's possible JavaScript engines could build mechanisms to fetch
> this stuff lazily (like you could even with a synchronous cursor) but that
> will take time/effort and introduce lag in the page (while fetching
> additional info from disk).
>
> I'm not completely against the idea of getAll/sync cursors, but I do think
> they should be de-coupled from this proposed API.  I would also suggest that
> we re-consider them only after at least one implementation has normal
> cursors working and there's been some experimentation with it.  Until then,
> we're basing most of our arguments on intuition and assumptions.

I'm not married to the concept of sync cursors. However I pretty
strongly feel that getAll is something we need. If we just allow
cursors for getting multiple results I think we'll see an extremely
common pattern of people using a cursor to loop through a result set
and put values into an array.

Yes, it can be misused, but I don't see a reason why people wouldn't
misuse a cursor just as much. If they don't think about the fact that
a range contains lots of data when using getAll, why would they think
about it when using cursors?

[Laxmi] Cursor is a streaming operator, which means only the current row or
page is available in memory and the rest sits on the disk.  As the program
moves the cursor through the result, old pages are thrown away and new pages
are loaded from the result set.  Whereas with getAll, everything has to come
to memory before returning to the caller.  If there is not enough memory to
keep the whole result at once, we would end up out of memory.  In short,
getAll suits well f

Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Jonas Sicking
On Wed, Jun 9, 2010 at 7:42 AM, Jeremy Orlow  wrote:
> On Tue, May 18, 2010 at 8:34 PM, Jonas Sicking  wrote:
>>
>> On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow 
>> wrote:
>> > I'm not sure I like the idea of offering sync cursors either since the
>> > UA
>> > will either need to load everything into memory before starting or risk
>> > blocking on disk IO for large data sets.  Thus I'm not sure I support
>> > the
>> > idea of synchronous cursors.  But, at the same time, I'm concerned about
>> > the
>> > overhead of firing one event per value with async cursors.  Which is
>> > why I
>> > was suggesting an interface where the common case (the data is in
>> > memory) is
>> > done synchronously but the uncommon case (we'd block if we had to
>> > respond
>> > synchronously) has to be handled since we guarantee that the first time
>> > will
>> > be forced to be asynchronous.
>> > Like I said, I'm not super happy with what I proposed, but I think some
>> > hybrid async/sync interface is really what we need.  Have you guys spent
>> > any
>> > time thinking about something like this?  How dead-set are you on
>> > synchronous cursors?
>>
>> The idea is that synchronous cursors load all the required data into
>> memory, yes. I think it would help authors a lot to be able to load
>> small chunks of data into memory and read and write to it
>> synchronously. Dealing with asynchronous operations constantly is
>> certainly possible, but a bit of a pain for authors.
>>
>> I don't think we should obsess too much about not keeping things in
>> memory, we already have things like canvas and the DOM which add up
>> to non-trivial amounts of memory.
>>
>> Just because data is loaded from a database doesn't mean it's huge.
>>
>> I do note that you're not as concerned about getAll(), which actually
>> has worse memory characteristics than synchronous cursors since you
>> need to create the full JS object graph in memory.
>
> I've been thinking about this off and on since the original proposal was
> made, and I just don't feel right about getAll() or synchronous cursors.
>  You make some good points about there already being many ways to overwhelm
> RAM with webAPIs, but is there any place we make it so easy?  You're right
> that just because it's a database doesn't mean it needs to be huge, but
> oftentimes they can get quite big.  And if a developer doesn't spend time
> making sure they test their app with the upper ends of what users may
> possibly see, it just seems like this is a recipe for problems.
> Here's a concrete example: structured clone allows you to store image data.
>  Let's say I'm building an image hosting site and that I cache all the images
> along with their thumbnails locally in an IndexedDB entity store.  Let's say
> each thumbnail is a trivial amount, but each image is 1MB.  I have an album
> with 1000 images.  I do |var photos = albumIndex.getAllObjects(albumName);|
> and then iterate over that to get the thumbnails.  But I've just loaded over
> 1GB of stuff into RAM (assuming no additional inefficiency/blowup).  I
> suppose it's possible JavaScript engines could build mechanisms to fetch
> this stuff lazily (like you could even with a synchronous cursor) but that
> will take time/effort and introduce lag in the page (while fetching
> additional info from disk).
>
> I'm not completely against the idea of getAll/sync cursors, but I do think
> they should be de-coupled from this proposed API.  I would also suggest that
> we re-consider them only after at least one implementation has normal
> cursors working and there's been some experimentation with it.  Until then,
> we're basing most of our arguments on intuition and assumptions.

I'm not married to the concept of sync cursors. However I pretty
strongly feel that getAll is something we need. If we just allow
cursors for getting multiple results I think we'll see an extremely
common pattern of people using a cursor to loop through a result set
and put values into an array.

Yes, it can be misused, but I don't see a reason why people wouldn't
misuse a cursor just as much. If they don't think about the fact that
a range contains lots of data when using getAll, why would they think
about it when using cursors?

/ Jonas



Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Jeremy Orlow
On Tue, May 18, 2010 at 8:34 PM, Jonas Sicking  wrote:

> On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow 
> wrote:
> > I'm not sure I like the idea of offering sync cursors either since the UA
> > will either need to load everything into memory before starting or risk
> > blocking on disk IO for large data sets.  Thus I'm not sure I support the
> > idea of synchronous cursors.  But, at the same time, I'm concerned about
> the
> > overhead of firing one event per value with async cursors.  Which is
> why I
> > was suggesting an interface where the common case (the data is in memory)
> is
> > done synchronously but the uncommon case (we'd block if we had to respond
> > synchronously) has to be handled since we guarantee that the first time
> will
> > be forced to be asynchronous.
> > Like I said, I'm not super happy with what I proposed, but I think some
> > hybrid async/sync interface is really what we need.  Have you guys spent
> any
> > time thinking about something like this?  How dead-set are you on
> > synchronous cursors?
>
> The idea is that synchronous cursors load all the required data into
> memory, yes. I think it would help authors a lot to be able to load
> small chunks of data into memory and read and write to it
> synchronously. Dealing with asynchronous operations constantly is
> certainly possible, but a bit of a pain for authors.
>
> I don't think we should obsess too much about not keeping things in
> memory, we already have things like canvas and the DOM which add up
> to non-trivial amounts of memory.
>
> Just because data is loaded from a database doesn't mean it's huge.
>
> I do note that you're not as concerned about getAll(), which actually
> has worse memory characteristics than synchronous cursors since you
> need to create the full JS object graph in memory.
>

I've been thinking about this off and on since the original proposal was
made, and I just don't feel right about getAll() or synchronous cursors.
 You make some good points about there already being many ways to overwhelm
RAM with webAPIs, but is there any place we make it so easy?  You're right
that just because it's a database doesn't mean it needs to be huge, but
oftentimes they can get quite big.  And if a developer doesn't spend time
making sure they test their app with the upper ends of what users may
possibly see, it just seems like this is a recipe for problems.

Here's a concrete example: structured clone allows you to store image data.
 Let's say I'm building an image hosting site and that I cache all the images
along with their thumbnails locally in an IndexedDB entity store.  Let's say
each thumbnail is a trivial amount, but each image is 1MB.  I have an album
with 1000 images.  I do |var photos = albumIndex.getAllObjects(albumName);|
and then iterate over that to get the thumbnails.  But I've just loaded over
1GB of stuff into RAM (assuming no additional inefficiency/blowup).  I
suppose it's possible JavaScript engines could build mechanisms to fetch
this stuff lazily (like you could even with a synchronous cursor) but that
will take time/effort and introduce lag in the page (while fetching
additional info from disk).


I'm not completely against the idea of getAll/sync cursors, but I do think
they should be de-coupled from this proposed API.  I would also suggest that
we re-consider them only after at least one implementation has normal
cursors working and there's been some experimentation with it.  Until then,
we're basing most of our arguments on intuition and assumptions.

J


RfC: LCWD of API and Ontology for Media Resource 1.0; deadline 11 July 2010

2010-06-09 Thread Arthur Barstow
All - the Media Annotations WG asked WebApps to review two of their
LCWDs. Details below, including the mailing list for comments (the
deadline for comments is July 11).


-Art Barstow

 Original Message 
Subject: Last Call Working Drafts transition announcement of the API
         and Ontology for Media Resource 1.0
Date: Wed, 9 Jun 2010 09:22:39 +0200
From: ext Thierry MICHEL 
CC: 


Chairs and Team Contact,


(1) This is a Last Call Working Draft transition announcement for the
following two Recommendation Track specifications:

(2) Document Titles and URIs

* API for Media Resource 1.0
http://www.w3.org/TR/2010/WD-mediaont-api-1.0-20100608

* Ontology for Media Resource 1.0
http://www.w3.org/TR/2010/WD-mediaont-10-20100608

(3) Instructions for providing feedback

If you wish to make comments regarding these specifications please send
them to  which is an email list publicly
archived at http://lists.w3.org/Archives/Public/public-media-annotation/
Please use "[LC Comment API]" or "[LC Comment ONT]" in the subject line
of your email, depending on the specification you are commenting on.

(4) Review end date

The Last Call period for these documents ends on July 11, 2010.





OFF TOPIC: SVG Progress Contest

2010-06-09 Thread Doug Schepers

Hey, folks-

Sorry to spam this list, but I thought that folks who are familiar with 
the new progress events might be interested in applying those skills to 
SVG.  This is a fun contest with some great prizes for making an SVG 
progress indicator.


The contest ends in a couple days, so don't delay!

 http://www.w3.org/QA/2010/06/svg_contest.html
 http://westciv.com/nobit/

Regards-
-Doug



[Bug 9888] New: Constants are not accessible when they're needed for most IndexedDB interfaces.

2010-06-09 Thread bugzilla
http://www.w3.org/Bugs/Public/show_bug.cgi?id=9888

   Summary: Constants are not accessible when they're needed for
most IndexedDB interfaces.
   Product: WebAppsWG
   Version: unspecified
  Platform: PC
OS/Version: All
Status: NEW
  Severity: normal
  Priority: P2
 Component: Indexed Database API
AssignedTo: nikunj.me...@oracle.com
ReportedBy: jor...@chromium.org
 QAContact: member-webapi-...@w3.org
CC: m...@w3.org, public-webapps@w3.org


In indexedDB, there are many cases where it's impossible to access a constant. 
For example, if I'm trying to create a key range, I might want to pass in
KeyRange.LEFT_OPEN.  Unfortunately, the only way to get at LEFT_OPEN is to
create a KeyRange.  This is true of pretty much all the objects and their
constants.

Another example: you can't access the error constants without first
causing an exception in order to get an IDBDatabaseException object.
And that's not even possible outside of workers.

We could create global constructors, but it doesn't seem worth it to pollute
the namespace.  We could pull all the constants up one level.  For example, the
constants for Cursors would be added to IDBIndexes and IDBObjectstores since
those are both the places where you'd create a cursor.  If we wanted to avoid
the duplication, we could even pull all the constants up to the
IndexedDatabaseRequest level (where mozilla's proposal suggests we put the
keyRange constructors).

Until this is fixed, people using IndexedDB will need to use pretty ugly
hacks to get at the constants or hard-code the numbers themselves.
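
For illustration, the kind of hack this forces (identifiers below are
hypothetical stand-ins; the factory names and their location vary
between the proposals):

// identifiers here are hypothetical, not from any draft
var lower = 1, upper = 10;
// 1) construct a throwaway range purely to reach the constants...
var dummy = makeSingleKeyRange(0);       // hypothetical factory
var LEFT_OPEN = dummy.LEFT_OPEN;
// 2) ...or hard-code the underlying number and hope it never changes
var range = makeBoundKeyRange(lower, upper, /* LEFT_OPEN */ 1);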

-- 
Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



Re: JS crypto pointer

2010-06-09 Thread Nathan

Robin Berjon wrote:

> Hi,
>
> since some people were asking about JS crypto here not long ago, I
> thought I'd point this one out:
>
>   http://bitwiseshiftleft.github.com/sjcl/



Thanks Robin! Will come in handy :)



JS crypto pointer

2010-06-09 Thread Robin Berjon
Hi,

since some people were asking about JS crypto here not long ago, I thought I'd 
point this one out:

  http://bitwiseshiftleft.github.com/sjcl/
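
The convenience API is pleasantly small; a quick sketch (see the project
page for the real documentation):

  // encrypt/decrypt with a password; the returned ciphertext is a JSON
  // string bundling the salt, IV, and actual ciphertext
  var ciphertext = sjcl.encrypt("password", "secret message");
  var plaintext  = sjcl.decrypt("password", ciphertext);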

-- 
Robin Berjon - http://berjon.com/






Last Call Working Drafts transition announcement of the API and Ontology for Media Resource 1.0

2010-06-09 Thread Thierry MICHEL

Chairs and Team Contact,


(1) This is a Last Call Working Draft transition announcement for the
following two Recommendation Track specifications:

(2) Document Titles and URIs

* API for Media Resource 1.0
http://www.w3.org/TR/2010/WD-mediaont-api-1.0-20100608

* Ontology for Media Resource 1.0
http://www.w3.org/TR/2010/WD-mediaont-10-20100608

(3) Instructions for providing feedback

If you wish to make comments regarding these specifications please send
them to  which is an email list publicly
archived at http://lists.w3.org/Archives/Public/public-media-annotation/
Please use "[LC Comment API]" or "[LC Comment ONT]" in the subject line
of your email, depending on the specification you are commenting on.


(4) Review end date

The Last Call period for these documents ends on July 11, 2010.

(5) A reference to the group's decision to make this transition

The Media Annotations Working Group made the decision for this
transition at its teleconference on 01 June 2010
"resolution: both documents can be moved to lc
Resolution: API and Onthology moving LC"
see
http://www.w3.org/2010/06/01-mediaann-minutes.html

(6) Evidence that the document satisfies group's requirements. Include a
link to requirements

The Media Annotations Working Group believes that these specifications
satisfy the requirements of the working group's charter at
http://www.w3.org/2008/01/media-annotations-wg.html

and the "Use Cases and Requirements for Ontology and API for Media
Resource 1.0" at
http://www.w3.org/TR/2010/WD-media-annot-reqs-20100121/

(7) The names of groups with dependencies, explicitly inviting review
from them.

The following groups are known or suspected to have dependencies on one
or more of these specifications:

* Semantic Web Deployment Working Group
* Semantic Web Coordination Group
* Scalable Vector Graphics Working Group (SVG)
* Web Applications (WebApps) Working Group
* HyperText Markup Language (HTML) Working Group
* The Device API and Policy (DAP) Working Group

also the following groups have liaisons on one or more of these
specifications:
* Protocol for Web Description Resources (POWDER) Working Group
* Protocols and Formats Working Group

The Media Annotations Working Group requests review from each of these
working groups.  The chairs of the working group listed have been copied
on the distribution list of this transition announcement as well as
other individuals known to have expressed prior interest.


(8) Report of any Formal Objections

The Working Group received no Formal Objection during the preparation of
these specifications.


(9) Patent Disclosure Page Link can be found at
http://www.w3.org/2004/01/pp-impl/42786/status

This Transition Announcement has been prepared according to the
guidelines concerning such announcements at
http://www.w3.org/2005/08/online_xslt/xslt?xmlfile=http://www.w3.org/2005/08/01-transitions.html&xslfile=http://www.w3.org/2005/08/transitions.xsl&docstatus=lc-wd-tr#trans-annc

Regards,

Thierry Michel (on behalf of the Media Annotations Working Group chairs)
Team Contact for the Media Annotations WG.