Re: Updates to FileAPI
On Thu, 11 Nov 2010 08:43:21 +0100, Arun Ranganathan aranganat...@mozilla.com wrote: Jian Li is right. I'm fixing this in the editor's draft. Why does lastModified even return a DOMString? Can it not just return a Date? That seems much nicer. -- Anne van Kesteren http://annevankesteren.nl/
[Bug 10527] [IndexedDB] Result of IDBCursor.remove and update unspecified.
http://www.w3.org/Bugs/Public/show_bug.cgi?id=10527 Jeremy Orlow jor...@chromium.org changed: What|Removed |Added Status|NEW |RESOLVED Resolution||FIXED --- Comment #2 from Jeremy Orlow jor...@chromium.org 2010-11-11 10:35:57 UTC --- Should be. -- Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are on the CC list for the bug.
Re: [IndexedDB] Events and requests
On Tue, Nov 9, 2010 at 11:35 AM, Jonas Sicking jo...@sicking.cc wrote: Hi All, One of the things we briefly discussed at the summit was that we should make IDBErrorEvents have a .transaction. This is because we are allowing you to place new requests from within error handlers, but we currently provide no way to get from an error handler to any useful objects; instead developers will have to use closures to get to the transaction or other object stores.

Another thing that is somewhat strange is that we only make the result available through the success event. There is no way after that to get it from the request. So instead we use special event interfaces which supply access to source, transaction and result. Compare this to how XMLHttpRequest works: there, the result and error code are available on the request object itself. The 'load' event, which is equivalent to our 'success' event, didn't supply any information until we recently added progress event support. But still it only supplies information about the progress, not the actual value itself.

One thing we could do is to move .source, .transaction, .result and .error to IDBRequest, and then make success and error events be simple events which only implement the Event interface. I.e. we could get rid of the IDBEvent, IDBSuccessEvent, IDBTransactionEvent and IDBErrorEvent interfaces. We'd still have to keep IDBVersionChangeEvent, but it can inherit Event directly. The request created from IDBFactory.open would return an IDBRequest where .transaction and .source are null. We already fire an IDBEvent where .source is null (actually, I see now that the spec currently doesn't define what the source should be).
The only major downside with this setup that I can see is that the current syntax:

db.transaction(["foo"]).objectStore("foo").get(mykey).onsuccess = function(e) { alert(e.result); }

would turn into the slightly more verbose

db.transaction(["foo"]).objectStore("foo").get(mykey).onsuccess = function(e) { alert(e.target.result); }

(And note that with the error handling that we have discussed, the above code snippets are actually plausible, apart from the alert() of course.) The upside that I can see is that we behave more like XMLHttpRequest. It seems that people currently follow a coding pattern where they place a request and at some later point hand the request to another piece of code. At that point the code can either get the result from the .result property, or install an onload handler and wait for the result if it isn't yet available. However I only have anecdotal evidence that this is a common coding pattern, so not much to go on.

Here's a counter proposal: let's add .transaction, .source, and .result to IDBEvent and just specify them to be null when there is no transaction, source, and/or result. We then remove readyState from IDBRequest as it serves no purpose. What I'm proposing would result in an API that's much more similar to what we have at the moment, but would be a bit different from XHR. It is definitely good to have similar patterns for developers to follow, but I feel as though the model of IndexedDB is already pretty different from XHR. For example, IndexedDB methods take parameters and return an IDBRequest object, whereas with XHR you use new to create the object, make method calls to set it up, and then make a method call to start it. In fact, if you think about it, there's really not that much XHR and IndexedDB have in common except that they use event handlers. As for your proposal, let me think about it for a bit and forward it on to some people I know who are playing with IndexedDB already. J
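The proposal to move .result onto the request itself can be illustrated with a toy, synchronous stand-in (hypothetical names throughout; this is not the real IndexedDB or DOM API, just a sketch of the two access styles):

```javascript
// Toy sketch: a request object that carries its own .result (the XHR-like
// proposal), while still firing a bare success event whose handlers can
// reach the value via e.target.result.
function makeRequest() {
  const request = { result: undefined, readyState: 'pending', onsuccess: null };
  request._complete = function (value) {
    request.result = value;        // readable later, straight from the request
    request.readyState = 'done';
    if (request.onsuccess) {
      // The event itself is "simple": it carries no result, only a target.
      request.onsuccess({ type: 'success', target: request });
    }
  };
  return request;
}

const req = makeRequest();
let seen;
req.onsuccess = function (e) { seen = e.target.result; };
req._complete(42);
// Code that was handed the request can also read the value after the fact:
console.log(req.result, seen); // 42 42
```

This mirrors the "hand the request to another piece of code" pattern described above: a consumer can either read .result directly (if readyState is 'done') or install a handler and wait.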
Re: [Bug 11280] New: IDBFactory.databases doesn't work
I think it's useful and it's one of the things I recall people asking for early on, but I agree it's flawed as is. I guess we should just remove it for now and come back to it later based on demand. On Wed, Nov 10, 2010 at 1:49 AM, bugzi...@jessica.w3.org wrote: http://www.w3.org/Bugs/Public/show_bug.cgi?id=11280 Summary: IDBFactory.databases doesn't work Product: WebAppsWG Version: unspecified Platform: PC OS/Version: All Status: NEW Severity: normal Priority: P2 Component: Indexed Database API AssignedTo: dave.n...@w3.org ReportedBy: jo...@sicking.cc QAContact: member-webapi-...@w3.org CC: m...@w3.org, public-webapps@w3.org I've somehow missed this until now, but apparently IDBFactory has a .databases property. While I could see the use for this, it can't be implemented in a non-racy way. The problem is that other processes or threads can create and delete databases at any time, so there is no way to guarantee that a database which existed when .databases is checked will exist a few milliseconds later when the knowledge of a database's existence is used. For example:

if (indexedDB.databases.contains("hello")) { indexedDB.open("hello").onsuccess = ...; }

has a race condition. Another problem is that it can't be implemented without blocking the main thread while going off to another thread or process where the indexedDB implementation lives. It'll likely also require synchronous IO to get the list of databases from file. I suggest we simply remove it for now.
Re: [Bug 11270] New: Interaction between in-line keys and key generators
On Thu, Nov 11, 2010 at 2:37 AM, Jonas Sicking jo...@sicking.cc wrote: On Wed, Nov 10, 2010 at 3:15 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Wed, Nov 10, 2010 at 2:07 PM, Jonas Sicking jo...@sicking.cc wrote: On Wed, Nov 10, 2010 at 1:50 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Wed, Nov 10, 2010 at 1:43 PM, Pablo Castro pablo.cas...@microsoft.com wrote: From: public-webapps-requ...@w3.org [mailto: public-webapps-requ...@w3.org] On Behalf Of bugzi...@jessica.w3.org Sent: Monday, November 08, 2010 5:07 PM So what happens if trying to save, in an object store which has the following keypath, the following value? (The generated key is 4): foo.bar { foo: {} } Here the resulting object is clearly { foo: { bar: 4 } } But what about foo.bar { foo: { bar: 10 } }? Does this use the value 10 rather than generate a new key, does it throw an exception, or does it store the value { foo: { bar: 4 } }? I suspect that all options are somewhat arbitrary here. I'll just propose that we error out to ensure that nobody has the wrong expectations about the implementation preserving the initial value. I would be open to other options except silently overwriting the initial value with a generated one, as that's likely to confuse folks. It's relatively common for me to need to supply a manual value for an id field that's automatically generated when working with databases, and I don't see any particular reason that my situation would change if using IndexedDB. So I think that a manually-supplied key should be kept. I'm fine with either solution here. My database experience is too weak to have strong opinions on this matter. What do databases usually do with columns that use autoincrement but a value is still supplied? My recollection is that that is generally allowed? I can only speak from my experience with MySQL, which is generally very permissive, but which has very sensible behavior here imo. You are allowed to insert values manually into an AUTO_INCREMENT column.
The supplied value is stored as normal. If the value was larger than the current autoincrement value, the value is increased so that the next auto-numbered row will have an id one higher than the row you just inserted. That is, given the following inserts: insert row(val) values (1); insert row(id,val) values (5,2); insert row(val) values (3); The table will contain [{id:1, val:1}, {id:5, val:2}, {id:6, val:3}]. If you have uniqueness constraints on the field, of course, those are also used. Basically, AUTO_INCREMENT just alters your INSERT before it hits the db if there's a missing value; otherwise the query is treated exactly as normal. This is how SQLite works too. It'd be great if we could make this required behavior. What would we do if what they provided was not an integer? What happens if the number they insert is so big that the next one causes overflow? What is the use case for this? Do we really think that most of the time users do this it'll be intentional and not just a mistake? J
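The AUTO_INCREMENT behavior described above can be modeled in a few lines. This is a toy in-memory sketch of the MySQL/SQLite semantics under discussion, not proposed IndexedDB behavior; all names here are made up for illustration:

```javascript
// Toy store: a key generator that accepts explicit keys and bumps its
// counter past them, mirroring MySQL/SQLite AUTO_INCREMENT as described above.
function makeStore() {
  const rows = new Map();
  let next = 1;
  return {
    insert(row) {
      let id = row.id;
      if (id === undefined) {
        id = next++;                  // no key supplied: generate one
      } else if (id >= next) {
        next = id + 1;                // explicit key advances the generator
      }
      if (rows.has(id)) throw new Error('unique constraint violated');
      rows.set(id, { ...row, id });
      return id;
    },
    all() { return [...rows.values()]; }
  };
}

const s = makeStore();
s.insert({ val: 1 });                 // generated id 1
s.insert({ id: 5, val: 2 });          // explicit id 5
s.insert({ val: 3 });                 // generated id 6, not 2
console.log(s.all().map(r => r.id).join(',')); // 1,5,6
```

Note how this sketch also surfaces the open questions from the thread: a non-integer or huge explicit id would need an explicit policy, which the toy simply doesn't define.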
Re: [IndexedDB] Behavior of IDBObjectStore.get() and IDBObjectStore.delete() when record doesn't exist
I really like this idea. I only skimmed the arguments against it, but they all seemed pretty hand-wavy to me. J On Mon, Nov 8, 2010 at 9:06 PM, Keean Schupke ke...@fry-it.com wrote: It would make sense if you make setting a key to undefined semantically equivalent to deleting the value (and no error if it does not exist), and return undefined on a get when no such key exists. That way 'undefined' cannot exist as a value in the object store, and is a safe marker for the key not existing in that index. Cheers, Keean. On 8 November 2010 17:52, Tab Atkins Jr. jackalm...@gmail.com wrote: On Mon, Nov 8, 2010 at 8:24 AM, Jonas Sicking jo...@sicking.cc wrote: Hi All, One of the things we discussed at TPAC was the fact that IDBObjectStore.get() and IDBObjectStore.delete() currently fire an error event if no record with the supplied key exists. Especially for .delete() this seems suboptimal as the author wanted the entry with the given key removed anyway. A better alternative here seems to be to return (through a success event) true or false to indicate if a record was actually removed. For IDBObjectStore.get() it also seems like it will create an error event in situations which aren't unexpected at all. For example checking for the existence of certain information, or getting information if it's there, but using some type of default if it's not. An obvious choice here is to simply return (through a success event) undefined if no entry is found. The downside with this is that you can't tell the lack of an entry apart from an entry stored with the value undefined. However it seemed more rare to want to tell those apart (you can generally store something other than undefined), than to end up in situations where you'd want to get() something which possibly didn't exist. Additionally, you can still use openCursor() to tell the two apart if really desired. I've for now checked in this change [1], but please speak up if you think this is a bad idea for whatever reason. 
In general I'd disagree with you on get(), and point to basically all hash-table implementations which all give a way of telling whether you got a result or not, but the fact that javascript has false, null, *and* undefined makes me okay with this. I believe it's sufficient to use 'undefined' as the flag for there was nothing for this key in the objectstore, and just tell authors don't put undefined in an objectstore; use false or null instead. ~TJ
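The author-side pattern this change enables is straightforward. A Map-backed toy stand-in (synchronous, not the real async IndexedDB API) shows the explicit undefined check and why it beats a truthiness test:

```javascript
// Toy stand-in for an object store where get() yields undefined for a
// missing key, per the change discussed above.
const settings = new Map([['theme', 'dark'], ['flag', false]]);

function getOrDefault(key, fallback) {
  const value = settings.get(key);          // undefined when absent
  // Compare against undefined explicitly: a stored false/null/0 must win.
  return value === undefined ? fallback : value;
}

console.log(getOrDefault('theme', 'light'));  // 'dark'  (present)
console.log(getOrDefault('fontSize', 14));    // 14      (absent, default used)
console.log(getOrDefault('flag', true));      // false   (falsy stored value kept)
```

This also illustrates TJ's point: the scheme only works if authors follow the "don't store undefined" convention, since a stored undefined would be indistinguishable from an absent key.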
Re: [IndexedDB] .value of no-duplicate cursors
On Tue, Nov 9, 2010 at 12:21 AM, Jonas Sicking jo...@sicking.cc wrote: This discussion seemed to die off with no clear resolution. Since I had forgotten about this thread, I specified that the first item is always the one returned for _NO_DUPLICATE cursors, where first means with lowest object-store key. It seems as though first should mean with the highest key in the case of reverse cursors. This is how it's implemented in Chromium. J I don't feel strongly either way about whether they should be removed or not. SQL has 'unique', but of course we're not aiming to match SQL's feature set. / Jonas
Re: CfC: FPWD of Web Messaging; deadline November 13
On Sat, 06 Nov 2010 12:48:40 +0100, Arthur Barstow art.bars...@nokia.com wrote: Ian, All - during WebApps' November 1 gathering, participants expressed an interest in publishing a First Public Working Draft of Web Messaging [1] and this is a CfC to do so: http://dev.w3.org/html5/postmsg/ Opera supports publication. cheers This CfC satisfies the group's requirement to record the group's decision to request advancement. By publishing this FPWD, the group sends a signal to the community to begin reviewing the document. The FPWD reflects where the group is on this spec at the time of publication; it does not necessarily mean there is consensus on the spec's contents. As with all of our CfCs, positive response is preferred and encouraged and silence will be assumed to be assent. The deadline for comments is November 13. -Art Barstow [1] http://www.w3.org/2010/11/01-webapps-minutes.html#item04 Original Message Subject: ACTION-598: Start a CfC to publish a FPWD of Web Messaging (Web Applications Working Group) Date: Mon, 1 Nov 2010 11:35:29 +0100 From: ext Web Applications Working Group Issue Tracker sysbot+trac...@w3.org Reply-To: Web Applications Working Group WG public-webapps@w3.org To: Barstow Art (Nokia-CIC/Boston) art.bars...@nokia.com ACTION-598: Start a CfC to publish a FPWD of Web Messaging (Web Applications Working Group) http://www.w3.org/2008/webapps/track/actions/598 On: Arthur Barstow Due: 2010-11-08 -- Charles McCathieNevile Opera Software, Standards Group je parle français -- hablo español -- jeg lærer norsk http://my.opera.com/chaals Try Opera: http://www.opera.com
Re: [Bug 11270] New: Interaction between in-line keys and key generators
Integers can be big; 8 bytes is common. It is generally assumed that the auto-increment counter will be big enough. Overflow would wrap, and if the ID already exists there would be an error. In my experience auto-increment columns must be integers. Cheers, Keean. On 11 November 2010 12:20, Jeremy Orlow jor...@chromium.org wrote: [snip]
Re: [Bug 11257] New: Should IDBCursor.update be able to create a new entry?
On Mon, Nov 8, 2010 at 2:12 PM, bugzi...@jessica.w3.org wrote: http://www.w3.org/Bugs/Public/show_bug.cgi?id=11257 Summary: Should IDBCursor.update be able to create a new entry? Product: WebAppsWG Version: unspecified Platform: PC OS/Version: All Status: NEW Severity: normal Priority: P2 Component: Indexed Database API AssignedTo: dave.n...@w3.org ReportedBy: jo...@sicking.cc QAContact: member-webapi-...@w3.org CC: m...@w3.org, public-webapps@w3.org What should happen in the following case:

db.transaction(["foo"]).objectStore("foo").openCursor().onsuccess = function(e) { var cursor = e.result; if (!cursor) return; cursor.delete(); cursor.update({ id: 1234, value: "Benny" }); }

This situation can of course arrive in more subtle ways:

os = db.transaction(["foo"]).objectStore("foo"); os.openCursor().onsuccess = function(e) { var cursor = e.result; if (!cursor) return; cursor.update({ id: 1234, value: "Benny" }); } os.delete(1234);

As specified, IDBCursor.update behaves just like IDBObjectStore.put and just creates a new entry, but this might be somewhat unexpected behavior. Let's just remove update and delete from IDBCursor and be done with it. J
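A Map-backed toy model (not the real cursor API; names invented for illustration) makes the surprising interaction in the bug concrete: if cursor.update has put() semantics, an update after a delete silently re-creates the record:

```javascript
// Toy object store keyed by id, with cursor operations modeled as plain
// functions. update() behaves like put(), per the spec text quoted above.
const cursorStore = new Map([[1234, { id: 1234, value: 'Anna' }]]);

function cursorDelete(key) { cursorStore.delete(key); }
function cursorUpdate(record) { cursorStore.set(record.id, record); } // put() semantics

cursorDelete(1234);                           // record gone...
cursorUpdate({ id: 1234, value: 'Benny' });   // ...and silently resurrected
console.log(cursorStore.has(1234)); // true — possibly not what the author expected
```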
Re: [Bug 11269] New: Evaluating keyPaths needs to be better specified
On Tue, Nov 9, 2010 at 3:38 AM, bugzi...@jessica.w3.org wrote: http://www.w3.org/Bugs/Public/show_bug.cgi?id=11269 Summary: Evaluating keyPaths needs to be better specified Product: WebAppsWG Version: unspecified Platform: PC OS/Version: All Status: NEW Severity: normal Priority: P2 Component: Indexed Database API AssignedTo: dave.n...@w3.org ReportedBy: jo...@sicking.cc QAContact: member-webapi-...@w3.org CC: m...@w3.org, public-webapps@w3.org This bug is very similar to bug 9832, however since that bug is discussing a lot of related issues, I wanted to file a separate one to make sure this isn't forgotten. Currently the syntax for parsing a keyPath isn't explicitly defined. It is clear that for keypath/object foo.bar.baz { foo: { bar: { baz: 4 } } } the result is 4. However, does keypath/object foo[1].bar { foo: [ false, { bar: 4 }, true ] } evaluate to 4? What about foo[x].bar { foo: [ { bar: 4 } ], x: 0 }? Do either of those evaluate to 4? The last example could be useful when out-of-line keys are used, but still wanting to be able to search on the stored value. Chromium supports all of the examples you've listed so far. Making it more restrictive could be a good idea though. Does anyone have specific thoughts? This seems like a bit of a stretch for v1. J
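For illustration, here is a hypothetical evaluator for the keyPath forms discussed above (dotted access plus literal numeric indexing). The variable-index form foo[x] is rejected, matching the sense that it is a stretch for v1. Nothing here is specified behavior; the function name and grammar are assumptions for the sketch:

```javascript
// Hypothetical keyPath evaluator: supports "foo.bar.baz" and "foo[1].bar",
// but not variable indices like "foo[x]".
function evaluateKeyPath(obj, keyPath) {
  // Turn "foo[1].bar" into the component list ["foo", "1", "bar"].
  const parts = keyPath.split('.').flatMap(function (p) {
    const m = p.match(/^([^\[]+)((\[\d+\])*)$/);
    if (!m) throw new Error('unsupported keyPath component: ' + p);
    const indices = [...p.matchAll(/\[(\d+)\]/g)].map(x => x[1]);
    return [m[1], ...indices];
  });
  // Walk the object; bail out with undefined on a missing step.
  return parts.reduce((cur, key) => (cur == null ? undefined : cur[key]), obj);
}

console.log(evaluateKeyPath({ foo: { bar: { baz: 4 } } }, 'foo.bar.baz'));      // 4
console.log(evaluateKeyPath({ foo: [false, { bar: 4 }, true] }, 'foo[1].bar')); // 4
```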
Re: Relational Data Model Example
Hi, Here are the Mozilla IndexedDB examples converted to use the relational data model. Points to note:

- The database is validated (that is, the schema in the JavaScript is either used to create the database if it does not exist, or to make sure that the database conforms to the schema if it does exist. Currently we require an exact match for validation to succeed, however the final version will use nullable and default values to allow attributes to be added to existing relations, or attributes ignored providing the required pre-conditions are met).
- The 'true' at the end of the validate function tells it to drop the existing relations, so we always start with an empty database.
- We add more data than the original insert example so there are some results from the join query.
- There is no single-value-per-group test yet for project. But effectively when grouping by a unique attribute (like id) any attribute in the same (pre-join) relation is acceptable, as well as the attribute joined to in the other relation, but no other attribute if the joined-to column is not unique (the case in the example).
var rdm = new RelationalDataModel;
var rdb = new rdm.WebSQLiteDataAdapter;

var kids = rdm.relation('kids', {
    id:   rdm.attribute('id', rdm.integer, {auto_increment: true}),
    name: rdm.attribute('name', rdm.string)
});

var candy = rdm.relation('candy', {
    id:   rdm.attribute('id', rdm.integer, {auto_increment: true}),
    name: rdm.attribute('name', rdm.string)
});

var candySales = rdm.relation('candySales', {
    kid:   rdm.attribute('kid', rdm.integer),
    candy: rdm.attribute('candy', rdm.integer),
    date:  rdm.attribute('date', rdm.string)
});

var v = rdb.validate('CandyDB', 1.0, [kids, candy, candySales], true).onsuccess = function(db) {
    // new database has been created, or existing database has been _validated_
    var i = db.transaction(function(tx) {
        [{id: 1, name: 'Anna'},
         {id: 2, name: 'Betty'},
         {id: 3, name: 'Christine'}].forEach(function(k) {
            tx.insert(kids, k).onsuccess = function(t, id) {
                document.getElementById('display').textContent +=
                    '\tSaved record for ' + k.name + ' with id ' + id + '\n';
            };
        });

        [{id: 1, name: 'toffee-apple'},
         {id: 2, name: 'bonbon'}].forEach(function(c) {
            tx.insert(candy, c).onsuccess = function(t, id) {
                document.getElementById('display').textContent +=
                    '\tSaved record for ' + c.name + ' with id ' + id + '\n';
            };
        });

        [{kid: 1, candy: 1, date: '1/1/2010'},
         {kid: 1, candy: 2, date: '2/1/2010'},
         {kid: 2, candy: 2, date: '2/1/2010'},
         {kid: 3, candy: 1, date: '1/1/2010'},
         {kid: 3, candy: 1, date: '2/1/2010'},
         {kid: 3, candy: 1, date: '3/1/2010'}].forEach(function(s) {
            tx.insert(candySales, s).onsuccess = function(t, id) {
                document.getElementById('display').textContent +=
                    '\tSaved record for ' + s.kid + '/' + s.candy + ' with id ' + id + '\n';
            };
        });
    });

    i.onsuccess = function() {
        var q1 = db.transaction(function(tx) {
            tx.query(kids.project(kids.attributes.name)).onsuccess = function(t, names) {
                names.forEach(function(name) {
                    document.getElementById('kidList').textContent += '\t' + name + '\n';
                });
            };
        });

        q1.onsuccess = function() {
            var q2 = db.transaction(function(tx) {
                tx.query(
                    kids.join(candySales, kids.attributes.id.eq(candySales.attributes.kid))
                        .group(candySales.attributes.kid)
                        .project({name:  kids.attributes.name,
                                  count: kids.attributes.name.count()})
                ).onsuccess = function(t, results) {
                    var display = document.getElementById('purchaseList');
                    results.forEach(function(item) {
                        display.textContent += '\t' + item.name + ' bought ' + item.count + ' pieces\n';
                    });
                };
            });
        };
    };
};

Cheers, Keean. On 9 November 2010 17:13, Keean Schupke
Re: Updates to FileAPI
On Thu, Nov 11, 2010 at 1:28 AM, Anne van Kesteren ann...@opera.com wrote: On Thu, 11 Nov 2010 08:43:21 +0100, Arun Ranganathan aranganat...@mozilla.com wrote: Jian Li is right. I'm fixing this in the editor's draft. Why does lastModified even return a DOMString? Can it not just return a Date? That seems much nicer. Probably because WebIDL doesn't (didn't?) have a date type. That's a silly reason in the first place, and heycam is fixing (has fixed?) it in the second place. ~TJ
Re: [IndexedDB] Behavior of IDBObjectStore.get() and IDBObjectStore.delete() when record doesn't exist
On Thu, Nov 11, 2010 at 4:26 AM, Jeremy Orlow jor...@chromium.org wrote: I really like this idea. I only skimmed the arguments against it, but they all seemed pretty hand-wavy to me. Which idea specifically do you like? / Jonas
Re: [IndexedDB] .value of no-duplicate cursors
On Thu, Nov 11, 2010 at 4:29 AM, Jeremy Orlow jor...@chromium.org wrote: On Tue, Nov 9, 2010 at 12:21 AM, Jonas Sicking jo...@sicking.cc wrote: This discussion seemed to die off with no clear resolution. Since I had forgotten about this thread, I specified that the first item is always the one returned for _NO_DUPLICATE cursors, where first means with lowest object-store key. It seems as though first should mean with the highest key in the case of reverse cursors. This is how it's implemented in Chromium. The reason I specced it the way I did, with the lowest key always being used, is that this way a NEXT_NO_DUPLICATE and a PREV_NO_DUPLICATE cursor iterate the same entries. It seems unexpected that reversing direction would return different results? / Jonas
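Jonas's argument can be sketched with plain arrays (a toy model with invented names, not the cursor API): if the record with the lowest object-store key always wins for each index key, forward and reverse no-duplicate iteration visit exactly the same records, just in opposite order:

```javascript
// Index entries as [indexKey, objectStoreKey] pairs, with duplicate index keys.
const indexEntries = [
  ['a', 1], ['a', 2], ['b', 3], ['b', 4], ['c', 5],
];

// No-duplicate iteration per Jonas's proposal: for each index key, keep the
// entry with the lowest object-store key, regardless of direction.
function noDuplicate(entries, reverse) {
  const byKey = new Map();
  for (const [k, pk] of entries) {
    if (!byKey.has(k) || pk < byKey.get(k)) byKey.set(k, pk); // lowest store key wins
  }
  const result = [...byKey.entries()].sort((x, y) => (x[0] < y[0] ? -1 : 1));
  return reverse ? result.reverse() : result;
}

console.log(JSON.stringify(noDuplicate(indexEntries, false))); // [["a",1],["b",3],["c",5]]
console.log(JSON.stringify(noDuplicate(indexEntries, true)));  // [["c",5],["b",3],["a",1]]
```

Under Chromium's highest-key-for-reverse rule, the reverse pass would instead yield ['a',2] and ['b',4], a different record set, which is the surprise Jonas is pointing at.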
Re: [IndexedDB] .value of no-duplicate cursors
On Thu, Nov 11, 2010 at 8:07 AM, Jonas Sicking jo...@sicking.cc wrote: The reason I specced it the way I did, with the lowest key always being used, is that this way a NEXT_NO_DUPLICATE and a PREV_NO_DUPLICATE cursor iterate the same entries. It seems unexpected that reversing direction would return different results? I agree. -Ben
Re: Comments on the http://www.w3.org/TR/widgets/
On 11/9/10 12:13 PM, viji wrote: Hello, Here are some issues/clarifications on the P&C test suite:

1. ta-uLHyIMvLwz/000 : dl.wgt The archive is not encrypted. The test description mentions that it is encrypted.
2. i18n-lro/020 i18nlro20.wgt Is the expectation of the test case correct? Should the word SED change to DES, to be in sync with the example The Awesome Super <bdo dir=rtl>Dude</bdo> Widget in the P&C spec? Does it expect the character to be changed from '<' to '>'? The same behavior applies to i18n-ltr 010 020, i18n-rlo 020 etc.

I had a look; I think the tests are correct. -- Marcos Caceres Opera Software
Re: Updates to FileAPI
- Original Message - On Thu, Nov 11, 2010 at 1:28 AM, Anne van Kesteren ann...@opera.com wrote: On Thu, 11 Nov 2010 08:43:21 +0100, Arun Ranganathan aranganat...@mozilla.com wrote: Jian Li is right. I'm fixing this in the editor's draft. Why does lastModified even return a DOMString? Can it not just return a Date? That seems much nicer. Probably because WebIDL doesn't (didn't?) have a date type. That's a silly reason in the first place, and heycam is fixing (has fixed?) it in the second place. I agree that a readonly Date object returned for lastModified is one way to go, but I considered it overkill for the feature. If you think a Date object provides greater utility than simply getting at the lastModified data as a string, I'm entirely amenable to putting that in the editor's draft. -- A*
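The practical difference the thread is weighing can be shown in a couple of lines (the string format here is an assumption for illustration; the draft's actual DOMString serialization may differ):

```javascript
// With a DOMString, callers must parse before they can compare or format;
// with a Date, ordering comparisons and arithmetic work directly.
const lastModifiedString = '2010-11-11T08:43:21Z';        // assumed string form
const lastModifiedDate = new Date(lastModifiedString);    // Date form

const cutoff = new Date('2010-01-01T00:00:00Z');
console.log(lastModifiedDate > cutoff);                   // true
console.log(lastModifiedDate.getTime() - cutoff.getTime() > 0); // true
```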
Discussion of File API at TPAC in Lyon
At the recent Technical Plenary and All WG Meetings in Lyon, File API[1] was discussed, and there are some take-away action items that I minuted for myself for File API, but I'm not sure they are reflected in ACTION items, etc. From my own notes: Essentially, strong opinions were voiced against having top-level methods createObjectURL and revokeObjectURL. So the biggest change was to introduce a new top-level object (ObjectURL) which would have methods to obtain a string Blob URI. This removes the need for a revocation mechanism, since now the ObjectURL object (which would take as a constructor the Blob object) would oversee lifetime issues. This is a big change, but potentially one that allows us to work with the emerging URL API (which hopefully is going somewhere). There were additional discussions about Content-Disposition and further headers introduced to Blob URIs, but we agreed that this should go to the listserv for further discussion. The question of *further* HTTP-like behaviors on Blob URIs is still open for discussion. Notably, Content-Disposition is desired for download management, but using a header to toggle browser behavior seems a bit arbitrary, and there may be better ways to approach the issue. While I look forward to the minutes from the WebApps meeting, does anyone in attendance agree or disagree that these are the main points to take away, or wish to add something else? Note that at least two implementations are around the corner with window.createObjectURL and window.revokeObjectURL. Vendor prefixing is a viable option in the meantime. -- A* [1] http://www.w3.org/TR/FileAPI/
FileAPI editorial nit
Hi there, A valid Blob URI could look like: blob:550e8400-e29b-41d4-a716-44665544#aboutABBA ...that's actually a URI *reference*, not a URI (because of the fragment identifier) Best regards, Julian
Re: Relational Data Model Example
Hi Keean, This is awesome stuff! Very excited to see libraries that can run both on top of IndexedDB and on top of WebSQL. Would love to hear more about your experience working against the IndexedDB API. / Jonas On Thu, Nov 11, 2010 at 5:42 AM, Keean Schupke ke...@fry-it.com wrote: [snip]
Re: [Bug 11257] New: Should IDBCursor.update be able to create a new entry?
On Thu, Nov 11, 2010 at 5:11 AM, Jeremy Orlow jor...@chromium.org wrote: On Mon, Nov 8, 2010 at 2:12 PM, bugzi...@jessica.w3.org wrote:
http://www.w3.org/Bugs/Public/show_bug.cgi?id=11257

Summary: Should IDBCursor.update be able to create a new entry?
Product: WebAppsWG
Version: unspecified
Platform: PC
OS/Version: All
Status: NEW
Severity: normal
Priority: P2
Component: Indexed Database API
AssignedTo: dave.n...@w3.org
ReportedBy: jo...@sicking.cc
QAContact: member-webapi-...@w3.org
CC: m...@w3.org, public-webapps@w3.org

What should happen in the following case:

db.transaction(["foo"]).objectStore("foo").openCursor().onsuccess = function(e) {
    var cursor = e.result;
    if (!cursor) return;
    cursor.delete();
    cursor.update({ id: 1234, value: "Benny" });
}

This situation can of course arise in more subtle ways:

os = db.transaction(["foo"]).objectStore("foo");
os.openCursor().onsuccess = function(e) {
    var cursor = e.result;
    if (!cursor) return;
    cursor.update({ id: 1234, value: "Benny" });
}
os.delete(1234);

As specified, IDBCursor.update behaves just like IDBObjectStore.put and just creates a new entry, but this might be somewhat unexpected behavior. Let's just remove update and delete from IDBCursor and be done with it. The problem is that you can't always get to the key of the objectStore entry to delete/update. Specifically, if the objectStore uses out-of-line keys the cursor doesn't expose those. Also, I think that if we simply implement .update and .delete as calls to .put and .delete on the object store, the implementation burden is minimal. / Jonas
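The interaction described in the bug can be modeled with a toy in-memory store (a plain JavaScript sketch; ToyStore and cursorUpdate are made-up names, not the IndexedDB API): if cursor.update behaves exactly like objectStore.put, a record deleted earlier in the same transaction quietly comes back.

```javascript
// Toy in-memory object store with in-line keys; illustrative only.
class ToyStore {
  constructor() { this.records = new Map(); }
  put(value) { this.records.set(value.id, value); } // in-line key: value.id
  delete(key) { this.records.delete(key); }
  get(key) { return this.records.get(key); }
}

// Model of "IDBCursor.update behaves just like IDBObjectStore.put":
function cursorUpdate(store, value) { store.put(value); }

const os = new ToyStore();
os.put({ id: 1234, value: "Abba" });

// The sequence from the bug report: delete the record, then update via the cursor.
os.delete(1234);
cursorUpdate(os, { id: 1234, value: "Benny" });

// As currently specified, the deleted record has been silently re-created:
console.log(os.get(1234)); // the record exists again, with value "Benny"
```

This is only a model of the put-like semantics under debate; the real question in the thread is whether update should instead fail (or be removed) when the underlying entry is gone.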
Re: CfC: FPWD of Web Messaging; deadline November 13
On Thu, 11 Nov 2010, Arthur Barstow wrote: When WebApps re-chartered last Spring, Web Messaging was added to our Charter thus there is an expectation we will publish it. I really don't think that what our charters say sets much of an expectation. There would be much more concern over them being accurate if that was the case. :-) Assuming we get consensus to publish the FPWD, one way to move forward with the publication would be for me [and Mike Smith if he's available] to copy the latest ED and only make required changes to the text to pass Pub Rules e.g. update the Status of the Doc section. Would that be OK? Honestly I don't really see what value publishing this draft has. Just doing it because our charter says to do it is just bureaucracy for bureaucracy's sake. In any case, I do not think we should publish this draft without first solving these problems: I'm also a bit concerned that every time we publish anything on the TR/ page, we end up littering the Web with obsolete drafts (since the specs are maintained much faster than we publish them). I'd really rather just move away from publishing drafts on the TR/ page at all, if we could update the patent policy accordingly. I frequently get questions in private e-mails from implementors who are looking at obsolete drafts on the TR/ page about issues that have long been solved in the up to date drafts on dev.w3.org or at the WHATWG. If there wasn't such high overhead to publishing on the TR/ page, an alternative would be to publish a new draft there frequently. In fact, the best thing on the short term might be to publish a new REC-level draft there every week or every month or some such (probably the best interval would be whatever the patent policy's exclusion window is), since that would actually make the patent policy work again. (Currently the patent policy at the W3C is almost as useless as at the IETF since when we follow the process properly, we almost never get to REC.) 
These problems are technically easy to solve; only politics would prevent us from addressing them. I'm not really interested in discussing the politics, though. The problems are pretty obvious to anyone who's involved in the development of actively-used Web standards; IMHO it's just something W3C staff should fix, there's no need for any discussion really. -- Ian Hickson http://ln.hixie.ch/ Things that are impossible just take longer.
Re: Discussion of File API at TPAC in Lyon
On Thu, Nov 11, 2010 at 8:52 AM, Arun Ranganathan aranganat...@mozilla.com wrote: At the recent Technical Plenary and All WG Meetings in Lyon, File API[1] was discussed, and there are some take-away action items that I minuted for myself for File API, but I'm not sure they are reflected in ACTION items, etc. From my own notes: Essentially, strong opinions were voiced against having top-level methods createObjectURL and revokeObjectURL. So the biggest change was to introduce a new top-level object (ObjectURL) which would have methods to obtain a string Blob URI. This removes the need for a revocation mechanism, since now the ObjectURL object (which would take as a constructor the Blob object) would oversee lifetime issues. This is a big change, but potentially one that allows us to work with the emerging URL API (which hopefully is going somewhere). Actually, this was a brain-fart on my part. What was suggested was that we simply allow:

img.src = myFile;
img.src = myBlob;
img.src = myFutureStream;
img.src = "http://www.sweden.se/ABBA.jpg";

These things could be implemented without lifetime worries. What we might need is an IDL construct so that a specification can just say

interface HTMLImageElement {
    ...
    attribute URLThingamajig src;
    ...
};

Which would automatically define that it accepts files/blobs/strings. And gives us a central place to update when we want to add streams and other things. / Jonas
Re: Discussion of File API at TPAC in Lyon
On Thu, Nov 11, 2010 at 8:52 AM, Arun Ranganathan aranganat...@mozilla.com wrote: [snip] While I agree that we came up with the new top-level object [called the dummy object in the minutes] to hold createObjectURL and revokeObjectURL, I don't think we actually threw away the second method. It would still be useful to be able to throw away Blob URLs explicitly, so as to avoid keeping the Blobs around forever in long-lived windows. Also, I believe we decided that this should be disjoint from the URL object that abarth is speccing:

arun: is it worth making the global dummy object the same thing being specced by adam barth
no
jonas: abarth's thing is to solve parsing urls. this isn't what we need to do with blob urls
anne: not so sure
jonas: there's a vague resemblance given that they both revolve around URLs
sam: agrees ... especially since adam's thing doesn't exist yet

Checking again, my interpretation of the minutes is the same as my memory, so I can't possibly be mistaken ;). There were additional discussions about Content-Disposition and further headers introduced to Blob URIs, but we agreed that this should go to the listserv for further discussion.
The question of *further* HTTP-like behaviors on Blob URIs is still open for discussion. Notably, Content-Disposition is desired for download management, but using a header to toggle browser behavior seems a bit arbitrary, and there may be better ways to approach the issue. Yeah, I think we made some good progress there, but no conclusions. I'll start another thread about the headers. While I look forward to the minutes from the WebApps meeting, does anyone in attendance agree or disagree that these are the main points to take away, or wish to add something else? Note that at least two implementations are around the corner with window.createObjectURL and window.revokeObjectURL. Vendor prefixing is a viable option in the meantime. -- A* [1] http://www.w3.org/TR/FileAPI/
Re: Discussion of File API at TPAC in Lyon
On Thu, Nov 11, 2010 at 10:02 AM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 8:52 AM, Arun Ranganathan aranganat...@mozilla.com wrote: [snip] Actually, this was a brain-fart on my part. What was suggested was that we simply allow:

img.src = myFile;
img.src = myBlob;
img.src = myFutureStream;
img.src = "http://www.sweden.se/ABBA.jpg";

These things could be implemented without lifetime worries. What we might need is an IDL construct so that a specification can just say

interface HTMLImageElement {
    ...
    attribute URLThingamajig src;
    ...
};

Which would automatically define that it accepts files/blobs/strings. And gives us a central place to update when we want to add streams and other things. / Jonas

While this is a clean API, it doesn't work for passing URLs to plugins, and it doesn't work when folks construct a bunch of DOM via innerHTML. And if you add a way to get a string from one of these objects, you're back with the lifetime problem again.
Re: [Bug 11270] New: Interaction between in-line keys and key generators
On Thu, Nov 11, 2010 at 6:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Thu, Nov 11, 2010 at 4:20 AM, Jeremy Orlow jor...@chromium.org wrote: What would we do if what they provided was not an integer? The behavior isn't very important; throwing would be fine here. In MySQL, you can only put AUTO_INCREMENT on columns in the integer family. What happens if the number they insert is so big that the next one causes overflow? The same thing that happens if you do ++ on a variable holding a number that's too large. Or, more directly, the same thing that happens if you somehow fill up a table to the integer limit (probably deleting rows along the way to free up space), and then try to add a new row. What is the use case for this? Do we really think that most of the time users do this it'll be intentional and not just a mistake? A big one is importing some data into a live table. Many smaller ones are related to implicit data constraints that exist in the application but aren't directly expressed in the table. I've had several times when I could normally just rely on auto-numbering for something, but occasionally, due to other data I was inserting elsewhere, had to specify a particular id. This assumes that your autonumbers aren't going to overlap, and is going to behave really badly when they do. Honestly, I don't care too much about this, but I'm skeptical we're doing the right thing here. J
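The key-generator behavior under debate can be sketched as a toy store (plain JavaScript; AutoStore is a made-up name, not the IndexedDB API): an explicitly supplied integer key is accepted and bumps the generator past it, non-integers throw, and subsequent generated keys continue from the highest key seen.

```javascript
// Toy auto-increment key generator; illustrative only.
class AutoStore {
  constructor() { this.next = 1; this.records = new Map(); }
  put(value, key) {
    if (key === undefined) {
      key = this.next++;                            // generate the next key
    } else {
      if (!Number.isInteger(key)) throw new TypeError("explicit key must be an integer");
      if (key >= this.next) this.next = key + 1;    // bump the generator past it
    }
    this.records.set(key, value);
    return key;
  }
}

const store = new AutoStore();
store.put("a");            // generated key 1
store.put("b", 500);       // explicit key, e.g. data imported into a live table
const k = store.put("c");  // generator continues after the explicit key
console.log(k); // 501
```

Overflow is not modeled here; as the thread notes, with 64-bit generator values it is a non-issue in practice unless a key is set maliciously close to the limit.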
Re: [IndexedDB] Behavior of IDBObjectStore.get() and IDBObjectStore.delete() when record doesn't exist
The email I responded to: It would make sense if you make setting a key to undefined semantically equivalent to deleting the value (and no error if it does not exist), and return undefined on a get when no such key exists. That way 'undefined' cannot exist as a value in the object store, and is a safe marker for the key not existing in that index. undefined should be symmetric. If something not existing returns undefined then passing in undefined should make it not exist. Overloading the meaning of a get returning undefined is ugly. And simply disallowing a value also seems a bit odd. But I think this is pretty elegant semantically. J On Thu, Nov 11, 2010 at 7:04 PM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 4:26 AM, Jeremy Orlow jor...@chromium.org wrote: I really like this idea. I only skimmed the arguments against it, but they all seemed pretty hand-wavy to me. Which idea specifically do you like? / Jonas
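The symmetric-undefined semantics quoted above can be sketched as a toy wrapper (plain JavaScript; SymmetricStore is a made-up name, not a proposed interface): putting undefined is equivalent to deleting the key, so a get returning undefined unambiguously means the key is absent.

```javascript
// Toy store where `undefined` is symmetric: storing undefined deletes the
// key (no error if it does not exist), and getting a missing key returns
// undefined. Illustrative only.
class SymmetricStore {
  constructor() { this.records = new Map(); }
  put(key, value) {
    if (value === undefined) {
      this.records.delete(key);      // semantically a delete; never errors
    } else {
      this.records.set(key, value);
    }
  }
  get(key) { return this.records.get(key); } // undefined when absent
}

const s = new SymmetricStore();
console.log(s.get("x"));   // undefined: the key never existed
s.put("x", 42);
console.log(s.get("x"));   // 42
s.put("x", undefined);     // equivalent to deleting "x"
console.log(s.get("x"));   // undefined again: the key is gone
```

Note the trade-off raised later in the thread: under these semantics it becomes impossible to store undefined itself as a value.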
Re: Discussion of File API at TPAC in Lyon
On Thu, Nov 11, 2010 at 11:18 AM, Eric Uhrhane er...@google.com wrote: On Thu, Nov 11, 2010 at 10:02 AM, Jonas Sicking jo...@sicking.cc wrote: [snip] While this is a clean API, it doesn't work for passing URLs to plugins, and it doesn't work when folks construct a bunch of DOM via innerHTML. And if you add a way to get a string from one of these objects, you're back with the lifetime problem again. Oh, definitely, we still need the createObjectURL/revokeObjectURL functions. Sorry, that was probably unclear. However we're still left without a place to put them.
Maybe it's as simple as putting them on the document object? That works nicely since their lifetime is scoped to that of the document object. Another possibility is putting them on the URL interface object. I.e. not using URL objects themselves, but rather something like this:

x = URL.createObjectURL(myBlob);
typeof x == "string";
URL.revokeObjectURL(x);

But I think I prefer the document solution. / Jonas
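The lifetime bookkeeping being discussed can be modeled with a toy registry (plain JavaScript; ObjectURLRegistry and its members are made-up names, and real blob: URLs are minted by the browser, not by script): create returns an opaque string that pins the blob, revoke releases it, and scoping the registry to the document means teardown can release everything at once.

```javascript
// Toy model of createObjectURL/revokeObjectURL lifetime rules; illustrative only.
class ObjectURLRegistry {
  constructor() { this.live = new Map(); this.counter = 0; }
  createObjectURL(blob) {
    const url = "blob:toy/" + (this.counter++); // opaque string handle
    this.live.set(url, blob);                   // keeps the blob alive
    return url;
  }
  revokeObjectURL(url) { this.live.delete(url); }
  // What document scoping buys: teardown revokes everything, so long-lived
  // pages cannot leak blobs they forgot to revoke.
  teardown() { this.live.clear(); }
}

const doc = new ObjectURLRegistry();
const url = doc.createObjectURL({ size: 3 });
console.log(typeof url);        // "string"
console.log(doc.live.has(url)); // true: the blob is pinned
doc.revokeObjectURL(url);
console.log(doc.live.has(url)); // false: the blob can be collected
```

This only sketches why a string-returning API re-introduces the revocation problem: once the handle is a plain string, something must track when the underlying blob may be released.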
Re: [IndexedDB] .value of no-duplicate cursors
When I think of this, I think of it returning the first item for a particular value. I can't think of any use cases where it'd matter either way though. Can you? J On Thu, Nov 11, 2010 at 7:13 PM, ben turner bent.mozi...@gmail.com wrote: On Thu, Nov 11, 2010 at 8:07 AM, Jonas Sicking jo...@sicking.cc wrote: The reason I specced it they way I did, with the lowest key always being used, is that this way a NEXT_NO_DUPLICATE and a PREV_NO_DUPLICATE cursor iterate the same entries. It seems unexpected that reversing direction would return different results? I agree. -Ben
Re: [Bug 11257] New: Should IDBCursor.update be able to create a new entry?
On Thu, Nov 11, 2010 at 8:46 PM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 5:11 AM, Jeremy Orlow jor...@chromium.org wrote: On Mon, Nov 8, 2010 at 2:12 PM, bugzi...@jessica.w3.org wrote: http://www.w3.org/Bugs/Public/show_bug.cgi?id=11257 [snip] The problem is that you can't always get to the key of the objectStore entry to delete/update. Specifically, if the objectStore uses out-of-line keys the cursor doesn't expose those. Why not fix this use case then? I.e. change the cursor to return .indexKey, .primaryKey, .value (or something like that). If we did this, we could even get rid of the difference between object cursors and key cursors (which overload the .value to mean the primary key, which is quite confusing). J
Re: [IndexedDB] Behavior of IDBObjectStore.get() and IDBObjectStore.delete() when record doesn't exist
On 11/11/2010 11:44 AM, Jeremy Orlow wrote: The email I responded to: It would make sense if you make setting a key to undefined semantically equivalent to deleting the value (and no error if it does not exist), and return undefined on a get when no such key exists. That way 'undefined' cannot exist as a value in the object store, and is a safe marker for the key not existing in that index. undefined should be symmetric. If something not existing returns undefined then passing in undefined should make it not exist. Overloading the meaning of a get returning undefined is ugly. And simply disallowing a value also seems a bit odd. But I think this is pretty elegant semantically. Sorry, but I disagree. I feel that having put result in a deletion is highly counter-intuitive, even if it makes sense when you think about it. Cheers, Shawn
Re: Discussion of File API at TPAC in Lyon
On Nov/11/2010 11:52 AM, ext Arun Ranganathan wrote: While I look forward to the minutes from the WebApps meeting, The minutes from File* discussion are: http://www.w3.org/2010/11/02-webapps-minutes.html#item16 http://www.w3.org/2010/11/02-webapps-minutes.html#item17 -AB
Re: [IndexedDB] Behavior of IDBObjectStore.get() and IDBObjectStore.delete() when record doesn't exist
On Thu, Nov 11, 2010 at 11:44 AM, Jeremy Orlow jor...@chromium.org wrote: The email I responded to: It would make sense if you make setting a key to undefined semantically equivalent to deleting the value (and no error if it does not exist), and return undefined on a get when no such key exists. That way 'undefined' cannot exist as a value in the object store, and is a safe marker for the key not existing in that index. undefined should be symmetric. If something not existing returns undefined then passing in undefined should make it not exist. Overloading the meaning of a get returning undefined is ugly. And simply disallowing a value also seems a bit odd. But I think this is pretty elegant semantically. As I've asked previously in the thread: what problem are you trying to solve? Can you describe the type of application that gets easier to write/possible to write/has cleaner code/runs faster if we make this change? It seems like deleting on .put(undefined) creates a very unexpected behavior just to try to cover a rare edge case: wanting to both store undefined, and tell it apart from the lack of value. In fact, the proposal doesn't even solve that edge case since it no longer is possible to store undefined. Which brings me back to the question above of what problem you are trying to solve. / Jonas
Re: Discussion of File API at TPAC in Lyon
On Thu, Nov 11, 2010 at 11:18 AM, Eric Uhrhane er...@google.com wrote: On Thu, Nov 11, 2010 at 10:02 AM, Jonas Sicking jo...@sicking.cc wrote: [snip] While this is a clean API, it doesn't work for passing URLs to plugins, and it doesn't work when folks construct a bunch of DOM via innerHTML. And if you add a way to get a string from one of these objects, you're back with the lifetime problem again. Oh, definitely, we still need the createObjectURL/revokeObjectURL functions. Sorry, that was probably unclear. However we're still left without a place to put them.
Maybe it's as simple as putting them on the document object? That works nicely since their lifetime is scoped to that of the document object. If we're going to keep both functions around, then it's honestly not *that much* of an improvement to move them from window* to document*, is it? In this case, since we're going to add something to HTMLImageElement, why not leave createObjectURL and revokeObjectURL well alone as part of window*? So it looks like we'll add a [Supplemental] to interfaces like HTMLImageElement allowing them to take a src object, and we can then define *that* src object to accommodate Stream and Blob use case scenarios. I'm amenable to first introducing that extension to HTMLImageElement in File API if everyone else is :) -- A*
Re: [Bug 11257] New: Should IDBCursor.update be able to create a new entry?
On Thu, Nov 11, 2010 at 11:51 AM, Jeremy Orlow jor...@chromium.org wrote: On Thu, Nov 11, 2010 at 8:46 PM, Jonas Sicking jo...@sicking.cc wrote: [snip] Why not fix this use case then? I.e. change the cursor to return .indexKey, .primaryKey, .value (or something like that). If we did this, we could even get rid of the difference between object cursors and key cursors (which overload the .value to mean the primary key, which is quite confusing). I would be ok with exposing some new property which exposes the objectStore key.
But I still think that .update and .delete are a useful and logical API with very little implementation cost. / Jonas
Re: Discussion of File API at TPAC in Lyon
On Thu, Nov 11, 2010 at 1:06 PM, Arun Ranganathan aranganat...@mozilla.com wrote: On Thu, Nov 11, 2010 at 11:18 AM, Eric Uhrhane er...@google.com wrote: [snip] Oh, definitely, we still need the createObjectURL/revokeObjectURL functions.
Sorry, that was probably unclear. However we're still left without a place to put them. Maybe it's as simple as putting them on the document object? That works nicely since their lifetime is scoped to that of the document object. If we're going to keep both functions around, then it's honestly not *that much* of an improvement to move them from window* to document*, is it? In this case, since we're going to add something to HTMLImageElement, why not leave createObjectURL and revokeObjectURL well alone as part of window*? I think the concern is that functions on window can collide with JavaScript functions that webpages can define. I.e. there's a risk that there are pages out there with code like:

function createObjectURL(x, y, z) {
    doSomethingCompletelyUnrelatedToBlobs(z, x + y);
}

So it looks like we'll add a [Supplemental] to interfaces like HTMLImageElement allowing them to take a src object, and we can then define *that* src object to accommodate Stream and Blob use case scenarios. I'm amenable to first introducing that extension to HTMLImageElement in File API if everyone else is :) I'd be fine with that, but it might also be easy to ask that this is added to the HTML5 spec. / Jonas
Re: [IndexedDB] .value of no-duplicate cursors
On Thu, Nov 11, 2010 at 11:48 AM, Jeremy Orlow jor...@chromium.org wrote: When I think of this, I think of it returning the first item for a particular value. I can't think of any use cases where it'd matter either way though. Can you? Define first :) I also can't think of use cases where it matters which is returned, but I still think it's confusing that it'd change depending on which order things are iterated. Consider a page which displays a table of results and which has the ability to sort results by a particular column by clicking the header in that column. It would seem strange if the contents of that table changed when you switched a column between ascending and descending sorting. / Jonas
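Jonas's point can be modeled directly (plain JavaScript sketch; noDuplicate is a made-up helper, not the IndexedDB cursor API): if a *_NO_DUPLICATE cursor always yields the lowest-primary-key entry for each distinct index value, then forward and reverse iteration visit the same set of rows, just in opposite order.

```javascript
// Toy model of a NEXT_NO_DUPLICATE / PREV_NO_DUPLICATE index cursor where
// the lowest primary key always wins for a duplicated index value.
function noDuplicate(entries, direction) {
  // entries: [{ indexKey, primaryKey, value }]
  const byIndexKey = new Map();
  for (const e of entries) {
    const cur = byIndexKey.get(e.indexKey);
    if (!cur || e.primaryKey < cur.primaryKey) byIndexKey.set(e.indexKey, e);
  }
  const picked = [...byIndexKey.values()].sort((a, b) => a.indexKey - b.indexKey);
  return direction === "prev" ? picked.reverse() : picked;
}

const entries = [
  { indexKey: 10, primaryKey: 1, value: "a" },
  { indexKey: 10, primaryKey: 2, value: "b" }, // duplicate index value; never yielded
  { indexKey: 20, primaryKey: 3, value: "c" },
];

const fwd = noDuplicate(entries, "next").map(e => e.value);
const rev = noDuplicate(entries, "prev").map(e => e.value);
console.log(fwd); // [ 'a', 'c' ]
console.log(rev); // [ 'c', 'a' ]  -- same rows, reversed order
```

Under a "first in iteration direction" rule instead, the reverse cursor would yield "b" rather than "a" for indexKey 10, which is exactly the direction-dependent result the thread argues against.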
Re: [Bug 11270] New: Interaction between in-line keys and key generators
On Thu, Nov 11, 2010 at 11:41 AM, Jeremy Orlow jor...@chromium.org wrote: On Thu, Nov 11, 2010 at 6:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: [snip] Honestly, I don't care too much about this, but I'm skeptical we're doing the right thing here. Pablo did bring up a good use case, which is wanting to migrate existing data to a new object store, for example with a new schema. And every database examined so far has some ability to specify autonumbered columns. Overlaps aren't a problem in practice since 64-bit integers are really, really big. So unless someone maliciously sets a number close to the upper bound of that, overlaps won't be a problem. / Jonas
Re: Relational Data Model Example
Well the implementation is not running on IndexedDB yet... however I can see no fundamental problems that will stop the implementation. I am sure once I get into the details there will be issues - but I expect these to be performance related. The plan is to continue to refine the common abstraction part of the prototype - I want to complete the relational data model - then start the IndexedDB backend. I'll let you know when I have something on IndexedDB. Cheers, Keean. On 11 November 2010 17:35, Jonas Sicking jo...@sicking.cc wrote: Hi Keean, This is awesome stuff! Very excited to see libraries that can run both on top of IndexedDB and on top of WebSQL. Would love to hear more about your experience working against the IndexedDB API. / Jonas On Thu, Nov 11, 2010 at 5:42 AM, Keean Schupke ke...@fry-it.com wrote: Hi, Here are the Mozilla IndexedDB examples converted to us the relational data model. Points to note: - The database is validated (that is the schema in the JavaScript is either used to create the database if it does not exit, or to make sure that the database conforms to the schema if it does exist. Currently we require an exact match for validation to succeed, however the final version will use nullable and default values to allow attributes to be added to existing relations, or attributes ignored providing the required pre-conditions are met). - The 'true' at the end of the validate function tells it to drop the existing relations, so we always start with an empty database. - We add more data than the original insert example so there are some results from the join query. - There is no single-value-per-group test yet for project. But effectively when grouping by a unique attribute (like id) any attribute in the same (pre-join) relation is acceptable, as well as the attribute joined to in the other relation, but no other attribute if the joined to column is not unique (the case in the example). 
var rdm = new RelationalDataModel;
var rdb = new rdm.WebSQLiteDataAdapter;

var kids = rdm.relation('kids', {
    id: rdm.attribute('id', rdm.integer, {auto_increment: true}),
    name: rdm.attribute('name', rdm.string)
});

var candy = rdm.relation('candy', {
    id: rdm.attribute('id', rdm.integer, {auto_increment: true}),
    name: rdm.attribute('name', rdm.string)
});

var candySales = rdm.relation('candySales', {
    kid: rdm.attribute('kid', rdm.integer),
    candy: rdm.attribute('candy', rdm.integer),
    date: rdm.attribute('date', rdm.string)
});

var v = rdb.validate('CandyDB', 1.0, [kids, candy, candySales], true).onsuccess = function(db) {
    // new database has been created, or existing database has been _validated_
    var i = db.transaction(function(tx) {
        [
            {id: 1, name: 'Anna'},
            {id: 2, name: 'Betty'},
            {id: 3, name: 'Christine'}
        ].forEach(function(k) {
            tx.insert(kids, k).onsuccess = function(t, id) {
                document.getElementById('display').textContent +=
                    '\tSaved record for ' + k.name + ' with id ' + id + '\n';
            };
        });

        [
            {id: 1, name: 'toffee-apple'},
            {id: 2, name: 'bonbon'}
        ].forEach(function(c) {
            tx.insert(candy, c).onsuccess = function(t, id) {
                document.getElementById('display').textContent +=
                    '\tSaved record for ' + c.name + ' with id ' + id + '\n';
            };
        });

        [
            {kid: 1, candy: 1, date: '1/1/2010'},
            {kid: 1, candy: 2, date: '2/1/2010'},
            {kid: 2, candy: 2, date: '2/1/2010'},
            {kid: 3, candy: 1, date: '1/1/2010'},
            {kid: 3, candy: 1, date: '2/1/2010'},
            {kid: 3, candy: 1, date: '3/1/2010'}
        ].forEach(function(s) {
            tx.insert(candySales, s).onsuccess = function(t, id) {
                document.getElementById('display').textContent +=
                    '\tSaved record for ' + s.kid + '/' + s.candy + ' with id ' + id + '\n';
            };
        });
    });

    i.onsuccess = function() {
        var q1 = db.transaction(function(tx) {
            tx.query(kids.project(kids.attributes.name)).onsuccess = function(t, names) {
                names.forEach(function(name) {
                    document.getElementById('kidList').textContent += '\t' + name + '\n';
Re: [Bug 11257] New: Should IDBCursor.update be able to create a new entry?
On Fri, Nov 12, 2010 at 12:11 AM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 11:51 AM, Jeremy Orlow jor...@chromium.org wrote: On Thu, Nov 11, 2010 at 8:46 PM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 5:11 AM, Jeremy Orlow jor...@chromium.org wrote: On Mon, Nov 8, 2010 at 2:12 PM, bugzi...@jessica.w3.org wrote: http://www.w3.org/Bugs/Public/show_bug.cgi?id=11257 Summary: Should IDBCursor.update be able to create a new entry? Product: WebAppsWG Version: unspecified Platform: PC OS/Version: All Status: NEW Severity: normal Priority: P2 Component: Indexed Database API AssignedTo: dave.n...@w3.org ReportedBy: jo...@sicking.cc QAContact: member-webapi-...@w3.org CC: m...@w3.org, public-webapps@w3.org What should happen in the following case: db.transaction([foo]).objectStore(foo).openCursor().onsuccess = function(e) { var cursor = e.result; if (!cursor) return; cursor.delete(); cursor.update({ id: 1234, value: Benny }); } This situation can of course arise in more subtle ways: os = db.transaction([foo]).objectStore(foo); os.openCursor().onsuccess = function(e) { var cursor = e.result; if (!cursor) return; cursor.update({ id: 1234, value: Benny }); } os.delete(1234); As specified, IDBCursor.update behaves just like IDBObjectStore.put and just creates a new entry, but this might be somewhat unexpected behavior. Let's just remove update and delete from IDBCursor and be done with it. The problem is that you can't always get to the key of the objectStore entry to delete/update. Specifically if the objectStore uses out-of-line keys the cursor doesn't expose those. Why not fix this use case then? I.e. change the cursor to return .indexKey, .primaryKey, .value (or something like that). If we did this, we could even get rid of the difference between object cursors and key cursors (which overload the .value to mean the primary key, which is quite confusing). I would be ok with exposing some new property which exposes the objectstore key. 
And thus get rid of the openKeyCursor and getKey? This would make the spec a bit more complicated (we'd need to have 2 IDBCursor objects, one that inherits from the other), but seems much simpler to use. Shall I file a bug? But I still think that .update and .delete are useful and logical APIs which have very little implementation cost. I still don't understand why you think low implementation cost is important at all when talking about these APIs. If something is so insanely complex that implementors would likely not implement it, then I can understand bringing it up, but otherwise I think most people on this list can agree that it should carry _very_ little weight when deciding whether API surface area is worth it. J
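[The put-like semantics under discussion can be modeled with a plain Map standing in for an object store with in-line key 'id'. This is a hypothetical sketch of the behavior as currently specified, not the IndexedDB API; the function names are invented for illustration.]

```javascript
// Map stands in for an object store keyed on the in-line key 'id'.
// As currently specified, cursor.update() behaves like
// IDBObjectStore.put(): insert-or-replace, with no check that the
// entry the cursor pointed at still exists.
const store = new Map([[1234, { id: 1234, value: 'Fred' }]]);

function cursorDelete(key) {
  store.delete(key);
}

function cursorUpdate(record) {
  // put-like: silently re-creates the entry if it was deleted
  store.set(record.id, record);
}

cursorDelete(1234);
cursorUpdate({ id: 1234, value: 'Benny' });

console.log(store.has(1234)); // true: the deleted entry came back
```

This is exactly the "somewhat unexpected behavior" the bug describes: the delete appears to be silently undone by the subsequent update.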
Re: [Bug 11270] New: Interaction between in-line keys and key generators
On Thu, Nov 11, 2010 at 9:22 PM, Jeremy Orlow jor...@chromium.org wrote: On Fri, Nov 12, 2010 at 12:32 AM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 11:41 AM, Jeremy Orlow jor...@chromium.org wrote: On Thu, Nov 11, 2010 at 6:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Thu, Nov 11, 2010 at 4:20 AM, Jeremy Orlow jor...@chromium.org wrote: What would we do if what they provided was not an integer? The behavior isn't very important; throwing would be fine here. In mySQL, you can only put AUTO_INCREMENT on columns in the integer family. What happens if the number they insert is so big that the next one causes overflow? The same thing that happens if you do ++ on a variable holding a number that's too large. Or, more directly, the same thing that happens if you somehow fill up a table to the integer limit (probably deleting rows along the way to free up space), and then try to add a new row. What is the use case for this? Do we really think that most of the time users do this it'll be intentional and not just a mistake? A big one is importing some data into a live table. Many smaller ones are related to implicit data constraints that exist in the application but aren't directly expressed in the table. I've had several times when I could normally just rely on auto-numbering for something, but occasionally, due to other data I was inserting elsewhere, had to specify a particular id. This assumes that your autonumbers aren't going to overlap and is going to behave really badly when they do. Honestly, I don't care too much about this, but I'm skeptical we're doing the right thing here. Pablo did bring up a good use case, which is wanting to migrate existing data to a new object store, for example with a new schema. And every database examined so far has some ability to specify autonumbered columns. overlaps aren't a problem in practice since 64bit integers are really really big. 
So unless someone maliciously sets a number close to the upper bound of that then overlaps won't be a problem. Yes, but we'd need to spec this, implement it, and test it because someone will try to do this maliciously. I'd say it's fine to treat the range of IDs as a hardware limitation. I.e. similarly to how we don't specify how much data a webpage is allowed to put into DOMStrings, at some point every implementation is going to run out of memory and effectively limit it. In practice this isn't a problem since the limit is high enough. Another would be to define that the ID is 64 bit and if you run out of IDs no more rows can be inserted into the objectStore. At that point the page is responsible for creating a new object store and compacting down IDs. In practice no page will run into this limitation if they use IDs increasing by one. Even if you generate a new ID a million times a second, it'll still take you over half a million years to run out of 64bit IDs. And, in the email you replied right under, I brought up the point that this feature won't help someone who's trying to import data into a table that already has data in it because some of it might clash. So, just to make sure we're all on the same page, the use case for this is restoring data into an _empty_ object store, right? (Because I don't think this is a good solution for much else.) That's the main scenario I can think of that would require this yes. / Jonas
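[The semantics being converged on above — explicit keys are allowed, the generator skips past them, and exhausting the key space simply makes further generated inserts fail — can be sketched as a toy key generator. This is an illustration of the proposal, not the IndexedDB API; the class and method names are invented, and the upper bound is a stand-in.]

```javascript
// Toy model of the proposed key-generator semantics: the generator
// always hands out (highest key seen so far) + 1, and running out of
// the key space makes generated inserts fail rather than wrapping.
const MAX_ID = Math.pow(2, 53) - 1; // stand-in upper bound

class KeyGenerator {
  constructor() { this.next = 1; }

  // called when the page supplies an explicit key on insert
  noteExplicitKey(key) {
    if (!Number.isInteger(key) || key < 1) throw new TypeError('bad key');
    if (key >= this.next) this.next = key + 1; // bump past explicit keys
  }

  // called when the page omits the key
  generate() {
    if (this.next > MAX_ID) throw new RangeError('key space exhausted');
    return this.next++;
  }
}

const gen = new KeyGenerator();
gen.generate();           // 1
gen.noteExplicitKey(500); // e.g. a row imported with an explicit id
gen.generate();           // 501: the generator skipped past the explicit key
```

Note how this matches the restore-into-an-empty-store use case: replayed rows carry their old explicit ids, and subsequent generated ids continue from just above the highest one seen.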
Re: [Bug 11270] New: Interaction between in-line keys and key generators
On Fri, Nov 12, 2010 at 10:08 AM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 9:22 PM, Jeremy Orlow jor...@chromium.org wrote: On Fri, Nov 12, 2010 at 12:32 AM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 11:41 AM, Jeremy Orlow jor...@chromium.org wrote: On Thu, Nov 11, 2010 at 6:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Thu, Nov 11, 2010 at 4:20 AM, Jeremy Orlow jor...@chromium.org wrote: What would we do if what they provided was not an integer? The behavior isn't very important; throwing would be fine here. In mySQL, you can only put AUTO_INCREMENT on columns in the integer family. What happens if the number they insert is so big that the next one causes overflow? The same thing that happens if you do ++ on a variable holding a number that's too large. Or, more directly, the same thing that happens if you somehow fill up a table to the integer limit (probably deleting rows along the way to free up space), and then try to add a new row. What is the use case for this? Do we really think that most of the time users do this it'll be intentional and not just a mistake? A big one is importing some data into a live table. Many smaller ones are related to implicit data constraints that exist in the application but aren't directly expressed in the table. I've had several times when I could normally just rely on auto-numbering for something, but occasionally, due to other data I was inserting elsewhere, had to specify a particular id. This assumes that your autonumbers aren't going to overlap and is going to behave really badly when they do. Honestly, I don't care too much about this, but I'm skeptical we're doing the right thing here. Pablo did bring up a good use case, which is wanting to migrate existing data to a new object store, for example with a new schema. And every database examined so far has some ability to specify autonumbered columns. 
overlaps aren't a problem in practice since 64bit integers are really really big. So unless someone maliciously sets a number close to the upper bound of that then overlaps won't be a problem. Yes, but we'd need to spec this, implement it, and test it because someone will try to do this maliciously. I'd say it's fine to treat the range of IDs as a hardware limitation. I.e. similarly to how we don't specify how much data a webpage is allowed to put into DOMStrings, at some point every implementation is going to run out of memory and effectively limit it. In practice this isn't a problem since the limit is high enough. Another would be to define that the ID is 64 bit and if you run out of IDs no more rows can be inserted into the objectStore. At that point the page is responsible for creating a new object store and compacting down IDs. In practice no page will run into this limitation if they use IDs increasing by one. Even if you generate a new ID a million times a second, it'll still take you over half a million years to run out of 64bit IDs. This seems reasonable. OK, let's do it. And, in the email you replied right under, I brought up the point that this feature won't help someone who's trying to import data into a table that already has data in it because some of it might clash. So, just to make sure we're all on the same page, the use case for this is restoring data into an _empty_ object store, right? (Because I don't think this is a good solution for much else.) That's the main scenario I can think of that would require this yes. / Jonas
Re: [Bug 11270] New: Interaction between in-line keys and key generators
The other thing you could do is specify that when you get a wrap (i.e. someone inserts a key of MAXINT - 1) you auto-compact the table. If you really have run out of indexes there is not a lot you can do. The other thing to consider is that because JS uses signed arithmetic, it's really a 63-bit number... unless you want negative indexes appearing? (And how would that affect ordering and sorting?) Cheers, Keean. On 12 November 2010 07:36, Jeremy Orlow jor...@chromium.org wrote: On Fri, Nov 12, 2010 at 10:08 AM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 9:22 PM, Jeremy Orlow jor...@chromium.org wrote: On Fri, Nov 12, 2010 at 12:32 AM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 11, 2010 at 11:41 AM, Jeremy Orlow jor...@chromium.org wrote: On Thu, Nov 11, 2010 at 6:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Thu, Nov 11, 2010 at 4:20 AM, Jeremy Orlow jor...@chromium.org wrote: What would we do if what they provided was not an integer? The behavior isn't very important; throwing would be fine here. In mySQL, you can only put AUTO_INCREMENT on columns in the integer family. What happens if the number they insert is so big that the next one causes overflow? The same thing that happens if you do ++ on a variable holding a number that's too large. Or, more directly, the same thing that happens if you somehow fill up a table to the integer limit (probably deleting rows along the way to free up space), and then try to add a new row. What is the use case for this? Do we really think that most of the time users do this it'll be intentional and not just a mistake? A big one is importing some data into a live table. Many smaller ones are related to implicit data constraints that exist in the application but aren't directly expressed in the table. I've had several times when I could normally just rely on auto-numbering for something, but occasionally, due to other data I was inserting elsewhere, had to specify a particular id. 
This assumes that your autonumbers aren't going to overlap and is going to behave really badly when they do. Honestly, I don't care too much about this, but I'm skeptical we're doing the right thing here. Pablo did bring up a good use case, which is wanting to migrate existing data to a new object store, for example with a new schema. And every database examined so far has some ability to specify autonumbered columns. overlaps aren't a problem in practice since 64bit integers are really really big. So unless someone maliciously sets a number close to the upper bound of that then overlaps won't be a problem. Yes, but we'd need to spec this, implement it, and test it because someone will try to do this maliciously. I'd say it's fine to treat the range of IDs as a hardware limitation. I.e. similarly to how we don't specify how much data a webpage is allowed to put into DOMStrings, at some point every implementation is going to run out of memory and effectively limit it. In practice this isn't a problem since the limit is high enough. Another would be to define that the ID is 64 bit and if you run out of IDs no more rows can be inserted into the objectStore. At that point the page is responsible for creating a new object store and compacting down IDs. In practice no page will run into this limitation if they use IDs increasing by one. Even if you generate a new ID a million times a second, it'll still take you over half a million years to run out of 64bit IDs. This seems reasonable. OK, let's do it. And, in the email you replied right under, I brought up the point that this feature won't help someone who's trying to import data into a table that already has data in it because some of it might clash. So, just to make sure we're all on the same page, the use case for this is restoring data into an _empty_ object store, right? (Because I don't think this is a good solution for much else.) That's the main scenario I can think of that would require this yes. / Jonas
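[One factual wrinkle worth noting in the 63-bit/64-bit discussion: JavaScript Numbers are IEEE-754 doubles, so integers are only exactly representable up to 2^53 - 1. Keys above that collapse onto each other when exposed as plain Numbers, regardless of how the backend stores them. A quick sketch:]

```javascript
// JavaScript Numbers are IEEE-754 doubles: integers are exact only up
// to Number.MAX_SAFE_INTEGER (2^53 - 1). Beyond that, adjacent
// integers become indistinguishable, so a key generator exposed as a
// plain Number effectively has 53 bits of precision, not 63 or 64.
const MAX_SAFE = Number.MAX_SAFE_INTEGER; // 9007199254740991

function canIncrementExactly(n) {
  // true if n and n + 1 are distinct, exactly-represented values
  return n + 1 !== n && (n + 1) - 1 === n;
}

console.log(MAX_SAFE === Math.pow(2, 53) - 1);        // true
console.log(canIncrementExactly(MAX_SAFE - 1));       // true
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true: precision lost
```

This only reinforces the "treat it as a hardware limit" position: even 2^53 sequential ids at a million inserts per second would take centuries to exhaust.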