On Tue, May 18, 2010 at 6:32 PM, Shawn Wilsher <sdwi...@mozilla.com> wrote:

> On 5/18/2010 7:20 AM, Jeremy Orlow wrote:
>>>  1. Once a database has been opened (a database connection has been
>>> established) read access to meta-data, such as objectStore and index
>>> names, is synchronous. Changes to such meta-data, such as creating
>>> objectStores and indexes, are still asynchronous.
>> I believe this is already how it's specced.  The IDBDatabase interface
>> already gives you synchronous access to all of this.
> Mostly that is the same, with the exception of getting an object store or
> index, which is now synchronous.
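A toy mock of the split being described (the object shape and callback style
here are stand-ins for illustration, not the draft interfaces):

```javascript
// Mock of the proposed split: metadata reads on an open connection are
// synchronous, while schema changes stay asynchronous. The db shape,
// method names, and callback signature are all hypothetical.
function openMockDatabase() {
  var stores = ["books"];
  return {
    // Synchronous read of metadata, per point 1 above.
    get objectStoreNames() { return stores.slice(); },
    // Asynchronous mutation of metadata.
    createObjectStore: function (name, callback) {
      process.nextTick(function () {
        stores.push(name);
        callback(null, name);
      });
    }
  };
}

var db = openMockDatabase();
var names = db.objectStoreNames;  // no callback or event needed
```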
>>>  9. IDBKeyRanges are created using functions on IndexedDatabaseRequest.
>>> We couldn't figure out how the old API allowed you to create a range
>>> object without first having a range object.
>> In the spec, I see the following in examples:
>> var range = new IDBKeyRange.bound(2, 4);
>> and
>> var range = IDBKeyRange.leftBound(key);
>> I'm not particularly happy with hanging functions off of
>> IndexedDatabaseRequest for this.  Can it work something like what I listed
>> above?  If not, maybe we can find a better place to put them?  Or just
>> create multiple openCursor functions for each case?
> I think one concern with the above syntax is that it's adding another
> object to the global scope.  I recall Ben Turner and I discussing the
> possibility of hanging it off of indexedDB, so:
> var range = new indexedDB.KeyRange.bound(2, 4);

Good point re the global scope.

> My concern with making multiple openCursor functions is that there is more
> API complexity there.  With that said, I don't have any strong opinions here
> either way.

How does having several openCursor functions add more API complexity than
having several factories for KeyRange objects?
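To make the comparison concrete, the factory shape under discussion could look
roughly like this (the namespace, function names, and range fields below are
only illustrative, not spec text):

```javascript
// Hypothetical sketch of a KeyRange factory namespace, hung off an
// object (as with the proposed indexedDB.KeyRange) rather than adding
// another constructor to the global scope.
var KeyRange = {
  only: function (value) {
    return { left: value, right: value, leftOpen: false, rightOpen: false };
  },
  leftBound: function (left, open) {
    return { left: left, right: undefined, leftOpen: !!open, rightOpen: true };
  },
  rightBound: function (right, open) {
    return { left: undefined, right: right, leftOpen: true, rightOpen: !!open };
  },
  bound: function (left, right, leftOpen, rightOpen) {
    return { left: left, right: right,
             leftOpen: !!leftOpen, rightOpen: !!rightOpen };
  }
};

var range = KeyRange.bound(2, 4);
```

The alternative would push the same four cases into four openCursor variants
on every store and index.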

>>> 10. You are allowed to have multiple transactions per database
>>> connection. However if they use overlapping tables, only the first one
>>> will receive events until it is finished (with the usual exceptions of
>>> allowing multiple readers of the same table).
>> Can you please clarify what you mean here?  This seems like simply an
>> implementation detail to me, so maybe I'm missing something?
> What this is trying to say is that you can have an object store being used
> in more than one transaction, but they cannot access it at the same time.
>  However, I think it's best for Jonas to chime in here because this doesn't
> quite seem right to me like it did yesterday.

Oh, I see.  The problem is that if you open an entity store and start
multiple transactions, it's not clear which transaction the store is
associated with.  I guess I feel like what Jonas described would be pretty
confusing.

What about creating an IDBSuccessEvents.transaction that's the transaction
the request is associated with (or null)?  Another option is to only allow
one (top level) transaction per connection.  (I still think we should
support open nested transactions.)

>> 5)  You have two IDBTransactionRequest.onaborts.  I think one is supposed
>> to be an ontimeout.
> Whoops, I thought we fixed that before he sent this out :)
>> 6)  What is the default limit for the getAll functions?  Should we make it
>> 0 (and define any <= 0 amount to mean infinity)?
> I believe we intended the default to be infinity (as in, if you don't
> specify, you get it all).

Ahh, so not specifying would be the only way to get infinity?  Yeah, I guess
that works.  What if someone passes 0?  Just return a request with null?
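In other words, something like the following (the getAll signature and the
choice to return an empty result for 0 are assumptions, standing in for
whatever the spec ends up saying; a plain array stands in for the store):

```javascript
// Sketch of the "unspecified limit means everything" reading of getAll.
// What a limit of 0 should do is exactly the open question above;
// returning an empty result here is just one possible answer.
function getAll(records, limit) {
  if (limit === undefined) {
    return records.slice();    // no limit given: return everything
  }
  if (limit <= 0) {
    return [];                 // or: a request whose result is null?
  }
  return records.slice(0, limit);
}

getAll([1, 2, 3]);      // [1, 2, 3] -- unspecified means "all"
getAll([1, 2, 3], 0);   // [] -- the case in question
getAll([1, 2, 3], 2);   // [1, 2]
```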

>> 7)  I expect "add or modify" to be more used than the add or the modify
>> methods.  As such, I wonder if it makes sense to optimize the naming for
>> that.  For example, addOrModify=>set, add=>add/insert,
>> modify=>modify/update/replace maybe?
> We had a lot of internal debate over this very thing.  The problem we kept
> running into was that set could easily be read as doing just updating so it
> wouldn't be clear that you can also insert a new record there (without
> reading the spec or some tutorial).  The benefit with addOrModify is that it
> is very explicit in what it does.  We don't like how long it is, but we
> couldn't come up with a shorter name that doesn't have a fair amount of
> ambiguity.

I see.  Well, I'm OK starting with addOrModify and seeing if developers
complain.
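For reference, the intended semantics of the three methods come out to
something like this (a Map stands in for the object store; the error names
and key-based signatures are illustrative, not spec text):

```javascript
// Rough semantics of the three mutation methods being debated.
function makeStore() {
  var data = new Map();
  return {
    add: function (key, value) {          // insert only: fails if key exists
      if (data.has(key)) throw new Error("constraint error: key exists");
      data.set(key, value);
    },
    modify: function (key, value) {       // update only: fails if key missing
      if (!data.has(key)) throw new Error("not found: no such key");
      data.set(key, value);
    },
    addOrModify: function (key, value) {  // the "set"-like upsert
      data.set(key, value);
    },
    get: function (key) { return data.get(key); }
  };
}
```

The naming question is whether "set" would signal clearly enough that the
third method covers both of the first two cases.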

>> 8)  We can't leave deciding whether a cursor is pre-loaded up to UAs
>> since people will code for their favorite UA and then access
>> IDBCursorPreloadedRequest.count when some other UA does it as a
>> non-preloaded request.  Even in the same UA this will cause problems when
>> users have different datasets than developers are testing with.
> I think that you might have been confused by our wording there.  Sorry
> about that!  IDBCursorPreloadedRequest is what you get if you pass sync=true
> into openCursor or openObjectCursor.  Basically, sync cursors will give you
> a count, whereas async ones will not.

Ohhhhh.  I missed that parameter and I guess let my imagination run wild.

I'm not sure I like the idea of offering sync cursors, since the UA would
either need to load everything into memory before starting or risk blocking
on disk IO for large data sets.  At the same time, I'm concerned about the
overhead of firing one event per value with async cursors.  That's why I
suggested an interface where the common case (the data is in memory) is
handled synchronously, but the caller must still handle the uncommon case
(where responding synchronously would block), since we guarantee that the
first fetch is forced to be asynchronous.

Like I said, I'm not super happy with what I proposed, but I think some
hybrid async/sync interface is really what we need.  Have you guys spent any
time thinking about something like this?  How dead-set are you on
synchronous cursors?
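A minimal sketch of the hybrid shape, to make the proposal concrete (the
method name, the PENDING sentinel, and the buffering model are all made up
here; a real design would also force the *first* result to be asynchronous so
callers cannot ignore that path):

```javascript
// Hybrid cursor sketch: next() answers synchronously when the value is
// already buffered in memory, and falls back to a callback when it
// would otherwise have to block on IO.
var PENDING = {};  // sentinel: "result will arrive via the callback"

function makeHybridCursor(buffered, fetchMore) {
  var i = 0;
  return {
    next: function (callback) {
      if (i < buffered.length) {
        return buffered[i++];    // common case: data already in memory
      }
      fetchMore(callback);       // uncommon case: deliver asynchronously
      return PENDING;
    }
  };
}
```

Callers compare the return value against PENDING to know whether to wait for
the callback, which avoids one event dispatch per in-memory value.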

>> 2) There's an estimated count even if it's not pre-loaded (which has been
>> requested in other threads).
> Were there use cases in those other threads?  We couldn't come up with a
> case that would have been useful.

I took a quick look through my archives but couldn't spot the threads.
 Hopefully I'm not imagining them.  :-)

IIRC, the use cases were all ones that only required the order of magnitude
and not necessarily exact counts.  One I can think of off the top of my head
is query optimization: Knowing the order of magnitude of elements can make a
big difference in terms of how you want to join data together.
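As a toy version of that use case (the estimatedCount field is the
hypothetical API under discussion, and the store objects are stand-ins):

```javascript
// Join-ordering illustration: with only a rough count per store, start
// the join from the smaller side so the per-row lookups are probed
// against the larger side. An order-of-magnitude estimate is enough.
function chooseJoinOrder(a, b) {
  return a.estimatedCount <= b.estimatedCount ? [a, b] : [b, a];
}

var order = chooseJoinOrder({ name: "tags", estimatedCount: 50 },
                            { name: "posts", estimatedCount: 100000 });
// iterate order[0] ("tags"), probing order[1] ("posts") by index
```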

