Re: [cors] Subdomains

2010-07-26 Thread Anne van Kesteren
On Sun, 25 Jul 2010 14:25:58 +0200, Christoph Päper  
christoph.pae...@crissov.de wrote:
Maybe I’m missing something, but shouldn’t it be easy to use certain  
groups of origins in ‘Access-Control-Allow-Origin’, e.g. make either the  
scheme, the host or the port part irrelevant or only match certain  
subparts of the host part?


We had something like that long ago, but decided the complexity was not  
worth it. At least not for now. So yes, the Commons server would have to  
implement the appropriate logic. It does not actually have to parse the  
header though; as the draft says, it could simply keep a list of origins  
it allows requests from and compare the incoming origin against that list.  
That would probably be safer than trying to parse things manually.
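
For illustration, a rough sketch of that list-comparison approach (assuming a Node.js-style server; the allow-list contents, port, and handler wiring are made up for the example):

  // Echo the Origin header back only when it exactly matches an entry in a
  // fixed list of allowed origins; no parsing of the header value is needed.
  var http = require('http');
  var allowedOrigins = ['http://example.org', 'https://upload.example.org'];

  http.createServer(function (req, res) {
    var origin = req.headers['origin'];
    if (origin && allowedOrigins.indexOf(origin) !== -1) {
      res.setHeader('Access-Control-Allow-Origin', origin);
    }
    res.end();
  }).listen(8080);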



--
Anne van Kesteren
http://annevankesteren.nl/



Re: [CORS] What constitutes a network error?

2010-07-26 Thread Anne van Kesteren
On Mon, 26 Jul 2010 08:08:13 +0200, Anne van Kesteren ann...@opera.com  
wrote:

[...]


Okay, I synced the wording with that of XMLHttpRequest. The text is  
duplicated, but clear.



--
Anne van Kesteren
http://annevankesteren.nl/



Re: File URN lifetimes and SharedWorkers

2010-07-26 Thread Dmitry Titov
On Fri, Jul 23, 2010 at 12:15 AM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 23 Feb 2010, Drew Wilson wrote:
 
  This was recently brought to my attention by one of the web app
  developers in my office:

  http://dev.w3.org/2006/webapi/FileAPI/#lifeTime

  User agents MUST ensure that the lifetime of File URNs is the same as the
  lifetime of the Document [HTML5] of the origin script which spawned the
  File object on which the urn attribute was called. When this Document is
  destroyed, implementations MUST treat requests for File URNs created
  within this Document as 404 Not Found. [Processing Model for File URNs]

  I'm curious how this should work for SharedWorkers - let's imagine that I
  create a File object in a document and send it to a SharedWorker via
  postMessage() - the SharedWorker will receive a structured clone of that
  File object, which it can then access. What should the lifetime of the
  resulting URN for that file object be? I suspect the intent is that File
  objects ought to be tied to an owning script context rather than to a
  specific Document (so, in this case, the lifetime of the resulting URN
  would be the lifetime of the worker)?

 Was this ever addressed? Do I need to add something to the workers spec
 for this? Who is currently editing the File API specs?


There is a relevant discussion here:
http://lists.w3.org/Archives/Public/public-webapps/2010JulSep/0169.html
Looks like adding an explicit createBlobUrl/revokeBlobUrl on the global object,
and tying the lifetime to this global object as well (implicit revoke), can
be less confusing than the current spec language.
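
For illustration, a rough sketch of how such a pair might look from a page (the names follow the proposal in the linked thread and are not a shipped API; whether the revoke call takes the blob or the url is still open, and a File object `file` and image element `img` are assumed to be in scope):

  // Hypothetical usage of the proposed explicit create/revoke pair.
  var url = window.createBlobUrl(file);   // lifetime tied to this global object
  img.src = url;                          // start the load
  window.revokeBlobUrl(url);              // explicit revoke; otherwise revoked
                                          // implicitly when the global object dies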

Arun Ranganathan is editor and Jonas Sicking seems to be a co-editor.



 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




Re: Lifetime of Blob URL

2010-07-26 Thread Jonas Sicking
On Tue, Jul 13, 2010 at 7:37 AM, David Levin le...@google.com wrote:
 On Tue, Jul 13, 2010 at 6:50 AM, Adrian Bateman adria...@microsoft.com
 wrote:

 On Monday, July 12, 2010 2:31 PM, Darin Fisher wrote:
  On Mon, Jul 12, 2010 at 9:59 AM, David Levin le...@google.com wrote:
  On Mon, Jul 12, 2010 at 9:54 AM, Adrian Bateman adria...@microsoft.com
  wrote:
  I read point #5 to be only about surviving the start of a navigation. As a
  web developer, how can I tell when a load has started for an img? Isn't
  this similarly indeterminate?

  As soon as img.src is set.

  the spec could mention that the resource pointed to by a blob URL should be
  loaded successfully as long as the blob URL is valid at the time when the
  resource is starting to load.

  Should apply to xhr (after send is called), img, and navigation.

  Right, it seems reasonable to say that ownership of the resource referenced
  by a Blob can be shared by an XHR, Image, or navigation once it is told to
  start loading the resource.

  -Darin

 It sounds like you are saying the following is guaranteed to work:

 img.src = blob.url;
 window.revokeBlobUrl(blob);
 return;

 If that is the case then the user agent is already making the guarantees
 I was talking about, and so I still think having the lifetime mapped to the
 blob, not the document, is better. This means that in the general case I
 don't have to worry about lifetime management.

 Mapping lifetime to the blob exposes when the blob gets garbage collected
 which is a very indeterminate point in time (and is very browser version
 dependent -- it will set you up for compatibility issues when you update
 your javascript engine -- and there are also the cross browser issues of
 course).
 Specifically, a blob could go out of scope (to use your earlier phrase)
 and then one could do img.src = blobUrl (the url that was exposed from the
 blob but not using the blob object). This will work sometimes but not others
 (depending on whether garbage collection collected the blob).
 This is much more indeterminate than the current spec which maps the
 blob.url lifetime to the lifetime of the document where the blob was
 created.
 When thinking about blob.url lifetime, there are several problems to solve:
 1. An AJAX-style web application may never navigate the document, and this
 means that every blob for which a URL is created must be kept around in some
 form for the lifetime of the application.
 2. A blob passed between documents would have its blob.url stop working
 as soon as the original document got closed.
 3. Having a model that gives the url a determinate lifetime, one which
 doesn't expose the web developer to the indeterminate behavior issues we
 have discussed above.
 The current spec has issues #1 and #2.
 Binding the lifetime of blob.url to the blob has issue #3.

Indeed.

I agree with others that have said that exposing GC behavior is a big
problem. I think especially here where a very natural usage pattern is
to grab a File object, extract its url, and then drop the reference to
the File object on the floor.

And I don't think specifying how GC is supposed to work is a workable
solution. I doubt that any browser vendor will be willing to lock down
their GC to that degree. GC implementation is a very active area of
experimentation and has been for many, many years. I see no reason to
think that we'd be able to come up with a GC algorithm that wouldn't
be obsolete very soon.

However I also don't think #3 above is a huge problem. You can always
flush a blob to disk, meaning that all that is leaked is an entry in a
url-filename hash table. No actual data needs to be kept in memory.
It's definitely still a problem, but I figured it's worth pointing
out.

Given that, I see no other significantly different solution than what
is in the spec right now. Though there are definitely some problems
that we should fix:

1. Adding a function for destroying a url reference seems like a good idea.
2. #2 above can be specced away. You simply need to specify that any
context that calls blob.url extends the lifetime such that the url
isn't automatically destroyed until all contexts that requested it are
destroyed.
3. We should define that worker scopes can also extract blob urls.

However this leaves deciding on what syntax to use for creating and
destroying URLs. The current method of obtaining a url is:

x = myfile.url;
we could simply add
myfile.killUrl();

which kills the url that was previously returned from the file.
However this requires that people hold on to the Blob object and so
seems like a suboptimal solution. We could also keep

x = myfile.url;
and instead add
window.destroyBlobUrl(x);

However this keeps the creator and destructor functions far from each
other, which IMHO isn't very nice.

It has also been suggested that we change the syntax for obtaining urls to:

x = window.createBlobUrl(myfile);
and
window.destroyBlobUrl(x);

however the myfile.url syntax feels really nice and 

Re: File URN lifetimes and SharedWorkers

2010-07-26 Thread Jonas Sicking
On Fri, Jul 23, 2010 at 12:15 AM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 23 Feb 2010, Drew Wilson wrote:

 This was recently brought to my attention by one of the web app developers
 in my office:

 http://dev.w3.org/2006/webapi/FileAPI/#lifeTime

 User agents MUST ensure that the lifetime of File URNs is the same as the
 lifetime of the Document [HTML5] of the origin script which spawned the
 File object on which the urn attribute was called. When this Document is
 destroyed, implementations MUST treat requests for File URNs created
 within this Document as 404 Not Found. [Processing Model for File URNs]

 I'm curious how this should work for SharedWorkers - let's imagine that I
 create a File object in a document and send it to a SharedWorker via
 postMessage() - the SharedWorker will receive a structured clone of that
 File object, which it can then access. What should the lifetime of the
 resulting URN for that file object be? I suspect the intent is that File
 objects ought to be tied to an owning script context rather than to a
 specific Document (so, in this case, the lifetime of the resulting URN would
 be the lifetime of the worker)?

 Was this ever addressed? Do I need to add something to the workers spec
 for this? Who is currently editing the File API specs?

This is indeed a tricky situation given that, as I understand it, the
lifetime of a shared worker is pretty fuzzy. I think the best we can
do is to say that a File.url extracted from a shared worker remains
working as long as the shared worker stays alive. I.e. as long as
|x = new SharedWorker("worker.js", "my worker");| returns a worker with
the same scope.

This does somewhat expose the GC-like shared worker lifetime. But not
more than the SharedWorker constructor already does.
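
For concreteness, a rough sketch of the pattern being discussed (the worker file name and message wiring are illustrative, and the File url/urn attribute is the one under discussion in the File API draft, not a settled interface):

  // In the document: hand a File to a shared worker as a structured clone.
  var worker = new SharedWorker('worker.js', 'my worker');
  worker.port.start();
  worker.port.postMessage(file);          // 'file' obtained elsewhere, e.g. <input type=file>

  // In worker.js: under the suggestion above, a url extracted here would
  // keep working for as long as this shared worker scope stays alive.
  onconnect = function (e) {
    var port = e.ports[0];
    port.onmessage = function (event) {
      var clonedFile = event.data;
      var url = clonedFile.url;           // File API url/urn attribute under discussion
      // use url from the worker, e.g. via XMLHttpRequest
    };
  };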

/ Jonas



Re: [IndexedDB] Current editor's draft

2010-07-26 Thread Jonas Sicking
On Sat, Jul 24, 2010 at 8:29 AM, Jeremy Orlow jor...@chromium.org wrote:
  And is it
  only possible to lock existing rows, or can you prevent new records
  from being created?
 
  There's no way to lock yet-to-be-created rows since, until a transaction
  ends, its effects cannot be made visible to other transactions.

 So if you have an objectStore with auto-incrementing indexes, there is
 the possibility that two dynamic transactions both can add a row to
 said objectStore at the same time. Both transactions would then add a
 row with the same autogenerated id (one higher than the highest id in
 the table). Upon commit, how is this conflict resolved?

 What if the objectStore didn't use auto-incrementing indexes, but you
 still had two separate dynamic transactions which both insert a row
 with the same key. How is the conflict resolved?

 I believe a common trick to reconcile this is stipulating that if you add
 1000 rows, the ids may not necessarily be 1000 sequential numbers.  This
 allows transactions to increment the id and leave it incremented even if the
 transaction fails.  Which also means that other transactions can be grabbing
 an ID of their own as well.  And if a transaction fails, well, we've wasted
 one possible ID.

This does not answer the question of what happens if two transactions add
the same key value, though.
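
For reference, a rough sketch of the trick Jeremy describes for autogenerated ids, outside of any IndexedDB API (the counter, store, and transaction wrappers are purely illustrative, and it does not address the explicit-key case):

  // The id counter is bumped outside the transaction and never rolled back,
  // so two concurrent transactions are never handed the same generated id;
  // an aborted transaction simply leaves a gap in the sequence.
  var nextId = 1;                          // persisted monotonically, outside transaction scope

  function allocateId() {
    return nextId++;                       // consumed immediately, even if the caller later aborts
  }

  function insertRecord(runTransaction, store, value) {
    var id = allocateId();                 // burned even if the transaction fails
    return runTransaction(function () {
      store.put(id, value);                // hypothetical store API
    });
  }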

  And is it possible to only use read-locking for
  some rows, but write-locking for others, in the same objectStore?
 
  An implementation could use shared locks for read operations even though
  the object store might have been opened in READ_WRITE mode, and later
  upgrade the locks if the read data is being modified. However, I am not 
  keen
  to push for this as a specced behavior.

 What do you mean by "an implementation could"? Is this left
 intentionally undefined and left up to the implementation? Doesn't
 that mean that there is significant risk that code could work very
 well in a conservative implementation, but often cause race conditions
 in an implementation that uses narrower locks? Wouldn't this result in
 a race to the bottom where implementations are forced to eventually
 use very wide locks in order to work well on websites?

 In general, there are a lot of details that are unclear in the dynamic
 transactions proposals. I'm also not sure if these things are unclear
 to me because they are intentionally left undefined, or if you guys
 just haven't had time yet to define the details?

 As the spec is now, as an implementor I'd have no idea of how to
 implement dynamic transactions. And as a user I'd have no idea what
 level of protection to expect from implementations, nor what
 strategies to use to avoid bugs.

 In all the development I've done, deadlocks and race conditions are
 generally unacceptable, and instead strategies are developed that
 avoid them, such as always grabbing locks in the same order, and always
 grabbing locks when using shared data. I currently have no idea what
 strategy to recommend in IndexedDB documentation to help developers
 avoid race conditions and deadlocks.

 To get clarity in these questions, I'd *really* *really* like to see a
 more detailed proposal.

 I think a detailed proposal would be a good thing (maybe from Pablo or
 Nikunj, since they're the ones really pushing this at this point), but at the
 same time, I think you're getting really bogged down in the details, Jonas.
 What we should be concerned about and speccing is the behavior the user
 sees.  For example, can any operation on data fail due to transient issues
 (like deadlocks, serialization issues) or will the implementation shield web
 developers from this?  And will we guarantee 100% serializable semantics?
  (I strongly believe we should on both counts.)  How things are implemented,
 granularity of locks, or even if an implementation uses locks at all for
 dynamic transactions should be explicitly out of scope for any spec.  After
 all, it's only the behavior users care about.

If we can guarantee no deadlocks and 100% serializable semantics, then
I agree, it doesn't matter beyond that. However I don't think the
current proposals for dynamic transactions guarantee that. In fact, a
central point of the dynamic transactions proposal seems to be that
the author can grow the lock space dynamically, in an author-defined
order. As long as that is the case, you can't prevent deadlocks other
than by forbidding multiple concurrent (dynamic) transactions.
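
To make the deadlock concrete, assume two already-open dynamic transactions tx1 and tx2 on the same database (the openObjectStore-style calls are illustrative only, not actual IndexedDB syntax):

  // Each transaction grows its lock space in the order the author chose.
  function useBoth() { /* read and write both stores */ }

  tx1.openObjectStore('A', function () {   // tx1 now holds a lock on store A
    tx1.openObjectStore('B', useBoth);     // ...and waits for store B
  });
  tx2.openObjectStore('B', function () {   // tx2 now holds a lock on store B
    tx2.openObjectStore('A', useBoth);     // ...and waits for store A: deadlock
  });

  // An implementation can only break this by aborting one transaction or by
  // disallowing concurrent dynamic transactions; always acquiring stores in
  // one canonical order would avoid it, but the API cannot enforce that.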

/ Jonas