Re: [whatwg] BroadcastChannel memory leak

2014-05-23 Thread Michael Nordman
 When is it safe for a user agent to garbage collect a BroadcastChannel
instance?

When there's no direct reference to the object and no onmessage handler
attached to it. (?)



On Fri, May 23, 2014 at 5:23 PM, Adam Barth w...@adambarth.com wrote:

 When is it safe for a user agent to garbage collect a BroadcastChannel
 instance?


 http://www.whatwg.org/specs/web-apps/current-work/multipage/web-messaging.html#broadcasting-to-other-browsing-contexts

 Given that another document might create a new BroadcastChannel
 instance at any time, the |message| event of the object might fire any
 time its responsible document is fully active.  For web applications
 that have long-lived documents, the BroadcastChannel might not be
 eligible for garbage collection for a long time.

 Proposal: Add a |close| method to the BroadcastChannel interface
 similar to the |close| method on MessagePort.  The |close| method
 would just neuter the instance of the channel and prevent it from
 receiving further messages.

 Adam
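A minimal sketch of how the proposed close() might be used (the channel name and handler are illustrative; a close() method along these lines did eventually land in the BroadcastChannel interface):

```javascript
// Sketch of the proposed close() usage; names here are illustrative.
// An attached onmessage handler keeps the channel reachable, so close()
// gives the application an explicit way to make it collectable again.
function subscribeOnce(name, handler) {
  const channel = new BroadcastChannel(name);
  channel.onmessage = (event) => {
    handler(event.data);
    // Neuter the channel: no further messages are delivered after close(),
    // so the user agent is free to garbage-collect the instance.
    channel.close();
  };
  return channel;
}
```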



Re: [whatwg] Fwd: fallback section taking over for 4xx and 5xx responses while online

2012-12-20 Thread Michael Nordman
It'd be loads better if application logic were directly responsible for
making these sort of policy decisions regarding what cached resource to use
under what circumstance. Obscure least-common-denominator rules baked into
the user agent with even more obscure ways to override that
least-common-denominator behavior just aren't working out very well.

 In my opinion the UA should *always* use server returned...

And in some other developers' opinion, that would defeat efforts to make the
client resilient to 5xx errors.


On Thu, Dec 20, 2012 at 2:24 AM, Mikko Rantalainen 
mikko.rantalai...@peda.net wrote:

 connect the server and
 gets HTTP 410 Gone, I'd be pretty upset if cached offline copy would be
 used automatically. The server has clearly responded that the requested
 document is intentionally removed. End user seeing cached (stale) copy
 instead is very far from the intended result in this case.



Re: [whatwg] AppCache Error events

2012-11-29 Thread Michael Nordman
Sounds reasonable to me. Webkit and chromium expose information like this
via the inspector console so that users/developers can better diagnose
problems locally. Makes sense to also expose that info to app logic so
developers could diagnose from afar.


On Thu, Nov 29, 2012 at 11:40 AM, David Barrett-Kahn d...@google.com wrote:

 So are there no objections to this, should I draft a change to the spec?

 -Dave

 On Mon, Nov 26, 2012 at 12:00 PM, David Barrett-Kahn d...@google.com
 wrote:

  Right now this event contains no structured information, just an error
  message.  It'd be helpful to us to know more about what failed, so we can
  know what to report to the server and take action on.  It's hard to
  distinguish cache update failures due to just being offline from those
  which are actually causing trouble.  In the second case it's also hard to
  work out which resource is proving unavailable and why.
 
  One way to do this would be to create an AppCacheError subclass, with an
  errorCode parameter, and also nullable url and httpResponseCode
 properties.
   Potential error codes:
  * couldn't fetch manifest (includes url and httpResponseCode)
  * pre and post update manifest fetches mismatched (includes url)
  * fetching a resource failed (includes url and httpResponseCode)
 
  Related bug:
  https://code.google.com/p/chromium/issues/detail?id=161753
 
  Thoughts?
 
  -Dave
 
 


 --
 -Dave



Re: [whatwg] AppCache-related e-mails

2011-08-04 Thread Michael Nordman
On Tue, Aug 2, 2011 at 5:23 PM, Michael Nordman micha...@google.com wrote:

 On Mon, 13 Jun 2011, Michael Nordman wrote:
 
  Let's say there's a page in the cache to be used as a fallback resource,
  refers to the manifest by relative url...
 
  <html manifest='x'>
 
  Depending on the url that invokes the fallback resource, 'x' will be
  resolved to different absolute urls. When it doesn't match the actual
  manifest url, the fallback resource will get tagged as FOREIGN and will
  no longer be used to satisfy main resource loads.
 
  I'm not sure if this is a bug in chrome or a bug in the appcache spec
  just yet. I'm pretty certain that Safari will have the same behavior as
  chrome in this respect (the same bug). The value of the manifest
  attribute is interpreted as relative to the location of the loaded
  document in chrome and all webkit based browsers and that value is used
  to detect foreign'ness.
 
  The workaround/solution for this is to NOT put a manifest attribute in
   the <html> tag of the fallback resource (or to put either an absolute
  url or host relative url as the manifest attribute value).

 Or just make sure you always use relative URLs, even in the manifest.

 I don't really understand the problem here. Can you elaborate further?


 Suppose the fallback resource is setup like this...

 FALLBACK:
 / FallbackPage.html

 ... and that page contains a relative link to the manifest in its <html> tag
 like so...
 <html manifest="file.manifest">

 Any server request that fails under / will get FallbackPage.html in response. 
 For example...

 /SomePage.html

 When the fallback is used in this case the manifest url will be interpreted 
 as /file.manifest

 /Some/Other/Page.html

 And in this case the manifest url will be interpreted as 
 /Some/Other/file.manifest
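The resolution behavior described above can be reproduced with the standard URL parser; the origin used here is illustrative:

```javascript
// A relative manifest URL resolves against the *navigated* URL when a
// fallback page is served, not against the fallback page's own location.
function resolveManifest(relativeManifest, navigatedUrl) {
  return new URL(relativeManifest, navigatedUrl).pathname;
}
```

So the same fallback page yields a different manifest URL for each path it is served under, which is what triggers the FOREIGN tagging.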


 On Fri, 1 Jul 2011, Michael Nordman wrote:
 
  Cross-origin resources listed in the CACHE section aren't retrieved with
  the 'Origin' header

 This is incorrect. They are fetched with the origin of the manifest. What
 makes you say no Origin header is included?


 I don't see mention of that in the draft? If that were the case then this
 wouldn't be an issue.

 I'm not familiar with CORS usage. Do xorigin subresource loads of all kinds
 (.js, .css, .png) carry the Origin header?

 I can imagine a server implementation that would examine the Origin header
 upfront and, if it didn't like what it saw, simply not compute the response
 body at all, returning an empty response without the origin listed in the
 Access-Control-Allow-Origin response header.

 If general subresource loads aren't sent with the Origin header, fetching
 all manifest-listed resources with that header set could cause problems.


According to some documentation over at mozilla'land, the value of the
Origin header is different depending on the source of the request.
https://wiki.mozilla.org/Security/Origin#When_Origin_is_served_.28and_when_it_is_.22null.22.29
So I think including Origin:manifestUrlOrigin when fetching all resources
to populate an appcache could be the source of some subtle bugs.


Re: [whatwg] AppCache-related e-mails

2011-08-03 Thread Michael Nordman
On Tue, Aug 2, 2011 at 4:55 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 2 Aug 2011, Michael Nordman wrote:
  
   If you actively want to seek out old manifests, sure, but what's the
   use case for doing that? It would be like trying to actively evict
   things from HTTP caches.
 
  You should talk to some app developers. View source on angry birds for a
  use case, they are doing this to get rid of stale versions tied to old
  manifest urls.

 But why?

 I couldn't figure out the use case from the source you mention.


This is a message I recently received from a different developer using the
appcache that would also like to see more in the way of being able to manage
the set of appcaches in the system. Please see the use cases listed towards
the end.


Hi Michael, Greg.  I'm writing to advise you of a requirement I'd like to
see appcache fulfill in the medium term.  We've spoken about it before, but
only in the general context of 'what would you like to see in the future'.
 No releases are gated on this feature, so I guess we're talking M15 or
thereabouts.  Feel free to cross-post this to a list you deem relevant for
wider review and discussion.

The feature is a javascript API to enable the creation, enumeration, update,
and deletion of appcaches on the current origin.  Calls might look something
like this:

/** Creates a new cache or updates an existing one with the given manifest
URL.  Manifest URL must be in the same origin as the JS */
createOrUpdateCache(String manifestUri, completionCallback, errorCallback);

/** Enumerates the caches present on the current origin */
enumerateCaches(CacheEnumerationCallback callback, ErrorCallback
errorCallback);

interface CacheEnumerationCallback {
  void handleEvent(Cache[] caches);
}

interface Cache {
  String getManifestUri();
  number getSizeInBytes();
  String getManifestAsText();
  String[] getMasterEntryUris();
  String[] getExplicitEntryUris();
  FallbackEntry[] getFallbackEntries();
  String[] getNetworkWhitelistUris();
  boolean isNetworkWhitelistOpen();
  DateTime getCreationTime();
  DateTime getLastManifestFetchTime(); // The last time the manifest was
fetched and checked for differences
  DateTime getLastUpdateTime(); // The last time a manifest fetch caused an
actual update
  DateTime getLastAccessTime(); // The last time the cache was actually bound
to a browsing context
  // Maybe some APIs to signal whether the cache is currently being updated,
and whether there is currently a running browsing context bound to it.

  void delete(... some callbacks ...); // Probably fails if there's a
running browsing context bound to the cache
  void update(... some callbacks ...); // I guess a no-op if an update is
currently in progress or maybe even if it happened very recently
}

interface FallbackEntry {
  String getTriggerUri();
  String getTargetUri();
}
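The maintenance flow this API would enable might look like the following; since enumerateCaches()/delete() are hypothetical and exist in no browser, a tiny in-memory stub stands in for the user agent so the call shape can be exercised:

```javascript
// Hypothetical usage of the proposed API. The stub below only illustrates
// the intended semantics; a real implementation would live in the UA.
const appcacheStub = {
  caches: [
    { manifestUri: '/editorA.manifest', delete(done) { remove(this); done(); } },
    { manifestUri: '/editorB.manifest', delete(done) { remove(this); done(); } },
  ],
  enumerateCaches(callback) { callback(this.caches.slice()); },
};
function remove(cache) {
  appcacheStub.caches = appcacheStub.caches.filter((c) => c !== cache);
}

// "Appcache maintenance": delete caches no longer required by the set of
// documents currently synced to this browser.
function maintain(requiredManifests) {
  appcacheStub.enumerateCaches((caches) => {
    for (const cache of caches) {
      if (!requiredManifests.has(cache.manifestUri)) cache.delete(() => {});
    }
  });
}
```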

Additional characteristics:
* Must be usable from pages not themselves bound to an appcache, as long as
they are served from the same origin as the caches being operated on.
* Must work from workers, shared workers, and background pages, again
subject to a same origin check.

The above is a very rough sketch, and needs a bunch of work, but illustrates
the features we'd find useful.  An obvious flaw is that it doesn't fit in
with the system of progress events etc on the current API, but there are
probably many others.  View it mainly as a list of requirements.  Our use
cases are as follows:

* Docs maintains a set of appcaches which it uses for various purposes.
 Each editor, for example, has a cache.  There are also cases where
different documents require different versions of the same editor.
* The set of caches required on a particular browser depends on the
documents synced there.  A given set of documents will require a particular
(much smaller) set of caches to open.  The set of caches required on a given
browser is therefore dynamic, changing as documents enter and leave the set
of those synchronized.
* Each time anybody opens a docs property, and perhaps during the lifetimes
of some of them, we perform a procedure called 'appcache maintenance', which
ensures that the caches necessary for the current set of documents are
synced.  This is a fairly nasty process involving many iframes, but it
works.  We would like, however, to make this code much simpler, not have it
involve the iframes, and make the process of piping progress events back to
the host application less awful.  Right now it's such a pain we're not
bothering with it.
* We'd like to perform appcache maintenance on existing caches less often,
reducing server load.  The timestamps included above would allow us to do
that.
* When an appcache is no longer needed by the current set of documents, it
is currently just left there.  We would like to be able to clean it up.
* We would like to be able to perform our appcache maintenance procedure
from a shared worker, as we have one that can bring new documents into
storage.  Right now

Re: [whatwg] AppCache-related e-mails

2011-08-02 Thread Michael Nordman
 A common request that maybe we can agree upon is the ability to list the
 manifests that are cached and to delete them via script. Something
like...
   String[] window.applicationCache.getManifests();  // returns appcache
 manifest for the origin
   void window.applicationCache.deleteManifest(manifestUrl);

 This is trivial to do already; just return 404s for all the manifests you
 no longer want to keep around.

It involves creating hidden iframes loaded with pages that refer to the
manifests to be deleted, straightforward but gunky.

 0. [DONE] A means of not invoking the fallback resource for some error
 responses that would generally result in the fallback resource being
 returned. An additional response header would suit their needs...
 something like...
 x-chromium-appcache-fallback-override: disallow-fallback
 If a response header is present with that value, the fallback response
would
 not be returned.
 http://code.google.com/p/chromium/issues/detail?id=82066

 What's the use case? When would you ever want to show the user an error
 yet really desire to indicate that it's an error and not a 200 OK
response?

Google Docs. Instead of seeing a fallback page that erroneously says "You
must be offline and this document is not available.", they wanted to show
the actual error page generated by the server in the case of a deleted
document or when the user doesn't have rights to access that doc.


Re: [whatwg] AppCache-related e-mails

2011-08-02 Thread Michael Nordman
On Tue, Aug 2, 2011 at 4:40 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 2 Aug 2011, Michael Nordman wrote:
  
   A common request that maybe we can agree upon is the ability to list
 the
   manifests that are cached and to delete them via script. Something
   like...
 String[] window.applicationCache.getManifests();  // returns
 appcache
   manifest for the origin
 void window.applicationCache.deleteManifest(manifestUrl);
  
   This is trivial to do already; just return 404s for all the manifests
   you no longer want to keep around.
 
  It involves creating hidden iframes loaded with pages that refer to the
  manifests to be deleted, straightforward but gunky.

 If you actively want to seek out old manifests, sure, but what's the use
 case for doing that? It would be like trying to actively evict things from
 HTTP caches.


   0. [DONE] A means of not invoking the fallback resource for some error
   responses that would generally result in the fallback resource being
   returned. An additional response header would suit their needs...
   something like...
   x-chromium-appcache-fallback-override: disallow-fallback
   If a response header is present with that value, the fallback response
   would not be returned.
   http://code.google.com/p/chromium/issues/detail?id=82066
  
   What's the use case? When would you ever want to show the user an
   error yet really desire to indicate that it's an error and not a 200
   OK response?
 
  Google Docs. Instead of seeing a fallback page that erroneously says
  "You must be offline and this document is not available.", they wanted
  to show the actual error page generated by the server in the case of a
  deleted document or when the user doesn't have rights to access that
  doc.

 I don't see what's wrong with using 200 OK for that case.


You should talk to the app developers. I think there are other consumers of
these urls besides the browser. To change the status code to 200 would break
those other consumers.



 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] AppCache-related e-mails

2011-08-02 Thread Michael Nordman
On Tue, Aug 2, 2011 at 4:40 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 2 Aug 2011, Michael Nordman wrote:
  
   A common request that maybe we can agree upon is the ability to list
 the
   manifests that are cached and to delete them via script. Something
   like...
 String[] window.applicationCache.getManifests();  // returns
 appcache
   manifest for the origin
 void window.applicationCache.deleteManifest(manifestUrl);
  
   This is trivial to do already; just return 404s for all the manifests
   you no longer want to keep around.
 
  It involves creating hidden iframes loaded with pages that refer to the
  manifests to be deleted, straightforward but gunky.

 If you actively want to seek out old manifests, sure, but what's the use
 case for doing that? It would be like trying to actively evict things from
 HTTP caches.


You should talk to some app developers. View source on Angry Birds for a use
case; they are doing this to get rid of stale versions tied to old manifest
urls.




   0. [DONE] A means of not invoking the fallback resource for some error
   responses that would generally result in the fallback resource being
   returned. An additional response header would suit their needs...
   something like...
   x-chromium-appcache-fallback-override: disallow-fallback
   If a response header is present with that value, the fallback response
   would not be returned.
   http://code.google.com/p/chromium/issues/detail?id=82066
  
   What's the use case? When would you ever want to show the user an
   error yet really desire to indicate that it's an error and not a 200
   OK response?
 
  Google Docs. Instead of seeing a fallback page that erroneously says
  "You must be offline and this document is not available.", they wanted
  to show the actual error page generated by the server in the case of a
  deleted document or when the user doesn't have rights to access that
  doc.

 I don't see what's wrong with using 200 OK for that case.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] AppCache-related e-mails

2011-08-02 Thread Michael Nordman

 On Mon, 13 Jun 2011, Michael Nordman wrote:
 
  Let's say there's a page in the cache to be used as a fallback resource,
  refers to the manifest by relative url...
 
  <html manifest='x'>
 
  Depending on the url that invokes the fallback resource, 'x' will be
  resolved to different absolute urls. When it doesn't match the actual
  manifest url, the fallback resource will get tagged as FOREIGN and will
  no longer be used to satisfy main resource loads.
 
  I'm not sure if this is a bug in chrome or a bug in the appcache spec
  just yet. I'm pretty certain that Safari will have the same behavior as
  chrome in this respect (the same bug). The value of the manifest
  attribute is interpreted as relative to the location of the loaded
  document in chrome and all webkit based browsers and that value is used
  to detect foreign'ness.
 
  The workaround/solution for this is to NOT put a manifest attribute in
   the <html> tag of the fallback resource (or to put either an absolute
  url or host relative url as the manifest attribute value).

 Or just make sure you always use relative URLs, even in the manifest.

 I don't really understand the problem here. Can you elaborate further?


Suppose the fallback resource is set up like this...

FALLBACK:
/ FallbackPage.html

... and that page contains a relative link to the manifest in its
<html> tag like so...
<html manifest="file.manifest">

Any server request that fails under / will get FallbackPage.html in
response. For example...

/SomePage.html

When the fallback is used in this case the manifest url will be
interpreted as /file.manifest

/Some/Other/Page.html

And in this case the manifest url will be interpreted as
/Some/Other/file.manifest


On Fri, 1 Jul 2011, Michael Nordman wrote:
 
  Cross-origin resources listed in the CACHE section aren't retrieved with
  the 'Origin' header

 This is incorrect. They are fetched with the origin of the manifest. What
 makes you say no Origin header is included?


I don't see mention of that in the draft? If that were the case then this
wouldn't be an issue.

I'm not familiar with CORS usage. Do xorigin subresource loads of all kinds
(.js, .css, .png) carry the Origin header?

I can imagine a server implementation that would examine the Origin header
upfront and, if it didn't like what it saw, simply not compute the response
body at all, returning an empty response without the origin listed in the
Access-Control-Allow-Origin response header.

If general subresource loads aren't sent with the Origin header, fetching
all manifest-listed resources with that header set could cause problems.
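The imagined server behavior can be sketched as a pure decision function; the function name, allow-list, and response shape are all hypothetical:

```javascript
// Hypothetical sketch of the server described above: inspect Origin upfront
// and skip computing the body entirely for origins it doesn't like.
function handleResourceRequest(originHeader, allowedOrigins, computeBody) {
  if (originHeader && !allowedOrigins.includes(originHeader)) {
    // Disliked origin: empty body, no Access-Control-Allow-Origin header.
    return { status: 200, headers: {}, body: '' };
  }
  return {
    status: 200,
    headers: originHeader
      ? { 'Access-Control-Allow-Origin': originHeader }
      : {},
    body: computeBody(),
  };
}
```

A cache update that sent an unexpected Origin header against such a server would silently populate the appcache with empty responses.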


Re: [whatwg] AppCache-related e-mails

2011-07-01 Thread Michael Nordman
A common request that maybe we can agree upon is the ability to list the
manifests that are cached and to delete them via script. Something like...
  String[] window.applicationCache.getManifests();  // returns appcache
manifests for the origin
  void window.applicationCache.deleteManifest(manifestUrl);

I think it's clear from this discussion (and others) that the overall
appcache feature set leaves something to be desired, but it's less clear how
to best satisfy the desirements. Until there is some clarity, it's hard to
see how the community is going to make progress. Personally, I think what's
needed to move things forward is for browser vendors to do some independent
innovating to see what works and what doesn't work.

 @Hixie... any idea when the appcache feature set will be up for a growth
 spurt? I think there's an appetite for another round of features among the
 offline app developers that I communicate with. There's been some recent
 interest here in pursuing a means of programmatically producing a
 response instead of just returning static content.

 Who implements it currently? Is there a test suite? Those are the main
 things that would gate a dramatic addition of new features.

Well, nobody yet; but I have a roadmap in mind that builds up to that. Much
of the discussion in this thread has been on the second item. Mobile
developers are particularly interested in item 2, to avoid HTTP cache churn and the
cost of HTTP cache validation. In this roadmap, you can see that it would
also allow pages vended from servers to make use of executable intercept
handlers.


-1. [DONE] Support for cross-origin HTTPS resources.
http://code.google.com/p/chromium/issues/detail?id=69594

0. [DONE] A means of not invoking the fallback resource for some error
responses that would generally result in the fallback resource being
returned. An additional response header would suit their needs...
something like...
x-chromium-appcache-fallback-override: disallow-fallback
If a response header is present with that value, the fallback response would
not be returned.
http://code.google.com/p/chromium/issues/detail?id=82066
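The header's effect on fallback selection can be sketched as a small predicate. The header name is the Chromium proposal above; treating any 4xx/5xx status as the normal fallback trigger is a simplifying assumption of this sketch:

```javascript
// Decide whether a fallback resource should be served for a response.
// Header name comes from the Chromium proposal; the status check is a
// simplified stand-in for "would normally trigger the fallback".
function shouldUseFallback(status, headers) {
  const wouldFallBack = status >= 400;
  const override =
    headers['x-chromium-appcache-fallback-override'] === 'disallow-fallback';
  return wouldFallBack && !override;
}
```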

1. [UNDER CONFUSING DISCUSSION] Allow a syntax to associate a page with an
application cache, but that does not add that page to the cache. A common
feature request also mentioned on the whatwg list, but it's not getting any
engagement from other browser vendors or the spec writer (which is kind of
frustrating). The premise is to allow pages vended from a server to take
advantage of the resources in an application cache when loading
subresources. A perfectly reasonable request: <html useManifest='x'>.

2. Introduce a new manifest file section to INTERCEPT requests into a prefix
matched url namespace and satisfy them with a cached resource. The resulting
page would be free to interpret the location url and act accordingly based
on the path and query elements beyond the prefix matched url string. This
section would be similar to the FALLBACK section in that prefix matching is
involved, but different in that instead of being used only in the case of a
network/server error, the cached INTERCEPT resource would be used
immediately w/o first going to the server.
  INTERCEPT:
  urlprefix redirect newlocationurl
  urlprefix return cachedresourceurl

Here's where the INTERCEPT namespace could fit into the changes to the
network model.
  if (url is EXPLICITLY_CACHED)  // exact match
    return cached_response;
  if (url is in NETWORK namespace)  // prefix match
    return network_response_as_usual;
  if (url is in INTERCEPT namespace)  // prefix match, this is the new section
    return handle_intercepted_request_accordingly;
  if (url is in FALLBACK namespace)  // prefix match
    return network_response_but_fallback_where_needed;
  if (ONLINE_WILDCARD)
    return network_response;
  otherwise
    return synthesized_error_response;
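A runnable rendering of this routing order; the manifest object shape and the prefix-matching helper are assumptions of this sketch, not spec syntax:

```javascript
// Sketch of the proposed network model including the new INTERCEPT
// namespace, checked in the order given above.
function route(url, manifest) {
  const prefixMatch = (prefixes) => prefixes.some((p) => url.startsWith(p));
  if (manifest.explicit.includes(url)) return 'cached-response';      // exact
  if (prefixMatch(manifest.network)) return 'network';                // prefix
  if (prefixMatch(manifest.intercept)) return 'intercept';            // new
  if (prefixMatch(manifest.fallback)) return 'network-with-fallback'; // prefix
  if (manifest.onlineWildcard) return 'network';
  return 'synthesized-error';
}
```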

3. Allow INTERCEPT cached resources to be executable. Instead of simply
returning the cached resource or redirect in response to the request, load
it into a
background worker context (if not already loaded) and invoke a function in
that context to asynchronously compute response headers and body based on
the request headers (including cookie) and body. The background worker would
have access to various local storage facilities (fileSystem, indexed/sqlDBs)
as well as the ability to make network requests via XHR.
  INTERCEPT:
  urlprefix execute cachedexecutableresourceurl

4. Create a syntax to allow FALLBACK resources to be similarly executable in
a background worker context.

5. Some kind of auto-update policy where the appcache is refreshed w/o the
app running.

There are a couple of features that are not on this list that I want to call
out:

* The ability to add(url) and remove(url) resources to/from the appcache is not on the list.
FileSystem urls cover a lot of this already, and the ability to cache adhoc
resources and later load them via http urls could be composed out of the
filesystem and 

Re: [whatwg] AppCache-related e-mails

2011-06-16 Thread Michael Nordman
 On Tue, 8 Feb 2011, Michael Nordman wrote:
 
  Just had an offline discussion about this and I think the answer can be
  much simpler than what's been proposed so far.  All we have to do for
  cross-origin HTTPS resources is respect the cache-control no-store
  header.
 
  Let me explain the rationale... first let's back up to the motivation
  for the restrictions on HTTPS. They're there to defeat attacks that
  involve physical access the the client system, so the attacker cannot
  look at the cross-origin HTTS data stored in the appcache on disk. But
  the regular disk cache stores HTTPS data provided the cache-control
  header doesn't say no-store, so excluding this data from appcaching does
  nothing to defeat that attack.
 
  Maybe the spec changes to make are...
 
  1) Examine the cache-control header for all cross-origin resources (not
  just HTTPS), and only allow them if they don't contain the no-store
  directive.
 
  2) Remove the special-case restriction that is currently in place only
  for HTTPS cross-origin resources.

 On Wed, 30 Mar 2011, Michael Nordman wrote:
 
  Fyi: This change has been made in chrome.
  * respect no-store headers for cross-origin resources (only for HTTPS)
  * allow HTTPS cross-origin resources to be listed in manifest hosted on
  HTTPS

 This seems reasonable. Done.



But... I just looked at the current draft of the spec and I think it
reflects a greater change than the one I had proposed.

I had proposed respecting the no-store directive only for cross-origin
resources. The current draft is examining the no-store directive for all
resources without regard for their origin. The intent behind the proposed
change was to allow authors to continue to override the no-store header
for resources in their origin, and to disallow that override only for
cross-origin resources. The proposed change is less likely to break existing
apps, and I think there are valid use cases for the existing behavior where
no-store can be overridden by explicit inclusion in an appcache.
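The distinction being argued for can be captured in a small predicate; the function name is illustrative and Cache-Control parsing is simplified:

```javascript
// The proposed rule: same-origin resources may still be appcached even with
// Cache-Control: no-store (the author overriding their own header), while
// cross-origin resources carrying no-store are excluded.
function mayAppcache(resourceOrigin, manifestOrigin, cacheControl = '') {
  const crossOrigin = resourceOrigin !== manifestOrigin;
  const noStore = cacheControl
    .split(',')
    .map((directive) => directive.trim().toLowerCase())
    .includes('no-store');
  return !(crossOrigin && noStore);
}
```

The current draft, by contrast, would return false for the same-origin no-store case as well.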


[whatwg] AppCache FOREIGN entry issues.

2011-06-13 Thread Michael Nordman
1) There's a bug in the draft around FOREIGN entries.

BUG: When updating an existing cache containing FOREIGN entries, the FOREIGN
flag is sticky even if the resource has been modified and is no longer
FOREIGN. The update algorithm (section 6.6.4) should be modified to reset
the FOREIGN flag if a new resource is actually downloaded as part of the
update.

2) There's another rough spot with FOREIGN entries. This one's an awkward
problem with FALLBACK resource being identified as FOREIGN. I'm not sure the
spec is actually clear about how the manifest attribute value of a FALLBACK
entry should be interpreted. A clarification would be good.

Here's a description of the problem from
http://code.google.com/p/chromium/issues/detail?id=82577

ApplicationCache can flag fallback resources as FOREIGN when it shouldn't

Let's say there's a page in the cache to be used as a fallback
resource, refers to the manifest by relative url...

<html manifest='x'>

Depending on the url that invokes the fallback resource, 'x' will be
resolved to different absolute urls. When it doesn't match the actual
manifest url, the fallback resource will get tagged as FOREIGN and
will no longer be used to satisfy main resource loads.

I'm not sure if this is a bug in chrome or a bug in the appcache spec
just yet. I'm pretty certain that Safari will have the same behavior
as chrome in this respect (the same bug). The value of the manifest
attribute is interpreted as relative to the location of the loaded
document in chrome and all webkit based browsers and that value is
used to detect foreign'ness.

The workaround/solution for this is to NOT put a manifest attribute in
the <html> tag of the fallback resource (or to put either an absolute
url or host relative url as the manifest attribute value).


Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-03-30 Thread Michael Nordman
Fyi: This change has been made in chrome.
* respect no-store headers for cross-origin resources (only for HTTPS)
* allow HTTPS cross-origin resources to be listed in manifest hosted on
HTTPS

On Mon, Feb 14, 2011 at 5:04 PM, Michael Nordman micha...@google.com wrote:

 Fyi... I'm planning on making a change along these lines to chrome soon...
 * respect no-store headers for cross-origin resources
 * allow HTTPS cross-origin resources

 On Tue, Feb 8, 2011 at 3:25 PM, Michael Nordman micha...@google.com
 wrote:
  Hi again,
 
  Just had an offline discussion about this and I think the answer can
  be much simpler than what's been proposed so far.  All we have to do
  for cross-origin HTTPS resources is respect the cache-control no-store
  header.
 
  Let me explain the rationale... first let's back up to the motivation
  for the restrictions on HTTPS. They're there to defeat attacks that
   involve physical access to the client system, so the attacker cannot
   look at the cross-origin HTTPS data stored in the appcache on disk. But
  the regular disk cache stores HTTPS data provided the cache-control
  header doesn't say no-store, so excluding this data from appcaching
  does nothing to defeat that attack.
 
  Maybe the spec changes to make are...
  1) Examine the cache-control header for all cross-origin resources
  (not just HTTPS), and only allow them if they don't contain the
  no-store directive.
  2) Remove the special-case restriction that is currently in place only
  for HTTPS cross-origin resources.
 
  WDYT?
 
  On Mon, Feb 7, 2011 at 5:27 PM, Michael Nordman micha...@google.com
 wrote:
  On Mon, Feb 7, 2011 at 4:35 PM, Jonas Sicking jo...@sicking.cc wrote:
  On Mon, Feb 7, 2011 at 3:31 PM, Ian Hickson i...@hixie.ch wrote:
  On Mon, 7 Feb 2011, Jonas Sicking wrote:
  On Mon, Jan 31, 2011 at 6:27 PM, Michael Nordman 
 micha...@google.com wrote:
   But... the risk you outline is not possible...
  
    However, with the modification you are proposing, an attacker site
    could forever pin this page in the user's app-cache. This means that if
    there is a security bug in the page, the attacker site could exploit
    that security problem forever since any javascript in the page will
    continue to run in the security context of bunnies.com. So all of a
    sudden the CORS headers that the site added have now had a severe
    security impact.
  
   The bunnies.com page stored in the attacker's appcache will never be
   loaded into the context of bunnies.com. There are provisions in the
   appcache system to prevent that. Those provisions guard against
   this type of attack via HTTP.
 
  Your proposal means that we forever lock that constraint on the
  appcache. That is not currently the case. I.e. we'll never be able to
  say open an iframe using the resource which is available in my
  appcache or open this cross-site worker using the resource available
  in my appcache.
 
  Or at least we won't ever be able to do that for cross-site resources.
 
  That's intentional. We don't want it to be possible to get a cache of a
  third-party page vulnerable to some sort of XSS attack and then to be able
  to load that page with the credentials of its origin, since it would make
  it possible for hostile sites to lock in a vulnerability and keep using
  it even after the site had fixed the problem.
 
  It seems desirable that the third party site could opt in to allowing
  this. Especially if it can choose which sites should be able to cache
  it. Which I think is the feature request that Michael starts with in
  this thread.
 
  / Jonas
 
 
  My feature request is for an opt-in mechanism to facilitate
  cross-origin HTTPS resources. I'm not looking for an opt-in mechanism
  to allow execution of cached cross-origin resources at this time. Anne
  mentioned that CORS might be an option for my feature request... and
  here we are.
 
 



Re: [whatwg] Improvement of the Application Cache

2011-03-04 Thread Michael Nordman
Yes, it does, in particular the add(), remove(), and enumerate() parts. That
spec is in the process of being dropped by the webapps working group. I think if
we want to see features along those lines, we should see them in the context
of the HTML5 AppCache.

On Thu, Mar 3, 2011 at 7:07 PM, Joseph Pecoraro pecor...@apple.com wrote:

 Sounds related to Programmable HTTP Caching and Serving (formerly titled
 DataCache API):
 http://dev.w3.org/2006/webapi/DataCache/

   [[[ This document defines APIs for off-line serving of requests to
   HTTP resources using static and dynamic responses. It extends
   the function of application caches defined in HTML5. ]]]

 - Joe


 On Mar 3, 2011, at 1:50 PM, Michael Nordman wrote:

 Sounds like there are at least two feature requests here...

 1) Some way of not adding the 'master' entry to the cache. This is a common
 feature request, there's another thread about it specificially titled
 Application Cache for on-line sites.

 2) The ability to add(), remove(), and enumerate() individual urls in the
 appcache. Long ago, there were interfaces on the appcache drawing board to
 allow that. They got removed for a variety of reasons including to start
simpler. A couple of years later, it may make sense to revisit these kinds
 of features, although there is another repository also capable of storing
 ad-hoc collection of resources now (FileSystem), so i'm not sure this
 feature really needs to be in the appcache.

 @Hixie... any idea when the appcache feature set will be up for a growth
 spurt? I think there's an appetite for another round of features in the
 offline app developers that i communicate with. There's been some recent
interest here in pursuing a means of programmatically producing a response
 instead of just returning static content.



 On Wed, Mar 2, 2011 at 7:40 AM, Edward Gerhold 
 edward.gerh...@googlemail.com wrote:

 Hello,

 i would like to suggest an improvement for the Offline Web applications.

 Problem:
 I´ve found out, that i can not Cache my Joomla! Content Management System.
 Of course i´ve read and heard about, that the application cache is for
 static pages. But with a little change to the spec and the
 implementations, it would be possible to cache more than static pages.

 I would like to cache my Joomla! system. To put the scripts, css and
 images into the cache. I would like to add the appcache manifest to the
 index.php file of the Joomla Template. What happens is, that the
 index.php is cached once and not updated again. I can not view new
 articles. The problem is, that i can neither update the Master File, nor
 whitelist it.

 And this is, what my request or suggestion is about. I would like to
 whitelist the Master file, where the appcache manifest is installed in.
 Or i would like to update this file, or any file else, i would like to
 update, on demand.

 If there is any possibility, to do that already, please tell me. But i
 think that is not the case.

 Caching the CMS by making it possible to update or to whitelist certain
 files, the always dynamic frontpage or /index.php, would be the hammer
 to nail the board on the storage.

 Rules:
 The things, which should be considered are: *To allow to fetch the
 Master file, e.g. index.php* *in Joomla! over the NETWORK,* while any
 other file in the manifest get´s fetched or cached like before. Which is
 the most important for me, to get Joomla! into the cache.

 Javascript:
 For the script i would like to add *applicationCache.updateMaster()*,
 which forces the browser to fetch the file again. I think, this is
 impossible today, to update exactly this file. For the function, i could
 add a button to my page, to let the user choose to update the file.
 The second function would be *applicationCache.updateFile(url)*, which
 could be triggered by a button and script, too. I could let the user
 update certain articles.
 With that i would like to suggest *applicationCache.addToCache(url)* to
 add files manually or programmatic, which can not be determined by the
 manifest. Urls like new articles (*), i would like to read offline. I
 would like to add them to the cache, if the link appears, maybe on the
 frontpage. I would have to add the manifest to the CMS anyways, so i
 could add a few more functions to the page, of course.
 *applicationCache.removeFromCache(url)* should be obvious and helpful
 with the other functions.
 Good would be, to be able to iterate through the list of cached objects
 and even the manifest, with the update, add, remove functions, it would
 be very useful to work with the filenames and parameters.

 [(*) I could let the user decide wether he wants to download my mp3
 files to the appcache or not, and fulfill the wish with the javascript
 functions. Maybe he´s got no bytes left or wants only the lyrics.]

 Conclusion:
 The application cache is very powerful. But it is very

Re: [whatwg] Improvement of the Application Cache

2011-03-03 Thread Michael Nordman
Sounds like there are at least two feature requests here...

1) Some way of not adding the 'master' entry to the cache. This is a common
feature request, there's another thread about it specificially titled
Application Cache for on-line sites.

2) The ability to add(), remove(), and enumerate() individual urls in the
appcache. Long ago, there were interfaces on the appcache drawing board to
allow that. They got removed for a variety of reasons including to start
simpler. A couple of years later, it may make sense to revisit these kinds
of features, although there is another repository also capable of storing
ad-hoc collection of resources now (FileSystem), so i'm not sure this
feature really needs to be in the appcache.

@Hixie... any idea when the appcache feature set will be up for a growth
spurt? I think there's an appetite for another round of features in the
offline app developers that i communicate with. There's been some recent
interest here in pursuing a means of programmatically producing a response
instead of just returning static content.



On Wed, Mar 2, 2011 at 7:40 AM, Edward Gerhold 
edward.gerh...@googlemail.com wrote:

 Hello,

 i would like to suggest an improvement for the Offline Web applications.

 Problem:
 I´ve found out, that i can not Cache my Joomla! Content Management System.
 Of course i´ve read and heard about, that the application cache is for
 static pages. But with a little change to the spec and the
 implementations, it would be possible to cache more than static pages.

 I would like to cache my Joomla! system. To put the scripts, css and
 images into the cache. I would like to add the appcache manifest to the
 index.php file of the Joomla Template. What happens is, that the
 index.php is cached once and not updated again. I can not view new
 articles. The problem is, that i can neither update the Master File, nor
 whitelist it.

 And this is, what my request or suggestion is about. I would like to
 whitelist the Master file, where the appcache manifest is installed in.
 Or i would like to update this file, or any file else, i would like to
 update, on demand.

 If there is any possibility, to do that already, please tell me. But i
 think that is not the case.

 Caching the CMS by making it possible to update or to whitelist certain
 files, the always dynamic frontpage or /index.php, would be the hammer
 to nail the board on the storage.

 Rules:
 The things, which should be considered are: *To allow to fetch the
 Master file, e.g. index.php* *in Joomla! over the NETWORK,* while any
 other file in the manifest get´s fetched or cached like before. Which is
 the most important for me, to get Joomla! into the cache.

 Javascript:
 For the script i would like to add *applicationCache.updateMaster()*,
 which forces the browser to fetch the file again. I think, this is
 impossible today, to update exactly this file. For the function, i could
 add a button to my page, to let the user choose to update the file.
 The second function would be *applicationCache.updateFile(url)*, which
 could be triggered by a button and script, too. I could let the user
 update certain articles.
 With that i would like to suggest *applicationCache.addToCache(url)* to
 add files manually or programmatic, which can not be determined by the
 manifest. Urls like new articles (*), i would like to read offline. I
 would like to add them to the cache, if the link appears, maybe on the
 frontpage. I would have to add the manifest to the CMS anyways, so i
 could add a few more functions to the page, of course.
 *applicationCache.removeFromCache(url)* should be obvious and helpful
 with the other functions.
 Good would be, to be able to iterate through the list of cached objects
 and even the manifest, with the update, add, remove functions, it would
 be very useful to work with the filenames and parameters.

 [(*) I could let the user decide wether he wants to download my mp3
 files to the appcache or not, and fulfill the wish with the javascript
 functions. Maybe he´s got no bytes left or wants only the lyrics.]
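The add/remove/enumerate surface proposed in this message can be modeled as a toy in-memory object. Every method below is hypothetical; none of them ever shipped on window.applicationCache, and the semantics are only a sketch of what the thread describes:

```javascript
// Toy in-memory model of the cache-manipulation API proposed in this
// message (addToCache, removeFromCache, enumeration). All of these
// methods are hypothetical; window.applicationCache never gained them.
class ProposedAppCache {
  constructor() {
    this.entries = new Map(); // url -> cached response body
  }
  addToCache(url, body) { this.entries.set(url, body); }
  removeFromCache(url) { this.entries.delete(url); }
  has(url) { return this.entries.has(url); }
  *urls() { yield* this.entries.keys(); } // iterate the cached urls
}

const cache = new ProposedAppCache();
cache.addToCache('/articles/42.html', '<html>...</html>');
cache.addToCache('/media/song.mp3', '...');
cache.removeFromCache('/media/song.mp3');
console.log([...cache.urls()]); // [ '/articles/42.html' ]
```

A real implementation would of course fetch the url over the network rather than take a body argument, but the shape of the requested API is the point here.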

 Conclusion:
 The application cache is very powerful. But it is very disappointing, that
 it is only useful for static
 pages. With a little improvement to the Offline Web applications chapter,
 and of course to the browsers,
 it would be possible to cache any Content Manager or dynamic page. And that
 would let the appcache
 become one of the most powerful things in the world.

 I could read my Joomla! offline, could update the cached files, if i want
 to, on a click or if the cache expires.
 I could let the half of the CMS load from the cache. But for that, the
 index.php, where the manifest is, has to
 be updateable. Correct me, if i am wrong. But this is not possible today,
 the master file can not be influenced.
 And there is no expiration or a possibility to update or manipulate the
 cache and even no way to find out which
 files are cached, what would let me/us have 

Re: [whatwg] Application Cache for on-line sites

2011-02-18 Thread Michael Nordman
On Fri, Feb 11, 2011 at 1:14 PM, Michael Nordman micha...@google.com wrote:
 Once you get past the should this be a feature question, there are
 some questions to answer.

I'll take a stab at answering these questions.

 1) How does an author indicate which pages should be added to the
 cache and which should not?

 A few ideas...
 a. <html useManifest='x'>
 b. If the main resource has a no-store header, don't add it to the
 cache, but do associate the document with the cache.
 c. A new manifest section to define a prefix matched namespace for these 
 pages.

Option (c) isn't precisely targeted at individual resources; authors
would have to arrange their URL namespace more carefully to achieve
the desired results. Options (a) and (b) can be applied specifically to
individual pages, offering greater control. And option (a) is the
more explicit of the two: authors need only edit the page to get the
desired result instead of having to arrange for the appropriate HTTP
headers. Readers of the code (and view-sourcers of the pages) can more
easily determine how the page should utilize the appcache just by
looking at its source.

My pick would be option (a), <html useManifest='x'>

 2) What sequence of events does a page that just uses the cache w/o
 being added to it observe?

The same sequence of events as a master entry that does get added to the cache.

 3) At what point do subresources in an existing appcache start getting
 utilized by such pages? What if the appcache is stale? Do subresource
 loads cause revalidation?

Swap the cache into use upon successful update completion, so again
just like for master entry pages that do get added to the cache.

But this observation is the crux of it. There's a window of time
between page loading and update completion. During that interim
resources resident in the appcache can be used to effectively augment
the browser's regular HTTP cache. For example, when the page requests
a subresource and the resource in the appcache is fresh enough per
http caching rules, it may be returned immediately w/o any validation.

WDYT?
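The interim-window behavior described above leans on ordinary HTTP freshness. A minimal sketch of that freshness test follows; this is a hypothetical helper, and real HTTP caching also weighs Expires, heuristic freshness, no-cache, and more:

```javascript
// Is a stored response still fresh, i.e. servable without revalidation?
// Simplified rule: the response's current age must be below its
// Cache-Control max-age directive.
function isFreshEnough(storedAtMs, nowMs, cacheControlHeader) {
  const m = /(?:^|,)\s*max-age=(\d+)/i.exec(cacheControlHeader || '');
  if (!m) return false; // no max-age: be conservative and revalidate
  return (nowMs - storedAtMs) < Number(m[1]) * 1000;
}

console.log(isFreshEnough(0, 5000, 'public, max-age=10'));  // true
console.log(isFreshEnough(0, 15000, 'public, max-age=10')); // false
```

Under the proposal, a fresh appcached subresource could be served during the update window exactly as the regular disk cache would serve it.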


Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-02-14 Thread Michael Nordman
Fyi... I'm planning on making a change along these lines to chrome soon...
* respect no-store headers for cross-origin resources
* allow HTTPS cross-origin resources

On Tue, Feb 8, 2011 at 3:25 PM, Michael Nordman micha...@google.com wrote:
 Hi again,

 Just had an offline discussion about this and I think the answer can
 be much simpler than what's been proposed so far.  All we have to do
 for cross-origin HTTPS resources is respect the cache-control no-store
 header.

 Let me explain the rationale... first let's back up to the motivation
 for the restrictions on HTTPS. They're there to defeat attacks that
 involve physical access to the client system, so the attacker cannot
 look at the cross-origin HTTPS data stored in the appcache on disk. But
 the regular disk cache stores HTTPS data provided the cache-control
 header doesn't say no-store, so excluding this data from appcaching
 does nothing to defeat that attack.

 Maybe the spec changes to make are...
 1) Examine the cache-control header for all cross-origin resources
 (not just HTTPS), and only allow them if they don't contain the
 no-store directive.
 2) Remove the special-case restriction that is currently in place only
 for HTTPS cross-origin resources.

 WDYT?

 On Mon, Feb 7, 2011 at 5:27 PM, Michael Nordman micha...@google.com wrote:
 On Mon, Feb 7, 2011 at 4:35 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Feb 7, 2011 at 3:31 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 7 Feb 2011, Jonas Sicking wrote:
 On Mon, Jan 31, 2011 at 6:27 PM, Michael Nordman micha...@google.com 
 wrote:
  But... the risk you outline is not possible...
 
  However, with the modification you are proposing, an attacker site
  could forever pin this page in the user's app-cache. This means that if
  there is a security bug in the page, the attacker site could exploit
  that security problem forever since any javascript in the page will
  continue to run in the security context of bunnies.com. So all of a
  sudden the CORS headers that the site added have now had a severe
  security impact.
 
  The bunnies.com page stored in the attacker's appcache will never be
  loaded into the context of bunnies.com. There are provisions in the
  appcache system to prevent that. Those provisions guard against
  this type of attack via HTTP.

 Your proposal means that we forever lock that constraint on the
 appcache. That is not currently the case. I.e. we'll never be able to
 say open an iframe using the resource which is available in my
 appcache or open this cross-site worker using the resource available
 in my appcache.

 Or at least we won't ever be able to do that for cross-site resources.

 That's intentional. We don't want it to be possible to get a cache of a
 third-party page vulnerable to some sort of XSS attack and then to be able
 to load that page with the credentials of its origin, since it would make
 it possible for hostile sites to lock in a vulnerability and keep using
 it even after the site had fixed the problem.

 It seems desirable that the third party site could opt in to allowing
 this. Especially if it can choose which sites should be able to cache
 it. Which I think is the feature request that Michael starts with in
 this thread.

 / Jonas


 My feature request is for an opt-in mechanism to facilitate
 cross-origin HTTPS resources. I'm not looking for an opt-in mechanism
 to allow execution of cached cross-origin resources at this time. Anne
 mentioned that CORS might be an option for my feature request... and
 here we are.




Re: [whatwg] Application Cache for on-line sites

2011-02-11 Thread Michael Nordman
Waking this feature request up again as it's been requested multiple
times, I think the ability to utilize an appcache w/o having to have
the page added to it is the #1 appcache feature request that I've
heard.

* The Gmail mobile team has mentioned this.

* Here's a thread on a chromium.org mailing list where this feature is
requested: How to instruct the main page to be not cached?
https://groups.google.com/a/chromium.org/group/chromium-html5/browse_thread/thread/a254e2090510db39/916f3a8da40e34f8

* More recently this has been requested in the context of an
application that uses pushState to alter the url of the main page.

To keep this discussion distinct from others, I'm pulling in the few
comments that have been made on another thread.

hixie said...
 Why can't the pages just switch to a more AJAX-like model rather than
 having the main page still load over the network? The main page loading
 over the network is a big part of the page being slow.

and i replied...
 The premise of the feature request is that the main pages aren't
 cached at all.

 | I tried to use the HTML5 Application Cache to improve the performances
 | of on-line sites (all the tutorials on the web write only about usage
 | with off-line apps)

 As for why can't the pages just switch, I can't speak for andrea,
 but i can guess that a redesign of that nature was out of scope and/or
 would conflict with other requirements around how the url address
 space of the app is defined.

Once you get past the should this be a feature question, there are
some questions to answer.

1) How does an author indicate which pages should be added to the
cache and which should not?

A few ideas...
a. <html useManifest='x'>
b. If the main resource has a no-store header, don't add it to the
cache, but do associate the document with the cache.
c. A new manifest section to define a prefix-matched namespace for these pages.

2) What sequence of events does a page that just uses the cache w/o
being added to it observe?

3) At what point do subresources in an existing appcache start getting
utilized by such pages? What if the appcache is stale? Do subresource
loads cause revalidation?
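For option (c), the manifest itself could carry the rule, in the style of the existing NETWORK and FALLBACK sections. The section name below is purely hypothetical, to illustrate the shape:

```
CACHE MANIFEST
# Hypothetical section: pages whose URLs prefix-match these entries use
# the cache for subresources but are not themselves added to it.
USE-ONLY:
/app/

CACHE:
/app/main.js
/app/main.css
```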

On Mon, Dec 20, 2010 at 12:56 PM, Michael Nordman micha...@chromium.org wrote:
 This type of request (see forwarded message below) to utilize the
 application cache for subresource loads into documents that are not stored
 in the cache has come up several times now. The current feature set is very
 focused on the offline use case. Is it worth making additions such that a
 document that loads from a server can utilize the resources in an appcache?
 Today we have <html manifest="manifestFile">, which adds the document
 containing this tag to the appcache and associates that doc with that
 appcache such that subresource loads hit the appcache.
 Not a complete proposal, but...
 What if we had something along the lines of <html
 useManifest="manifestFile">, which would do the association of the doc with
 the appcache (so subresource loads hit the cache) but not add the document
 to the cache?
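As markup, the proposal would read something like the following; the useManifest attribute comes from this thread only and was never standardized:

```html
<!-- Hypothetical attribute: associates the document with the appcache so
     subresource loads hit the cache, but does NOT add the document
     itself as a master entry. -->
<html useManifest="manifestFile">
```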

 -- Forwarded message --
 From: UVL andrea.do...@gmail.com
 Date: Sun, Dec 19, 2010 at 1:35 PM
 Subject: [chromium-html5] Application Cache for on-line sites
 To: Chromium HTML5 chromium-ht...@chromium.org


 I tried to use the HTML5 Application Cache to improve the performances
 of on-line sites (all the tutorials on the web write only about usage
 with off-line apps)

 I created the manifest listing all the js, css and images, and the
 performances were really exciting, until I found that even the page
 HTML was cached, despite it was not listed in the manifest. The pages
 of the site are in PHP, so I don't want them to be cached.

 From
 http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
 :
 Authors are encouraged to include the main page in the manifest also,
 but in practice the page that referenced the manifest is automatically
 cached even if it isn't explicitly mentioned.

 Is there a way to have this automating caching disabled?

 Note: I know that caching can be controlled via HTTP headers, but I
 just wanted to try this way as it looks quite reliable, clean and
 powerful.

 --
 You received this message because you are subscribed to the Google Groups
 Chromium HTML5 group.
 To post to this group, send email to chromium-ht...@chromium.org.
 To unsubscribe from this group, send email to
 chromium-html5+unsubscr...@chromium.org.
 For more options, visit this group at
 http://groups.google.com/a/chromium.org/group/chromium-html5/?hl=en.





Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-02-08 Thread Michael Nordman
Hi again,

Just had an offline discussion about this and I think the answer can
be much simpler than what's been proposed so far.  All we have to do
for cross-origin HTTPS resources is respect the cache-control no-store
header.

Let me explain the rationale... first let's back up to the motivation
for the restrictions on HTTPS. They're there to defeat attacks that
involve physical access to the client system, so the attacker cannot
look at the cross-origin HTTPS data stored in the appcache on disk. But
the regular disk cache stores HTTPS data provided the cache-control
header doesn't say no-store, so excluding this data from appcaching
does nothing to defeat that attack.

Maybe the spec changes to make are...
1) Examine the cache-control header for all cross-origin resources
(not just HTTPS), and only allow them if they don't contain the
no-store directive.
2) Remove the special-case restriction that is currently in place only
for HTTPS cross-origin resources.

WDYT?
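A sketch of the admission check in proposed change (1), as a standalone predicate; this is a hypothetical helper, not a spec algorithm or browser API:

```javascript
// Sketch of proposed change (1): admit a cross-origin resource into the
// appcache only if its Cache-Control header lacks the no-store
// directive. Hypothetical helper, not part of any spec.
function mayAppCacheCrossOrigin(cacheControlHeader) {
  if (!cacheControlHeader) return true; // no header: nothing forbids storage
  // Cache-Control is a comma-separated, case-insensitive directive list.
  return !cacheControlHeader
    .split(',')
    .map(d => d.trim().toLowerCase())
    .includes('no-store');
}

console.log(mayAppCacheCrossOrigin('max-age=3600'));      // true
console.log(mayAppCacheCrossOrigin('private, no-store')); // false
```

Under proposed change (2), the same predicate would apply to HTTP and HTTPS cross-origin entries alike.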

On Mon, Feb 7, 2011 at 5:27 PM, Michael Nordman micha...@google.com wrote:
 On Mon, Feb 7, 2011 at 4:35 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Feb 7, 2011 at 3:31 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 7 Feb 2011, Jonas Sicking wrote:
 On Mon, Jan 31, 2011 at 6:27 PM, Michael Nordman micha...@google.com 
 wrote:
  But... the risk you outline is not possible...
 
  However, with the modification you are proposing, an attacker site
  could forever pin this page in the user's app-cache. This means that if
  there is a security bug in the page, the attacker site could exploit
  that security problem forever since any javascript in the page will
  continue to run in the security context of bunnies.com. So all of a
  sudden the CORS headers that the site added have now had a severe
  security impact.
 
  The bunnies.com page stored in the attacker's appcache will never be
  loaded into the context of bunnies.com. There are provisions in the
  appcache system to prevent that. Those provisions guard against
  this type of attack via HTTP.

 Your proposal means that we forever lock that constraint on the
 appcache. That is not currently the case. I.e. we'll never be able to
 say open an iframe using the resource which is available in my
 appcache or open this cross-site worker using the resource available
 in my appcache.

 Or at least we won't ever be able to do that for cross-site resources.

 That's intentional. We don't want it to be possible to get a cache of a
 third-party page vulnerable to some sort of XSS attack and then to be able
 to load that page with the credentials of its origin, since it would make
 it possible for hostile sites to lock in a vulnerability and keep using
 it even after the site had fixed the problem.

 It seems desirable that the third party site could opt in to allowing
 this. Especially if it can choose which sites should be able to cache
 it. Which I think is the feature request that Michael starts with in
 this thread.

 / Jonas


 My feature request is for an opt-in mechanism to facilitate
 cross-origin HTTPS resources. I'm not looking for an opt-in mechanism
 to allow execution of cached cross-origin resources at this time. Anne
 mentioned that CORS might be an option for my feature request... and
 here we are.



Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-02-07 Thread Michael Nordman
On Mon, Feb 7, 2011 at 6:18 AM, Anne van Kesteren ann...@opera.com wrote:
 On Fri, 04 Feb 2011 23:15:44 +0100, Michael Nordman micha...@google.com
 wrote:

 Just want to wake this thread up and say that I still see CORS as a
 good fit for this use case, and I'm curious Jonas about what you think
 in light of my previous post?

 I think Jonas does have a point. There are side effects to setting the CORS
 headers. I am not sure whether these are bad or good, but if people approach
 this from the idea of getting cross-origin caching to work they might not
 consider them.

 I.e. once CORS is used on those resources they can be read using
 XMLHttpRequest. And once we make further changes to the platform
 cross-origin images can be read via canvas, etc.

The side effect with appcaching is that the data is persisted locally.
Recipients of CORS response data via XHR can persist that data locally
even w/o the appcache. So if a resource is made CORS'able for the
purposes of XHR, there's no harm in allowing it to be appcached.

Going the other way... if a resource is made CORS'able for the
purposes of appcaching, that has the side effect of making the
resource directly loadable as well. If a developer is not willing to
allow that then those resources should not be made CORS'able.

If you buy that line of thinking, it's OK to make CORS'able resources
appcachable, but that may not be sufficient for all use cases (cases
where a resource should be appcachable but not loadable).
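The visibility rule at the heart of this trade-off can be sketched as a small predicate; this is a simplification for illustration, since real CORS also involves credentials mode, preflights, and more:

```javascript
// Sketch of the CORS visibility rule under discussion: a response body
// becomes readable to a cross-origin requester only when the
// Access-Control-Allow-Origin header names that origin (or is "*").
// Simplified; not a full CORS implementation.
function corsReadable(allowOriginHeader, requestingOrigin) {
  return allowOriginHeader === '*' || allowOriginHeader === requestingOrigin;
}

console.log(corsReadable('*', 'https://example.com'));       // true
console.log(corsReadable('https://a.com', 'https://b.com')); // false
```

Making a resource appcachable via CORS therefore implies making it readable, which is exactly the side effect weighed above.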


Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-02-07 Thread Michael Nordman
On Mon, Feb 7, 2011 at 3:27 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Jan 31, 2011 at 6:27 PM, Michael Nordman micha...@google.com wrote:
 But... the risk you outline is not possible...

 However, with the modification you are proposing, an attacker site
 could forever pin this page in the user's app-cache. This means that if
 there is a security bug in the page, the attacker site could exploit
 that security problem forever since any javascript in the page will
 continue to run in the security context of bunnies.com. So all of a
 sudden the CORS headers that the site added have now had a severe
 security impact.

 The bunnies.com page stored in the attacker's appcache will never be
 loaded into the context of bunnies.com. There are provisions in the
 appcache system to prevent that. Those provisions guard against
 this type of attack via HTTP.

 Your proposal means that we forever lock that constraint on the
 appcache. That is not currently the case. I.e. we'll never be able to
 say open an iframe using the resource which is available in my
 appcache or open this cross-site worker using the resource available
 in my appcache.

 Or at least we won't ever be able to do that for cross-site resources.

I don't see how this would lock us in.

As Ian points out, that limitation in the current design is
intentional. Right now, I'm looking to work within the constraints of
the current design to unhinder HTTPS. If and when we add the
capability you describe (to execute cross-origin appcached content in
the context of the originating origin), whatever protocols are
designed to allow that for HTTP should also apply to HTTPS resources.
Simply being CORS'able would not be sufficient to grant execute
rights.


Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-02-07 Thread Michael Nordman
On Mon, Feb 7, 2011 at 4:35 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Feb 7, 2011 at 3:31 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 7 Feb 2011, Jonas Sicking wrote:
 On Mon, Jan 31, 2011 at 6:27 PM, Michael Nordman micha...@google.com 
 wrote:
  But... the risk you outline is not possible...
 
  However, with the modification you are proposing, an attacker site
  could forever pin this page in the user's app-cache. This means that if
  there is a security bug in the page, the attacker site could exploit
  that security problem forever since any javascript in the page will
  continue to run in the security context of bunnies.com. So all of a
  sudden the CORS headers that the site added have now had a severe
  security impact.
 
  The bunnies.com page stored in the attacker's appcache will never be
  loaded into the context of bunnies.com. There are provisions in the
  appcache system to prevent that. Those provisions guard against
  this type of attack via HTTP.

 Your proposal means that we forever lock that constraint on the
 appcache. That is not currently the case. I.e. we'll never be able to
 say open an iframe using the resource which is available in my
 appcache or open this cross-site worker using the resource available
 in my appcache.

 Or at least we won't ever be able to do that for cross-site resources.

 That's intentional. We don't want it to be possible to get a cache of a
 third-party page vulnerable to some sort of XSS attack and then to be able
 to load that page with the credentials of its origin, since it would make
 it possible for hostile sites to lock in a vulnerability and keep using
 it even after the site had fixed the problem.

 It seems desirable that the third party site could opt in to allowing
 this. Especially if it can choose which sites should be able to cache
 it. Which I think is the feature request that Michael starts with in
 this thread.

 / Jonas


My feature request is for an opt-in mechanism to facilitate
cross-origin HTTPS resources. I'm not looking for an opt-in mechanism
to allow execution of cached cross-origin resources at this time. Anne
mentioned that CORS might be an option for my feature request... and
here we are.


Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-02-04 Thread Michael Nordman
Hi again,

Just want to wake this thread up and say that I still see CORS as a
good fit for this use case, and I'm curious Jonas about what you think
in light of my previous post?

-Michael

On Mon, Jan 31, 2011 at 6:27 PM, Michael Nordman micha...@google.com wrote:
 But... the risk you outline is not possible...

 However, with the modification you are proposing, an attacker site
 could forever pin this page in the user's app-cache. This means that if
 there is a security bug in the page, the attacker site could exploit
 that security problem forever since any javascript in the page will
 continue to run in the security context of bunnies.com. So all of a
 sudden the CORS headers that the site added have now had a severe
 security impact.

 The bunnies.com page stored in the attacker's appcache will never be
 loaded into the context of bunnies.com. There are provisions in the
 appcache system to prevent that. Those provisions guard against
 this type of attack via HTTP.

 On Mon, Jan 31, 2011 at 5:41 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Jan 31, 2011 at 2:57 PM, Michael Nordman micha...@google.com wrote:
 I don't fully understand your emphasis on the implied semantics of a
 CORS request. You say it *only* means a site can read the response. I
 don't see that in the draft spec. Cross-origin XHR may have been the
 big motivation behind CORS, but the mechanisms described in the spec
 appear agnostic with regard to use cases and the abstract section
 seems to invite additional use cases.

 The spec does say what the Access-Control-Allow-Origin
 header means. You're trying to modify that meaning.

 A strangely existential statement :)

 Consider things from a web authors point of view. The author develops
 a website, bunnies.com, which contains an HTML page which performs
 same-site, and thus trusted, XHR requests. The HTML page additionally
 exposes an API based on postMessage to allow parent frames to
 communicate with it.

 Since the site exposes various useful HTTP APIs it further adds
 Access-Control-Allow-Origin: origin
 Access-Control-Allow-Credentials: true

 to a set of the URLs on the site. Including the url of the static HTML
 page. This is safe per CORS: since the HTML page is static, there is no
 information leakage that doesn't happen through a normal
 server-to-server request anyway.

 However, with the modification you are proposing, an attacker site
 could forever pin this page in the user's app-cache. This means that if
 there is a security bug in the page, the attacker site could exploit
 that security problem forever since any javascript in the page will
 continue to run in the security context of bunnies.com. So all of a
 sudden the CORS headers that the site added have now had a severe
 security impact.

 That's why I'm harping on the semantics.

 Another issue is that if a site *is* willing to allow resources to be
 pinned in the app-cache of another site, it might still not be willing
 to share the contents of those resources with everyone. If we reuse
 the existing CORS headers to express "is allowed to be app-cache
 pinned", then we can't satisfy that use case.

 For example a website could create an HTML page which embeds a
 user-specific key and exposes a postMessage based API for third party
 sites to encrypt/decrypt content using that users key. To allow this
 to happen for off-line apps it wants to allow the HTML page to be
 pinned in a third party app-cache. But it doesn't want to expose the
 actual key to the third party sites. If CORS was used to allow
 cache-pinning, this wouldn't be possible.

 I do appreciate that using CORS for this feels like blurring the lines
 between two different things. I wonder if there should be additional
 request/response headers in CORS to convey the intended use of the
 resource and whether that particular use is allowed?

 If not CORS, what mechanism would you suggest to allow HTTPS resources
 from another origin to be included in a cache manifest file? Any
 means for the 'other' origin to opt in will suit my needs.

 I don't really care if this is part of CORS spec or not, but it needs
 to be different headers than Access-Control-Allow-Origin to avoid
 overloading the meaning of that header, and thus the effect of adding
 it.

 The header-value should probably include some sort of limit on how
 long the resource is allowed to be cached, and maybe there should be
 ways that the site can signal that a given url should be used as
 fallback.

 I think these two requirements add unnecessary complexity; in the
 use case that brought me here, use as a fallback is definitely not
 desired.

 Honestly, I'm not so convinced that pinning in an appcache is much
 different than providing read access. Such cross origin resources
 are available to be loaded as subresources into main pages using that
 particular appcache, only in the context of the manifest file's origin.
 They don't escape beyond that boundary. Looks a lot like a form of
 read access extended to the offline case.

Re: [whatwg] navigation shouldn't abort if canceled

2011-02-02 Thread Michael Nordman
That does sound like a bug? I'd be curious to know what the reasoning
was for the existing sequence of steps.

Step 10 looks out of place too...

10. If the new resource is to be handled using a mechanism that does
not affect the browsing context, e.g. ignoring the navigation request
altogether because the specified scheme is not one of the supported
protocols, then abort these steps and proceed with that mechanism
instead.

Aborting the active document sounds like an undesirable side effect on
the browsing context for mailto links.

On Tue, Feb 1, 2011 at 11:07 AM, Mike Wilson mike...@hotmail.com wrote:
 No comments so far on this issue so I'll describe it a bit more.
 Consequences of the current text are that resource fetches are
 canceled for a document when navigating away from it, even if
 the user then chooses to cancel the navigation at a
 beforeunload prompt and returns to the document.

 Best regards
 Mike Wilson

 Mike Wilson wrote on December 26, 2010:
 http://www.whatwg.org/specs/web-apps/current-work/#navigating-
 across-documents
 (as of December 26, 2010)
 | When a browsing context is navigated to a new resource, the
 | user agent must run the following steps:
 ...
 | 9.  Abort the active document of the browsing context.
 ...
 | 11. Prompt to unload the Document object. If the user refused
 |     to allow the document to be unloaded, then these steps
 |     must be aborted.

 Might this be a bug? (It seems more consistent with other
 parts of the html5 spec, and with browsers, to do the abort
 after the user has allowed the document to unload.)

 Best regards
 Mike Wilson




Re: [whatwg] Appcache feedback

2011-02-02 Thread Michael Nordman
On Mon, Jan 31, 2011 at 4:20 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 11 Nov 2010, Michael Nordman wrote:

 In section 6.6.6 Changes to the networking model which applies to sub
 resource loads, step 3 prevents returning fallback resources for
 requested urls that fall into a network namespace.

 In section 6.5.1 Navigating across documents which applies to main
 resource loads, step 17 does not explicitly exclude returning fallbacks
 for such urls.

 I doubt this difference is intentional, looks like step 17 needs some
 additional words...

 If the resource was not fetched from an application cache, and was to
 be fetched using HTTP GET or equivalent, and its URL matches the
 fallback namespace but not the network namespace of one or more
 relevant application caches...

 I assume you mean the online whitelist, not the network namespace.

 I've adjusted the spec as you suggest.

Thank you.


 On Mon, 20 Dec 2010, Michael Nordman wrote:

 -- Forwarded message --
 | From: UVL andrea.do...@gmail.com
 | Date: Sun, Dec 19, 2010 at 1:35 PM
 | Subject: [chromium-html5] Application Cache for on-line sites
 | To: Chromium HTML5 chromium-ht...@chromium.org
 |
 | I tried to use the HTML5 Application Cache to improve the performance
 | of on-line sites (all the tutorials on the web write only about usage
 | with off-line apps)
 |
 | I created the manifest listing all the js, css and images, and the
 | performance was really exciting, until I found that even the page HTML
 | was cached, even though it was not listed in the manifest. The pages of the
 | site are in PHP, so I don't want them to be cached.
 |
 | From
 | http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
 | : Authors are encouraged to include the main page in the manifest also,
 | but in practice the page that referenced the manifest is automatically
 | cached even if it isn't explicitly mentioned.
 |
 | Is there a way to have this automatic caching disabled?
 |
 | Note: I know that caching can be controlled via HTTP headers, but I just
 | wanted to try this way as it looks quite reliable, clean and powerful.

 This type of request [...] to utilize the application cache for
 subresource loads into documents that are not stored in the cache has
 come up several times now. The current feature set is very focused on
 the offline use case. Is it worth making additions such that a
 document that loads from a server can utilize the resources in an
 appcache?

 Today we have html manifest=manifestFile, which adds the document
 containing this tag to the appcache and associates that doc with that
 appcache such that subresource loads hit the appcache.

 Not a complete proposal, but...

 What if we had something along the lines of html
 useManifest=manifestFile, which would do the association of the doc
 with the appcache (so subresource loads hit the cache) but not add the
 document to the cache?

 Why can't the pages just switch to a more AJAX-like model rather than
 having the main page still load over the network? The main page loading
 over the network is a big part of the page being slow.

The premise of the feature request is that the main pages aren't
cached at all.

| I tried to use the HTML5 Application Cache to improve the performances
| of on-line sites (all the tutorials on the web write only about usage
| with off-line apps)

As for "why can't the pages just switch", I can't speak for andrea,
but I can guess that a redesign of that nature was out of scope and/or
would conflict with other requirements around how the url address
space of the app is defined.


Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-01-31 Thread Michael Nordman
I don't fully understand your emphasis on the implied semantics of a
CORS request. You say it *only* means a site can read the response. I
don't see that in the draft spec. Cross-origin XHR may have been the
big motivation behind CORS, but the mechanisms described in the spec
appear agnostic with regard to use cases and the abstract section
seems to invite additional use cases.

I do appreciate that using CORS for this feels like blurring the lines
between two different things. I wonder if there should be additional
request/response headers in CORS to convey the intended use of the
resource and whether that particular use is allowed?

If not CORS, what mechanism would you suggest to allow HTTPS resources
from another origin to be included in a cache manifest file? Any
means for the 'other' origin to opt in will suit my needs.



On Fri, Jan 28, 2011 at 8:52 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Jan 28, 2011 at 2:13 PM, Michael Nordman micha...@google.com wrote:
 On Thu, Jan 27, 2011 at 8:30 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Jan 27, 2011 at 5:16 PM, Michael Nordman micha...@google.com 
 wrote:
 A CORS based answer to this would work for the folks that have
 expressed an interest in this capability to me.

 cc'ing some other appcache implementors too... any thoughts?

 CORS has the semantics of you're allowed to make these types of
 requests to this resource, and you're allowed to read the response
 from such requests. This is very different from what is being
 requested here as I understand it?

 So either we'd need to add more headers to CORS, or come up with some
 other header-based solution I think.

 / Jonas

 Seems like CORS describes a protocol more than prescribes semantics?
 Is it really necessary to build up another protocol? From the
 abstract,
 Specifications that enable an API to make cross-origin requests to
 resources can use the algorithms defined by this specification.

 As long as you don't confuse web authors. I.e. if an author sends:

 access-control-allow-origin: *

 that *only* means that any site can read that response. I.e. that it
 doesn't come with any unrelated side effects such as cache pinning or
 the like.

 / Jonas



Re: [whatwg] Appcache feedback

2011-01-31 Thread Michael Nordman
On Mon, Jan 31, 2011 at 4:20 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 30 Sep 2010, Alexey Proskuryakov wrote:

 In definitions of application cache entry categories, it's mentioned
 that an explicit entry can also be marked as foreign. This contrasts
 with fallback entries, for which no such notice is made.

 It still appears that the intention was for fallback entries to
 sometimes be foreign - in particular, section 6.5.1 says "Let candidate
 be the fallback resource" and then "If candidate is not marked as
 foreign..."

 I found it confusing that there is a specific mention of foreign for
 explicit entries, but not for fallback ones.

 Oops, yeah. Fixed.


 On Thu, 11 Nov 2010, Michael Nordman wrote:

 In section 6.6.6 Changes to the networking model which applies to sub
 resource loads, step 3 prevents returning fallback resources for
 requested urls that fall into a network namespace.

 In section 6.5.1 Navigating across documents which applies to main
 resource loads, step 17 does not explicitly exclude returning fallbacks
 for such urls.

 I doubt this difference is intentional, looks like step 17 needs some
 additional words...

 If the resource was not fetched from an application cache, and was to
 be fetched using HTTP GET or equivalent, and its URL matches the
 fallback namespace but not the network namespace of one or more
 relevant application caches...

 I assume you mean the online whitelist, not the network namespace.

 I've adjusted the spec as you suggest.

 As a side note: a redirect can never reach this point in the navigation
 algorthm, as they are handled earlier. This means that a captive portal
 captures URLs in fallback namespaces and the user can never get to the
 fallback file of a resource loaded in a browsing context when the network
 has a captive portal.


 On Mon, 20 Dec 2010, Michael Nordman wrote:

 -- Forwarded message --
 | From: UVL andrea.do...@gmail.com
 | Date: Sun, Dec 19, 2010 at 1:35 PM
 | Subject: [chromium-html5] Application Cache for on-line sites
 | To: Chromium HTML5 chromium-ht...@chromium.org
 |
 | I tried to use the HTML5 Application Cache to improve the performance
 | of on-line sites (all the tutorials on the web write only about usage
 | with off-line apps)
 |
 | I created the manifest listing all the js, css and images, and the
 | performance was really exciting, until I found that even the page HTML
 | was cached, even though it was not listed in the manifest. The pages of the
 | site are in PHP, so I don't want them to be cached.
 |
 | From
 | http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
 | : Authors are encouraged to include the main page in the manifest also,
 | but in practice the page that referenced the manifest is automatically
 | cached even if it isn't explicitly mentioned.
 |
 | Is there a way to have this automatic caching disabled?
 |
 | Note: I know that caching can be controlled via HTTP headers, but I just
 | wanted to try this way as it looks quite reliable, clean and powerful.

 This type of request [...] to utilize the application cache for
 subresource loads into documents that are not stored in the cache has
 come up several times now. The current feature set is very focused on
 the offline use case. Is it worth making additions such that a
 document that loads from a server can utilize the resources in an
 appcache?

 Today we have html manifest=manifestFile, which adds the document
 containing this tag to the appcache and associates that doc with that
 appcache such that subresource loads hit the appcache.

 Not a complete proposal, but...

 What if we had something along the lines of html
 useManifest=manifestFile, which would do the association of the doc
 with the appcache (so subresource loads hit the cache) but not add the
 document to the cache?

 Why can't the pages just switch to a more AJAX-like model rather than
 having the main page still load over the network? The main page loading
 over the network is a big part of the page being slow.


 On Thu, 13 Jan 2011, Michael Nordman wrote:

 AppCache feature request: An https manifest should be able to list
 resources from other https origins.

 I've got some app developers asking for this feature. Currently, it's
 explicitly disallowed by the spec for valid security reasons, but
 there are also valid reasons to have this capability, like a webapp that
 uses resources hosted on gstatic.

 Seems like a robots.txt like scheme where a site like gstatic can
 declare that it's OK to appcache me from elsewhere is needed.

 I've opened a chromium bug for this here...
 http://code.google.com/p/chromium/issues/detail?id=69594

 Why do the valid security reasons not apply in this case?

The vendors of originA and originB have expressed that it's OK for one
to appcache resources of the other. In practical terms this is to
support a single application being hosted on multiple 'origins'.
Google gstatic.com for one example...
http

Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-01-31 Thread Michael Nordman
But... the risk you outline is not possible...

 However, with the modification you are proposing, an attacker site
 could forever pin this page in the user's app-cache. This means that if
 there is a security bug in the page, the attacker site could exploit
 that security problem forever since any javascript in the page will
 continue to run in the security context of bunnies.com. So all of a
 sudden the CORS headers that the site added have now had a severe
 security impact.

The bunnies.com page stored in the attacker's appcache will never be
loaded into the context of bunnies.com. There are provisions in the
appcache system to prevent that. Those provisions guard against
this type of attack via HTTP.

On Mon, Jan 31, 2011 at 5:41 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Jan 31, 2011 at 2:57 PM, Michael Nordman micha...@google.com wrote:
 I don't fully understand your emphasis on the implied semantics of a
 CORS request. You say it *only* means a site can read the response. I
 don't see that in the draft spec. Cross-origin XHR may have been the
 big motivation behind CORS, but the mechanisms described in the spec
 appear agnostic with regard to use cases and the abstract section
 seems to invite additional use cases.

 The spec does say what the Access-Control-Allow-Origin
 header means. You're trying to modify that meaning.

A strangely existential statement :)

 Consider things from a web authors point of view. The author develops
 a website, bunnies.com, which contains an HTML page which performs
 same-site, and thus trusted, XHR requests. The HTML page additionally
 exposes an API based on postMessage to allow parent frames to
 communicate with it.

 Since the site exposes various useful HTTP APIs it further adds
 Access-Control-Allow-Origin: origin
 Access-Control-Allow-Credentials: true

 to a set of the URLs on the site. Including the url of the static HTML
 page. This is safe per CORS: since the HTML page is static, there is no
 information leakage that doesn't happen through a normal
 server-to-server request anyway.

 However, with the modification you are proposing, an attacker site
 could forever pin this page in the user's app-cache. This means that if
 there is a security bug in the page, the attacker site could exploit
 that security problem forever since any javascript in the page will
 continue to run in the security context of bunnies.com. So all of a
 sudden the CORS headers that the site added have now had a severe
 security impact.

 That's why I'm harping on the semantics.

 Another issue is that if a site *is* willing to allow resources to be
 pinned in the app-cache of another site, it might still not be willing
 to share the contents of those resources with everyone. If we reuse
 the existing CORS headers to express "is allowed to be app-cache
 pinned", then we can't satisfy that use case.

 For example a website could create an HTML page which embeds a
 user-specific key and exposes a postMessage based API for third party
 sites to encrypt/decrypt content using that users key. To allow this
 to happen for off-line apps it wants to allow the HTML page to be
 pinned in a third party app-cache. But it doesn't want to expose the
 actual key to the third party sites. If CORS was used to allow
 cache-pinning, this wouldn't be possible.

 I do appreciate that using CORS for this feels like blurring the lines
 between two different things. I wonder if there should be additional
 request/response headers in CORS to convey the intended use of the
 resource and whether that particular use is allowed?

 If not CORS, what mechanism would you suggest to allow HTTPS resources
 from another origin to be included in a cache manifest file? Any
 means for the 'other' origin to opt in will suit my needs.

 I don't really care if this is part of CORS spec or not, but it needs
 to be different headers than Access-Control-Allow-Origin to avoid
 overloading the meaning of that header, and thus the effect of adding
 it.

 The header-value should probably include some sort of limit on how
 long the resource is allowed to be cached, and maybe there should be
 ways that the site can signal that a given url should be used as
 fallback.

I think these two requirements add unnecessary complexity; in the
use case that brought me here, use as a fallback is definitely not
desired.

Honestly, I'm not so convinced that pinning in an appcache is much
different than providing read access. Such cross origin resources
are available to be loaded as subresources into main pages using that
particular appcache, only in the context of the manifest file's origin.
They don't escape beyond that boundary. Looks a lot like a form of
read access extended to the offline case.


Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-01-28 Thread Michael Nordman
On Thu, Jan 27, 2011 at 8:30 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Jan 27, 2011 at 5:16 PM, Michael Nordman micha...@google.com wrote:
 A CORS based answer to this would work for the folks that have
 expressed an interest in this capability to me.

 cc'ing some other appcache implementors too... any thoughts?

 CORS has the semantics of you're allowed to make these types of
 requests to this resource, and you're allowed to read the response
 from such requests. This is very different from what is being
 requested here as I understand it?

 So either we'd need to add more headers to CORS, or come up with some
 other header-based solution I think.

 / Jonas

Seems like CORS describes a protocol more than prescribes semantics?
Is it really necessary to build up another protocol? From the
abstract,
Specifications that enable an API to make cross-origin requests to
resources can use the algorithms defined by this specification.


Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-01-27 Thread Michael Nordman
A CORS based answer to this would work for the folks that have
expressed an interest in this capability to me.

cc'ing some other appcache implementors too... any thoughts?


On Wed, Jan 26, 2011 at 12:28 PM, Michael Nordman micha...@google.com wrote:
 I was alluding to a simple robots.txt like solution with the static
 'allow' file, but it seems like CORS could work too; it is more
 burdensome to set up due to the additional HTTP headers.

 GET /some-resource
 Origin: https://acme.com

 HTTP/1.x 200 OK
 Access-Control-Allow-Origin: * | https://acme.com

 Unless the origin is allowed, the resource will not be added to
 the cache and the update will fail.

 ..
 On Wed, Jan 26, 2011 at 12:50 AM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 25 Jan 2011 23:37:55 +0100, Michael Nordman micha...@google.com
 wrote:

 Would the public-webapps list be better for discussing appcache
 feature requests?

 It's not a feature drafted in any of the WebApps WG specifications. If you
 want to discuss at the W3C the appropriate place would be the HTML WG.

 Also,
 http://wiki.whatwg.org/wiki/FAQ#Is_there_a_process_for_adding_new_features_to_a_specification.3F
 might be interesting. (Though you are probably aware of it.)


 This could be as simple as the presence of an
 'applicationcaching_allowed' file at the top level. An https manifest
 update that wants to retrieve resources from another https origin
 would first have to fetch the 'allow' file and see an expected
 response, and if it doesn't see a good response, those xorigin entries
 would be skipped (matching today's behavior).

 The request...

 GET /applicationcaching_allowed
 Referer: manifestUrl of the cache trying to include resources from this
 host

 The expected response headers...

 HTTP/1.x 200 OK
 Content-Type: text/plain

 The expected response body...

 Allowed:*

 So far we have avoided this type of design as it is rather brittle. Maybe
 CORS can be used?


 --
 Anne van Kesteren
 http://annevankesteren.nl/




Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-01-26 Thread Michael Nordman
I was alluding to a simple robots.txt like solution with the static
'allow' file, but it seems like CORS could work too; it is more
burdensome to set up due to the additional HTTP headers.

GET /some-resource
Origin: https://acme.com

HTTP/1.x 200 OK
Access-Control-Allow-Origin: * | https://acme.com

Unless the origin is allowed, the resource will not be added to
the cache and the update will fail.
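For illustration, here is a minimal sketch of how an update algorithm might apply this CORS-based check. Only the Access-Control-Allow-Origin header semantics come from the proposal above; the function names, the headers dict, and UpdateFailed are hypothetical, not any real user agent's API:

```python
# Hypothetical sketch of a CORS-gated appcache update step; not any
# real user agent's API. Only the Access-Control-Allow-Origin header
# semantics come from the proposal above.
from urllib.parse import urlparse

class UpdateFailed(Exception):
    pass

def origin_of(url):
    p = urlparse(url)
    return f"{p.scheme}://{p.netloc}"

def may_cache(manifest_url, resource_url, response_headers):
    """True if the resource may be added to the manifest's appcache."""
    manifest_origin = origin_of(manifest_url)
    if origin_of(resource_url) == manifest_origin:
        return True  # same-origin entries need no opt-in
    allow = response_headers.get("Access-Control-Allow-Origin")
    # The cross-origin server opts in with '*' or the manifest's origin.
    return allow in ("*", manifest_origin)

def add_entry(cache, manifest_url, resource_url, response_headers, body):
    if not may_cache(manifest_url, resource_url, response_headers):
        raise UpdateFailed(resource_url)  # update fails, per the proposal
    cache[resource_url] = body
```

Under this sketch a gstatic-style host would opt a resource in simply by serving the header on it; anything without the header keeps failing exactly as today.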

..
On Wed, Jan 26, 2011 at 12:50 AM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 25 Jan 2011 23:37:55 +0100, Michael Nordman micha...@google.com
 wrote:

 Would the public-webapps list be better for discussing appcache
 feature requests?

 It's not a feature drafted in any of the WebApps WG specifications. If you
 want to discuss at the W3C the appropriate place would be the HTML WG.

 Also,
 http://wiki.whatwg.org/wiki/FAQ#Is_there_a_process_for_adding_new_features_to_a_specification.3F
 might be interesting. (Though you are probably aware of it.)


 This could be as simple as the presence of an
 'applicationcaching_allowed' file at the top level. An https manifest
 update that wants to retrieve resources from another https origin
 would first have to fetch the 'allow' file and see an expected
 response, and if it doesn't see a good response, those xorigin entries
 would be skipped (matching today's behavior).

 The request...

 GET /applicationcaching_allowed
 Referer: manifestUrl of the cache trying to include resources from this
 host

 The expected response headers...

 HTTP/1.x 200 OK
 Content-Type: text/plain

 The expected response body...

 Allowed:*

 So far we have avoided this type of design as it is rather brittle. Maybe
 CORS can be used?


 --
 Anne van Kesteren
 http://annevankesteren.nl/



Re: [whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-01-25 Thread Michael Nordman
Would the public-webapps list be better for discussing appcache
feature requests?

This could be as simple as the presence of an
'applicationcaching_allowed' file at the top level. An https manifest
update that wants to retrieve resources from another https origin
would first have to fetch the 'allow' file and see an expected
response, and if it doesn't see a good response, those xorigin entries
would be skipped (matching today's behavior).

The request...

GET /applicationcaching_allowed
Referer: manifestUrl of the cache trying to include resources from this host

The expected response headers...

HTTP/1.x 200 OK
Content-Type: text/plain

The expected response body...

Allowed:*
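A sketch of the client-side check this proposal implies; the file name, the Referer usage, and the Allowed: syntax are taken from the proposal above, while http_get and the parsing details are hypothetical:

```python
# Illustrative check for the proposed 'applicationcaching_allowed'
# file; http_get is a stand-in for the user agent's fetch machinery.
from urllib.parse import urlparse

def allow_file_permits(host_origin, manifest_url, http_get):
    """True if host_origin's allow file opts in to manifest_url's origin."""
    status, headers, body = http_get(
        host_origin + "/applicationcaching_allowed",
        {"Referer": manifest_url})
    if status != 200:
        return False
    if headers.get("Content-Type", "").split(";")[0].strip() != "text/plain":
        return False
    p = urlparse(manifest_url)
    manifest_origin = f"{p.scheme}://{p.netloc}"
    for line in body.splitlines():
        if line.startswith("Allowed:"):
            allowed = [v.strip() for v in line[len("Allowed:"):].split(",")]
            if "*" in allowed or manifest_origin in allowed:
                return True
    return False  # no good response: skip the cross-origin entries
```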


On Thu, Jan 13, 2011 at 3:08 PM, Michael Nordman micha...@google.com wrote:

 AppCache feature request: An https manifest should be able to list
 resources from other https origins.

 I've got some app developers asking for this feature. Currently, it's
 explicitly disallowed by the spec for valid security reasons, but
 there are also valid reasons to have this capability, like a webapp
 that uses resources hosted on gstatic.

 Seems like a robots.txt like scheme where a site like gstatic can
 declare that it's OK to appcache me from elsewhere is needed.

 I've opened a chromium bug for this here...
 http://code.google.com/p/chromium/issues/detail?id=69594


[whatwg] AppCache feature request: An https manifest should be able to list resources from other https origins.

2011-01-13 Thread Michael Nordman
AppCache feature request: An https manifest should be able to list
resources from other https origins.

I've got some app developers asking for this feature. Currently, it's
explicitly disallowed by the spec for valid security reasons, but
there are also valid reasons to have this capability, like a webapp
that uses resources hosted on gstatic.

Seems like a robots.txt like scheme where a site like gstatic can
declare that it's OK to appcache me from elsewhere is needed.

I've opened a chromium bug for this here...
http://code.google.com/p/chromium/issues/detail?id=69594


[whatwg] Application Cache for on-line sites

2010-12-20 Thread Michael Nordman
This type of request (see forwarded message below) to utilize the
application cache for subresource loads into documents that are not stored
in the cache has come up several times now. The current feature set is very
focused on the offline use case. Is it worth making additions such that a
document that loads from a server can utilize the resources in an appcache?

Today we have html manifest=manifestFile, which adds the document
containing this tag to the appcache and associates that doc with that
appcache such that subresource loads hit the appcache.

Not a complete proposal, but...

What if we had something along the lines of html
useManifest=manifestFile, which would do the association of the doc with
the appcache (so subresource loads hit the cache) but not add the document
to the cache?
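The distinction being proposed can be modeled as two sets: manifest= puts the document in both, while useManifest= would only create the association. The attribute handling and helper names below are hypothetical, not spec text:

```python
# Illustrative model of the proposed useManifest attribute. The
# attribute and helper names are hypothetical, not spec text.
def process_html_tag(doc_url, attrs, cache):
    if "manifest" in attrs:
        cache["entries"].add(doc_url)     # document is stored in the cache
        cache["associated"].add(doc_url)  # subresource loads hit the cache
    elif "useManifest" in attrs:
        cache["associated"].add(doc_url)  # association only; doc stays uncached
```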


-- Forwarded message --
From: UVL andrea.do...@gmail.com
Date: Sun, Dec 19, 2010 at 1:35 PM
Subject: [chromium-html5] Application Cache for on-line sites
To: Chromium HTML5 chromium-ht...@chromium.org


I tried to use the HTML5 Application Cache to improve the performance
of on-line sites (all the tutorials on the web write only about usage
with off-line apps)

I created the manifest listing all the js, css and images, and the
performance was really exciting, until I found that even the page
HTML was cached, even though it was not listed in the manifest. The pages
of the site are in PHP, so I don't want them to be cached.

From
http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
:
Authors are encouraged to include the main page in the manifest also,
but in practice the page that referenced the manifest is automatically
cached even if it isn't explicitly mentioned.

Is there a way to have this automatic caching disabled?

Note: I know that caching can be controlled via HTTP headers, but I
just wanted to try this way as it looks quite reliable, clean and
powerful.

--
You received this message because you are subscribed to the Google Groups
Chromium HTML5 group.
To post to this group, send email to chromium-ht...@chromium.org.
To unsubscribe from this group, send email to
chromium-html5+unsubscr...@chromium.org.
For more options, visit this group at
http://groups.google.com/a/chromium.org/group/chromium-html5/?hl=en.


[whatwg] Clarification of the relative priorities of network vs fallback namespaces for main vs sub resource loads.

2010-11-11 Thread Michael Nordman
In section  6.6.6 Changes to the networking model which applies to
sub resource loads, step 3 prevents returning fallback resources for
requested urls that fall into a network namespace.

In section 6.5.1 Navigating across documents which applies to main
resource loads, step 17 does not explicitly exclude returning
fallbacks for such urls.

I doubt this difference is intentional, looks like step 17 needs some
additional words...

If the resource was not fetched from an application cache, and was to
be fetched using HTTP GET or equivalent, and its URL matches the
fallback namespace but not the network namespace of one or more
relevant application caches...


Re: [whatwg] Foreign fallback appcache entries

2010-10-01 Thread Michael Nordman
Right you are!

That's pathological content, listing a page as a fallback but having
that page refer to a different manifest. What behavior results from
the currently spec'd algorithms, repeatedly trying to load the url,
finding the fallback, detecting foreign'ness and retrying? Or is the
foreign bit examined when loading fallbacks for top level pages?

Good, the latter. Looks like foreign fallbacks resources are weeded
out in 6.5.1 Navigating across documents, step 16.

So you're looking for  a note that says by the way, fallbacks may get
marked as foreign too and be excluded from loads for script context
navigations as a result. That makes sense.

On Thu, Sep 30, 2010 at 7:29 PM, Alexey Proskuryakov a...@webkit.org wrote:

 Foreign means that the main resource has a different manifest than the one 
 referencing it. E.g.

 foo.manifest:
 CACHE MANIFEST
 iframe.html

 iframe.html:
 html manifest=bar.manifest
 ...

 So, I don't think that the same origin requirement answers this. WebKit bug 
 https://bugs.webkit.org/show_bug.cgi?id=44406 has a live demo (Firefox 
 3.6.10 handles it correctly, according to my reading of the spec).

 - WBR, Alexey Proskuryakov


 On 30.09.2010, at 18:21, Michael Nordman wrote:

 I don't think 'fallback' entries can be foreign because they must be
 of the same-origin as the manifest.

 Fallback namespaces and fallback entries must have the same origin as
 the manifest itself.

 On Thu, Sep 30, 2010 at 4:38 PM, Alexey Proskuryakov a...@webkit.org wrote:

 In definitions of application cache entry categories, it's mentioned that 
 an explicit entry can also be marked as foreign. This contrasts with 
 fallback entries, for which no such notice is made.

 It still appears that the intention was for fallback entries to sometimes 
 be foreign - in particular, section 6.5.1 says Let candidate be the 
 fallback resource and then If candidate is not marked as foreign...

 I found it confusing that there is a specific mention of foreign for 
 explicit entries, but not for fallback ones.

 - WBR, Alexey Proskuryakov







Re: [whatwg] Foreign fallback appcache entries

2010-09-30 Thread Michael Nordman
I don't think 'fallback' entries can be foreign because they must be
of the same-origin as the manifest.

Fallback namespaces and fallback entries must have the same origin as
the manifest itself.

On Thu, Sep 30, 2010 at 4:38 PM, Alexey Proskuryakov a...@webkit.org wrote:

 In definitions of application cache entry categories, it's mentioned that an 
 explicit entry can also be marked as foreign. This contrasts with fallback 
 entries, for which no such notice is made.

 It still appears that the intention was for fallback entries to sometimes be 
 foreign - in particular, section 6.5.1 says Let candidate be the fallback 
 resource and then If candidate is not marked as foreign...

 I found it confusing that there is a specific mention of foreign for explicit 
 entries, but not for fallback ones.

 - WBR, Alexey Proskuryakov




Re: [whatwg] Cache manifests and cross-origin resources

2010-08-27 Thread Michael Nordman
On Fri, Aug 27, 2010 at 11:21 AM, Ian Hickson i...@hixie.ch wrote:

 On Fri, 27 Aug 2010, Anne van Kesteren wrote:
 
  With the current model makingyourinterwebsboring.com can define a cache
  manifest and basically point to a lot of external news sites. When any
  of those sites is then fetched directly they would be taken from the
  cache. That does not seem optimal.

 They'd only be fetched from the cache if they had a manifest= attribute
 pointing to the same manifest.


... and since they're from a different origin than the manifest url, that
can't happen.

Resources from an origin different than the manifest's origin will be
cached, but they will never be used to satisfy a frame navigation. They're
only eligible to be loaded as subresources into pages/workers that are
associated with the cache containing those resources.


 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] About bypassing caches for URLs listed in Fallback and/or Network section in a HTML5 Offline Web Application

2010-07-13 Thread Michael Nordman
Like Opera, Chrome respects the http cache when first attempting to fetch
normally a resource that falls in a fallback namespace. I'm reasonably
certain WebKit does the same.

Normal means just that... first do what the browser would do if there was
no Application Cache... only if that fails to produce a good response, do
something extra.

On Tue, Jul 13, 2010 at 7:19 AM, Shwetank Dixit shweta...@opera.com wrote:

 At least in Opera, it will still respect the browser's normal cache header.
 So the network section header will just bypass the application cache, and
 will load normally like any other web page, which means respecting (i.e, not
 bypassing) the normal cache.

 On Tue, 13 Jul 2010 19:28:55 +0530, Lianghui Chen liac...@rim.com wrote:


 Anyone has any comments?

 From: whatwg-boun...@lists.whatwg.org [mailto:
 whatwg-boun...@lists.whatwg.org] On Behalf Of Lianghui Chen
 Sent: Monday, July 12, 2010 4:12 PM
 To: whatwg@lists.whatwg.org
 Subject: [whatwg] About bypassing caches for URLs listed in Fallback
 and/or Network section in a HTML5 Offline Web Application

 Hi,

 In spec HTML5 for offline web application (
 http://www.whatwg.org/specs/web-apps/current-work/#offline) chapter
 6.6.6, items 3, 4, and 5 state that a resource that is in the online whitelist (or
 matched by the wildcard whitelist) or the fallback list should be fetched normally.

 I would like to know does it mean the user agent (browser) should bypass
 its own caches (besides html5 appcache), like the WebKit cache and browser
 http stack cache?

 Best Regards
 Lyon Chen


 -
 This transmission (including any attachments) may contain confidential
 information, privileged material (including material protected by the
 solicitor-client or other applicable privileges), or constitute non-public
 information. Any use of this information by anyone other than the intended
 recipient is prohibited. If you have received this transmission in error,
 please immediately reply to the sender and delete this information from your
 system. Use, dissemination, distribution, or reproduction of this
 transmission by unintended recipients is not authorized and may be unlawful.



 --
 Shwetank Dixit
 Web Evangelist,
 Site Compatibility / Developer Relations / Consumer Products Group
 Member - W3C Mobile Web for Social Development (MW4D) Group
 Member - Web Standards Project (WaSP) - International Liaison Group
 Opera Software - www.opera.com

 Using Opera's revolutionary e-mail client: http://www.opera.com/mail/



Re: [whatwg] Comment on 6.6 Offline Web Applications

2010-06-03 Thread Michael Nordman
There is a way for the application to remove itself from the cache. If
fetching the manifestUrl returns a 404 or 410 response, all traces of that
manifestUrl are deleted from the cache. See section 6.6.4.
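
Concretely, once the server starts answering the manifest request like this, the user agent discards the associated cache (a sketch of the exchange, not the full spec'd algorithm):

```
GET /app.manifest HTTP/1.1
Host: example.com

HTTP/1.1 410 Gone
```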

On Thu, Jun 3, 2010 at 12:30 PM, Peter Beverloo pe...@lvp-media.com wrote:

 On Thu, Jun 3, 2010 at 15:01, Daniel Glazman
 daniel.glaz...@disruptive-innovations.com wrote:
 
  Hi there,
 
  I noticed the Application Cache does not allow to remove
  a single cached web application from the cache. Is that on
  purpose?
  I am under the impression the cache is here more an offline
  storage for webapps than a normal cache, and that in the long
  run browsers will have to implement an Offline Web Apps manager.
  Since the user is supposedly already provided with a dialog
  asking permission to make a webapp available offline, it makes
  sense to give him/her a way to remove a single application from
  the cache.
 
  /Daniel

 Section 6.6.7 talks about expiration of cached data [1], but also
 includes a few notes about removing items from the store. It
 specifically states that user-agents could have a delete
 site-specific data feature which also covers removing application
 caches, but also hints towards a feature that removes caches on
 request of the user.

 The API does not state a way allowing an application to remove itself
 from the cache, which could be desirable for web authors. If there's
 interest for such an addition I'm willing to make a proposal, as it
 isn't hard to think about use-cases for such a feature.

 Regards,
 Peter Beverloo

 [1]
 http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html#expiring-application-caches



Re: [whatwg] WebSockets: what to do when there are too many open connections

2010-05-13 Thread Michael Nordman
I think that queuing in chrome bends the intent of the createWorker api just
a little too far and will be happy to see it go away. I'd rather it failed
outright than pretend to succeed when it really hasn't.

(Actually that queuing code complicates the impl somewhat too... can you
tell it's been annoying me recently ;)

On Thu, May 13, 2010 at 5:36 PM, Dmitry Titov dim...@chromium.org wrote:

 As an example from a bit different area, in Chrome the Web Workers today
 require a separate process per worker. It's not good to create too many
 processes so there is a relatively low limit per origin and higher total
 limit. Two limits help avoid situation when 1 bad page affects others. Once
 limit is reached, the worker objects are created but queued, on a theory
 that pages of the same origin could cooperate, while if a total limit is
 blown then hopefully it's a temporary condition. Not ideal but if there
 should be a limit we thought having 2 limits (origin/total) is better than
 having only a total one.



 On Thu, May 13, 2010 at 4:55 PM, Perry Smith pedz...@gmail.com wrote:


 On May 13, 2010, at 12:40 PM, Mike Shaver wrote:

  The question is whether you queue or give an error.  When hitting the
  RFC-ish per-host connection limits, browsers queue additional requests
  from img or such, rather than erroring them out.  Not sure that's
  the right model here, but I worry about how much boilerplate code
  there will need to be to retry the connection (asynchronously) to
  handle failures, and whether people will end up writing it or just
  hoping for the best.

 Ah.  Thats a good question.  (Maybe that was the original question.)

 Since web sockets is the topic and as far as I know web sockets are only
 used by javascript, I would prefer an error over queuing them up.

 I think javascript and browser facilities have what is needed to create
 its own retry mechanism if that is what a particular situation wants.  I
 don't see driving the retry via a scripting language to be bad.  Its not
 that hard and it won't happen that often.  And it gives the javascript
 authors more control and choices.

 Thats my vote...

 pedz
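
A script-driven retry of the kind described above could be quite small. A hedged sketch (connectWithRetry, the attempt counts, and the delays are all illustrative; connect would be something like () => openSocket(url) in a real page):

```javascript
// Retry an async connect function with exponential backoff.
// `connect` must return a promise that rejects on failure.
function connectWithRetry(connect, maxAttempts = 5, baseDelayMs = 100) {
  return new Promise((resolve, reject) => {
    let attempt = 0;
    const tryOnce = () => {
      attempt += 1;
      connect().then(resolve, (err) => {
        if (attempt >= maxAttempts) {
          reject(err); // give up: surface the error to the caller
        } else {
          // back off: 100ms, 200ms, 400ms, ...
          setTimeout(tryOnce, baseDelayMs * 2 ** (attempt - 1));
        }
      });
    };
    tryOnce();
  });
}
```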





Re: [whatwg] WebSockets: what to do when there are too many open connections

2010-05-12 Thread Michael Nordman
On Wed, May 12, 2010 at 4:31 AM, Simon Pieters sim...@opera.com wrote:

 establishing a WebSocket connection:

 [[
 Note: There is no limit to the number of established WebSocket connections
 a user agent can have with a single remote host. Servers can refuse to
 connect users with an excessive number of connections, or disconnect
 resource-hogging users when suffering high load.
 ]]

 Still, it seems likely that user agents will want to have limits on the
 number of established WebSocket connections, whether to a single remote host
 or multiple remote hosts, in a single tab or overall. The question is what
 should be done when the user agent-defined limit of established connections
 has been reached and a page tries to open another WebSocket.

 I think just waiting for other WebSockets to close is not good. It just
 means that newly loaded pages don't work.


Agreed, not good. The intent of the api is to start opening a socket now,
not at some undefined point in the future after the user has taken some
undefined action (hey user... please close a tab that has a socket open...
not particularly user actionable).


 If there are any WebSockets in CLOSING state, then I think we should wait
 until they have closed. Otherwise, I think we should force close the oldest
 WebSocket.


Force closing the oldest is not good. A malicious site could cause all open
sockets to be closed. Also this would have nasty side effects. Consider a
memory allocator that just deleted the oldest allocation to make room for
new allocations; far-removed things just start failing in odd ways... no
thank you.

An obvious way to handle this condition of too many sockets are open is to
fail to open the new socket with an exception/error which indicates that
condition.



 --
 Simon Pieters
 Opera Software



Re: [whatwg] Storage quota introspection and modification

2010-03-11 Thread Michael Nordman
On Thu, Mar 11, 2010 at 1:29 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 2010/3/11 Ian Fette (イアンフェッティ) ife...@google.com:
  Yes, but I think there may be uses of things like storage for non-offline
  uses (pre-fetching email attachments, saving an email that is in a draft
  state etc.)  If it's relatively harmless, like 1mb usage, I don't want to
  pop up an infobar, I just want to allow it. So, I don't really want to
 have
  an infobar each time a site uses one of these features for the first
 time,
  I'd like to allow innocuous use if possible. But at the same time, I want
  apps to be able to say up front, at a time when the user is thinking
 about
  it (because they just clicked something on the site, presumably) here's
  what I am going to need.

 This is precisely my preferred interaction model as well.  Absolutely
 silent use of a relatively small amount of resources, just like
 cookies are done today, but with a me-initiated ability to authorize
 it to act like a full app with unlimited resources (or at least much
 larger resources).


In addition to more storage, another difference that ideally would come
along with the full-app-privilege is for the user agent to avoid evicting
that data. So stronger promises about keeping that data around relative to
unprivileged app data.

Also, this is being discussed in terms of apps. Can more than one app be
hosted on the same site? And if so, how can their stored resources be
distinguished?


 ~TJ



Re: [whatwg] Adding FormData support to form

2010-02-26 Thread Michael Nordman
On Fri, Feb 19, 2010 at 8:52 AM, Dmitry Titov dim...@google.com wrote:

 On Thu, Feb 18, 2010 at 8:45 PM, Maciej Stachowiak m...@apple.com wrote:


 On Feb 17, 2010, at 3:15 PM, Jonas Sicking wrote:

  The reason this is a function rather than a read-only attribute is to
 allow the returned FormData to be further modified. I.e. the following
 should be allowed:

 fd = myFormElement.getFormData();
 fd.append(foo, bar);
 xhr.send(fd);

 If it was a property I would be worried about people expecting the
 following to work:
 myFormElement.formData.append(foo, bar);
 xhr.send(myFormElement.formData);

 However I don't think there is a good way to make the above work. Thus
 my suggestion to use a function instead. I'm writing a prototype
 implementation over in [2]


 People could imagine that this should work:

 myFormElement.getFormData().append(foo, bar);
 xhr.send(myFormElement.getFormData());

 In either case, it seems that once they see it doesn't work, they will no
 longer expect it to work.


 Sure, but a better name could help a bit. For example, this produces a
 'shared' object:

 document.getElementById(foo)

 while this creates a new one:

 myFormElement.getFormData()

 It might be ok, but it is a bit inconsistent.

 Why not:
 formData = new FormData();
 formData = new FormData(myFormElement);


ah... +1 the ctor




 Regards,
 Maciej





Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Michael Nordman
The lack of support for text drawing in the worker context seems like a
short sighted mistake. I understand there may be implementation issues in
some browsers, but lack of text support feels like a glaring omission spec
wise.

On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:

 I've talked with some other folks on WebKit (Maciej and Oliver) about
 having a canvas that is available to workers. They suggested some nice
 modifications to make it an offscreen canvas, which may be used in the
 Document or in a Worker.

 Proposal:
 Introduce an OffscreenCanvas which may be created from a Document or a
 Worker context.

 interface OffscreenCanvas {
  attribute unsigned long width;
  attribute unsigned long height;
 DOMString toDataURL (in optional DOMString type, in any... args);
 object getContext(in DOMString contextId);
 };


 When it is created in the Worker context, OffscreenCanvas.getContext(2d)
 returns a CanvasWorkerContext2D. In the Document context, it returns a
 CanvasRenderingContext2D.

 The base class for both CanvasWorkerContext2D and CanvasRenderingContext2D
 is CanvasContext2D. CanvasContext2D is just like a CanvasRenderingContext2D
 except for omitting the font methods and any method which uses HTML
 elements. It does have some replacement methods for createPattern/drawImage
 which take an OffscreenCanvas. The canvas object attribute is either a
 HTMLCanvasElement or an OffscreenCanvas depending on what the canvas context
 came from.

 interface CanvasContext2D {
 readonly attribute object canvas;

 void save();
 void restore();

 void scale(in float sx, in float sy);
 void rotate(in float angle);
 void translate(in float tx, in float ty);
 void transform(in float m11, in float m12, in float m21, in float
 m22, in float dx, in float dy);
 void setTransform(in float m11, in float m12, in float m21, in
 float m22, in float dx, in float dy);

  attribute float globalAlpha;
  attribute [ConvertNullToNullString] DOMString
 globalCompositeOperation;

 CanvasGradient createLinearGradient(in float x0, in float y0, in
 float x1, in float y1)
 raises (DOMException);
 CanvasGradient createRadialGradient(in float x0, in float y0, in
 float r0, in float x1, in float y1, in float r1)
 raises (DOMException);
 CanvasPattern createPattern(in OffscreenCanvas image, in DOMString
 repetition);

  attribute float lineWidth;
  attribute [ConvertNullToNullString] DOMString lineCap;
  attribute [ConvertNullToNullString] DOMString lineJoin;
  attribute float miterLimit;

  attribute float shadowOffsetX;
  attribute float shadowOffsetY;
  attribute float shadowBlur;
  attribute [ConvertNullToNullString] DOMString shadowColor;

 void clearRect(in float x, in float y, in float width, in float
 height);
 void fillRect(in float x, in float y, in float width, in float
 height);
 void strokeRect(in float x, in float y, in float w, in float h);

 void beginPath();
 void closePath();
 void moveTo(in float x, in float y);
 void lineTo(in float x, in float y);
 void quadraticCurveTo(in float cpx, in float cpy, in float x, in
 float y);
 void bezierCurveTo(in float cp1x, in float cp1y, in float cp2x, in
 float cp2y, in float x, in float y);
 void arcTo(in float x1, in float y1, in float x2, in float y2, in
 float radius);
 void rect(in float x, in float y, in float width, in float height);
 void arc(in float x, in float y, in float radius, in float
 startAngle, in float endAngle, in boolean anticlockwise);
 void fill();
 void stroke();
 void clip();
 boolean isPointInPath(in float x, in float y);

 void drawImage(in OffscreenCanvas image, in float dx, in float dy,
 in optional float dw, in optional float dh);
 void drawImage(in OffscreenCanvas image, in float sx, in float sy,
 in float sw, in float sh, in float dx, in float dy, in float dw, in float
 dh);

 // pixel manipulation
 ImageData createImageData(in float sw, in float sh)
 raises (DOMException);
 ImageData getImageData(in float sx, in float sy, in float sw, in
 float sh)
 raises(DOMException);
 void putImageData(in ImageData imagedata, in float dx, in float dy,
 in optional float dirtyX, in optional float dirtyY, in optional float
 dirtyWidth, in optional float dirtyHeight);
 };

 interface CanvasWorkerContext2D : CanvasContext2D {
 };

 interface CanvasRenderingContext2D : CanvasContext2D {
   CanvasPattern createPattern(in HTMLImageElement image, in
 DOMString repetition);
  CanvasPattern createPattern(in HTMLCanvasElement 

Re: [whatwg] Appcache feedback

2009-12-17 Thread Michael Nordman
On Thu, Dec 17, 2009 at 2:17 PM, Joseph Pecoraro joepec...@gmail.com wrote:

 On Dec 17, 2009, at 4:44 PM, Ian Hickson wrote:

 Another conforming sequence of events would be:

 1. The parser's first parsing task begins.
 2. As soon as the manifest= attribute is parsed, the application cache
 download process begins. It queues a task to dispatch the 'checking'
 event.
 3. The parser's first parsing task ends.
 4. The event loop spins, and runs the next task, which is the 'checking'
 event. No scripts have yet run, so no handlers are registered. Nothing
 happens.
 5. The parser's second parsing task begins. It will parse the script, etc.

 [snip moved below..]


 We could delay the application cache download process so that it doesn't
 start until after the 'load' event has fired. Does anyone have an opinion
 on this?


I don't think we'd have to delay the update job, just the delivery of any
events associated with the appcache until 'load' has happened to get the
desired effect.



 From an application developer standpoint I think that would be very nice. I
 cannot comment on this from an implementor's perspective (yet).

 It seems important considering the following text from the Spec:

 http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html#downloading-or-updating-an-application-cache

 [[
 Certain events fired during the application cache download process allow
 the script to override the display of such an interface. The goal of this is
 to allow Web applications to provide more seamless update mechanisms, hiding
 from the user the mechanics of the application cache mechanism. User agents
 may display user interfaces independent of this, but are encouraged to not
 show prominent update progress notifications for applications that cancel
 the relevant events.
 ]]

 It seems pointless to provide hooks in the API that allow for a custom
 interface, when fundamentally they may never be triggered in an otherwise
 compliant user agent.


 You can work around all this by checking the .status attribute when you
 first hook up the event listeners.


 This is even worse. To me, this means the application developer cannot rely
 on certain (any?) events, so he/she would need to build a completely new API
 on top?  Developers will still likely be caught searching for bugs in their
 own code (like I did) wondering why behavior is different between browsers.


 Thanks for the quick response,
 - Joe
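
The .status workaround Ian mentions boils down to checking for missed state when attaching listeners. A hedged sketch (watchAppCache is an illustrative helper; cache would be window.applicationCache in a real page, and 4 is the spec's UPDATEREADY value):

```javascript
// Attach an 'updateready' listener, but also check the current
// status, since the event may have fired before this script ran.
function watchAppCache(cache, onUpdateReady) {
  cache.addEventListener('updateready', onUpdateReady);
  if (cache.status === 4 /* UPDATEREADY */) {
    onUpdateReady();
  }
}
```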



Re: [whatwg] Application cache updating synchronicity

2009-12-04 Thread Michael Nordman
My interpretation of this is that atomically is not to be read as
synchronously with regard to html tag parsing
(which ultimately initiates the update algorithm). Otherwise step 1 could
leave a blank page on the screen indefinitely (which is not the intent
afaict).

A clarification in the spec would help.

Even in the absence of async user-interactions alluded to by step 1,
allowing for async'ness at this point is beneficial since initial page
loading can make progress w/o having to consult the appcache prior to
getting past the html tag.

On Fri, Dec 4, 2009 at 2:01 PM, Alexey Proskuryakov a...@webkit.org wrote:

 Recently, a new step was prepended to the application cache update
 algorithm:

 1. Optionally, wait until the permission to start the application cache
 download process has been obtained from the user and until the user agent is
 confident that the network is available. This could include doing nothing
 until the user explicitly opts-in to caching the site, or could involve
 prompting the user for permission. The algorithm might never get past this
 point. (This step is particularly intended to be used by user agents running
 on severely space-constrained devices or in highly privacy-sensitive
 environments).

 It's not clear if it's supposed to be synchronous or not. The doing nothing
 clause suggests that page loading can continue normally. On the other hand,
 the algorithm says that asynchronous processing only begins after step 2,
 which runs atomically.

 - WBR, Alexey Proskuryakov




Re: [whatwg] Please always use utf-8 for Web Workers

2009-09-28 Thread Michael Nordman
Leaving legacy encodings behind would be a good thing if we can get away
with it... jmho.

On Mon, Sep 28, 2009 at 9:59 AM, Drew Wilson atwil...@google.com wrote:

 Ah, sorry for the confusion - my use of default was indeed sloppy. I'm
 saying that if the server is explicitly specifying the charset either via a
 header or via BOMs, it seems bad to ignore it since there's no other way to
 override the charset.
 I understand your point, though - since workers don't inherit the document
 encoding from their parent, they may indeed decode a given resource
 differently if the server isn't specifying a charset in some way.

 -atw


  On Mon, Sep 28, 2009 at 4:47 AM, Anne van Kesteren ann...@opera.com wrote:

 On Fri, 25 Sep 2009 19:34:18 +0200, Drew Wilson atwil...@google.com
 wrote:

 Again, apologies if I'm misunderstanding the suggestion.


 I thought that by default encoding you meant the encoding that would be
 used if other means of getting the encoding failed. If there is only one
 encoding it is not exactly the default, since it cannot be changed.



 --
 Anne van Kesteren
 http://annevankesteren.nl/





Re: [whatwg] Cache Manifest: why have NETWORK?

2009-09-24 Thread Michael Nordman
The relative priorities of FALLBACK vs CACHED vs NETWORK are somewhat
arbitrary, it has to be spelled out in the spec, but how they should be
spelled out is anybody's guess. The current draft puts a stake in the ground
in section 6.9.7 (and also around where frame navigations are spelled out)
such that...
if (url is CACHED)
  return cached_response;
if (url has FALLBACK)
  return response_as_usual_unless_fallback_conditions_are_met_by_that_response;
if (url is in NETWORK namespace)
  return response_as_usual;
otherwise
  return synthesized_error_response;

Sounds like you may be warming up to make a case for something like...

if (url is in NETWORK namespace)
  return response_as_usual;
if (url is CACHED)
  return cached_response;
if (url has FALLBACK)
  return response_as_usual_unless_fallback_conditions_are_met_by_that_response;
otherwise
  return synthesized_error_response;

That probably makes sense too in some use cases. Without practical
experience with this thing, its difficult to 'guess' which is of more use.
Really these aren't mutually exclusive at all...

if (url is in NETWORK namespace)
  return response_as_usual;
if (url is CACHED)
  return cached_response;
if (url has FALLBACK)
  return response_as_usual_unless_fallback_conditions_are_met_by_that_response;
if (url is in FALLTHRU namespace)
  return response_as_usual;
otherwise
  return synthesized_error_response;

Notice the distinction between NETWORK vs FALLTHRU both of which hit the
wire.

Cheers
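
The first ordering above can be sketched as a small resolver. Everything here is illustrative (the cache shape and simple prefix matching are assumptions, and the "fetch failed" condition for fallbacks is elided):

```javascript
// Resolve a subresource URL following CACHED -> FALLBACK -> NETWORK.
function resolve(url, cache) {
  if (cache.cached.has(url)) {
    return { from: 'cache', url };
  }
  const fb = cache.fallback.find((ns) => url.startsWith(ns.prefix));
  if (fb) {
    // Fetch as usual; the fallback entry is used only if that fails.
    return { from: 'network-with-fallback', fallbackUrl: fb.entry };
  }
  if (cache.network.some((prefix) => url.startsWith(prefix))) {
    return { from: 'network', url };
  }
  return { from: 'error' }; // synthesized error response
}
```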



On Thu, Sep 24, 2009 at 1:43 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 23 Sep 2009 20:09:03 +0200, Michael Nordman micha...@google.com
 wrote:

 For cases where you don't want to, or can't,  'fallback' on a cached
 resource.
 ex 1.

 http://server/get/realtime/results/from/the/outside/world
 Creating a fallback resource with a mock error or empty response is busy work.

 ex 2.

 http://server/change/some/state/on/server/side?id=x&newValue=y
 Ditto


 You could fallback to a non-existing fallback or some such. But if it is
 really needed NETWORK should get priority over FALLBACK in my opinion (or at
 least the subset of NETWORK that is not a wildcard) so in cases like this

  FALLBACK:
  / /fallback

  NETWORK
  /realtime-api
  /update

 ... you do not get /fallback all the time.



 --
 Anne van Kesteren
 http://annevankesteren.nl/



Re: [whatwg] Cache Manifest: why have NETWORK?

2009-09-24 Thread Michael Nordman
I probably should've held the semantics of NETWORK constant in my earlier
notes, and alluded to a new FALLTHRU section that has the *this section gets
examined first* characteristic... same thing with the names changed to
protect the innocent bystanders (those using NETWORK namespaces already).

On Thu, Sep 24, 2009 at 1:49 PM, Michael Nordman micha...@google.com wrote:

 The relative priorities of FALLBACK vs CACHED vs NETWORK are somewhat
 arbitrary, it has to be spelled out in the spec, but how they should be
 spelled out is anybody's guess. The current draft puts a stake in the ground
 in section 6.9.7 (and also around where frame navigations are spelled out)
 such that...
 if (url is CACHED)
   return cached_response;
 if (url has FALLBACK)
   return response_as_usual_unless_fallback_conditions_are_met_by_that_response;
 if (url is in NETWORK namespace)
   return response_as_usual;
 otherwise
   return synthesized_error_response;

 Sounds like you may be warming up to make a case for something like...

 if (url is in NETWORK namespace)
   return response_as_usual;
 if (url is CACHED)
   return cached_response;
 if (url has FALLBACK)
   return response_as_usual_unless_fallback_conditions_are_met_by_that_response;
 otherwise
   return synthesized_error_response;

 That probably makes sense too in some use cases. Without practical
 experience with this thing, its difficult to 'guess' which is of more use.
 Really these aren't mutually exclusive at all...

 if (url is in NETWORK namespace)
   return response_as_usual;
 if (url is CACHED)
   return cached_response;
 if (url has FALLBACK)
   return response_as_usual_unless_fallback_conditions_are_met_by_that_response;
 if (url is in FALLTHRU namespace)
   return response_as_usual;
 otherwise
   return synthesized_error_response;

 Notice the distinction between NETWORK vs FALLTHRU both of which hit the
 wire.

 Cheers



  On Thu, Sep 24, 2009 at 1:43 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 23 Sep 2009 20:09:03 +0200, Michael Nordman micha...@google.com
 wrote:

 For cases where you don't want to, or can't,  'fallback' on a cached
 resource.
 ex 1.

  http://server/get/realtime/results/from/the/outside/world
  Creating a fallback resource with a mock error or empty response is busy work.

 ex 2.

  http://server/change/some/state/on/server/side?id=x&newValue=y
 Ditto


 You could fallback to a non-existing fallback or some such. But if it is
 really needed NETWORK should get priority over FALLBACK in my opinion (or at
 least the subset of NETWORK that is not a wildcard) so in cases like this

  FALLBACK:
  / /fallback

  NETWORK
  /realtime-api
  /update

 ... you do not get /fallback all the time.



 --
 Anne van Kesteren
 http://annevankesteren.nl/





[whatwg] Fwd: [gears-users] Version in manifest is not enough

2009-09-23 Thread Michael Nordman
Food for thought around the AppCache feature, which can be similarly affected
I think.

-- Forwarded message --
From: emu emu.hu...@gmail.com
Date: Sun, Sep 20, 2009 at 8:31 AM
Subject: [gears-users] Version in manifest is not enough
To: Gears Users gears-us...@googlegroups.com



Sometimes the response of the web server is changed by proxy
servers or even the ISPs (sometimes they try to add some ads or filter
something), and Google Gears will cache the changed content and will
not check it again until the manifest file changes. Sometimes the
manifest file changes before the files it lists have changed, because
of CDN propagation delays. When this happens, wrongly captured files
become a disaster. It's very hard to find out what happened in the
clients' browsers.
If Gears had a second-check option, like this:
   { url: "abc.html", md5: "92d9b0186331ec3e" }
it would be very, very helpful.


Re: [whatwg] Cache Manifest: why have NETWORK?

2009-09-23 Thread Michael Nordman
For cases where you don't want to, or can't,  'fallback' on a cached
resource.
ex 1.

http://server/get/realtime/results/from/the/outside/world
Creating a fallback resource with a mock error or empty response is busy work.

ex 2.

http://server/change/some/state/on/server/side?id=xnewValue=y
Ditto

On Wed, Sep 23, 2009 at 2:47 AM, Anne van Kesteren ann...@opera.com wrote:

 If you use a fallback namespace it will always try to do a network fetch
 before using the fallback entry so why is there a need for a NETWORK entry
 in the cache manifest?


 --
 Anne van Kesteren
 http://annevankesteren.nl/



Re: [whatwg] Using Web Workers without external files

2009-09-23 Thread Michael Nordman
Data urls seem like a real good fit for dedicated workers.

On Wed, Sep 23, 2009 at 5:35 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Sep 23, 2009 at 2:51 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  On Wed, Sep 23, 2009 at 4:40 PM, Jonas Sicking jo...@sicking.cc wrote:
  I think making data: urls is an ok solution,
 
  At the moment, data: urls don't seem to be usable with Workers due to
  same-origin restrictions; canvas methods get to special-case data: to
  not be treated as different-origin (and thus dirty themselves), but
  there doesn't appear to be a similar relaxation of the origin rules
  for data: urls in script.

 I know, I'm saying that we should relax that.

 / Jonas

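Concretely, a dedicated worker built from an inline script via a data: URL would look something like this; whether the worker is actually allowed to run it is exactly the same-origin relaxation being debated above:

```javascript
// Building a dedicated worker from an inline script via a data: URL.
// Whether the engine permits this depends on the origin relaxation
// discussed in this thread.
const source = "onmessage = function(e) { postMessage(e.data * 2); };";
const dataUrl = "data:text/javascript," + encodeURIComponent(source);
// In a page: var worker = new Worker(dataUrl);
// worker.postMessage(21); worker.onmessage would receive the doubled value.
```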


Re: [whatwg] localStorage, the storage mutex, document.domain, and workers

2009-09-18 Thread Michael Nordman
These two statements are true...
* We can't change the API
* It is seriously flawed
... and therein lies the problem.

I'm sad to have to say it... but I hope this withers and dies an early
death. Putting this in the web platform for perpetuity is a mistake. I don't
support the adoption of this into the platform.

Time to go read the SimpleDatabase proposal.

 On Wed, 9 Sep 2009, Darin Fisher wrote:
 By the way, you can already pretty much create my acquireLock /
 releaseLock API on top of SharedWorkers today, but in a slightly
 crappier way.

 How? Since the API is completely async, you can't make a spinlock.

You must not have read Darin's proposal. It wasn't a 'lock' at all. It's a
completely async, well-factored primitive.
  void acquireFlag('name', callback);   // returns immediately in all cases
  void releaseFlag('name');  // returns immediately in all cases
The callback is called upon 'flag' acquisition. It's all yours until you call
release. Completely async. I think it's self-evident that this can be
composed with a SharedWorker.

Darin's was an example of a good proposal... simple on all dimensions, yet
powerful and broadly applicable... what is not to like? Marry the 'flag'
with an unlocked storage repository and voilà, you have something... the
whole is greater than the sum of the parts.

Another lesson to be learned from the LocalStorage debacle is to decompose
things, hashmaps+(implicit)locks+events... it slices and dices (and there's
more)... it was a bad idea to jumble all that together... individually,
minus the implicit locking, those would make for nice features.

Also, regarding "we can't change the API"... well, it did get changed... the
application of additional implicit locking semantics to IE's API... that is
a material change.


On Thu, Sep 17, 2009 at 5:13 PM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Thu, Sep 17, 2009 at 8:32 PM, Ian Hickson i...@hixie.ch wrote:

 LESSONS LEARNT

 If we ever define a new API that needs a lock of some kind, the way to do
 it is to use a callback, so that the UA can wait for the lock
 asynchronously, and then run the callback once it has it. (This is what
 the Web Database spec does, for instance.)


 When we add more of these features, I think we will need a way to acquire
 multiple locks simultaneously before running a callback. (So if we had
 localStorage.runTransaction(function(storage) { ... }) and
 otherLockingThing.runTransaction(function(thing) { ... }), we could also
 have, for example, window.runTransaction(localStorage, otherLockingThing,
 function(storage, thing) { ... }).) So it may be worth thinking about what
 that API should be and what we will need to add to each feature spec to
 support it.

 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]



Re: [whatwg] localStorage, the storage mutex, document.domain, and workers

2009-09-18 Thread Michael Nordman


 When we add more of these features, I think we will need a way to acquire
 multiple locks simultaneously before running a callback. (So if we had
 localStorage.runTransaction(function(storage) { ... }) and
 otherLockingThing.runTransaction(function(thing) { ... }), we could also
 have, for example, window.runTransaction(localStorage, otherLockingThing,
 function(storage, thing) { ... }).) So it may be worth thinking about what
 that API should be and what we will need to add to each feature spec to
 support it.


void acquireFlags(['foo', 'bar', 'baz'], callback);
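To keep such a multi-flag acquire deadlock-free, the flags need a canonical order so two callers can never hold one flag each while waiting on the other's. A sketch built on the single-flag primitive proposed in this thread (`acquire` is passed in so the sketch stays self-contained; none of these names shipped anywhere):

```javascript
// Acquire a set of flags in sorted (canonical) order so that no two
// callers can deadlock by requesting the same flags in opposite orders.
// `acquire(name, cb)` stands in for the proposed acquireFlag primitive.
function acquireFlags(names, callback, acquire) {
  const sorted = [...names].sort();
  (function next(i) {
    if (i === sorted.length) return callback(); // all flags held
    acquire(sorted[i], () => next(i + 1));      // chain one at a time
  })(0);
}
```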


Re: [whatwg] LocalStorage in workers

2009-09-16 Thread Michael Nordman
On Wed, Sep 16, 2009 at 9:58 AM, Drew Wilson atwil...@google.com wrote:

 Jeremy, what's the use case here - do developers want workers to have
 access to shared local storage with pages? Or do they just want workers to
 have access to their own non-shared local storage?
 Because we could just give workers their own separate WorkerLocalStorage
 and let them have at it. A worker could block all the other accesses to
 WorkerLocalStorage within that domain, but so be it - it wouldn't affect
 page access, and we already had that issue with the (now removed?)
 synchronous SQL API.

 I think a much better case can be made for WorkerLocalStorage than for
 give workers access to page LocalStorage, and the design issues are much
 simpler.


Putting workers in their own storage silo doesn't really make much sense?
Sure it may be simpler for browser vendors, but does that make life simpler
 for app developers, or just have them scratching their heads about how to
read/write the same data set from either flavor of context in their
application?

I see no rhyme or reason for the arbitrary barrier except for browser
vendors to work around the awkward implicit locks on LocalStorage (the source
of much grief). Consider this... would it make sense to cordon off the
databases workers vs pages can see? I would think not, and i would hope
others agree.




 -atw

 On Tue, Sep 15, 2009 at 8:27 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Sep 15, 2009 at 6:56 PM, Jeremy Orlow jor...@chromium.org
 wrote:
  One possible solution is to add an asynchronous callback interface for
  LocalStorage into workers.  For example:
  function myCallback(localStorage) {
localStorage.accountBalance = localStorage.accountBalance + 100;
  }
  executeLocalStorageCallback(myCallback);  // TODO: Make this name better
   :-)
  The interface is simple.  You can only access localStorage via a
 callback.
   Any use outside of the callback is illegal and would raise an
 exception.
   The callback would acquire the storage mutex during execution, but the
  worker's execution would not block during this time.  Of course, it's
 still
  possible for a poorly behaving worker to do large amounts
 of computation in
  the callback, but hopefully the fact they're executing in a callback
 makes
  the developer more aware of the problem.

 First off, I agree that not having localStorage in workers is a big
 problem that we need to address.

 If I were designing the localStorage interface today I would use the
 above interface that you suggest. Grabbing localStorage can only be
 done asynchronously, and while you're using it, no one else can get a
 reference to it. This way there are no race conditions, but also no
 way for anyone to have to lock.

 So one solution is to do that in parallel to the current localStorage
 interface. Let's say we introduce a 'clientStorage' object. You can
 only get a reference to it using a 'getClientStorage' function. This
 function is available both to workers and windows. The storage is
 separate from localStorage so no need to worry about the 'storage
 mutex'.

 There is of course a risk that a worker grabs on to the clientStorage
 and holds it indefinitely. This would result in the main window (or
 another worker) never getting a reference to it. However it doesn't
 affect responsiveness of that window, it's just that the callback will
 never happen. While that's not ideal, it seems like a smaller problem
 than any other solution that I can think of. And the WebDatabase
 interfaces are suffering from the same problem if I understand things
 correctly.

 There's a couple of other interesting things we could expose on top of
 this:

 First, a synchronous API for workers. We could allow workers to
 synchronously get a reference to clientStorage. If someone is
 currently using clientStorage then the worker blocks until the storage
 becomes available. We could either use a callback as the above, which
 blocks until the clientStorage is acquired and only holds the storage
 until the callback exists. Or we could expose clientStorage as a
 property which holds the storage until control is returned to the
 worker eventloop, or until some explicit release API is called. The
 latter would be how localStorage is now defined, with the important
 difference that localStorage exposes the synchronous API to windows.

 Second, allow several named storage areas. We could add an API like
 getNamedClientStorage(name, callback). This would allow two different
 workers to simultaneously store things in a storage areas, as long as
 they don't need to use the *same* storage area. It would also allow a
 worker and the main window to simultaneously use separate storage
 areas.

 However we need to be careful if we add both above features. We can't
 allow a worker to grab multiple storage areas at the same time since
 that could cause deadlocks. However with proper APIs I believe we can
 avoid that.

 / Jonas
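The callback-only shape described above can be modeled in a single-process simulation. The real API would grant access asynchronously and serialize callbacks across pages and workers; the names follow the thread's proposal, not any shipped interface:

```javascript
// Single-process sketch of callback-only storage: the storage object is
// only reachable inside a callback, so there is never a synchronous
// handle to race over. A real implementation would grant asynchronously
// and serialize callbacks across all pages and workers in the origin.
function makeClientStorage() {
  const data = {};
  let busy = false;
  const queue = [];
  function grant(cb) {
    busy = true;
    cb(data);                         // exclusive access for the callback
    busy = false;
    if (queue.length > 0) grant(queue.shift());
  }
  // the executeLocalStorageCallback / getClientStorage shape from the thread
  return function (cb) {
    if (busy) queue.push(cb);
    else grant(cb);
  };
}
```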





Re: [whatwg] LocalStorage in workers

2009-09-16 Thread Michael Nordman
On Wed, Sep 16, 2009 at 11:24 AM, James Robinson jam...@google.com wrote:

 On Wed, Sep 16, 2009 at 10:53 AM, Michael Nordman micha...@google.com wrote:



 On Wed, Sep 16, 2009 at 9:58 AM, Drew Wilson atwil...@google.com wrote:

 Jeremy, what's the use case here - do developers want workers to have
 access to shared local storage with pages? Or do they just want workers to
 have access to their own non-shared local storage?
 Because we could just give workers their own separate WorkerLocalStorage
 and let them have at it. A worker could block all the other accesses to
 WorkerLocalStorage within that domain, but so be it - it wouldn't affect
 page access, and we already had that issue with the (now removed?)
 synchronous SQL API.

 I think a much better case can be made for WorkerLocalStorage than for
 give workers access to page LocalStorage, and the design issues are much
 simpler.


 Putting workers in their own storage silo doesn't really make much sense?
 Sure it may be simpler for browser vendors, but does that make life simpler
  for app developers, or just have them scratching their heads about how to
 read/write the same data set from either flavor of context in their
 application?

 I see no rhyme or reason for the arbitrary barrier except for browser
  vendors to work around the awkward implicit locks on LocalStorage (the source
 of much grief). Consider this... would it make sense to cordon off the
 databases workers vs pages can see? I would think not, and i would hope
 others agree.


 The difference is that the database interface is purely asynchronous
 whereas storage is synchronous.


Sure... we're talking about adding an async API that allows a worker to access
a local storage repository... should such a thing exist, why should it not
provide access to the same repository as seen by pages?


 If multiple threads have synchronous access to the same shared resource
 then there has to be a consistency model.  ECMAScript does not provide for
 one so it has to be done at a higher level.  Since there was not a solution
 in the first versions that shipped, the awkward implicit locks you mention
 were suggested as a workaround.  However it's far from clear that these
 solve the problem and are implementable.  It seems like the only logical
 continuation of this path would be to add explicit, blocking synchronization
 primitives for developers to deal with - which I think everyone agrees would
 be a terrible idea.  If you're worried about developers scratching their
 heads about how to pass data between workers just think about happens-before
 relationships and multi-threaded memory models.

 In a hypothetical world without synchronous access to LocalStorage/cookies
 from workers, there is no shared memory between threads except via message
 passing.  This can seem a bit tricky for developers but is very easy to
 reason about and prove correctness and the absence of deadlocks.

 - James






 -atw

 On Tue, Sep 15, 2009 at 8:27 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Sep 15, 2009 at 6:56 PM, Jeremy Orlow jor...@chromium.org
 wrote:
  One possible solution is to add an asynchronous callback interface for
  LocalStorage into workers.  For example:
  function myCallback(localStorage) {
localStorage.accountBalance = localStorage.accountBalance + 100;
  }
  executeLocalStorageCallback(myCallback);  // TODO: Make this name
 better
   :-)
  The interface is simple.  You can only access localStorage via a
 callback.
   Any use outside of the callback is illegal and would raise an
 exception.
   The callback would acquire the storage mutex during execution, but
 the
  worker's execution would not block during this time.  Of course, it's
 still
  possible for a poorly behaving worker to do large amounts
 of computation in
  the callback, but hopefully the fact they're executing in a callback
 makes
  the developer more aware of the problem.

 First off, I agree that not having localStorage in workers is a big
 problem that we need to address.

 If I were designing the localStorage interface today I would use the
 above interface that you suggest. Grabbing localStorage can only be
 done asynchronously, and while you're using it, no one else can get a
 reference to it. This way there are no race conditions, but also no
 way for anyone to have to lock.

 So one solution is to do that in parallel to the current localStorage
 interface. Let's say we introduce a 'clientStorage' object. You can
 only get a reference to it using a 'getClientStorage' function. This
 function is available both to workers and windows. The storage is
 separate from localStorage so no need to worry about the 'storage
 mutex'.

 There is of course a risk that a worker grabs on to the clientStorage
 and holds it indefinitely. This would result in the main window (or
 another worker) never getting a reference to it. However it doesn't
 affect responsiveness of that window, it's just that the callback will
 never happen. While

Re: [whatwg] LocalStorage in workers

2009-09-16 Thread Michael Nordman
On Wed, Sep 16, 2009 at 3:30 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Wed, Sep 16, 2009 at 3:21 PM, Robert O'Callahan 
  rob...@ocallahan.org wrote:

  On Thu, Sep 17, 2009 at 9:56 AM, Jeremy Orlow jor...@chromium.org wrote:

 1) Create a LocalStorage like API that can only be accessed in an async
 way via pages (kind of like WebDatabase).

 2) Remove any
 atomicity/consistency guarantees from synchronous LocalStorage access within
 pages (like IE8 currently does) and add an async interface for when pages do
 need atomicity/consistency.

 3) Come up with a completely different storage API that all the browser
 vendors are willing to implement that only allows Async access from within
 pages.  WebSimpleDatabase might be a good starting point for this.


 4) Create WorkerStorage so that shared workers have exclusive, synchronous
 access to their own persistent storage via an API compatible with
 LocalStorage.


 Ah yes.  That is also an option.

 And, now that I think about it (combined with Jonas' last point) I think it
 might be the best option since it has a very low implementation cost, it
 keeps the very simple API, and solves the primary problem of not blocking
 pages' event loops.


But it fails to solve the problem of providing a shared storage repository
for the application's use, which at least to me is the real primary goal.


Re: [whatwg] LocalStorage in workers

2009-09-16 Thread Michael Nordman
 Is it?  Can you provide some use cases?  :-)
Um...sure... an app sets up a shared worker whose function it is to sync
up/down changes to the data the application manages...

* pageA makes changes; pageB sees it by virtue of an event and
reflects the change in its view of the world... the worker sees the change too
by virtue of the same event and pushes it up.

* the worker receives a delta from the server... and makes the change
locally... pageA and pageB see that by virtue of the event.


What is the use case for silo'd worker storage?


Re: [whatwg] Global Script proposal.

2009-09-15 Thread Michael Nordman
 to say that every app should necessarily be a single complex AJAX page
morphing itself. That in itself may be a serious limitation.

Agree very much so.

On Tue, Sep 15, 2009 at 5:54 PM, Dmitry Titov dim...@chromium.org wrote:



 On Mon, Sep 14, 2009 at 4:41 AM, Ian Hickson i...@hixie.ch wrote:

 On Mon, 7 Sep 2009, Dimitri Glazkov wrote:
  On Sat, Aug 29, 2009 at 2:40 PM, Ian Hicksoni...@hixie.ch wrote:
   Another case is an application that uses navigation from page to page
   using menu or some site navigation mechanism. Global Script Context
   could keep the application state so it doesn't have to be
   round-tripped via server in a cookie or URL.
  
   You can keep the state using sessionStorage or localStorage, or you
   can use pushState() instead of actual navigation.
 
  First off, sessionStorage and localStorage are not anywhere close to
  being useful if you're dealing with the actual DOM objects. The JS code
  that would freeze-dry them and bring back to life will make the whole
  exercise cost-prohibitive.

 Indeed. I don't see why you would want to be keeping nodes alive while
 navigating to entirely new documents though.


  But more to the point, I think globalScript is a good replacement for
  the pushState additions to the History spec. I've been reading up on the
  spec an the comments made about pushState and I am becoming somewhat
  convinced that pushState is confusing, hard to get right, and full of
  fail. You should simply look at the motivation behind building JS-based
  history state managers -- it all becomes fairly clear.
 
  The best analogy I can muster is this: pushHistory is like creating
  Rhoad's-like kinetic machines for moving furniture around the house in
  an effort to keep the tenant standing still. Whereas globalScript
  proposes to just let the poor slob to walk to the chest to get the damn
  socks.
 
  My big issue with pushHistory is that it messes with the nature of the
  Web: a URL is a resource you request from the server. Not something you
  arrive to via clever sleight of hand in a user agent. So, you've managed
  to pushState your way to a.com/some/path/10/clicks/from/the/home/page.
  Now the user bookmarks it. What are you going to do know? Intuitively,
  it feels like we should be improving the user agent to eliminate the
  need for mucking with history, not providing more tools to encourage it.

 The only criticism of substance in the above -- that pushState() lets you
 change the URL of the current page when you change the page dynamically --
 is pretty much the entire point of the feature, and I don't understand why
 it's bad. I certainly don't want to require that every pan on Google Maps
 require a new page load.


 On Tue, 8 Sep 2009, Anne van Kesteren wrote:
 
  If JavaScript can be somehow kept-alive while navigating to a new page
  within a single domain, be in control of what is displayed and without
  security issues and all that'd be rather cool and also solve the issue.

 This seems substantially less preferable, performance-wise, than having a
 single Document and script, using pushState().


 It depends, right? That single Document+script would have to have all the
 resources and code to be able to morph itself into all the possible app
 states, preventing benefits of lazy-loading. Or, to be more efficient, it
 should load additional resources on demand, which looks very close to
 navigation to subsequent pages.

 Today, those natural navigations from page to page are prohibitively
 expensive, even with caches - they are equivalent to serialization of
 everything into some storage, terminating the app, then launching the app
 again, loading state from storage and/or cloud, setting up the UI etc. So
 AJAX is the only real alternative today, although it comes with complex
 pages that have to construct UI dynamically.

 History management API is great, but it is also an overkill to say that
 every app should necessarily be a single complex AJAX page morphing itself.
 That in itself may be a serious limitation.



Re: [whatwg] Application defined locks

2009-09-11 Thread Michael Nordman
On Thu, Sep 10, 2009 at 6:35 PM, James Robinson jam...@google.com wrote:



 On Thu, Sep 10, 2009 at 6:11 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Fri, Sep 11, 2009 at 9:28 AM, Darin Fisher da...@chromium.org wrote:

 On Thu, Sep 10, 2009 at 4:59 PM, Robert O'Callahan rob...@ocallahan.org
  wrote:

  On Fri, Sep 11, 2009 at 9:52 AM, Darin Fisher da...@chromium.org wrote:

 I think there are good applications for setting a long-lived lock.  We
 can try to make it hard for people to create those locks, but then the end
 result will be suboptimal.  They'll still find a way to build them.


 One use case is selecting a master instance of an app. I haven't really
 been following the global script thread, but doesn't that address this 
 use
 case in a more direct way?


 No it doesn't.  The global script would only be reachable by related
 browsing contexts (similar to how window.open w/ a name works).  In a
 multi-process browser, you don't want to _require_ script bindings to span
 processes.

 That's why I mentioned shared workers.  Because they are isolated and
 communication is via string passing, it is possible for processes in
 unrelated browsing contexts to communicate with the same shared workers.




 What other use-cases for long-lived locks are there?


 This is a good question.  Most of the use cases I can imagine boil down
 to a master/slave division of labor.

 For example, if I write an app that does some batch asynchronous
 processing (many setTimeout calls worth), then I can imagine setting a flag
 across the entire job, so that other instances of my app know not to start
 another such overlapping job until I'm finished.  In this example, I'm
 supposing that storage is modified at each step such that guaranteeing
 storage consistency within the scope of script evaluation is not enough.


 What if instead of adding locking, we added a master election mechanism?
  I haven't thought it out super well, but it could be something like this:
  You'd call some function like |window.electMaster(name,
 newMasterCallback, messageHandler)|.  The name would allow multiple groups
 of master/slaves to exist.  The newMasterCallback would be called any time
 that the master changes.  It would be passed a message port if we're a slave
 or null if we're the master.  messageHandler would be called for any
 messages.  When we're the master, it'll be passed a message port of the
 slave so that responses can be sent if desired.

 In the gmail example: when all the windows start up, they call
 window.electMaster.  If they're given a message port, then they'll send all
 messages to that master.  The master would handle the request and possibly
 send a response.  If a window is closed, then the UA will pick one of the
 slaves to become the master.  The master would handle all the state and the
 slaves would be lighter weight.

 --

 There are a couple open questions for something like this.  First of all,
 we might want to let windows provide a hint that they'd be a bad master.
  For example, if they expected to be closed fairly soon.  (In the gmail
 example, a compose mail window.)

 We might also want to consider allowing windows to opt out of masterhood
 with something like |window.yieldMasterhood()|.  This would allow people to
 build locks upon this interface which could be good and bad.

 Next, we could consider adding a mechanism for the master to pickle up
 some amount of state and pass it on to another master.  For example, maybe
 the |window.yieldMasterhood()| function could take a single state param
 that would be passed into the master via the newMasterCallback function.

 Lastly and most importantly, we need to decide if we think shared workers
 are the way all of this should be done.  If so, it seems like none of this
 complexity is necessary.  That said, until shared workers are first class
 citizens in terms of what APIs they can access (cookies, LocalStorage, etc),
 I don't think shared workers are practical for many developers and use
 cases.


 What about eliminating shared memory (only one context would be allowed
 access to cookies, localStorage, etc)?  It seems to be working out fine for
 DOM access and is much, much easier to reason about.


That's just it, the current state of affairs isn't working out fine for
web application development in general. Grand and glorious hacks galore in
webapps to workaround silo'd page constraints inherent in the web platform
available today.


 - James




Re: [whatwg] Application defined locks

2009-09-10 Thread Michael Nordman
On Wed, Sep 9, 2009 at 7:55 PM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Thu, Sep 10, 2009 at 2:38 PM, Michael Nordman micha...@google.com wrote:

 If this feature existed, we likely would have used it for offline Gmail to
 coordinate which instance of the app (page with gmail in it) should be
 responsible for sync'ing the local database with the mail service. In the
 absence of a feature like this, instead we used the local database itself to
 register which page was the 'syncagent'. This involved periodically updating
 the db by the syncagent, and periodic polling by the would be syncagents
 waiting to possibly take over. Much ugliness.
 var isSyncAgent = false;
 window.acquireFlag("syncAgency", function() { isSyncAgent = true; });

 Much nicer.


 How do you deal with the user closing the syncagent while other app
 instances remain open?


In our db polling world... that was why the syncagent periodically updated
the db... to say "still alive"... on close it would say "I'm gone", and on an
ugly exit, the others would notice the lack of "still alive"s and fight
about who was it next. A silly bunch of complexity for something so simple.

In the acquireFlag world... wouldn't the page going away simply relinquish
the flag?



 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]



Re: [whatwg] Application defined locks

2009-09-10 Thread Michael Nordman
On Thu, Sep 10, 2009 at 12:32 PM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 10, 2009, at 11:22 AM, Michael Nordman wrote:



  On Wed, Sep 9, 2009 at 7:55 PM, Robert O'Callahan rob...@ocallahan.org wrote:

  On Thu, Sep 10, 2009 at 2:38 PM, Michael Nordman micha...@google.com wrote:

 If this feature existed, we likely would have used it for offline Gmail
 to coordinate which instance of the app (page with gmail in it) should be
 responsible for sync'ing the local database with the mail service. In the
 absence of a feature like this, instead we used the local database itself to
 register which page was the 'syncagent'. This involved periodically updating
 the db by the syncagent, and periodic polling by the would be syncagents
 waiting to possibly take over. Much ugliness.
 var isSyncAgent = false;
  window.acquireFlag("syncAgency", function() { isSyncAgent = true; });

 Much nicer.


 How do you deal with the user closing the syncagent while other app
 instances remain open?


 In our db polling world... that was why the syncagent periodically updated
 the db... to say still alive... on close it would say i'm gone and on
 ugly exit, the others would notice the lack of still alives and fight
 about who was it next. A silly bunch of complexity for something so simple.

 In the acquireFlag world... wouldn't the page going away simply relinquish
 the flag?


 How would the pages that failed to acquire it before know that they should
 try to acquire it again? Presumably they would still have to poll (assuming
 the tryLock model).


@Maciej, per Darin's comment, that was the proposed interface my snippet was
using. So no polling involved. Probably hundreds of lines of js/sql code
reduced to one.

This would be a nice generally applicable primitive to have.



 Regards,
 Maciej




Re: [whatwg] Application defined locks

2009-09-10 Thread Michael Nordman
On Thu, Sep 10, 2009 at 2:38 PM, James Robinson jam...@google.com wrote:



 On Thu, Sep 10, 2009 at 1:55 PM, Darin Fisher da...@chromium.org wrote:

 On Thu, Sep 10, 2009 at 1:08 PM, Oliver Hunt oli...@apple.com wrote:


 On Sep 10, 2009, at 12:55 PM, Darin Fisher wrote:

  On Thu, Sep 10, 2009 at 12:32 PM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 10, 2009, at 11:22 AM, Michael Nordman wrote:



 On Wed, Sep 9, 2009 at 7:55 PM, Robert O'Callahan rob...@ocallahan.org
  wrote:

 On Thu, Sep 10, 2009 at 2:38 PM, Michael Nordman 
  micha...@google.com wrote:

 If this feature existed, we likely would have used it for offline
 Gmail to coordinate which instance of the app (page with gmail in it) 
 should
 be responsible for sync'ing the local database with the mail service. In 
 the
 absence of a feature like this, instead we used the local database 
 itself to
 register which page was the 'syncagent'. This involved periodically 
 updating
 the db by the syncagent, and periodic polling by the would be syncagents
 waiting to possibly take over. Much ugliness.
 var isSyncAgent = false;
 window.acquireFlag("syncAgency", function() { isSyncAgent = true; });

 Much nicer.


 How do you deal with the user closing the syncagent while other app
 instances remain open?


 In our db polling world... that was why the syncagent periodically
 updated the db... to say still alive... on close it would say i'm gone
 and on ugly exit, the others would notice the lack of still alives and
 fight about who was it next. A silly bunch of complexity for something so
 simple.

 In the acquireFlag world... wouldn't the page going away simply
 relinquish the flag?


 How would the pages that failed to acquire it before know that they
 should try to acquire it again? Presumably they would still have to poll
 (assuming the tryLock model).

 Regards,
 Maciej



 In my proposed interface, you can wait asynchronously for the lock.  Just
 call acquireLock with a second parameter, a closure that runs once you get
 the lock.


 What if you don't want to wait asynchronously?  My reading of this is
 that you need two copies of the code, one that works synchronously, but you
 still need to keep the asynchronous model to deal with an inability to
 synchronously acquire the lock.  What am I missing?



 Sounds like a problem that can be solved with a function.

 The reason for the trylock support is to allow a programmer to easily do
 nothing if they can't acquire the lock.  If you want to wait if you can't
 acquire the lock, then using the second form of acquireLock, which takes a
 function, is a good solution.


 I don't think there is much value in the first form of acquireLock() - only
 the second form really makes sense.  I also strongly feel that giving web
 developers access to locking mechanisms is a bad idea - it hasn't been a
 spectacular success in any other language.


As proposed, it's not really a locking mechanism in that nothing ever blocks.
A 'flag' gets acquired by at most one context in an origin at any given time
is all. Apps can make what they will out of that.



 I think the useful semantics are equivalent to the following (being careful
 to avoid mentioning 'locks' or 'mutexes' explicit):  A script passes in a
 callback and a token.  The UA invokes the callback at some point in the
 future and provides the guarantee that no other callback with that token
 will be invoked in any context within the origin until the invoked callback
 returns.  Here's what I mean with an intentionally horrible name:

 window.runMeExclusively(callback, "arbitrary string token");
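A single-context sketch of the semantics James describes (runMeExclusively is his intentionally horrible placeholder name, not a real API): a per-token promise chain guarantees that no two callbacks registered under the same token ever overlap, and it stays deadlock-free as long as each callback terminates:

```javascript
// Per-token serialization: each callback for a token runs only after the
// previous one for that token has finished. No explicit lock is exposed.
const chains = new Map(); // token -> tail of that token's promise chain

function runMeExclusively(callback, token) {
  const tail = chains.get(token) || Promise.resolve();
  const next = tail.then(() => callback());  // serialized per token
  chains.set(token, next.catch(() => {}));   // a throwing callback frees the token
  return next;
}
```

A multi-page implementation would route this through something like a shared worker, as James notes; the missing piece back then was detecting when a context goes away.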


Sure, a mutex, that can be useful too. I think you can compose a 'mutex' out
of the 'flag'... so in my book... the long-lived 'flag' is the more valuable
feature.

acquireFlag(mutexName, function() { /* critical section */ releaseFlag(mutexName); });

Now, somebody will say "but what if the programmer neglects to release the
flag in the event of an exceptional return."  And I would say... they wrote
a bug and should fix it if they wanted mutex semantics.
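Darin's earlier "solved with a function" remark makes the exceptional-return case concrete: release once, in a helper, instead of at every call site. acquireFlag/releaseFlag are the hypothetical names from this thread, stubbed minimally here so the sketch is self-contained:

```javascript
// Stub of the proposed primitive so the helper below can run standalone.
const held = new Set();
function acquireFlag(name, cb) {
  if (held.has(name)) return false;
  held.add(name); cb(); return true;
}
function releaseFlag(name) { held.delete(name); }

// Mutex-style scoped access composed from the flag: the flag is released
// even when the critical section returns exceptionally.
function withFlag(name, criticalSection) {
  return acquireFlag(name, () => {
    try {
      criticalSection();
    } finally {
      releaseFlag(name); // released even on exceptional return
    }
  });
}
```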




 An application developer could then put all of their logic that touches a
 particular shared resource behind a token.  It's also deadlock free so long
 as each callback terminates.

 Would this be sufficient?  If so it is almost possible to implement it
 correctly in a JavaScript library using a shared worker per origin and
 postMessage, except that it is not currently possible to detect when a
 context goes away.

 - James


 -Darin





Re: [whatwg] Application defined locks

2009-09-09 Thread Michael Nordman
+1, a nice refactoring of the implied locking gunk in the storage api.

On Wed, Sep 9, 2009 at 10:55 AM, Darin Fisher da...@chromium.org wrote:

 The recent discussion about the storage mutex for Cookies and LocalStorage
 got me thinking
 Perhaps instead of trying to build implicit locking into those features, we
 should give web apps the tools to manage exclusive access to shared
 resources.

 

 I imagine a simple lock API:

 window.acquireLock(name)
 window.releaseLock(name)

 acquireLock works like pthread_mutex_trylock in that it is non-blocking.
  it returns true if you succeeded in acquiring the lock, else it returns
 false.  releaseLock does as its name suggests: releases the lock so that
 others may acquire it.

 Any locks acquired would be automatically released when the DOM window is
 destroyed or navigated cross origin.  This API could also be supported by
 workers.

 The name parameter is scoped to the origin of the page.  So, this locking
 API only works between pages in the same origin.

 

 We could also extend acquireLock to support an asynchronous callback when
 the lock becomes available:

 window.acquireLock(name, function() { /* lock acquired */ });

 If the callback function is given, then upon lock acquisition, the callback
 function would be invoked.  In this case, the return value of acquireLock is
 true if the function was invoked or false if the function will be invoked
 once the lock can be acquired.

 

 Finally, there could be a helper for scoping lock acquisition:

 window.acquireScopedLock(name, function() { /* lock acquired for this
 scope only */ });

 

 This lock API would provide developers with the ability to indicate that
 their instance of the web app is the only one that should play with
 LocalStorage.  Other instances could then know that they don't have
 exclusive access and could take appropriate action.

 This API seems like it could be used to allow LocalStorage to be usable
 from workers.  Also, as we start developing other means of local storage
 (File APIs), it seems like having to again invent a reasonable implicit
 locking system will be a pain.  Perhaps it would just be better to develop
 something explicit that application developers can use independent of the
 local storage mechanism :-)

 

 It may be the case that we want to only provide acquireScopedLock (or
 something like it) to enforce fine grained locking, but I think that would
 only force people to implement long-lived locks by setting a field in
 LocalStorage.  That would then result in the locks not being managed by the
 UA, which means that they cannot be reliably cleaned up when the page
 closes.  I think it is very important that we provide facilities to guide
 people away from building such ad-hoc locks on top of LocalStorage.  This is
 why I like the explicit (non-blocking!) acquireLock and releaseLock methods.

 -Darin



Re: [whatwg] Application defined locks

2009-09-09 Thread Michael Nordman
If this feature existed, we likely would have used it for offline Gmail to
coordinate which instance of the app (page with gmail in it) should be
responsible for sync'ing the local database with the mail service. In the
absence of a feature like this, instead we used the local database itself to
register which page was the 'syncagent'. This involved periodically updating
the db by the syncagent, and periodic polling by the would be syncagents
waiting to possibly take over. Much ugliness.
var isSyncAgent = false;
window.acquireFlag("syncAgency", function() { isSyncAgent = true; });

Much nicer.


On Wed, Sep 9, 2009 at 7:02 PM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 9, 2009, at 6:33 PM, Jeremy Orlow wrote:

 In general this seems like a pretty interesting idea.  It definitely would
 be nice to completely abstract away all concepts of concurrency from web
 developers, but some of our solutions thus far (message passing, async
 interfaces, etc) have not been terribly appreciated by developers either.
  The GlobalScript proposal is a good example: to us, shared workers were an
 adequate solution, but in practice the lack of shared state is very
 difficult for some developers to work around.  Possibly even more difficult
 than dealing with some levels of concurrency.

 I think it'd be interesting to introduce this as an experimental API and
 see what web developers do with it.


 I think it's predictable that it will be used in badly wrong ways without
 implementing it. Explicit application-managed locking is a massive failure
 as a mechanism for managing concurrency.

 As for the idea of a sync API:  What if some library/framework and the
 embedding page use these flags/locks?  I know you can't actually deadlock
 with this API, but I worry some developers will just do
  |while (!acquireLock("flag")) {}| which could lead to deadlocks.  Only
 allowing an async API would fix this, but developers have typically not
 liked async APIs.


 If we want to go async, then I'd rather have an asynchronous way to acquire
 an *actual* lock on the resource (as with the LocalStorage async transaction
 proposal), than an async advisory locking model. Having both asynchronicity
 *and* advisory locks seems like the worst of both worlds.

 On the other hand, if we offer only the equivalent of tryLock() and not a
 blocking lock(), it's almost certain Web apps will build spin locks in the
 way you describe, leading to wasteful CPU usage, bad performance, and the
 possibility of deadlocks.



 Here's another idea that I think is actually kind of cool:  What if we kept
 track of locking precedence (i.e. the graph of which locks have been taken
 while other locks were held) and threw an exception if any lock was ever
 taken in a way that violated the graph.  In other words, we wouldn't make
 the developer tell us the locking precedence, and we wouldn't wait until you
 hit an actual deadlock.  Instead we would look for the first call site that
 _could_ have deadlocked.  A long time ago, I was working on a project that
 had some deadlock problems.  We implemented exactly this and it worked
 pretty well.


 This seems like a very challenging programming model for little gain. If
 the locks are purely advisory, they do not prevent race conditions, but a
 discipline to prevent deadlocks will still make them very hard to use. Note
 also that the possibility of synchronous cross-site code execution would
 require a lock precedence graph to be cross-site to really prevent
 deadlocks, but it would be impossible for a Web application to guarantee
 anything about lock order with respect to Web apps in different origins. The
 other possibility is to drop all locks in the case of synchronous
 cross-origin code execution, but then these advisory locks would not even be
 useful for preventing race conditions.

 Locking is broken - just don't do it.

 Regards,
 Maciej




  On Thu, Sep 10, 2009 at 9:22 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 9/10/09 2:24 AM, Robert O'Callahan wrote:

 On Thu, Sep 10, 2009 at 6:37 AM, Darin Fisher da...@chromium.org
 mailto:da...@chromium.org wrote:

Yes, exactly. Sorry for not making this clear.  I believe implicit
locking for LocalStorage (and the implicit unlocking) is going to
yield something very confusing and hard to implement well.  The
potential for dead locks when you fail to implicitly unlock properly
scares me

 You mean when the browser implementation has a bug and fails to
 implicitly unlock?

 Giving Web authors the crappy race-prone and deadlock-prone locking
 programming model scares *me*. Yes, your acquireLock can't get you into
 a hard deadlock, strictly speaking, but you can still effectively
 deadlock your application by waiting for a lock to become available that
 never can. Also, how many authors will forget to test the result of
 acquireLock (because they're used to other locking APIs that block) and
 find that things are OK in their testing?

 If you're 

Re: [whatwg] RFC: Alternatives to storage mutex for cookies and localStorage

2009-09-08 Thread Michael Nordman
I'm happy to see this getting sorted out. I like Maciej's idea too.
- Keep the current LocalStorage API, but make it give no concurrency
guarantees whatsoever. (IE's impl, I think)
- Add a simple optional transactional model for aware authors who want
better consistency guarantees.

There is one use-case to keep in mind... setting key/values onunload... how
can we provide transactional access at that time? Maybe we
could guarantee that transact calls made in (and perhaps prior to)
onbeforeunload will be satisfied prior to onunload.
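The "simple optional transactional model" was never specified in this thread; one possible shape (transact is a hypothetical name, and an in-memory Map stands in for LocalStorage) in which writes are applied only if the callback returns normally:

```javascript
// Hypothetical transact(): the callback works on a draft copy of the
// storage; the draft is committed atomically on normal return and
// discarded if the callback throws.
const store = new Map();

function transact(fn) {
  const draft = new Map(store); // work on a copy
  fn({
    getItem: (k) => (draft.has(k) ? draft.get(k) : null),
    setItem: (k, v) => draft.set(k, String(v)),
    removeItem: (k) => draft.delete(k),
  });
  // Commit only reached on normal return; a throw leaves store untouched.
  store.clear();
  for (const [k, v] of draft) store.set(k, v);
}
```

The onunload question above is about scheduling: a UA would have to promise that transactions requested in onbeforeunload still run before the document is torn down.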


On Tue, Sep 8, 2009 at 6:21 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Wed, Sep 9, 2009 at 9:54 AM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 8, 2009, at 4:10 PM, Jonas Sicking wrote:

  On Tue, Sep 8, 2009 at 4:00 PM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 8, 2009, at 1:35 AM, Jonas Sicking wrote:


 I think Firefox would be willing to break compat. The question is if
 microsoft is. Which we need to ask them.


 We would be very hesitant to break LocalStorage API compatibility for
 Safari, at least without some demonstration that the level of
  real-world
 breakage is low (including for mobile-specific/iPhone-specific sites).


 Even if that means that you'll for all future will need to use a
 per-domain mutex protecting localStorage? And even possibly a global
 mutex as currently specified?


 I don't think telling this story to users or developers will make them
 satisfied with the breakage, so the even if is not very relevant.

 I think there are ways to solve the problem without completely breaking
 compatibility. For example:

 - Keep the current LocalStorage API, but make it give no concurrency
 guarantees whatsoever (only single key/value accesses are guaranteed
 atomic).
 - Add a simple optional transactional model for aware authors who want
 better consistency guarantees.

 This might not meaningfully break existing content, unlike proposals for
 effectively mandatory new API calls. Particularly since IE doesn't have any
 kind of storage mutex.

 Yet another possibility is to keep a per-domain mutex, also offer a
 transactional API, and accept that careless authors may indefinitely lock up
 the UI for all pages in their domain (up to the slow script execution limit)
 if they code poorly, but in exchange won't have unexpected race conditions
 with themselves.


 I'll see if I can't get any numbers on how widely used localStorage is
 today.  Assuming that we can't break compat (which I think is a strong
 possibility) I think Maciej's idea is the best one so far.  That said, I
 think Chris's |window.localStorage == undefined| could work.  Both would be
 confusing to web developers in different ways, but I don't think that's
 avoidable (unless we break compat).



Re: [whatwg] Storage mutex and cookies can lead to browser deadlock

2009-09-03 Thread Michael Nordman
 Shared worker access would be a plus.
Indeed. The lack of access to LocalStorage in 'workers' forces developers to
use the more difficult database api for all storage needs, and to roll their
own change event mechanisms (based on postMessage). That's a bunch of busy
work if a name/value pair schema is all your app really needs.

How hard can it be to find a way to allow LocalStorage access to workers :)


On Thu, Sep 3, 2009 at 6:32 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Fri, Sep 4, 2009 at 10:25 AM, Drew Wilson atwil...@google.com wrote:

 To be clear, I'm not trying to reopen the topic of giving cookie access to
 workers - I'm happy to restrict cookie access to document context (I
 probably shouldn't have brought it up again).


 And to be clear: I don't have strong opinions about cookies
 being accessible in Workers, but I do think localStorage needs to
 be accessible.  That said, my primary goal here is to make the storage mutex
 more reasonable (i.e. something vendors will actually implement).  Shared
 worker access would be a plus, though.



Re: [whatwg] HTML extension for system idle detection.

2009-08-31 Thread Michael Nordman
This would be a nice addition... seems like an event plus a read-only
property on the 'window' object could work.
window.idleState;
window.onidlestatechange = function(e) {...}


On Fri, Aug 28, 2009 at 3:40 PM, Jonas Sicking jo...@sicking.cc wrote:

  On Fri, Aug 28, 2009 at 2:47 PM, David Bennett d...@google.com wrote:
  SUMMARY
 
  There currently is no way to detect the system idle state in the browser.
  This makes it difficult to deal with any sort of chat room or instant
  messaging client inside the browser since the idle will always be
 incorrect.
 
  USE CASE
 
  Any instant messaging client, or any client that requires user presence,
  will use this to keep track of the user's idle state.  Currently the idle
  state of a user inside a browser tends to be incorrect, and this
 leads
  to problems with people being unable to rely on the available status of a
  user.  Without this information it is difficult to do a full featured and
  reliable instant messaging client inside the browser since this makes the
  users' status somewhat unreliable.
 
  Lots of social networking sites and other sites centered around user
  interactions on the net keep track of the users idle state for enabling
  interactions with people that are currently online, this would be
 especially
  useful for interactive online gaming.
 
  A process that would like to do some heavy duty processing, like
 s...@home,
  could use the system idle detection to enable the processing only when
 the
  user is idle and enable it to not interfere with or degrade their normal
  browsing experience.
 
  WORK AROUNDS
 
  The idle state of the user is currently detected by looking at the browser
  window and detecting the last activity time for the window.  This is
  inaccurate since if the user is not looking at the page the state will be
  incorrect and means that the idle time is set to longer than would be
  desirable so there is also a window in which the user is actually idle
 but
  it has not yet been detected.
 
  PROPOSAL
  I propose an api which takes in a timeout for idle, the user agent calls
 the
  callback when the state changes: active-idle, active-away, idle-away,
  idle-active, away-active.
 
  The idle times are all specified in seconds, the handler will be called
 when
  the idle state changes with two arguments and then any user specified
  arguments.  The two arguments are the idle state and the idle time, the
 idle
  time should be the length of time the system is currently idle for and
 the
  state should be one of idle, active or locked (screen saver).  The
 handler
  can be specified as a handler or as a string.
 
  Not explicitly specified, and thus intentionally left to the UA, include:
  * The minimum time the system must be idle before the UA will report it
 [1]
  * Any jitter intentionally added to the idle times reported [1]
  * The granularity of the times reported (e.g. a UA may round them to
  multiples of 15 seconds)
  [NoInterfaceObject, ImplementedOn=Window] interface WindowTimers {
    // timers
    long setSystemIdleCallback(in TimeoutHandler handler, in long idleTimeoutSec);
    long setSystemIdleCallback(in TimeoutHandler handler, in long idleTimeoutSec, arguments...);
    long setSystemIdleCallback(in DOMString code, in long idleTimeoutSec);
    long setSystemIdleCallback(in DOMString code, in long idleTimeoutSec, in DOMString language);
    void clearSystemIdleCallback(in long handle);
    // Returns the current system idle state.
    int systemIdleState();
  };

  [Callback=FunctionOnly, NoInterfaceObject]
  interface TimeoutHandler {
    void handleEvent(idleState, idleTime, [Variadic] in any args);
  };
 
  Where idleState is one of:
idleState : active = 1, idle = 2, away = 3
 
  Away is defined as locked/screen saver enabled or any other system
 mechanism
  that is defined as away.
 
  This is based on the setTimeout api at
 http://www.w3.org/TR/html5/no.html
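A sketch of how an IM client might consume the proposed callback. setSystemIdleCallback never shipped in any UA, so a toy stand-in plays the UA's role here just to make the example runnable:

```javascript
// States from the proposal: active = 1, idle = 2, away = 3.
const IDLE_STATE = { ACTIVE: 1, IDLE: 2, AWAY: 3 };

// Stand-in for the UA: keeps registered callbacks and lets a test inject
// state changes. The real UA would fire these on actual system idle events.
const callbacks = new Map();
let nextHandle = 1;
function setSystemIdleCallback(handler, idleTimeoutSec) {
  const handle = nextHandle++;
  callbacks.set(handle, handler);
  return handle;
}
function clearSystemIdleCallback(handle) { callbacks.delete(handle); }
function simulateStateChange(state, idleTimeSec) {
  for (const h of callbacks.values()) h(state, idleTimeSec);
}

// Application side: map the tri-state onto a presence string for the roster.
let presence = "online";
setSystemIdleCallback((idleState, idleTime) => {
  presence = idleState === IDLE_STATE.ACTIVE ? "online"
           : idleState === IDLE_STATE.IDLE   ? "idle"
           :                                   "away";
}, 300); // ask to be notified after ~5 minutes of inactivity
```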
 
  ALTERNATIVES
 
  This could be made simple an event listener, where the browser itself
 keeps
  track of the length of time that is considered idle and fires an event
 when
  the state changes.
 
  setSystemIdleCallback(in IdleHandler handler)
  The downside to this is that it would mean all components in the browser
  would share the same idle time, which would reduce the ability of
  components to choose the most efficient idle time for their use case.
  Some IM clients might require the user to be there within a very short
  period of time to increase the likelihood of finding a person.  It would
  also not let the components let the user choose their idle time.
 
  The upside to this proposal is it would be a lot simpler.
 
  REFERENCES
 
  [1] There is research showing that it is possible to determine a user's
  keystrokes and which keys they are actually typing by using millisecond
  accuracy idle time information.  This is the reason this spec emphasises
 the
  jitter and granularity aspects of the idle detection.
  

Re: [whatwg] Storage mutex

2009-08-30 Thread Michael Nordman
'Commit' implies that it hasn't been committed yet, which is not true. This
will be misleading to those developers that do understand the locking
semantics (and transactional semantics) and are looking to defeat them. With
the 'commit' naming, we'd be doing a disservice to exactly the sort of
developers this API is intended for, all to avoid exposing the fact that
locks actually are involved to the more casual developer.

Back to a comment long ago in this thread...

 Authors would be confused that there's no aquireLock() API.

Regardless of what you call it, authors are going to be confused by this
API. This is because the locking semantics are otherwise hidden, the lock
acquisition is implicit. This API provides an explicit means of unlocking.
No matter how you slice it... thats what's going on.

Without an understanding of the locking semantics, its going to be
confusing. Hiding those semantics is the source of the confusion to start
with. Not naming this releaseStorageLock only adds to the confusion.



On Sat, Aug 29, 2009 at 6:10 PM, Mike Shaver mike.sha...@gmail.com wrote:

 On Fri, Aug 28, 2009 at 3:36 PM, Jeremy Orlow jor...@chromium.org wrote:
  Can a plugin ever call into a script while a script is running besides
 when
  the script is making a synchronous call to the plugin?  If so, that
 worries
  me since it'd be a way for the script to lose its lock at _any_ time.

 Does the Chromium out-of-process plugin model take steps to prevent
 it?  I had understood that it let such things race freely, but maybe
 that was just at the NPAPI level and there's some other interlocking
 protocol before script can be invoked.

 Mike



Re: [whatwg] Global Script proposal.

2009-08-30 Thread Michael Nordman
These arguments against the proposal are not persuasive. I remain of the
opinion that the GlobalScript proposal has merit.

On Sat, Aug 29, 2009 at 2:40 PM, Ian Hickson i...@hixie.ch wrote:

 On Mon, 17 Aug 2009, Dmitry Titov wrote:
 
  Currently there is no mechanism to directly share DOM, code and data on
  the same ui thread across several pages of the web application.
  Multi-page applications and the sites that navigate from page to page
  would benefit from having access to a shared global script context
  (naming?) with direct synchronous script access and ability to
  manipulate DOM.

 This feature is of minimal use if there are multiple global objects per
 application. For instance, if each instance of GMail results in a separate
 global object, we really haven't solved the problem this is setting out to
 solve. We can already get a hold of the Window objects of subwindows (e.g.
 for popping out a chat window), which effectively provides a global
 object for those cases, so it's only an interesting proposal if we can
 guarantee global objects to more than just those.


 we really haven't solved the problem this is setting out to solve

What problem do you think this is trying to solve? I think you're
misunderstanding the motivation. The motivation is frame/page navigation
performance once an app is up and running.


 However, we can't. Given that all frames in a browsing context have to be
 on the same thread, regardless of domain, then unless we put all the
 browsing contexts on the same thread, we can't guarantee that all frames
 from the same domain across all browsing contexts will be on the same
 thread.


As proposed, there is nothing that forces things into a single thread. Those
contexts that happen to be on the same thread can benefit from the feature.



 But further, we actually wouldn't want to anyway. One of the goals of
 having multiple processes is that if one tab crashes, the others don't. We
 wouldn't want one GMail crash to take down all GMail, Google Calendar,
 Google Chat, Google Reader, Google Search, and Google Voice tabs at once,
 not to mention all the blogs that happened to use Google AdSense.

 Furthermore, consider performance going forward. CPUs have pretty much
 gotten as fast as they're getting -- all further progress is going to be
 in making multithreaded applications that use as many CPUs as possible. We
 should actively moving away from single-threaded designs towards
 multithreaded designs. A shared global script is firmly in the old way of
 doing things, and won't scale going forward.


 moving away from single-threaded designs

I don't think there is any move away from the concept of a 'related set of
browsing contexts' that is required to behave in a single-threaded fashion.
There is a place for that; it is a greatly simplifying model that nobody
wants to see removed.

The proposal is to add GlobalScripts to that set of browsing contexts, and
where convenient, for the UA to have two different related sets share the
same GlobalScript... where convenient is key.

 old way

Ironic... the global script is a decidedly application-centric proposal,
whereas the vast majority of HTML is decidedly page-centric. So I disagree
that this is firmly rooted in the old way of doing things.


  Chat application opens separate window for each conversation. Any opened
  window may be closed and user expectation is that remaining windows
  continue to work fine. Loading essentially whole chat application and
  maintaining data structures (roster) in each window takes a lot of
  resources and cpu.

 Use a shared worker.

 I know that some consider the asynchronous interaction with workers to be
 a blocker problem, but I don't really understand why. We already have to
 have asynchronous communication with the server, which holds the roster
 data structure, and so on. What difference does it make if instead of
 talking to the server, you talk to a worker?


Provided you can talk to the 'shared worker' in the same fashion you can
talk to the server (XHR)... you have a point here when it comes to keeping
application 'state' in memory and quickly retrieving it from any page in the
application. But using a worker does nothing to keep application 'code'
required to run in pages 'hot'... code that does HTML DOM manipulation, for
example, can't run in the worker.




  Finance site could open multiple windows to show information about
  particular stocks. At the same time, each page often includes data-bound
  UI components reflecting real-time market data, breaking news etc. It is
  very natural to have a shared context which can be directly accessed by
  UI on those pages, so only one set of info is maintained.

 Again, use a shared worker. The UI side of things can be quite dumb, with
 data pushed to it from a shared worker.


The key phrase was directly accessed by the UI... not possible with a
shared worker.




  A game may open multiple windows sharing the same model to provide
  

Re: [whatwg] Proposal for local-storage file management

2009-08-28 Thread Michael Nordman
On Fri, Aug 28, 2009 at 11:12 AM, Jens Alfke s...@google.com wrote:


 On Aug 28, 2009, at 10:51 AM, Brady Eidson wrote:

 I would *NOT* be on board with the spec requiring anything about where the
 file goes on the filesystem.  I have never been convinced by the argument
 that users always need to be in charge of where in a filesystem directory
 tree every single file on their computer needs to go.


 I wouldn't want the spec to require that either. At that high level, I
 think it should just state that:
 • Local storage may contain important user data and should only be deleted
 by direct action of the user.
 • The user must be allowed to decide whether code from a particular
 security domain is allowed to store persistent data locally.
 • The user must be able to see how much disk space each domain is using,
 and delete individual apps' storage.

 The first item (which is basically already in the spec) allows web-apps to
 store user-created content safely.
 The second item helps prevent abuse.
 The third item helps the user stay in control of her disk (and provides the
 'direct action of the user' mentioned in item 1.)

 My suggestion involving the Save As dialog is just to show a feasible way
 to implement those requirements on a desktop OS in a way that makes it
 fairly clear to the user what's going on.

 I'm a huge fan of the my mom litmus test.  To my mom, the filesystem is
 scary and confusing.  But using the browser to manage browser-related things
 is familiar and learnable.


 What I like about using the regular Save As dialog box is that almost every
 user has some experience with it, and knows that it means *this app wants
 to put files on my disk*. Naive users tend to just hit Enter and let
 everything be saved to a default location, which is fine. (In OS X, the
 default collapsed state of the Save panel supports that.) Users who are
 savvy with the filesystem know how to navigate to a different directory if
 they want, or at least look at where the file's going to be saved by
 default.


This works well for storing user generated content (save-as, open what i
saved earlier), doesn't work so well for application data that is less user
perceptible.

It also doesn't look like the type of security-nag dialog that people
 instinctively OK without reading.

 —Jens



Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-27 Thread Michael Nordman
And to confound the problem further, UAs don't have meta-data on hand with
which to relate various pieces of local data together and attribute them to
a specific user-identifiable 'application'. Everything is bound to a
security-origin, but that doesn't clearly identify or label an
'application'.
On Thu, Aug 27, 2009 at 8:10 AM, Chris Taylor chris.tay...@figureout.com wrote:

 Adrian Sutton said:
  On 27/08/2009 15:47, Maciej Stachowiak m...@apple.com wrote:
 
  - Cached for convenience - discarding this will affect performance but
 not functionality.
  - Useful for offline use - discarding this will prevent some data from
 being accessed when offline.
  - Critical for offline use - discarding this will prevent the app
 storing this data from working offline at all.
  - Critical user data - discarding this will lead to permanent user data
 loss.
 
  The only catch being that if the web app decides this for itself, a
 malicious script or tracking cookie will be marked as critical user data
 when in fact the user would disagree.
 
  On the plus side, it would mean a browser could default to not allowing
 storage in the critical user data by default and then let users whitelist
 just the sites they want.  This could be through an evil dialog, or just a
 less intrusive indicator somewhere - the website itself would be able to
 detect that it couldn't save and warn the user in whatever way is most
 appropriate.

 This seems to me a better idea than having multiple storage areas
 (SessionStorage, CachedStorage and FileStorage as suggested by Brady).
 However this could lead to even more evil dialogs: Do you want to save this
 data? Is it important? How important is it? The user - and for that matter,
 the app or UA - doesn't necessarily know how critical a piece of data is.

 The user doesn't know because without some form of notification they won't
 know what the lifetime of that data is (and even if they do they will have
 to know how that lifetime impacts on app functionality). The UA doesn't know
 because it doesn't understand the nature of the data without the user
 telling it. The app doesn't necessarily know because it can't see the wider
 implications of saving the data - storage space on the machine etc. Catch
 22.

 So, to what extent do people think that automatic decisions could be made
 by the UA and app regarding the criticality of a particular piece of data?
 The more the saving of data can be automated - with the right level of
 importance attached to it - the better, saving obtrusive and potentially
 confusing dialogs, and (hopefully) saving the right data in the right way.
 Perhaps UAs could notify apps of the storage space available and user
 preferences on the saving of data up front, helping the app and UA to make
 reasonable decisions, only asking for user confirmation where an reasonable
 automatic decision can't be made.

 It's a head-twister, this one.

 Chris


 This message has been scanned for malware by SurfControl plc.
 www.surfcontrol.com



Re: [whatwg] Proposal for local-storage file management

2009-08-27 Thread Michael Nordman
2009/8/27 Jonas Sicking jo...@sicking.cc

 2009/8/27 Ian Fette (イアンフェッティ) ife...@google.com:
  I would much rather have a well thought-out local filesystem proposal,
 than
  continued creep of the existing File and Local Storage proposal. These
  proposals are both designed from the perspective of I want to take some
  existing data and either put it into the cloud or make it available
  offline. They don't really handle the use case of I want to create new
  data and save it to the local filesystem, or I want to modify existing
  data on the filesystem, or I want to maintain a virtual filesystem for
 my
  application, and potentially map in the existing filesystem (e.g. if I'm
  flickr and I want to be able to read the user's My Photos folder, send
  those up, but also make thumbnails that I want to save locally and don't
  care if they get uploaded, maintain an index file with image metadata /
  thumbnails / ... locally, save off some intermediate files, ...
  For this, I would really like to see us take another look
  at http://dev.w3.org/2006/webapi/fileio/fileIO.htm (I don't think this
 spec
  is exactly what we need, but I like the general approach of origins get
 a
  virtual filesystem tucked away that they can use, they can
  fread/fwrite/fseek, and optionally if they want to interact with the host
 FS
  they can request that and then get some sub-set of that (e.g. my
 documents
  or my photos) mapped in.
  -Ian

 If we added the ability to create File objects, which could then be
 stored in localStorage (and WebSQL or whatever will replace it), then
 wouldn't we basically have the functionality you seek?

 What's the difference between sticking a File in the foo/bar/bin
 property on the localStorage object, vs. sicking a File object in the
 foo/bar/bin directory in some FileSystem object?


+1 the call to add a file-system-like API to the storage mix

Enumerating the contents of a 'directory' is one difference. Recursively
deleting a 'directory' is another. Checking creation/modification timestamps
is a third. The LocalStorage big-hashmap model doesn't work well for these
things in its current form. The hierarchical file system abstraction is well
understood and has a long track record of usefulness.
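To make the difference concrete, here is a rough sketch of what emulating those operations on top of a flat key/value map looks like (plain JS, with an ordinary object standing in for the Storage interface; `listDir` and `removeDirRecursive` are hypothetical helpers, not proposed API). Every 'directory' operation degenerates into a scan over all keys:

```javascript
// A flat localStorage-like map, with '/' separators encoded into the keys.
const store = {
  'photos/2009/a.jpg': '...',
  'photos/2009/b.jpg': '...',
  'photos/index.txt': '...',
  'notes.txt': '...',
};

// Enumerate the immediate children of a "directory": O(total keys).
function listDir(dir) {
  const children = new Set();
  for (const key of Object.keys(store)) {
    if (!key.startsWith(dir + '/')) continue;
    children.add(key.slice(dir.length + 1).split('/')[0]);
  }
  return [...children].sort();
}

// Recursively delete a "directory": again a full scan of every key.
function removeDirRecursive(dir) {
  for (const key of Object.keys(store)) {
    if (key.startsWith(dir + '/')) delete store[key];
  }
}

console.log(listDir('photos'));         // → [ '2009', 'index.txt' ]
removeDirRecursive('photos/2009');
console.log(Object.keys(store).sort()); // → [ 'notes.txt', 'photos/index.txt' ]
```

A real hierarchical API would make both operations direct lookups on a directory node instead of whole-store scans, and could track per-directory timestamps as a side effect.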

From a windows programmer point of view
* LocalStorage is akin to the registry.
* FileSystem is akin to the file system.
* Databases is akin to structured storage.
* AppCache is a necessary evil to do anything without a network connection.
* Workers are akin to threads/processes
* GlobalScripts/SharedContext/ApplicationContexts are akin to DLLs.



 Note that the latest HTML5 drafts allow for storing File objects in
 localStorage.

 / Jonas



Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-26 Thread Michael Nordman
 At a minimum the HTML 5 spec should be silent on how user agents implement
local storage policies.
+1 linus

The sort of 'policies' being discussed are feeling 'out-of-scope' for the
HTML5 spec. The practical realities are that eviction needs to be there in
some form w/o an explicit user 'installation' step.

Something that could be appropriately specified is the grain size of local
data eviction...

* Can an individual key,value pair for an origin be removed from local
storage while leaving other local data in place?

* Can an individual Database be deleted for an origin while leaving other
 local data in place?

* Can an individual Manifest be deleted for an origin while leaving other
local data in place?

* Or should an origin's data be subject to all-or-none eviction.

I would prefer to see the spec clarify questions along those lines. That
would be useful.

A mandate from on high that says 'shall store forever and ever' will be
promptly ignored because it's impossible to make that guarantee.



On Wed, Aug 26, 2009 at 4:01 PM, Linus Upson li...@google.com wrote:

 Not convinced. :)

 1. Analogies

 The analogy was made comparing a user agent that purges local storage to an
 OS throwing out files without explicit user action. This is misleading since
 most files arrive on your computer's disk via explicit user action. You copy
 files to your disk by downloading them from the internet, copying from a
 network drive, from a floppy, your camera, etc. You put them on your disk
 and you are responsible for removing them to reclaim space.

 There are apps that create files in hidden places such as:

 C:\Documents and Settings\linus\Local Settings\Application
 Data\Google\Chrome\User Data

 If those apps do not manage their space carefully, users get annoyed. If
 such an app filled the user's disk they would have no idea what consumed the
 space or how to reclaim it. They didn't put the files there. How are they
 supposed to know to remove them? Most users have no idea that Local Settings
 exists (it is hidden), much less how to correctly manage any files they
 find.

 A better analogy would be, What if watching TV caused 0-5MB size files to
 silently be created from time to time in a hidden folder on your computer,
 and when your disk filled up both your TV and computer stopped working?

 Lengthy discussion on cleaning up hidden resources (persistent background
 content) here:
 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2009-July/021421.html

 2. Attack

 Without automatic space management the local storage consumed will grow
 without bound. I'm concerned that even without an intentional DOS attack
 users are going to be unhappy about their shrinking disks and not know what
 to do about it. The problem is worse on phones.

 Things get worse still if a griefer wants to make a point about the
 importance of keeping web browsers logically stateless. Here's how such an
 attack could be carried out:

 2a. Acquire a bunch of unrelated domains from a bunch of registrars using
 stolen credit cards. Skip this step if UAs don't group subdomains under the
 same storage quota. For extra credit pick names that are similar to
 legitimate sites that use local storage.

 2b. Start up some web hosting accounts. Host your attack code here. If they
 aren't free, use stolen credit cards.

 2c. Buy ads from a network that subsyndicates from a network that
 subsyndicates from a major ad network that allows 3rd party ad serving.
 There are lots to choose from. No money? Stolen credit cards. Serve the ads
 from your previously acquired hosting accounts.

 2d. Giggle. The user will be faced with the choice of writing off the
 space, deleting everything including their precious data, or carefully
 picking though tens of thousands of entries to find the few domains that
 hold precious content. User gets really unhappy if the attack managed to
 fill the disk.

 3. Incognito / Private Browsing

 Chrome's Incognito mode creates a temporary, in-memory profile. Local
 storage operations will work, but nothing will be saved after the Incognito
 window is closed. Safari takes a different approach and causes local storage
 operations to fail when in Private Browsing mode. Some sites won't work in
 Private Browsing. I don't recall what Firefox or IE do. Pick your poison.

 Lengthy discussion here:
 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2009-April/019238.html

 4. Cache Eviction Algorithms

 At a minimum the HTML 5 spec should be silent on how user agents implement
 local storage policies. I would prefer the spec to make it clear that local
 storage is a cache, domains can use up to 5MB of space without interrupting
 the user, and that UAs were free to implement varying cache eviction
 algorithms.

 Some browsers may provide interface to allow users to specify precious
 local storage, some may not. Eviction policies for installed extensions may
 be different than those for web pages. Quotas for extensions may be
 

Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-26 Thread Michael Nordman
Ok... I overstated things ;)

What seems inevitable are vista-like prompts to allow something (or prods to
delete something) seemingly unrelated to a user's interaction with a site...
please, oh please, let's avoid making that part of the web platform.
I'm assuming that UA will have out-of-band mechanisms to 'bless' certain
sites which should not be subject to automated eviction. If push comes to
shove, the system could suggest cleaning up one of these 'blessed' sites if
inactivity for an extended period was noticed. But for the overwhelming
number of sites in a user's browsing history, it's a different matter.

If the storage APIs are just available for use, no questions asked, then
making the storage just go away, no questions asked, is symmetrical.

Blessing involves asking questions... making it go away does too.



On Wed, Aug 26, 2009 at 4:35 PM, Peter Kasting pkast...@google.com wrote:

 On Wed, Aug 26, 2009 at 4:31 PM, Michael Nordman micha...@google.com wrote:

 A mandate from on high that says 'shall store forever and ever' will be
 promptly ignored because its impossible to make that guarantee.


 That's not the proposed mandate.  The proposed mandate is thou shalt not
 discard successfully-written data without explicit user action, which seems
 implementable to me.  Note that this doesn't make claims like the hard
 drive will not fail, and it doesn't claim that the UA is required to allow
 the app to write whatever data it wants in the first place.

 PK



Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-26 Thread Michael Nordman
 Maybe the local storage API needs a way to distinguish between cached data
that can be silently thrown away, and important data that can't.

What if instead of the storage APIs providing a way to distinguish things,
UA's provide a way for users to indicate which applications are important,
and UA's provide a way for applications to guide a user towards making that
indication.

Seems like permissioning, blessing, could happen out-of-band of the
existing storage APIs.
On Wed, Aug 26, 2009 at 4:51 PM, Jens Alfke s...@google.com wrote:


 On Aug 26, 2009, at 4:01 PM, Linus Upson wrote:

 The analogy was made comparing a user agent that purges local storage to an
 OS throwing out files without explicit user action. This is misleading since
 most files arrive on your computer's disk via explicit user action. You copy
 files to your disk by downloading them from the internet, copying from a
 network drive, from a floppy, your camera, etc.


 A web app would also be pretty likely to put stuff in local storage as a
 result of explicit user action. The use cases seem pretty similar.

 Also, you're not counting files that you *create* locally. After all,
 files have to come from somewhere :) Those are the most precious since
 they're yours and they may not live anywhere else if you haven't backed them
 up or copied them elsewhere. There's no reason web-apps can't create the
 same kind of content, and it would look very similar to a user: I go to the
 word processor [website], click New Document, type some stuff, and click
 Save.

 Even if the save process involves migrating the local data up to the cloud,
 that transition is not instantaneous: it can take arbitrarily large amounts
 of time if there are network/server problems or the user is offline. During
 that time, *the local storage represents the only copy of the data*. There
 is therefore a serious race condition where, if the browser decides to purge
 local data before the app has uploaded it, the data is gone forever.

 A better analogy would be, What if watching TV caused 0-5MB size files to
 silently be created from time to time in a hidden folder on your computer,
 and when your disk filled up both your TV and computer stopped working?


 This is a cache — that isn't the kind of usage I'm concerned about. Maybe
 the local storage API needs a way to distinguish between cached data that
 can be silently thrown away, and important data that can't. (For example,
 the Mac OS has separate 'Caches' and 'Application Support' subfolders of
 ~/Library/.)

 First, this is what quotas are for. The TV web-app would have a limited
 quota of space to cache stuff.
 Second, the browser should definitely help you delete stuff like this if
 disk space does get low; I'm just saying it shouldn't delete it silently or
 as part of some misleading command like Empty Cache or Delete Cookies.

 At a minimum the HTML 5 spec should be silent on how user agents implement
 local storage policies. I would prefer the spec to make it clear that local
 storage is a cache, domains can use up to 5MB of space without interrupting
 the user, and that UAs were free to implement varying cache eviction
 algorithms.


 That will have the effect of making an interesting category of new
 applications fail, with user data loss, on some browsers. That sounds like a
 really bad idea to me.

 To repeat what I said up above: *Maybe the local storage API needs a way
 to distinguish between cached data that can be silently thrown away, and
 important data that can't.*

 —Jens



Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-26 Thread Michael Nordman
On Wed, Aug 26, 2009 at 5:08 PM, Remco remc...@gmail.com wrote:

 On Thu, Aug 27, 2009 at 1:55 AM, Michael Nordman micha...@google.com
 wrote:
  Ok... I overstated things ;)
  What seems inevitable are vista-like prompts to allow something (or prods
 to
  delete something) seemingly unrelated to a user's interaction with a
 site...
  please, oh please, lets avoid making that part of the web platform.

 As far as I know, cookies work the same way as the proposed local
 storage policy: once a cookie is created, the browser won't delete it
 when space becomes a problem. The site controls the expiration date of
 the cookie, and it can fill up the entire drive with cookies if it
 wants to do so. This is all without user interaction. I don't think
 this has ever been a problem.


a. cookies are comparatively small... size constraints built in
b. UA's actually do evict cookies if need be (hasn't ever been a problem)
c. they have expiration dates, these new pieces of info don't


 --
 Remco



Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-26 Thread Michael Nordman
 My suggestion to have separate 'important' and 'cache' local storage areas
would provide such a mechanism in a standard way. The first time an app
tried to put stuff in the 'important' area, you'd be asked for approval. And
'important' stores wouldn't be deleted
 without your consent.
I think you just described a way to 'bless' things.


On Wed, Aug 26, 2009 at 5:06 PM, Jens Alfke s...@google.com wrote:


 On Aug 26, 2009, at 4:55 PM, Michael Nordman wrote:

  What seems inevitable are vista-like prompts to allow something (or prods
 to delete something) seemingly unrelated to a user's interaction with a
 site... please, oh please, lets avoid making that part of the web platform.


 Doesn't Gears already do this? If I do something like enabling local draft
 storage in WordPress, I get a prompt asking me if I want to allow
 myblog.com to store some local data on my disk, and I click OK because
 after all that's what I asked the site to do.


Yes, but we hate those prompts, don't we?




  I'm assuming that UA will have out-of-band mechanisms to 'bless' certain
 sites which should not be subject to automated eviction.


 If this is out-of-spec and browser-dependent, there won't be a good way for
 an app to request that blessing; it'll be something the user has to know to
 do, otherwise their data can get lost. That seems dangerous. In most systems
 user data loss is just about the worst-case scenario of what could go wrong,
 and you try to prevent it at all costs.


I'd love it if HTML5 took on a standard means of 'blessing' things. Thus far
every time we come even close to the notion of 'installing' something... it
doesn't go well in this forum.



 My suggestion to have separate 'important' and 'cache' local storage areas
 would provide such a mechanism in a standard way. The first time an app
 tried to put stuff in the 'important' area, you'd be asked for approval. And
 'important' stores wouldn't be deleted without your consent.


Interesting... I think you just described a way to 'bless' things by
alluding to 'important' repositories and trying to put something in them.
There are probably other ways to express this too that wouldn't involve
altering the storage APIs... I'd prefer to pursue things in a more modular
fashion... jmho.



 —Jens


Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-26 Thread Michael Nordman
On Wed, Aug 26, 2009 at 5:14 PM, Brady Eidson beid...@apple.com wrote:

 I started writing a detailed rebuttal to Linus's reply, but by the time I
 was finished, many others had already delivered more targeted replies.

 So I'll cut the rebuttal format and make a few specific points.

  - Many apps act as a shoebox for managing specific types of data, and
 users are used to using these apps to manage that data directly.  See
 iTunes, Windows Media Player, iPhoto, and desktop mail clients as examples.
  This trend is growing, not waning.  Browsers are already a shoebox for
 history, bookmarks, and other types of data.
 Claiming that this data is hidden from users who are used to handling
 obscure file management scenarios  and therefore we shouldn't fully respect
 it is trying to fit in with the past, not trying to make the future better.

  - No one is suggesting that UAs not have whatever freedom they want in
 deciding *what* or *how much* to store.  We're only suggesting that once the
 UA has committed to storing it, it *not* be allowed to arbitrarily purge it.

  - One use of LocalStorage is as a cache for data that is transient and
 non-critical in nature, or that will live on a server.  But another,
 just-as-valid use of LocalStorage is for persistent, predictable storage in
 the client UA that will never rely on anything in the cloud.

  - And on that note, if developers don't have faith that data in
 LocalStorage is actually persistent and safe, they won't use it.
 I've given talks on this point 4 times in the last year, and I am stating
 this as a fact, based on real-world feedback from actual, real-world web
 developers:  If LocalStorage is defined in the standard to be a purgable
 cache, developers will continue to use what they're already comfortable
 with, which is Flash's LocalStorage.

 When a developer is willing to instantiate a plug-in just to reliably store
 simple nuggets of data - like user preferences and settings - because they
 don't trust the browser, then I think we've failed in moving the web
 forward.

 I truly hope we can sway the LocalStorage is a cache crowd.  But if we
 can't, then I would have to suggest something crazy - that we add a third
 Storage object.

 (Note that Jens - from Google - has already loosely suggested this)

 So we'd have something like:
 -SessionStorage - That fills the per browsing context role and whose
 optionally transient nature is already well spec'ed
 -CachedStorage - That fills Google's interpretation of the LocalStorage
 role in that it's global, and will probably be around on the disk in the
 future, maybe
 -FileStorage - That fills Apple's interpretation of the LocalStorage role
 in that it's global, and is as sacred as a file on the disk (or a song in
 your media library, or a photo in your photo library, or a bookmark, or...)


In addition to the key/value pair storage APIs, I think we'd need to make
this distinction for databases and appcaches too. This distinction may be
better handled in a way not tied to a particular flavor of storage. Or a
similar distinction could be expressible within the database and appcache
interfaces.
window.openPermanentDatabase() / openPurgeableDatabase()
manifest file syntax games:  PURGEABLE or PERMANENT keyword in there
somewhere.
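A tiny sketch of the behavioral split being proposed here (hypothetical data structures, not any real UA's implementation): an eviction pass that reclaims space only from storage opened as purgeable, and never touches entries opened as permanent.

```javascript
// Hypothetical per-origin storage ledger; 'purgeable' reflects which of the
// proposed APIs (openPurgeableDatabase vs openPermanentDatabase) created it.
const stores = [
  { name: 'thumbnail-cache', bytes: 40, purgeable: true },
  { name: 'user-documents',  bytes: 25, purgeable: false },
  { name: 'tile-cache',      bytes: 35, purgeable: true },
];

// Evict purgeable stores until usage fits under the limit; permanent
// stores are exempt no matter how tight space gets.
function evictUntilUnder(limitBytes) {
  let total = stores.reduce((sum, s) => sum + s.bytes, 0);
  for (const s of stores) {
    if (total <= limitBytes) break;
    if (!s.purgeable) continue;
    total -= s.bytes;
    s.bytes = 0; // evicted
  }
  return total;
}

const remaining = evictUntilUnder(30);
// Only 'user-documents' (permanent) keeps its data.
console.log(remaining); // → 25
```

The point of the sketch: even when the target can't be met, the UA silently deletes only what the app declared disposable; anything beyond that requires asking the user.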



 The names are just suggestions at this point.

 ~Brady





Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-25 Thread Michael Nordman
The statement in section 4.3 doesn't appear to specify any behavior... it's
just an informational statement.

The statement in section 6.1 suggests to prohibit the development of a UI
that mentions local storage as a distinct repository separate from cookies.
This doesn't belong in the spec imho.

I think both of these statements should be dropped from the spec.

Ultimately I think UAs will have to prop up out-of-band permissioning
schemes to make stronger guarantees about how long-lived the 'local data'
that accumulates really is.

On Tue, Aug 25, 2009 at 3:19 PM, Aaron Boodman a...@google.com wrote:

 On Tue, Aug 25, 2009 at 2:44 PM, Jeremy Orlow jor...@chromium.org wrote:
  Ok, well I guess we should go ahead and have this discussion now.  :-)
  Does
  anyone outside of Apple and Google have an opinion on the matter (since I
  think it's pretty clear where we both stand).

 FWIW, I tend to agree more with the Apple argument :). I agree that
 the multiple malicious subdomains thing is unfortunate. Maybe the
 quotas should be per eTLD instead of -- or in addition to --
 per-origin? Malicious developers could then use multiple eTLDs, but at
 that point there is a real cost.

 Extensions are an example of an application that is less cloud-based.
 It would be unfortunate and weird for extension developers to have to
 worry about their storage getting tossed because the UA is running out
 of disk space.

 It seems more like if that happens the UA should direct the user to UI
 to free up some storage. If quotas were enforced at the eTLD level,
 wouldn't this be really rare?

 - a



Re: [whatwg] Global Script proposal

2009-08-24 Thread Michael Nordman



 Dmitry had a later note which combined creation of the context and loading
 of the script.  But I suspect one thing people will want to do, in
 development anyway, is load multiple scripts into a context - like you can
 in workers.  Which would mean we'd still need a function to load a script,
 or the only way to load a script would be by also creating a new context -
 which is much like the serverJS module concept.


I think the plan is to provide an importScript(...) function to
globalScripts as is done for workers...
http://www.whatwg.org/specs/web-workers/current-work/#importing-scripts-and-libraries


Re: [whatwg] Global Script proposal

2009-08-24 Thread Michael Nordman
Agreed, blocking semantics are definitely not OK for GlobalScript; this
context-type calls for some form of async importScript functionality. If
load order matters, apps certainly could defer loading the scripts until
what they depend on has already been loaded. Where load order doesn't
matter, it seems unfortunate to penalize those callers.
Maybe

importScripts(['one.js', 'two.js'], onloadcallback, onerrorcallback);

Immediately initiates in-order loading of the given array of
resources. A second call to importScripts would also begin immediately.
So if order matters, put the scripts in the same importScripts call,
and if it doesn't, make individual calls.
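The ordering rule described above can be modeled in a few lines of plain JS. This is a simulation of the proposed semantics, not a real API (`makeImport` and `didLoad` are made-up names): scripts listed in a single importScripts() call execute strictly in list order, even when the underlying network fetches complete out of order.

```javascript
const executed = [];

// Returns a callback the "network" invokes as each fetch completes.
function makeImport(urls, onAllLoaded) {
  const loaded = new Set();
  let next = 0; // index of the next script allowed to execute
  return function (url) {
    loaded.add(url);
    // Execute scripts strictly in list order, as far as completed loads allow.
    while (next < urls.length && loaded.has(urls[next])) {
      executed.push(urls[next]); // "execute" the script
      next++;
    }
    if (next === urls.length) onAllLoaded();
  };
}

let finished = false;
const didLoad = makeImport(['one.js', 'two.js'], () => { finished = true; });

// The network finishes out of order:
didLoad('two.js'); // nothing executes yet; one.js must run first
didLoad('one.js'); // now both run, in list order

console.log(executed.join(','), finished); // → one.js,two.js true
```

A second, independent importScripts call would get its own `loaded`/`next` state, which is exactly why scripts in separate calls carry no ordering guarantee relative to each other.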


On Mon, Aug 24, 2009 at 3:15 PM, Drew Wilson atwil...@google.com wrote:

 BTW, the WorkerGlobalScope.importScript() API blocks the current thread of
 execution, which is probably not acceptable for code executed from page
 context. So for globalscripts we'll need some way to do async notifications
 when the loading is complete, and report errors. We may also want to have
 some way to automatically enforce ordering (so if I call
 GlobalScript.importScripts() twice in a row, the second script is not
 executed until after the first script is loaded/executed, to deal with
 dependencies between scripts). The alternative is to force applications to
 do their own manual ordering.
 -atw


  On Mon, Aug 24, 2009 at 11:32 AM, Michael Nordman micha...@google.com wrote:



 Dmitry had a later note which combined creation of the context and
 loading of the script.  But I suspect one thing people will want to do, in
 development anyway, is load multiple scripts into a context - like you can
 in workers.  Which would mean we'd still need a function to load a script,
 or the only way to load a script would be by also creating a new context -
 which is much like the serverJS module concept.


 I think the plan is to provide an importScript(...) function to
 globalScripts as is done for workers...

 http://www.whatwg.org/specs/web-workers/current-work/#importing-scripts-and-libraries





Re: [whatwg] Global Script proposal

2009-08-21 Thread Michael Nordman
I'm confused about the manual loading of the script into the context? The
original proposal called for providing a script url when creating/connecting
to an instance of a global-script... in which case each client page
expresses something more like...
globalScript = new GlobalScript(scriptUrl);
globalScript.onload = myFunctionThatGetsCalledWhenTheScriptIsLoaded;
// some time later onload fires; if the script was already loaded, it's
called the next time through the message loop


... the system (not the client pages) keep track of how many client pages
are concurrently accessing the same GlobalScript.
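A sketch of the UA-side bookkeeping implied above (hypothetical helper names, not the proposed page-facing API): pages connecting with the same script URL share one global-script instance, and the system, not the client pages, tracks how many are attached.

```javascript
// One entry per live global-script instance, keyed by script URL.
const instances = new Map();

function connectGlobalScript(scriptUrl) {
  let inst = instances.get(scriptUrl);
  if (!inst) {
    // First connecting page causes the instance to be created.
    inst = { scriptUrl, clients: 0 };
    instances.set(scriptUrl, inst);
  }
  inst.clients++;
  return inst;
}

function disconnectGlobalScript(inst) {
  inst.clients--;
  // Last page gone: the instance can be torn down.
  if (inst.clients === 0) instances.delete(inst.scriptUrl);
}

const p1 = connectGlobalScript('shared.js'); // first page creates it
const p2 = connectGlobalScript('shared.js'); // second page reuses it
console.log(p1 === p2, p1.clients); // → true 2
disconnectGlobalScript(p2);
disconnectGlobalScript(p1);
console.log(instances.size); // → 0
```

This is also why the client pages never see a connection count: the lifetime question lives entirely inside the UA.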

On Fri, Aug 21, 2009 at 6:37 AM, Patrick Mueller pmue...@muellerware.org wrote:

 Patrick Mueller wrote:


 Time to work on some examples.  This would relatively easy to prototype in
 something like Rhino (or my nitro_pie python wrapper for JavaScriptCore), at
 least API wise, so we could see what the user-land code would look like, and
 see it run.


 I developed a simulator for this yesterday.  My big takeaway is that the
 current shape leaves users in a batteries-not-included state.

 Here's the kind of code I had to write to arrange to create a new scope and
 load a single script in it from multiple windows.  Each window would run
 this code in its own context.

 function loadLibrary(scopeName, script, callback) {
var scope = getSharedScope(scopeName);

// script already loaded in the scope
if (scope.__loaded) {
callback(scope, scope.__callback_data);
}

// script not yet done loading
else if (scope.__loading) {
scope.__onLoadedListeners.push(callback);
}

// first one in!  much work to do ...
else {
scope.__loading = true;
scope.__onLoadedListeners = [];

function handler(callback_data) {

scope.__loaded= true;
scope.__loading   = false;
scope.__callback_data = callback_data;

callback(scope, callback_data);
            for (var i = 0; i < scope.__onLoadedListeners.length; i++) {
scope.__onLoadedListeners[i](scope, callback_data);
}
}

scope.runScript(script, {handler: handler});
}

return scope;
 }

 I changed the GlobalScript() constructor to a getSharedScope() function (at
 the top), and the load() function to a runScript() function which takes
 parameters including a callback function.

 I'm of two minds here.

 One is that the SharedScope proposal is really only appropriate for pages
 with lots of JavaScript that could be shared, or special use cases where you
 want (eventually) easy sharing between windows.  As such, a smallish amount
 of JS framework-y-ness like this isn't a show stopper. In fact, as spec'd,
 separating out the scope and script loading will let folks build
 mini-frameworks for themselves fairly easily, customized to their own needs.

 On the other hand, I wonder about the potential benefits of letting more
 people play in the space more easily.  The securable module work in the
 serverjs project is a bit easier to use out of the box.  I'm not sure they have an
 async story though, and async loading of scripts is where this stuff quickly
 gets complicated.

 --
 Patrick Mueller - http://muellerware.org




Re: [whatwg] Global Script proposal.

2009-08-19 Thread Michael Nordman
On Wed, Aug 19, 2009 at 10:48 AM, Dmitry Titov dim...@google.com wrote:

 On Wed, Aug 19, 2009 at 7:37 AM, Patrick Mueller 
  pmue...@muellerware.org wrote:

 var context = new GlobalScript();  // perhaps 'webkitGlobalScript' as
 experimental feature?
 context.onload = function () {...}
 context.onerror = function () {...}
 context.load('foo.js');


 Presumably this script is being loaded asynchronously, hence the
 callbacks.  But since the GlobalScript() constructor returns the global
 object, probably doesn't make sense to attach the handlers directly there,
 but actually on the load request somehow.


 Good point. Alternative variant to construction/connection to this object
 could be something like this:

 window.createGlobalScript('name', 'foo.js', function(context) {..},
 function(status) {..});

 where the name and url pair play the same role as on SharedWorker, and 2
 callbacks: one delivers load status/errors, another comes with resolved
 context when it's ready. One benefit of this is that 'context' does not have
 to expose the 'external' API on it and it is always functional once it's
 returned. The syntax in the proposal was motivated by commonality with other
 APIs, like XHR or window.open - however I agree this doesn't fit 100% either
 of them...


fyi: There's a separate thread questioning the utility of the 'name'
parameter in the SharedWorker API. However that debate ends, it would
probably be good to identify shared-workers / global-scripts in a similar
fashion.


 I guess I'm wondering if there will be some desire to have pages opt-in to
 this support, signaling this through an additional API, or like the
 app-cache's manifest attribute on the html element, something in the DOM;
 doesn't seem like we should drag the DOM in here, just mentioned it because
 I was reminded of the way the opt-in works there.



I don't follow the opt-in desirement? If a page wishes to utilize the
GlobalScript feature, it calls the API provided to do so.


 It is a good idea indeed to have some sort of static opt-in information,
 maybe via a feature of app-cache - which could hint the UA to load
 participating pages (of the same application) into the same process so they
 could share Global Script. It is still impossible (and probably not
 important) to guarantee a singularity of the Global Script (as in case of
 cross-domain iframes) but it'd be a good optimization for a multi-process
 UA.


I understand the desire for early warning that a page *may* want to utilize
a GlobalScript. A 'hint' could be useful to a multi-process UA. It's not
clear to me that this is what Patrick was referring to... Patrick?

And I'll just throw out there that a 'hint' in a custom HTTP header of the
page's response would be the earliest warning we could get without static
config info present in the UA. I know Ian doesn't like relying on
out-of-band HTTP things for the HTML5 spec... just sayin...



 Dmitry




Re: [whatwg] SharedWorkers and the name parameter

2009-08-19 Thread Michael Nordman
On Tue, Aug 18, 2009 at 8:26 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Aug 18, 2009 at 7:53 PM, Drew Wilson atwil...@google.com wrote:
  An alternative would be to make the name parameter optional, where
  omitting the name would create an unnamed worker that is
 identified/shared
  only by its url.
  So pages would only specify the name in cases where they actually want to
  have multiple instances of a shared worker.
  -atw

 This seems like a very good idea. Makes a lot of sense that if two
 shared workers have the same uri, you are probably going to interact
 with it the same way everywhere. Only in less common cases do you need
 to instantiate different workers for the same url, in which case you
 can use the name parameter.


This sounds reasonable to me.
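The rule being agreed on here can be sketched as a tiny model (the registry and key scheme are illustrative assumptions, not spec text): omitting the name makes the URL alone the identifier.

```javascript
// Toy shared-worker registry: when no name is given, workers are
// identified and shared by URL alone; supplying a name creates a
// distinct instance. Key scheme is illustrative, not the spec's.
const registry = new Map();

function getSharedWorker(url, name) {
  const key = name === undefined ? url : url + "\u0000" + name;
  if (!registry.has(key)) {
    registry.set(key, { url, name: name === undefined ? "" : name });
  }
  return registry.get(key);
}
```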



 / Jonas



Re: [whatwg] Global Script proposal.

2009-08-19 Thread Michael Nordman
On Wed, Aug 19, 2009 at 1:34 PM, Patrick Mueller pmue...@muellerware.orgwrote:

 Dmitry Titov wrote:

  The return value from a constructor is the Global Script's global scope
 object. It can be used to directly access functions and variables defined
 in global scope of the Global Script. While this global scope does not
 have
 'window' or 'document' and does not have visual page associated with it,
 the
 local storage, database, timers and XHR are exposed to it, and it can
 build
 up DOM for the connected pages using their 'document' object.


 This turns out to be fairly similar to the serverJS concept of modules.
  I could see how you might want to use it this way, to get script code
 loaded into it's own sandbox, and allow the client of the module to name
 the object as they see fit.

 This would require the use of a name when you create it, so as to allow
 multiple to be created, and to allow other sharers to find those objects.

 This also allows folks to programmatically load JS code without having to
 resort to XHR/eval or adding script nodes to the DOM.  Big plus, because
 those scripts will then be associated with an honest-to-gods name, which
 will show up in debuggers.  And is obviously cleaner than the other
 techniques.


  The list of
 interfaces exposed in the global scope of the Global Script is similar to
 that of Shared Worker, except message-passing interface. It could also
 include events fired when a page connects/disconnects to it and before it
 is
 terminated.


 Can I create additional GlobalScript's from within an existing
 GlobalScript?


That's a good question...

(just having fun... oh the tangled web we weave;)

I'm not sure anyone has thought through the implications of that, but it's an
interesting idea.

* An obvious complication is life-cycle management. If GlobalScriptA
attaches to GlobalScriptB, when no 'pages' are attached to either, they
should be eligible for destruction.

* Also about tangled webs... what if A attaches to B, and B attaches to A?
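To make the life-cycle worry concrete, here's a toy attachment-count model (class and method names are made up for illustration). Naive counting never collects a mutual A/B attachment, which is exactly the tangle being pointed at:

```javascript
// Toy lifetime model: a global script is collectable only when nothing
// is attached to it. All names here are hypothetical, for illustration.
class GlobalScriptModel {
  constructor() { this.attachers = new Set(); }
  attach(client) { this.attachers.add(client); }
  detach(client) { this.attachers.delete(client); }
  get collectable() { return this.attachers.size === 0; }
}
```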


 The load() method is very similar to the worker loadScript() (or whatever)
 function.  Perhaps we should combine them into one API, that allows sync or
 async in a worker, but only allows async in a GlobalScript.  Or at least
 advises against use of sync.



 --
 Patrick Mueller - http://muellerware.org




Re: [whatwg] Global Script proposal.

2009-08-18 Thread Michael Nordman
On Tue, Aug 18, 2009 at 6:07 AM, Mike Wilson mike...@hotmail.com wrote:

 This is an interesting suggestion as it isolates the
 stateful parts from the rest of the previous suggestions.
 I like state.

 Here's how I see how it fits inside the big picture:

 Scope  Serialized state Live state
 -   --
 user agent WS_localStorage, GlobalScript [2]
   SharedWorker [1]
   cookie

 browsing context   WS_sessionStorage- [3]
   window.name

 document   -plain JS objs [4]

 history entry  History.pushStateplain JS objs [4]

 [1] Global state can be coordinated by a SharedWorker but
 it would need to be serialized in postMessage on transfer
 so that's why I've put it in the serialized column.

 [2] As I understand it, the new GlobalScript construct is
 a context that can be shared by all browsing contexts in
 the user agent.

 [3] You also mention that the feature could be usable for
 page-to-page navigation within the same browsing context.
 It hasn't been suggested yet, but it would be possible to
 have a variation of GlobalScript that binds to a specific
 browsing context, analogous to sessionStorage.

 [4] These plain JavaScript objects indeed live throughout
 their Document's life, but this lifetime is usually
 shorter than what the user's perception tells him. Ie,
 when the user returns to a previous page through the Back
 button he regards that as the same document, while
 technically it's usually a new Document, with a freshly
 created document tree and JavaScript context.

 Questions
 -

 Threading:
 This is the unavoidable question ;-) How do you envision
 multiple threads accessing this shared context to be
 coordinated?


Nominally, they don't. In our design for chrome's multi-process
architecture, the global-script would only be shared within a single
'renderer' process (in which all page's, and global-scripts, execute in a
single thread).



 Process boundaries:
 In this past discussion there have been several mentions
 about having to cluster pages inside the same process
 if they are to share data.
 Why is this so, and why can't shared memory or proxied
 objects be an option for browsers doing process
 separation?


A multi-process browser vendor probably *could* proxy all script calls to a
truly global context across all 'renderers'... but that is not required in
the proposal... and is probably even discouraged.

One of the motivations for doing this is webapp performance. Proxying all
script interactions across the page/context boundary works against that.
Also synchronization issues get much more complicated.

Implicit in the proposal is that a global-script is very inexpensive to
interact with.



 Best regards
 Mike Wilson


 

 From: whatwg-boun...@lists.whatwg.org
 [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of Dmitry Titov
 Sent: den 17 augusti 2009 23:38
 To: wha...@whatwg.org
 Subject: [whatwg] Global Script proposal.


Dear whatwg,

The previous discussion about shared page and persistence has sent
 us back 'to the drawing board', to think again what is the essence of the
 feature and what's not important. Talking with web apps developers
 indicates
 that most of the benefits can be achieved without dangerous background
 persistence or the difficulty to specify visual aspects of the invisible
 page.

Here is the new proposal. Your feedback is very appreciated. We are
 thinking about feasibility of doing experimental implementation in
 WebKit/Chrome. Thanks!

-

SUMMARY

Currently there is no mechanism to directly share DOM, code and data
 on the same ui thread across several pages of the web application.
 Multi-page applications and the sites that navigate from page to page would
 benefit from having access to a shared global script context (naming?)
 with direct synchronous script access and ability to manipulate DOM. This
 would compliment Shared Workers
 (http://www.whatwg.org/specs/web-workers/current-work/) by providing a
 shared script-based context which does not run on a separate thread and can
 be used directly from the application's pages.

USE CASES

Chat application opens separate window for each conversation. Any
 opened window may be closed and user expectation is that remaining windows
 continue to work fine. Loading essentially whole chat application and
 maintaining data structures (roster) in each window takes a lot of
 resources
 and cpu.

Finance site could open multiple windows to show information about
 particular stocks. At the same time, each page often includes data-bound UI
 components reflecting real-time market data, breaking news etc. It is very
 natural to have a shared context which can be directly accessed by UI on
 those pages, so only one set of info is maintained.

A game 

Re: [whatwg] SharedWorkers and the name parameter

2009-08-17 Thread Michael Nordman
What purpose does the 'name' serve? It just seems unnecessary to have the
notion of 'named' workers. They need to be identified. The url, including the
fragment part, could serve that purpose just fine without a separate 'name'.
The 'name' is not enough to identify the worker; the url,name pair is the
identifier. Can the 'name' be used independently of the 'url' in any way?

* From within a shared worker context, it is exposed in the global scope.
This could inform the worker about what 'mode' to run in. The location,
including the fragment, is also exposed within a shared worker context; the
fragment part could just as well serve this 'modality' purpose.

* From the outside, it has to be provided as part of the identifier to
create or connect to a shared worker. And there are awkward error
conditions arising when a worker with 'name' already exists for a different
'url'. The awkward error conditions would be eliminated if id == url.

* Is 'name' visible to the web developer any place besides those two?
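The awkward error condition in the second bullet can be sketched as a small model (hypothetical origin-scoped name table; the real algorithm is spec prose): once a name is bound to a URL, reusing it with a different URL fails, whereas with id == url there is nothing to mismatch.

```javascript
// Toy model of origin-scoped worker names: a 'name' is bound to the
// URL it was first created with; reusing it with a different URL is
// the awkward error case. Illustrative sketch, not the spec algorithm.
const nameTable = new Map(); // name -> url

function connectSharedWorker(url, name) {
  if (nameTable.has(name) && nameTable.get(name) !== url) {
    throw new Error("name '" + name + "' already exists for a different url");
  }
  nameTable.set(name, url);
  return { url, name };
}
```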


On Mon, Aug 17, 2009 at 2:44 PM, Mike Shaver mike.sha...@gmail.com wrote:

 On Sat, Aug 15, 2009 at 8:29 PM, Jim Jewettjimjjew...@gmail.com wrote:
  Currently, SharedWorkers accept both a url parameter and a name
  parameter - the purpose is to let pages run multiple SharedWorkers using
 the
  same script resource without having to load separate resources from the
  server.
 
  [ request that name be scoped to the URL, rather than the entire origin,
  because not all parts of example.com can easily co-ordinate.]
 
  Would there be a problem with using URL fragments to distinguish the
 workers?
 
  Instead of:
 new SharedWorker(url.js, name);
 
  Use
 new SharedWorker(url.js#name);
  and if you want a duplicate, call it
 new SharedWorker(url.js#name2);
 
  The normal semantics of fragments should prevent the repeated server
 fetch.

 I don't think that it's very natural for the name to be derived from
 the URL that way.  Ignoring that we're not really identifying a
 fragment, it seems much less self-documenting than a name parameter.
 I would certainly expect, from reading that syntax, for the #part to
 be calling out a sub-script (property or function or some such) rather
 than changing how the SharedWorker referencing it is named!

 Mike



Re: [whatwg] SharedWorkers and the name parameter

2009-08-16 Thread Michael Nordman
Tim Berners-Lee seems to think this could be a valid use of URI references.

http://www.w3.org/DesignIssues/Fragment.html
"The significance of the fragment identifier is a function of the MIME type
of the object"

Are there any existing semantics defined for fragments on text/javascript
objects?

// the semantics we're discussing, the naming of an instance of a loaded script
#name=foo

// hand wavings at other semantics that could make sense, references to
particular items defined in the script
#function/global-function-name
#var/global-var-name
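What such fragment conventions might look like if parsed (entirely hypothetical semantics; nothing of the sort is specified for text/javascript resources):

```javascript
// Hypothetical parser for the fragment conventions hand-waved above:
//   url.js#name=foo          -> { kind: "name", value: "foo" }
//   url.js#function/init     -> { kind: "function", value: "init" }
// Nothing like this is specified for text/javascript resources.
function parseScriptFragment(url) {
  const hash = url.indexOf("#");
  if (hash === -1) return null;
  const frag = url.slice(hash + 1);
  const eq = frag.indexOf("=");
  if (eq !== -1) return { kind: frag.slice(0, eq), value: frag.slice(eq + 1) };
  const slash = frag.indexOf("/");
  if (slash !== -1) return { kind: frag.slice(0, slash), value: frag.slice(slash + 1) };
  return { kind: "name", value: frag }; // bare fragment: treat as a name
}
```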


  I'd have to objections to this
Did you mean to say I'd have no objections to this?

On Sun, Aug 16, 2009 at 8:27 AM, Drew Wilson atwil...@google.com wrote:

 That suggestion has also been floating around in some internal discussions.
 I'd have to objections to this approach either, although I'm not familiar
 enough with URL semantics to know if this is a valid use of URL fragments.
 -atw


 On Sat, Aug 15, 2009 at 5:29 PM, Jim Jewett jimjjew...@gmail.com wrote:

  Currently, SharedWorkers accept both a url parameter and a name
  parameter - the purpose is to let pages run multiple SharedWorkers using
 the
  same script resource without having to load separate resources from the
  server.

  [ request that name be scoped to the URL, rather than the entire origin,
  because not all parts of example.com can easily co-ordinate.]

 Would there be a problem with using URL fragments to distinguish the
 workers?

 Instead of:
new SharedWorker(url.js, name);

 Use
new SharedWorker(url.js#name);
 and if you want a duplicate, call it
new SharedWorker(url.js#name2);

 The normal semantics of fragments should prevent the repeated server
 fetch.

 -jJ





Re: [whatwg] Installed Apps

2009-08-04 Thread Michael Nordman
On Tue, Aug 4, 2009 at 1:08 AM, Mike Wilson mike...@hotmail.com wrote:

 Michael Nordman wrote:
  On Mon, Aug 3, 2009 at 2:37 PM, Mike Wilson wrote:
   Btw, another reflection is that this mail thread is about
   introducing a client/server model in the browser. Some
   mentions of complex code in the background page, f ex building
   the HTML for the visible window, make me think of traditional
   server-centric webapps, but inside the browser. I've made
   the below table to illustrate this, and I mean to point out
   similarities between traditional server-centric and the new
   background_page-centric webapps, and between client-centric
   and visible-centric webapps. Maybe this can inspire some new
   thoughts.
 
  Yes... client/server model in the browser... good observation... and a
  good way to think about the direction I would like to see things go.
  Incidentally, that line of thinking is my motivation for the
  introduction of  script-generated responses in this (HTML5) system
  design.

 Ah, I seem to have missed that. Makes total sense for offline
 as it probably makes it a lot easier to port server-centric
 apps to the cache, and well, there is already a concept of
 serving files (although only static).

 It's not a small thing to add, but if this is taken further I
 think coordination should be done with current inititatives
 using JS in server-side web platform designs, such as:
  ServerJS WSGI https://wiki.mozilla.org/ServerJS/WSGI
(several products pointed to by comparison table)
  Joyent Smart http://becoming.smart.joyent.com/index.html
  + other stuff using EnvJS, etc...

 This would make it easier to move code between server and
 client, simplifying creation of distributed and offline apps.


Exactly! (and thnx for the pointers)


Re: [whatwg] Installed Apps

2009-08-03 Thread Michael Nordman
On Mon, Aug 3, 2009 at 3:05 AM, Mike Wilsonmike...@hotmail.com wrote:
 Drew Wilson wrote:

 SharedWorkers are overloaded to provide a way for
 pages under the same domain to share state, but
 this seems like an orthogonal goal to parallel
 execution and I suspect that we may have ended
 up with a cleaner solution had we decided to
 address the shared state issue via a separate
 mechanism.

 [...]

 3) Sharing between pages requires going through
 the database or shared worker - you can't just
 party on a big shared datastructure.

 Assuming this shared state doesn't require a full
 JavaScript global context, and could do with some
 root object or collection, would it be possible to
 extend Web Storage to support this task?

A big part of what the Gmail team is interested in sharing is quite a
lot of javascript (loaded, parsed, jit'd... ready to call functions).
Along with that, the app can maintain shared state as well, but a big
part of this feature request is sharing the code itself. In the
absence of JS language changes (analogous to DLLs or SOs for JS), I
think this does call for a full JS context.


Re: [whatwg] Installed Apps

2009-08-03 Thread Michael Nordman
On Mon, Aug 3, 2009 at 2:37 PM, Mike Wilsonmike...@hotmail.com wrote:
 Michael Nordman wrote:

 On Mon, Aug 3, 2009 at 3:05 AM, Mike
 Wilsonmike...@hotmail.com wrote:
 
  Assuming this shared state doesn't require a full
  JavaScript global context, and could do with some
  root object or collection, would it be possible to
  extend Web Storage to support this task?

 A big part of what the Gmail team is interested in sharing is quite a
 lot of javascript (loaded, parsed, jit'd... ready to call functions).
 Along with that, the app can maintain shared state as well, but a big
 part of this feature request is sharing the code itself. In the
 absence of JS language changes (analogous to DLLs or SOs for JS), I
 think this does call for a full JS context.

 Right, with your scenario, that makes use of all these new
 features in the same app, that could make sense. Still, it
 would be interesting to look at how each feature could be
 implemented on its own, to potentially lessen the overhead
 for apps that only use a single feature.

 These are the individual features discussed so far, I think
 (did I miss any?):
 - preload JavaScript code
 - share live data between multiple pages
 - background process with direct access to UI
 - background process that outlives browser process
 - background process that auto-starts with operating system
 - access to notification area

 I can easily imagine separate use of the first two items,
 and I think it would be great to address the data handling
 in a coherent way with other state handling. It would be
 nice to have finer-grained control over data handling than
 having to pop a new window to use its global context.

 Btw, another reflection is that this mail thread is about
 introducing a client/server model in the browser. Some
 mentions of complex code in the background page, f ex building
 the HTML for the visible window, make me think of traditional
 server-centric webapps, but inside the browser. I've made
 the below table to illustrate this, and I mean to point out
 similarities between traditional server-centric and the new
 background_page-centric webapps, and between client-centric
 and visible-centric webapps. Maybe this can inspire some new
 thoughts.

Yes... client/server model in the browser... good observation... and a
good way to think about the direction I would like to see things go.
Incidentally, that line of thinking is my motivation for the
introduction of  script-generated responses in this (HTML5) system
design.


             Remote          Background   Visible
             server          page         page
             --          --   ---

 Current webapp designs:

  server-     state
  centric     logic
  (bugzilla)  gen HTML -                  render

  client-     state -                     state
  centric                                  logic
  (gmail)                                  gen/render HTML

 New background page client-centric designs:

  background- state -        state
  centric                     logic
                             gen HTML -  render

  visible-    state -        state -     state
  centric                     (logic)      logic
                                          gen/render HTML

 mvh Mike




[whatwg] Empty html manifest= attribute handling.

2009-07-31 Thread Michael Nordman
Hello,

How empty html manifest= attribute values are handled in
section 9.2.5.5 may want some massaging.

http://www.whatwg.org/specs/web-apps/current-work/multipage/syntax.html#parser-appcache

 If the Document is being loaded as part of navigation of a browsing context, 
 then: if the newly
 created element has a manifest attribute, then resolve the value of that 
 attribute to an absolute
 URL, relative to the newly created element, and if that is successful, run 
 the application cache
 selection algorithm with the resulting absolute URL, with 
 any fragment component removed;
 otherwise, if there is no such attribute or resolving it fails, run 
 the application cache selection
 algorithm with no manifest. The algorithm must be passed the Document object.

This ends up passing the value of the document url into the cache
selection algorithm as the manifest url,
which will initiate an update and all that.

A couple of things that may make sense.

1) equate html manifest= with plain html, i.e. treat an empty attribute as non-existent.

2) don't resolve the url if the attribute value is empty, pass an
empty url to the cache selection algorithm,
and have that algorithm flag such resources as foreign if they were
loaded from an appcache

Both of these prevent the initiation of an update that is doomed to fail.
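Option (1) could look roughly like this (hypothetical helper using the standard URL resolver; the real algorithm is prose in the spec):

```javascript
// Sketch of option (1): an empty manifest="" is treated like a missing
// attribute, so no doomed update is ever started. Returns the manifest
// URL to select, or null for "no manifest". Hypothetical helper.
function selectManifestURL(manifestAttr, documentURL) {
  if (manifestAttr == null || manifestAttr.trim() === "") {
    return null; // run the cache selection algorithm with no manifest
  }
  const resolved = new URL(manifestAttr, documentURL);
  resolved.hash = ""; // spec: drop any fragment component
  return resolved.href;
}
```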


Re: [whatwg] Installed Apps

2009-07-30 Thread Michael Nordman
So use an out-of-band extension mechanism to establish trust and
permissioning for capabilities that fall outside the bounds of the 'regular'
web model.
So let's put that into practice on this particular two-part proposal...

 Our proposed solution has two parts.

This first part (below) falls within the bounds of the 'regular' web model.
Would be nice to discuss this on the merits in absence of the 'scary trust
permissioning' issues.

 The first, which should be
 generally useful, is the ability to have a hidden HTML/JS page running
 in the background that can access the DOM of visible windows. This
 page should be accessible from windows that the user navigates to. We
 call this background Javascript window a shared context or a
 background page. This will enable multiple instances of a web app
 (e.g. tearoff windows in Gmail) to cleanly access the same user state
 no matter which windows are open.

This second part (below) would only be accessible after out-of-band trust
and permissioning mechanisms got tickled.

 Additionally, we'd like this background page to continue to run after
 the user has navigated away from the site, and preferably after the
 user has closed the browser. This will enable us to keep client-side
 data up-to-date on the user's machine. It will also enable us to
 download JS in advance. When the user navigates to a web app, all the
 background page has to do is draw the DOM in the visible window. This
 should significantly speed up app startup. Additionally, when
 something happens that requires notification, the background page can
 launch a visible page with a notification (or use other rich APIs for
 showing notifications).

(aside... begs the question... when will that extension mechanism be
standardized :)


On Thu, Jul 30, 2009 at 1:49 PM, Dmitry Titov dim...@google.com wrote:

 I think I almost get this distinction :-) you are saying that HTML+JS could
 be made more powerful with new APIs, but only if it is done sufficiently far
 from the 'regular web page browsing' experience (or model). Say, if it is a
 browser extension or a prism-like app it's ok - only (or mostly) because
 it is outside from the regular process of web browsing that users have been
 taught is 'reasonably safe'.
 Would this functionality be ok as a part of a browser extension? Lets say
 you can install an extension that is loaded in background (as Chrome already
 can do) and that the pages from the same domain are loaded into same process
 and can exchange DOM with it.

 I'm not trying to argue for the proposal, I am just curious how the
 more-powerful APIs could be added, since this is not the last proposal that
 tries to do this. Looking at use cases and coming up with narrow API that
 does not require permissions is understood, but it's interesting how to go
 beyond this line. Or, as Ojan says, if it's even a goal :-)

 Dmitry



 On Thu, Jul 30, 2009 at 1:23 PM, Drew Wilson atwil...@google.com wrote:

 I think the error here is viewing this as a UX issue - if it were just a
 UX issue, then the responses from people would be along the lines of "Oh,
 this sounds dangerous - make sure you wrap it with the same permissions UI
 that we have for extensions, plugins, and binary downloads."
 The realization I came to this morning is that the core of the objections
 are not primarily about protecting users (although this is one goal), but
 more about protecting the current secure web browsing model (Linus
 explicitly said this yesterday in his email to the list, but I only got it
 when thinking about it today).

 This is why people are OK with supporting this via extensions but not OK
 with supporting this as part of the core HTML APIs even if the UX was
 exactly the same. It's more about keeping the model pristine. Doing crazy
 stuff in extensions and plugins are OK because they are viewed as falling
 outside the model (they are just random scary things that user agents choose
 to do that don't conform to the specification).

 So arguing "but it's the same UI either way!" is not going to convince
 anyone.

 -atw

 On Thu, Jul 30, 2009 at 12:51 PM, Dmitry Titov dim...@google.com wrote:

 It seems the biggest concern in this discussion is around BotNet
 Construction Kit as Maciej succinctly called it, or an ability to run
 full-powered platform API persistently in the background, w/o a visible
 'page' in some window.
 This concern is clear. But what could be a direction to the solution?
 Assuming one of the goals for html5 is reducing a gap in capabilities
 between web apps and native apps, how do we move forward with more powerful
 APIs?

 So far, multiple ways exist to gain access to the user's machine - nearly
 all of them based on some dialog that asks user to make impossible decision
 - as bad as it is, binary downloads, plugins, browser extensions, ActiveX
 controls or Gears modules are all but a dialog away from the user's
 computer. Basically, if malicious dudes are cool with writing native apps -
 they can have 

Re: [whatwg] AppCache can't serve different contents for different users at the same URL

2009-07-29 Thread Michael Nordman
'Named' cache groups under a single manifest url is an interesting idea.
Presumably the webapp would be generating the 'name' in the manifest file
based on a cookie value.
Another possibility is something along the lines of what is proposed in the
DataCache draft: the manifest indicates a cookie name, and the value of that
cookie determines the 'name' of the subgroup. And the value of that cookie
also determines which subgroup is enabled at any given time.

On Tue, Jul 28, 2009 at 9:04 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 28 Jul 2009, Adam de Boor wrote:
 
  the difficulty with a named-section option is that the manifest
 generation
  for an application would have to know which users use a particular
 machine,
  which is pretty much a non-starter.

 I meant that each named appcache would have a separate manifest, and the
 manifest would just say the name of the one it was setting up.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] Installed Apps

2009-07-29 Thread Michael Nordman
On Wed, Jul 29, 2009 at 1:53 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Wed, Jul 29, 2009 at 11:43 AM, Michael Davidson m...@google.com wrote:

 On Wed, Jul 29, 2009 at 11:38 AM, Tab Atkins Jr.jackalm...@gmail.com
 wrote:
  On Wed, Jul 29, 2009 at 1:34 PM, Michael Davidsonm...@google.com
 wrote:
  With a hidden page that's accessible to all Google Finance visible
  pages, they could share a connection to the server. Even if the hidden
  page is closed when the last Google Finance page is closed, this is a
  better situation than we currently have.
 
  Can't SharedWorkers do that right now?  If all you're doing is polling
  for data, it seems like you don't need all the extra stuff that these
  various proposals are offering.

 It's my contention that frequently large web apps are doing more than
 just polling for data. They're trying to maintain complex data
 structures that they pass up to the UI. The programming model of
 SharedWorkers makes this difficult. Take the chat client in Gmail, for
 example. It's much more complex than passing stock quotes from a
 worker to the UI.


 I understand that this isn't helpful for existing web apps like Gmail, but
 I think a MVC type model will work pretty nicely with shared workers.  It's
 just the transition phase that's going to be painful.

 This idea of a hidden page just seems like a big hack to support today's
 applications.  If it were adapted into the spec, I think 5 years from now
 we'd be very sorry that it was.


I disagree. The proposal plays to the strengths of the web platform.

HTML parsing, layout, and rendering is very optimized, much more so than
programmatic HTML DOM manipulation. The incorporation of stateful script
contexts within each page considerably slows page loading. As navigations
occur, that statefulness has to be reconstructed from scratch. In addition
to creating and populating a new script context with the script itself,
this often involves reconstructing the data the script operates on: so,
additional server roundtrips and deserializing into JS objects. With the
introduction of local storage and databases, augment the server roundtrips
with the reading of those local repositories and deserializing into JS
structures.

The sharedContext / backgroundPage provides a means to cut out completely
the need to reconstruct that JS world on each page navigation.

I really like the proposal... it's elegant... (and more generally applicable,
and easier to use than fully async workers).

<hyperbole>
I think 5 years from now, people will be wondering why it was ever done
otherwise.
</hyperbole>




 The other APIs we've been talking about that satisfy the requirements that
 were originally broken out by Drew seem like much more sustainable
 solutions.

 J



Re: [whatwg] Installed Apps

2009-07-28 Thread Michael Nordman
On Tue, Jul 28, 2009 at 2:12 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Jul 28, 2009 at 1:38 AM, Maciej Stachowiakm...@apple.com wrote:
 
  On Jul 27, 2009, at 10:51 PM, David Levin wrote:
 
  It sounds like most of the concerns are about the 2nd part of this
 proposal:
  allowing a background page to continue running after the visible page has
  been closed.
  However, the first part sounds like it alone would be useful to web
  applications like GMail:
 
  The first, which should begenerally useful, is the ability to have a
 hidden
  HTML/JS page running
  in the background that can access the DOM of visible windows. This
  page should be accessible from windows that the user navigates to. We
  call this background Javascript window a shared context or a
  background page. This will enable multiple instances of a web app
  (e.g. tearoff windows in Gmail) to cleanly access the same user state
  no matter which windows are open.
 
  + restrict things to the same security origin.
  It sounds similar in concept to a share worker except that it runs in the
  main thread and is more concerned with dom manipulation/state while
 workers
  have typically been thought of as allowing background processing.
  It seems that the lifetime of this could be scoped, so that it dies when
 it
  isn't referenced (in a similar way to how shared worker lifetime is
 scoped).
 
  This idea actually sounds reasonably ok, and I think I once proposed
  something like this as an alternative to shared workers as the way for
  multiple app instances to share state and computation.
  It's really the bit about invisibly continuing to run once all related
 web
  pages are closed that I would worry about the security issues.

 The only concern I see with this is that it permanently forces all
 windows from the same domain to run in the same process. As things
 stand today, if the user opens two tabs (or windows) and navigates to
 the two different pages on www.example.com, then a browser could if it
 so wished use separate processes to run those two pages. If we enabled
 this API the two pages would have to run in the same process, even if
 neither page actually used this new API.

 / Jonas


There are conflicting requirements along these lines that would need to be
resolved...

1) A nested iframe needs to run in the same process as its containing page.
2) A shared context needs to run in the same process as its client pages.

... but what if a nested iframe document in processA connects to a
sharedContext that has already been loaded into processB? Something's gotta
give.

What if a sharedContext isn't guaranteed to be a singleton in the browser? A
browser can provide best effort at co-locating pages and sharedContexts, but
it can't guarantee that, and the spec respects that.

The lesser guarantee is that all directly scriptable pages (those from a
given set of related browsing contexts) WILL share the same sharedContext
(should they refer to it).


Re: [whatwg] Issues with Web Sockets API

2009-07-28 Thread Michael Nordman
Michael Nordman wrote:
 
  The proposed websocket interface is too dumbed down. The caller doesn't
  know what the impl is doing, and the impl doesn't know what the caller
  is trying to do. As a consequence, there is no reasonable action that
  either can take when buffers start overflowing. Typically, the network
  layer provides sufficient status info to its caller, allowing the
  higher level code to do something reasonable in light of how the network
  layer is performing. That kind of status info is simply missing from the
  websocket interface. I think it's possible to add to the interface
  features that would facilitate more demanding use cases without
  complicating the simple use cases. I think that would be an excellent
  goal for this API.

 Do the minimal new additions address this to your satisfaction?


The hints about future additions help, in particular the support for 'streams'.





 On Mon, 27 Jul 2009, Drew Wilson wrote:
 
  I would suggest that the solution to this situation is an appropriate
  application-level protocol (i.e. acks) to allow the application to have
  no more than (say) 1MB of data outstanding.
 
  I'm just afraid that we're burdening the API to handle degenerative
  cases that the vast majority of users won't encounter. Specifying in the
  API that any arbitrary send() invocation could throw some kind of retry
  exception or return some kind of error code is really really
  cumbersome.

 I agree that we aren't talking about a particularly common case.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
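
The application-level ack protocol Drew describes can be sketched without any
new API surface: the sender caps its outstanding (unacknowledged) bytes at,
say, 1 MB, queueing anything beyond that until the peer acks. Everything
below, class and method names included, is illustrative rather than part of
any proposal:

```javascript
// Sketch of an application-level ack window over a raw send function:
// at most `limit` bytes may be in flight; excess messages wait in a
// local queue until the peer acknowledges earlier data.
class AckWindowSender {
  constructor(rawSend, limit = 1024 * 1024) {
    this.rawSend = rawSend;   // e.g. data => webSocket.send(data)
    this.limit = limit;
    this.outstanding = 0;     // bytes sent but not yet acked
    this.queue = [];
  }

  send(message) {
    this.queue.push(message);
    this.flush();
  }

  // Called when the peer reports it has consumed `bytes` of data.
  ack(bytes) {
    this.outstanding -= bytes;
    this.flush();
  }

  flush() {
    while (this.queue.length &&
           this.outstanding + this.queue[0].length <= this.limit) {
      const message = this.queue.shift();
      this.outstanding += message.length;
      this.rawSend(message);
    }
  }
}

const sent = [];
const sender = new AckWindowSender(m => sent.push(m), 10); // tiny window for demo
sender.send('aaaaaa'); // 6 bytes: goes out immediately
sender.send('bbbbbb'); // would exceed the 10-byte window: queued locally
console.log(sent.length); // 1
sender.ack(6);            // peer consumed the first message
console.log(sent.length); // 2
```

The point of the sketch is that the cap lives in the application, so the page
decides what "too much outstanding data" means for its own use case.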



Re: [whatwg] Issues with Web Sockets API

2009-07-27 Thread Michael Nordman
 Obviously we need more web platform capabilities to make such use cases a
reality, but they are foreseeable and we should deal with them in some
reasonable way.
Couldn't agree more.

The proposed websocket interface is too dumbed down. The caller doesn't know
what the impl is doing, and the impl doesn't know what the caller is trying
to do. As a consequence, there is no reasonable action that either can
take when buffers start overflowing. Typically, the network layer provides
sufficient status info to its caller, allowing the higher level code to
do something reasonable in light of how the network layer is performing.
That kind of status info is simply missing from the websocket interface. I
think it's possible to add to the interface features that would facilitate
more demanding use cases without complicating the simple use cases. I think
that would be an excellent goal for this API.

On Mon, Jul 27, 2009 at 5:30 PM, Maciej Stachowiak m...@apple.com wrote:


 On Jul 27, 2009, at 2:44 PM, Drew Wilson wrote:



 There's another option besides blocking, raising an exception, and
 dropping data: unlimited buffering in user space. So I'm saying we should
 not put any limits on the amount of user-space buffering we're willing to
 do, any more than we put any limits on the amount of other types of
 user-space memory allocation a page can perform.


 I think even unlimited buffering needs to be combined with at least a hint
 to the WebSocket client to back off the send rate, because it's possible to
 send so much data that it exceeds the available address space, for example
 when uploading a very large file piece by piece, or when sending a live
 media stream that requires more bandwidth than the connection can deliver.
 In the first case, it is possible, though highly undesirable, to spool the
 data to be sent to disk; in the latter case, doing that would just
 inevitably fill the disk. Obviously we need more web platform capabilities
 to make such use cases a reality, but they are foreseeable and we should
 deal with them in some reasonable way.

 Regards,
 Maciej
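
One status signal of the kind being asked for did end up in the WebSocket
API: the read-only bufferedAmount attribute, which reports bytes queued by
send() but not yet transmitted. A caller can use it to back off before the
user-space buffer grows without bound. The `ws` object below is a stand-in,
since the sketch only illustrates the polling pattern and the high-water-mark
value is arbitrary:

```javascript
// Throttling against a bufferedAmount-style status signal: stop
// producing when too many bytes are queued in user space. The `ws`
// object mocks the two members a real WebSocket exposes for this
// purpose: send() and bufferedAmount.
const ws = {
  bufferedAmount: 0,
  send(data) { this.bufferedAmount += data.length; },
  // Stand-in for the network layer draining the buffer over time.
  drain(bytes) { this.bufferedAmount = Math.max(0, this.bufferedAmount - bytes); },
};

const HIGH_WATER_MARK = 16; // arbitrary demo threshold, in bytes

// Returns true if the chunk was handed to the socket, false if the
// caller should back off and retry after the buffer drains.
function trySend(socket, chunk) {
  if (socket.bufferedAmount + chunk.length > HIGH_WATER_MARK) return false;
  socket.send(chunk);
  return true;
}

console.log(trySend(ws, 'aaaaaaaaaa')); // true  (10 bytes queued)
console.log(trySend(ws, 'bbbbbbbbbb')); // false (would exceed the mark)
ws.drain(10);
console.log(trySend(ws, 'bbbbbbbbbb')); // true
```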



