Re: [selectors-api] comments on Selectors API Level 2

2010-01-21 Thread Tab Atkins Jr.
On Thu, Jan 21, 2010 at 10:11 AM, Bert Bos b...@w3.org wrote:
 2) Drop queryScopedSelector() and queryScopedSelectorAll(). It is
 trivially easy to replace a call to queryScopedSelector() by a call to
 querySelector(). All you have to do is replace

    e.queryScopedSelector(x)
 by
    e.ownerDocument.querySelector(x)

That's completely incorrect.  A querySelector call on the document
matches elements anywhere in the document.  A
queryScopedSelector call on an element only matches elements
within the target element's subtree.

 where e is an Element. And for documents d the functions are even
 exactly the same: d.queryScopedSelector(x) == d.querySelector(x) for
 all documents d and selectors x.

That doesn't solve the problem; it just says "We don't need to solve
this problem."  A scoped call on the document root is indeed the same
as a non-scoped selector, but that doesn't tell us anything about the
actual scoped behavior.  It's a degenerate case.

 Somebody somewhere else was wondering about the selector ':root + *'. I
 would say it's a valid selector that just happens to never match
 anything, because a tree by definition has only one root. The same
 holds for selectors like '#foo #foo' (valid, but guaranteed to return
 nothing, because IDs are by definition unique), '*:first-child:even'
 (the first child is obviously odd, not even), and ':root:first-child'
 (the root is not a child of anything).

In a scoped selector, :scope + * *should* return something, if the
scoping element has a sibling.  It's the behavior of jQuery's find()
method (try elem.find("+ *")), and it's what authors are used to.  The
entire *point* of scoped selectors was to fix the disconnect between
querySelector and jQuery, basically.  Adding yet another selector
function that doesn't act like what current widely-adopted libraries
need or what authors expect doesn't help anyone.

I don't like the requirement of :scope either, but Lachy took Anne's
dislike of starting the string with a bare combinator to be the WG's
position as a whole.  I think matching jQuery here is *very* important
both for author expectations and for practical benefit, and so having
e.queryScopedSelectorAll("+ *") do the exact same thing as
$(e).find("+ *") is essential.
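
The jQuery parity being argued for can be sketched as a small string transform; the helper below is hypothetical (not part of any proposed API), showing how a bare-combinator selector string could be normalized with `:scope` before being handed to a scoped query:

```javascript
// Hypothetical helper: rewrite a jQuery-style relative selector so it
// can be passed to a scoped query. A string starting with a bare
// combinator (>, +, ~) gets ':scope' prefixed, so '+ *' becomes
// ':scope + *', i.e. "the scoping element's next sibling".
function toScopedSelector(selector) {
  return /^\s*[>+~]/.test(selector) ? ':scope ' + selector.trim() : selector;
}

// In a browser, elem.querySelectorAll(toScopedSelector('+ *')) would
// then select the same elements as jQuery's $(elem).find('+ *').
console.log(toScopedSelector('+ *'));   // ':scope + *'
console.log(toScopedSelector('.item')); // '.item' (unchanged)
```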

~TJ



Re: Seeking pre-LCWD comments for View Modes Media Feature; deadline March 17

2010-03-10 Thread Tab Atkins Jr.
On Wed, Mar 10, 2010 at 11:09 AM, Robin Berjon ro...@berjon.com wrote:
 3. why maxiMIZED and only mini?

 Hmmm, because they were added at different times? :) It's a good point, is 
 there a preference between -mized or not?

Since the alternative is to use "maxi" for consistency... I think I
prefer "minimized". ^_^  That maps better to the traditional verbiage
used for that window state anyway.

~TJ



Re: [FileAPI] Blob.URN?

2010-03-31 Thread Tab Atkins Jr.
On Wed, Mar 31, 2010 at 1:55 AM, Robin Berjon ro...@berjon.com wrote:
 On Mar 31, 2010, at 01:56 , Darin Fisher wrote:
 The only way to get a FileWriter at the moment is from <input
 type="saveas">.  What is desired is a way to simulate the load of a resource
 with Content-Disposition: attachment that would trigger the browser's 
 download manager.

 I don't think that <input type="saveas"> is a good solution for this; for one,
 it falls back to a text input control, which is less than ideal. I think that
 the File Writer should trigger downloads on an API call since that doesn't 
 introduce security issues that aren't already there. I'll make a proposal for 
 that.

Better fallback could be achieved with <button type="saveas"></button>.

~TJ



Re: [FileAPI] Blob.URN?

2010-04-03 Thread Tab Atkins Jr.
On Fri, Apr 2, 2010 at 10:43 PM, Jonas Sicking jo...@sicking.cc wrote:
 You still can't promise anything about file size. And with the current
 dialogs the UA is free to improve on the current UI, for example by
 doing content sniffing to make sure that the correct file type is
 indeed being saved. Or run virus scanners on it etc.

You can't always promise a filesize in normal downloads either, if you
don't send the Content-Length header.  So any UI decisions that have to
be made on how to display an unknown-size file have presumably already
been made.

 I've so far thought of the main use case for multiple writing
 something like a document editor, where the user chooses a file
 location once, and then the web application can continuously save
 every 5 minutes or so to that location. As well as any time the user
 explicitly presses some 'save' button. It seems like this work flow
 doesn't go well with the UI you are describing.

Which UI, the one that acts like a continuous download?

 What use case did you have in mind for being able to continuously
 write to a file?

One that we thought of today was the case of playing a game over the
web (perhaps using WebSockets?) and wanting to save a replay.  You
don't want to have to build up a Blob until the very end, and then
deal with all the possible failure conditions.  If the user's power
goes out halfway, frex, it would be nice to at least have a partial
file downloaded.

Another is to save to disk a streaming video resource that you are
viewing part of at the moment.  Same concerns apply - you'd like a
usable file with as much data as possible in it despite any failure
modes.

 Additionally, the problems I described above, of things like file size
 or data not being known at the time the user chooses where to save the
 file. And how do you describe to the user that he/she is granting more
 than a one-shot write?

 I'm not saying that I don't think we need continuous writing. I'm
 saying that I think we additionally need the existing save-as dialog,
 but for locally created data, without going through Content-Disposition
 hacks. And I'm also saying that I think creating good UI for continuous
 writing will be hard.

I think the UI currently used for an in-progress download would be
sufficient as a first pass.  Later we could come up with a slightly
better indicator for a continuous download.

~TJ



Re: UMP / CORS: Implementor Interest

2010-04-22 Thread Tab Atkins Jr.
On Wed, Apr 21, 2010 at 6:45 PM, Maciej Stachowiak m...@apple.com wrote:
 XML is also a misnomer. And Http is confusing as well, since these
 requests can (and should) generally be carried over https. At least we agree
 on Request ;).

 I agree, but (a) that ship has sailed; and (b) dropping those from the name
 only in the anonymous/uniform/whatever version would probably be more
 confusing than helpful, at least if the API ends up being roughly similar.
 XMLHttpRequest has brand value, and it's worth building on author awareness
 even if the X and the H are more historical than meaningful at this point.

Count me as one web developer who won't miss the annoying and
inaccurate XH from any future Rs.  I think that dropping them now
won't be very confusing (the Request part has always been the
meaningful one for me), and it then opens the door for future types of
Requests to just share in the Request name, not the full baggage-laden
XHR name.

In other words, assuming confusion from a sample of one doesn't seem
too valid.  Establishing precedent with two, though, makes it
significantly more difficult to ever do anything better in the future.
We're stuck with XHR as the name for vanilla XHR stuff.  We don't
need to perpetuate its inaccuracies and capitalization inconsistencies
into future types of Requests.

~TJ



Re: [WebNotifications] omnibus feedback reply, new draft

2010-04-22 Thread Tab Atkins Jr.
On Thu, Apr 22, 2010 at 1:05 PM, Drew Wilson atwil...@google.com wrote:
 On Thu, Apr 22, 2010 at 12:28 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:

 I'm happy about this, as it also opens up a number of other use-cases
 for free.  For example, a webapp may want to use notifications for
 events that are bursty and can sometimes come in fast enough that it
 would be annoying to pop a notification for each one.  Integrating the
 time into the replaceId, perhaps truncated to every 10 seconds or
 every minute as appropriate for the app, provides virtually free
 assurance that the user won't accidentally be swamped with alerts, a
 task which otherwise requires manual tracking of the last update
 time.

 ~TJ


 As another data point from an app developer, I am in the process of adding
 notification functionality to gmail (staggered notifications of new emails,
 which may arrive in batches), using the current experimental
 webkitNotifications API in Chrome.

 I'm manually mimicking the replaceId functionality (by tracking the current
 notification and closing it before opening a new mail notification). If the
 user gets several emails at once, the behavior I'm trying to achieve is:

 1) Only one email notification will ever be displayed on screen at once

 2) Each individual email notification is displayed for some reasonable
 period of time (a few seconds) so the user has a chance to process it before
 it is replaced by the next one

 3) Manually dismissing a notification displays the next one in the queue (or
 maybe clears the queue - I haven't decided yet)

 I'm writing my own queueing logic in gmail to achieve this as functionality
 like this isn't provided by the API (and I don't think it should be since
 it's easy enough for applications to do this themselves).

 It sounds like Tab is suggesting different behavior than my #2 above - by
 integrating a granular timestamp into the replace ID, an application that
 got a flood of events would end up rapidly replacing notifications, leaving
 the most recent one visible (this is probably desirable behavior for some
 applications, where subsequent notifications are intended to obsolete
 previous ones).

Indeed, I'm suggesting what you say, replacing rather than queuing,
and I think it's what John is suggesting as well as the behavior of
notifications with identical replaceIds.

Queueing is interesting and sounds useful.  This might be useful
enough to be the default behavior.  Alternately, it could just be the
behavior of the actual notification system.
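
The time-bucketing idea from earlier in the thread reduces to a few lines of string logic; the tag name, separator, and bucket size below are made up for illustration:

```javascript
// Build a replaceId that changes at most once per time bucket (e.g.
// every 10 seconds). A burst of notifications inside one bucket shares
// an id and therefore replaces itself instead of stacking up; the next
// bucket yields a fresh id and a fresh notification.
function bucketedReplaceId(tag, timestampMs, bucketMs) {
  return tag + ':' + Math.floor(timestampMs / bucketMs);
}

console.log(bucketedReplaceId('newmail', 12345, 10000)); // 'newmail:1'
console.log(bucketedReplaceId('newmail', 19999, 10000)); // 'newmail:1' (same bucket, replaces)
console.log(bucketedReplaceId('newmail', 20000, 10000)); // 'newmail:2' (new bucket, new alert)
```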

~TJ



Re: [IndexedDB] Granting storage quotas

2010-04-23 Thread Tab Atkins Jr.
On Fri, Apr 23, 2010 at 7:39 AM, Nikunj Mehta nik...@o-micron.com wrote:
 Could we create an additional optional parameter for an open request with
 the type of permanence required? Or is it not a good idea?

I don't think we can expose the type of permanence to the user in any
sort of sane way.  The fact that we have to bug the user at all for
permanent storage is already bad, but necessary.

As long as everyone thinks it's fine to expose an identical user
interface for all types of permanent storage, then I'm cool with it.
(I don't see any problem with doing so, given the types of permanence
listed earlier.)

~TJ



Re: [IndexedDB] Granting storage quotas

2010-04-28 Thread Tab Atkins Jr.
On Wed, Apr 28, 2010 at 4:32 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Apr 28, 2010 at 4:03 PM, Michael Nordman micha...@google.com wrote:
 We have in mind that the incentives for developers to not always utilize the
 most permanent storage option are...
 1) Non-permanent storage is accessible w/o explicit user opt-in, so less
 annoying UI.

 This is interesting. I think that we'll want to allow sites some
 amount of permanent storage without explicit user opt-in. Though we
 could allow much more for the caching storage. This sounds like a
 very good idea.

The problems with allowing permanent no-user-ask storage have already
been hashed out.  It's impossible to do right - either you're too
lenient (amount per subdomain) and malicious sites can fill up your
permanent storage trivially by using iframes to subdomains, or you're
too strict (amount per domain, or disallowing storage in nested
browsing contexts) which kills a lot of legitimate and good use-cases.

Tying permanence to user opt-in, and specifically to proactive user
opt-in rather than a script-triggered dialog (modal or otherwise),
appears to be the only way to chart the appropriate course between
power and protection.


 2) Less of a burden to 'manage' the data if the user-agent will clean it up
 for you.

 I'm not convinced this will be much of incentive. I think few sites
 are as interested in cleaning up the users hard drive as the user is.
 I can see many sites dropping data into the permanent storage and then
 caring very little when that is cleaned up. I'd imagine many ad
 networks would love it if it was never cleaned up.

 The incentive I had in mind was that if the UA decides it needs to
 purge data for a specific site, for example due to hitting some global
 quota, then we'd first clear out the data with the lowest level of
 permanence first. So it wouldn't make a difference if all data for a
 site is permanent, or if all of it is semi-permanent, in both cases
 it'd all get nuked.

 However if the site has some permanent and some semi-permanent, then
 we'd clear out the semi-permanent first and remeasure if we're up
 against the global quota still.

 But I also really like the idea of having less UI for semi-permanent.

Requiring a user to explicitly hit an <input> or similar to allocate
permanent storage should be enough to make sites use volatile storage
unless they really need permanence.

Expiring permanent storage automatically, though, means that I can
never be sure that my mail archives are actually secure on my
computer, even though I told Gmail to save it to my hard drive.  I can
never be sure that my draft emails are really saved, and won't
disappear.

I have no problem with a browser simply not *exposing* permanent
storage, and not allowing authors to request it.  But if any major
browser exposes a permanent storage that isn't actually permanent,
then we app developers no longer have a permanent storage to rely on.
It's simply gone, unless we put up a Best viewed in anything but
Firefox sticker.

On mobile platforms where storage is at a premium and it's much more
difficult for users to manipulate the filesystem, just don't allow
permanent storage.  But don't lie to the application and say you
support something that you explicitly don't.  Lying browsers cause
horrific confusion and bugs.

~TJ



Re: [IndexedDB] Granting storage quotas

2010-04-29 Thread Tab Atkins Jr.
On Thu, Apr 29, 2010 at 10:57 AM, Jonas Sicking jo...@sicking.cc wrote:
 I think we were operating under the assumption that we're going to
 avoid involving the user until necessary. So for example letting the
 site store a few MB of data without the user getting involved, and
 only once enough storage is wanted by the site, ask the user if this
 is ok.

When you say per site do you mean per subdomain, or per domain?  The
former is too permissive, the latter is too restrictive.


On Thu, Apr 29, 2010 at 11:56 AM, Michael Nordman micha...@google.com wrote:
 Sounds like we agree on there being a distinction between two levels of
 persistence with one being more permanent than the other. Great, provided
 we have that agreement we can craft interfaces that allow callers to make
 the distinction!

Not quite.  If we agree that there are multiple levels, but one
browser interprets that as being varying levels of temporary storage,
but another interprets that as temporary and permanent storage, then
authors are still unhappy.  :/

~TJ



Re: Seeking implementation data for XBL2

2010-05-05 Thread Tab Atkins Jr.
On Wed, May 5, 2010 at 5:10 AM, Arthur Barstow art.bars...@nokia.com wrote:
 Hi André, All,

 Below, André asks for XBL2 implementation status. I think the last time this
 was discussed on public-webapps was June 2009 [1] (and a somewhat related
 thread in March 2010 on www-tag [2]).

 All - if you have some status information re XBL2 implementations, please do
 share it with us.

 -Art Barstow

 [1] http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/0713.html
 [2] http://lists.w3.org/Archives/Public/www-tag/2009Nov/0036.html

Last week, Jonas Sicking said he'd be starting his implementation this
week.  (He's not on IRC at the moment, so I can't confirm whether he's
actually started yet.)

~TJ



Re: Can IndexedDB depend on JavaScript? (WAS: [Bug 9793] New: Allow dates and floating point numbers in keys)

2010-05-24 Thread Tab Atkins Jr.
On Mon, May 24, 2010 at 1:21 PM, Jonas Sicking jo...@sicking.cc wrote:
 As for the keyPath issue. The way the spec stands now (where I think
 it intends not to allow full expressions), I don't think it really
 depends on Javascript. It does depend on the language having some way
 to represent structured data. I.e. that the language can hold
 something like:

 { foo: "bar",
   complex: { p1: "hello", p2: "world" } }

 I'm not really sure how you would return a value like that to
 Objective-C. How does WebKit intend to deal with that in APIs where
 this issue already exist, such as postMessage?

Surely any language that has some way of dealing with JSON already has
an answer for this, correct?

~TJ



Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Tab Atkins Jr.
On Wed, Jun 9, 2010 at 3:27 PM, Jonas Sicking jo...@sicking.cc wrote:
 I'm well aware of this. My argument is that I think we'll see people
 write code like this:

 results = [];
 db.objectStore("foo").openCursor(range).onsuccess = function(e) {
  var cursor = e.result;
  if (!cursor) {
    weAreDone(results);
    return;
  }
  results.push(cursor.value);
  cursor.continue();
 }

 While the indexedDB implementation doesn't hold much data in memory at
 a time, the webpage will hold just as much as if we had had a getAll
 function. Thus we havn't actually improved anything, only forced the
 author to write more code.


 Put it another way: The raised concern is that people won't think
 about the fact that getAll can load a lot of data into memory. And the
 proposed solution is to remove the getAll function and tell people to
 use openCursor. However if they weren't thinking about that a lot of
 data will be in memory at one time, then why wouldn't they write code
 like the above? Which results in just as much data being in memory?

At the very least, explicitly loading things into an honest-to-god
array can make it more obvious that you're eating memory in the form
of a big array, as opposed to just a magical "transform my blob of
data into something more convenient".

(That said, I dislike cursors and explicitly avoid them in my own
code.  In the PHP db abstraction layer I wrote for myself, every query
slurps the results into an array and just returns that - I don't give
myself any access to the cursor at all.  I probably like this better
simply because I can easily foreach through an array, while I can't do
the same with a cursor unless I write some moderately more complex
code.  I hate using while loops when foreach is beckoning to me.)
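
The slurp-it-into-an-array pattern described in that aside can be sketched generically; the stand-in cursor below (anything with a `next()` returning `null` when exhausted) is illustrative, not a real IndexedDB or PHP cursor:

```javascript
// Drain a cursor-like object into a plain array, so callers can use
// foreach-style iteration instead of a while loop over the cursor.
function slurp(cursor) {
  const results = [];
  let row;
  while ((row = cursor.next()) !== null) {
    results.push(row);
  }
  return results;
}

// Stand-in cursor over a fixed set of rows, for illustration only.
function makeFakeCursor(rows) {
  let i = 0;
  return { next: () => (i < rows.length ? rows[i++] : null) };
}

const all = slurp(makeFakeCursor(['a', 'b', 'c']));
for (const row of all) console.log(row); // 'a', then 'b', then 'c'
```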

~TJ



Re: Transferring File* to WebApps - redux

2010-06-15 Thread Tab Atkins Jr.
On Tue, Jun 15, 2010 at 2:24 PM, SULLIVAN, BRYAN L (ATTCINW)
bs3...@att.com wrote:
 Arun,

 The basic concern I have is with the notion of browsers as the only
 Web context and use-case that matters. The browser-based model for API
 integration view (as I understand your position) is that the user must
 be actively involved in every significant action, and choose explicitly
 the actions that enable integration with browser-external resources
 (including local and remote). Step back and you will see the
 inconsistency in that (what would Ajax be if the user had to approve
 every HTTP API request via an input element?).

The similarity between AJAX and the use-cases we're discussing is
thin.  XHR is the page communicating back with its origin server, and
is security-wise in roughly the same category as a script adding an
<img> to a page (the <img> sends a script-crafted request back to the
server and receives data back).

Interacting directly with the user's file system is a substantially
more security-conscious action.  Involving the user in the action, at
least minimally, appears to be a common-sense good idea to mitigate
the possibility of attacks.

The decisions in this arena have been highly informed by security
considerations specific to the particular cases being discussed.

~TJ



Re: [IndexedDB] Syntax for opening a cursor

2010-06-24 Thread Tab Atkins Jr.
On Thu, Jun 24, 2010 at 1:25 PM, Jeremy Orlow jor...@chromium.org wrote:
 If I'm reading the current spec right (besides the [NoInterfaceObject]
 attributes that I thought Nikunj was going to remove), if I want to open a
 cursor, this is what I need to do:

 myObjectStore.openCursor(new IDBKeyRange().leftBound(key), new
 IDBCursor().NEXT_NO_DUPLICATE);

 Note that I'm creating 2 objects which get thrown away after using the
 constructor and constant.  This seems pretty wasteful.
 Jonas' proposal (which I guess Nikunj is currently in the middle of
 implementing?) makes things a bit better:

 myObjectStore.openCursor(window.indexedDB.makeLeftBoundedKeyRange(key),
 new IDBCursor().NEXT_NO_DUPLICATE);

 or, when you have a single key that you're looking for, you can use the
 short hand

 myObjectStore.openCursor(key, new IDBCursor().PREV);

 But even in these examples, we're creating a needless object.  I believe we
 could also use the prototype to grab the constant, but the syntax is still
 pretty verbose and horrid.
 Can't we do better?

If we're specifying something that will get wrapped in a library to
make it less horrible *on day 1* (or earlier), we're doing it wrong.

All of the variants above are very, very wrong.

~TJ



Re: [cors] Subdomains

2010-07-25 Thread Tab Atkins Jr.
On Sun, Jul 25, 2010 at 5:25 AM, Christoph Päper
christoph.pae...@crissov.de wrote:
 Maybe I’m missing something, but shouldn’t it be easy to use certain groups 
 of origins in ‘Access-Control-Allow-Origin’, e.g. make either the scheme, the 
 host or the port part irrelevant or only match certain subparts of the host 
 part?

 Consider Wikipedia/Wikimedia as an example. If all 200-odd Wikipedias 
 (*.wikiPedia.org) but no other site should be able to access certain 
 resources from the common repository at commons.wikiMedia.org, wouldn’t 
 everybody expect

  Access-Control-Allow-Origin: http://*.wikipedia.org

 to just work? Is the Commons server instead expected to parse the Origin 
 header and dynamically set ACAO accordingly?

This one might work, but:

 Likewise transnational corporations might want something like

  Access-Control-Allow-Origin: http://example.*, http://example.co.*

 although they cannot guarantee that they possess the second or third level 
 domain name under all top level domains.

This one won't, because it'll match example.co.evilsite.com.
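
The pitfall is easy to demonstrate with a naive glob matcher, where `*` in an origin pattern means "any run of characters" (this wildcard syntax is hypothetical; CORS defines no such matching):

```javascript
// Naive glob matching: '*' becomes '.*', everything else is escaped
// and must match literally.
function originMatches(pattern, origin) {
  const re = new RegExp('^' + pattern.split('*')
    .map(s => s.replace(/[.+?^${}()|[\]\\]/g, '\\$&'))
    .join('.*') + '$');
  return re.test(origin);
}

// The subdomain pattern behaves as intended:
console.log(originMatches('http://*.wikipedia.org', 'http://en.wikipedia.org')); // true
// But a trailing wildcard over-matches: it accepts the intended ccTLDs...
console.log(originMatches('http://example.co.*', 'http://example.co.uk')); // true
// ...and also an attacker-controlled host:
console.log(originMatches('http://example.co.*', 'http://example.co.evilsite.com')); // true
```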

~TJ



Re: HTTP access control confusion

2010-07-30 Thread Tab Atkins Jr.
On Thu, Jul 29, 2010 at 8:10 AM, Douglas Beck db...@mail.ucf.edu wrote:
 I have recently read through:
 https://developer.mozilla.org/En/HTTP_access_control
 https://wiki.mozilla.org/Security/Origin

 I've discussed what I've read and learned with my coworkers and there's been
 some confusion.  I understand and appreciate the need for a security policy
 that allows for cross-site https requests.  I do not understand how
 Access-Control-Allow-Origin addresses usability and security concerns.

 The basis of our confusion:
 I create domain-a.com and I want to make an ajax request to domain-b.com.  A
 preflight request is made to domain-b, domain-b responds with if it is safe
 to send the request.

 Does it not make more sense for me (the author of domain-a) to define the
 security policy of my website?  I know each and every request that should be
 made on my site and can define a list of all acceptable content sources.  If
 the preflight request is made to domain-a (not domain-b) then the content
 author is the source of authority.

 A more functional example (and the source of my curiosity), I work for the
 University of Central Florida.  I am currently working on a subdomain that
 wants to pull from the main .edu domain.  The university has yet to define an
 Access-Control header policy, so my subdomain is unable to read what's
 available on the main .edu website.

 Additionally, if I am working with authorized content, it would be useful
 for me to define/limit where cross-site requests can be made.  It seems
 backwards that an external source can define a security policy that affects
 the usability of my content.

As the author of your site, you *already* have complete control over
where cross-site requests can be made.  If you don't want to make a
particular cross-site request, *just don't make that request*.

On the other hand, the content source doesn't have that kind of
control.  They can't prevent you from making requests to them that
they don't want, or allow requests that they like.  That's where the
same-origin policy (default deny all such requests) and CORS
(selectively allow certain requests) comes in.

I suppose you might be thinking of a situation where you are allowing
untrusted users to add content to your site, and you only want them to
be able to link to specific other sites.  Same-origin restrictions do
part of this for you automatically.  Most of the rest should be
handled by you in the first place - if untrusted users are doing XHRs,
you've got bigger problems.

~TJ



Re: [IndexedDB] question about description argument of IDBFactory::open()

2010-08-12 Thread Tab Atkins Jr.
On Thu, Aug 12, 2010 at 11:44 AM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Aug 12, 2010 at 10:55 AM, Andrei Popescu andr...@google.com wrote:
 Given that open() is one of those functions that are likely to grow in
 parameters over time, I wonder if we should consider taking an object as the
 second argument with names/values (e.g. open("mydatabase", { description:
 "foo" }); ). That would allow us to keep the minimum specification small and
 easily add more parameters later without resulting in hard-to-read code that
 has a bunch of undefineds in the arguments.

 The only thing I'm not sure is if there is precedent of doing this in
 one of the standard APIs.

 That sounds great to me.

Thank god, maybe we can *finally* make this a pattern in the web
platform.  JavaScript's lack of keyword parameters is already a pain;
the inexplicable resistance to adding this common hack around that
into the web platform has pained me every time.
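
The options-object pattern being endorsed looks roughly like this; all names are illustrative (not the actual IndexedDB signature), and modern `Object.assign` is used for brevity:

```javascript
// A function expected to grow parameters over time takes a single
// options object. New options can be added later without callers
// passing a parade of undefineds for the ones they don't care about.
function openDatabase(name, options) {
  const opts = Object.assign({ description: '', version: 1 }, options);
  return { name: name, description: opts.description, version: opts.version };
}

const db = openDatabase('mydatabase', { description: 'foo' });
console.log(db.description); // 'foo'
console.log(db.version);     // 1 (defaulted, never passed explicitly)
```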

~TJ



Re: FileWriter behavior between writes

2010-08-18 Thread Tab Atkins Jr.
On Wed, Aug 18, 2010 at 7:30 PM, Jonas Sicking jo...@sicking.cc wrote:
 How is this noticeable from a webpage? I.e. why does the spec need to
 say anything one way or another?

 On Wednesday, August 18, 2010, Eric Uhrhane er...@google.com wrote:
 For
 example, what if script A has a FileWriter for /foo.txt and script B
 [using the FileSystem api] moves it elsewhere?  If the file is closed,
 the next write from A may act as if the file was never there.  If the
 file stayed open, on some systems the write would succeed, but the
 data would land at the file's new location.

 Similar issues come up when files are opened for reading, then written
 from another script, written from multiple scripts, etc.

~TJ



Re: [DOMCore] Attr

2010-09-10 Thread Tab Atkins Jr.
On Fri, Sep 10, 2010 at 9:28 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Sep 10, 2010 at 7:48 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Fri, Sep 10, 2010 at 5:35 AM, Anne van Kesteren ann...@opera.com wrote:
 3) We can drop the concept of Attr being an object altogether. I do not
 think this is doable compatibility-wise, but not having Node.attributes,
 Attr, and just using getAttribute/getAttributeNS/setAttribute/setAttributeNS
 would be very nice complexity-wise.

 I know that I've written code that depends on Node.attributes, when I
 was just looping through all of the attributes for some reason
 (usually because I was transforming the element into something else,
 and just wanted to transfer all the irrelevant attributes over).

 This code would happen to break with approach #2 as well, but the
 relevant code I've written was minor and shouldn't be used by anyone
 else, I think.

 Indeed, Node.attributes is currently the only way to enumerate all the
 attributes of an Element. This makes me think there are probably
 people out there doing this, and so I suspect Node.attributes is
 needed for web compat. Additionally, it seems bad to remove the
 ability to enumerate attributes completely. Lastly, keeping Attrs as a
 type of object gives us something to return from DOM-XPath.

 What I suggest is #2 in Anne's list. Make Attrs into objects with the
 following properties:

 * name
 * value
 * namespaceURI
 * localName

 'name' would be my guess for most commonly used when iterating all the
 attributes. The others are the minimum set of attributes needed for
 proper enumeration.

 We might also consider these attributes, though if they're not needed
 for web compat, then it would be nice to not add them IMHO.

 * ownerElement
 * prefix
 * nodeName
 * nodeValue

Oh right, duh.  Sorry, wasn't reading #2 properly.  Yeah, with an
appropriate subset of values, #2 would be great for all the code I've
ever written.


 Also, I wouldn't mind making value/nodeValue readonly, but I don't
 feel strongly about that.

I've never written to a value obtained from Node.attributes.

~TJ



Re: [IndexedDB] IDBCursor.update for cursors returned from IDBIndex.openCursor

2010-09-16 Thread Tab Atkins Jr.
On Thu, Sep 16, 2010 at 2:23 PM, Jeremy Orlow jor...@chromium.org wrote:
 I think we should leave in openObjectCursor/getObject but remove
 openCursor/get for now.  We can then revisit any of these features as soon
 as there are implementations (both in the UAs and in web sites) mature
 enough for us to get real feedback on the features.

If you do so, could you migrate the names over?  No sense having a
useless "Object" hanging around in the name.  Terse is better.

~TJ



Re: XBL2

2010-09-17 Thread Tab Atkins Jr.
On Fri, Sep 17, 2010 at 6:06 AM, Arthur Barstow art.bars...@nokia.com wrote:
  Do we have a sense yet regarding who supports XBL2 as in the 2007 Candidate
 version [CR] versus who supports the version Hixie recently published in
 [Draft]?

 Feedback from all (potential) implementers would be especially useful.

 Thinking aloud here, perhaps [Draft] could be positioned more like XBL1++
 e.g. the XBL Note [Note] + bug fixes? (BTW, wow, didn't realize it's been
 almost 10 years since that Note was published.)

I can't answer the question you asked directly, but I can shed some
light on the reasoning behind this.

A group of us engineers at Chrome have been brainstorming on ways to
make the web platform easier to develop apps in.  One of the ideas we
came up with was conceptually very similar to what XBL2 does.  We
tried to avoid actually using XBL2, though, because we weren't happy
with several of the design decisions in the spec.  We then had some
quick sanity/strategy meetings with other browser devs, particularly
those who were involved or interested in XBL2.  From that, we
eventually decided that we probably shouldn't throw away the useful
work that's already been done in XBL2, and instead see what we can do
to work with it.

The rest then unfolded as Ian described - with several people actually
dusting the spec off and looking at it in the light of modern practice,
we realized that, while the core is basically sound, there's a lot of
edge detail that doesn't make as much sense today as it did back when
the spec was originally written.  Thus, Ian cleaned it up and pushed
the current revision out for comment.  I still don't know if we
(Chrome) are completely happy with the design, but it's much closer to
our ideal, and we're experimenting with it so we can provide good
feedback.

~TJ



Re: A URL API

2010-09-17 Thread Tab Atkins Jr.
On Fri, Sep 17, 2010 at 5:43 PM, Adam Barth w...@adambarth.com wrote:
 I've removed the searchParameters attribute from the URL interface for
 the time being.  We can consider adding it back at a later time.

;_;

Just today my cubemate asked me if there was any way to get at the
search parameters of a URL without parsing it himself.  I replied "No,
but abarth started working on an API for it today."

That said, Garrett's right.  The values of the dict should be arrays.
Most of the time they'll be single-element arrays, but the benefit of
having a consistent type of value at all times is better than the
benefit of being able to omit [0] from parts of your code.
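The "values are always arrays" design can be sketched with a small parser (a hypothetical helper for illustration, not the URL API under discussion):

```javascript
// Hypothetical query-string parser illustrating the consistent-type
// design: every value in the returned dict is an array, even for
// parameters that appear only once.
function parseSearchParams(search) {
  const params = {};
  for (const pair of search.replace(/^\?/, "").split("&")) {
    if (!pair) continue;
    const [name, value = ""] = pair.split("=").map(decodeURIComponent);
    (params[name] = params[name] || []).push(value);
  }
  return params;
}

const p = parseSearchParams("?q=a&q=b&page=2");
console.log(p.q);    // ["a", "b"]
console.log(p.page); // ["2"] — still an array, so p.page[0] works uniformly
```

The cost is typing `[0]` for single-valued parameters; the benefit is that calling code never has to branch on whether it got a string or an array.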

~TJ



Re: A URL API

2010-09-19 Thread Tab Atkins Jr.
On Sun, Sep 19, 2010 at 5:03 PM, Devdatta Akhawe dev.akh...@gmail.com wrote:
 hi

 Is the word 'hash' for fragment identifiers common? I personally
 prefer the attribute being called 'fragment' or 'fragmentID' over
 'hash' - it's the standard afaik in all the RFCs.

'hash' is the name given to the fragment identifier in the Location
object.  It's pretty common.

~TJ



Re: A URL API

2010-09-21 Thread Tab Atkins Jr.
On Mon, Sep 20, 2010 at 11:56 PM, Adam Barth w...@adambarth.com wrote:
 Ok.  I'm sold on having an API for constructing query parameters.
 Thoughts on what it should look like?  Here's what jQuery does:

 http://api.jquery.com/jQuery.get/

 Essentially, you supply a JSON object containing the parameters.  They
 also have some magical syntax for specifying multiple instances of the
 same parameter name.  I like the ease of supplying a JSON object, but
 I'm not in love with the magical syntax.  An alternative is to use two
 APIs, like we currently have for reading the parameter values.

jQuery's syntax isn't magical - the example they give using the query
param name of 'choices[]' is doing that because PHP requires a [] at
the end of the query param name to signal it that you want multiple
values.  It's opaque, though - you could just as easily have left off
the '[]' and it would have worked the same.

The switch is just whether you pass an array or a string (maybe they
support numbers too?).

I recommend the method be called append*, so you can use it both for
first sets and later additions (this is particularly useful if you're
just looping through some data).  This obviously would then need a
clear functionality as well.
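An append/clear pair along these lines might look like the following sketch (method and class names are illustrative assumptions, not the proposal's final ones):

```javascript
// Sketch of an append*/clear parameter API. appendParameter works both
// for the first value and for later additions, so it can be called
// naively inside a loop over data.
class QueryParams {
  constructor() { this.params = new Map(); }
  appendParameter(name, value) {
    // Accept a single value or an array of values.
    const values = Array.isArray(value) ? value : [value];
    const list = this.params.get(name) || [];
    this.params.set(name, list.concat(values.map(String)));
  }
  clearParameter(name) { this.params.delete(name); }
  toString() {
    const parts = [];
    for (const [name, values] of this.params)
      for (const v of values)
        parts.push(encodeURIComponent(name) + "=" + encodeURIComponent(v));
    return parts.join("&");
  }
}

const q = new QueryParams();
for (const tag of ["a", "b"]) q.appendParameter("tag", tag); // loop-friendly
q.appendParameter("page", 2);
console.log(q.toString()); // "tag=a&tag=b&page=2"
```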

~TJ



Re: A URL API

2010-09-24 Thread Tab Atkins Jr.
On Wed, Sep 22, 2010 at 12:54 AM, Devdatta Akhawe dev.akh...@gmail.com wrote:
 2) I've added two flavors of appendParameter.  The first flavor takes
 a DOMString for a value and appends a single parameter.  The second
 flavor takes an array of DOMStrings and appends one parameter for each
 element of the array.  This seemed better than using a variable number of arguments.

 -1

 I really want the setParameter method - appendParameter now requires
 the developer to know what someone might have done in the past with
 the URL object. this can be a cause of trouble as the web application
 might do something that the developer doesn't expect, so I
 specifically want the developer to opt-in to using appendParameters.

If you really don't want to care what happened before, either do a
clearParameter every time first, or define your own setParameter that
just clears then appends.  Append/clear is a cleaner API design in
general imo, precisely because you don't have to worry about colliding
with previous activity by default.  A set/clear pair means that you
have to explicitly check for existing data and handle it in a way that
isn't completely trivial.
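The "define your own setParameter" suggestion is a two-liner. A sketch, where `url` is any object exposing the hypothetical appendParameter/clearParameter pair:

```javascript
// Author-defined set semantics built from the clear + append primitives:
// discard whatever was there before, then write the new value(s).
function setParameter(url, name, value) {
  url.clearParameter(name);
  url.appendParameter(name, value);
}
```

Because set is trivially expressible in terms of append/clear (but not vice versa), append/clear is the more primitive pair to standardize.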

 I know clearParameter is a method - but this is not the clear
 separation between the '2 APIs' that we talked about earlier in the
 thread.

 I remember reading about how some web application frameworks combine
 ?q=a&q=b to q=ab at the server side, whereas some will only consider
 q=a and some will only consider q=b. This is such a mess - the
 developer should have to specifically opt-in to this.

It's a mess for server-side languages/frameworks, yes.  Some of them
handle this incorrectly.  Most of the current crop of popular ones,
though, do things properly with one method that retrieves the last
value and one that retrieves all values (PHP is marginal in this
respect with its magic naming convention).
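The "one method for the last value, one for all values" pattern those frameworks use can be sketched as follows (illustrative helpers over a parsed name/value pair list, not any specific framework's API):

```javascript
// Two accessors over repeated query parameters: all values in order,
// and the last value (a common server-side convention).
function allValues(pairs, name) {
  return pairs.filter(([n]) => n === name).map(([, v]) => v);
}
function lastValue(pairs, name) {
  const all = allValues(pairs, name);
  return all.length ? all[all.length - 1] : undefined;
}

const pairs = [["q", "a"], ["q", "b"], ["page", "2"]];
console.log(allValues(pairs, "q")); // ["a", "b"]
console.log(lastValue(pairs, "q")); // "b"
```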

Attempting to relegate same-name params to second-tier status isn't a
good idea.  It's very useful for far more than the old services that
are also accessed via basic HTML forms that you stated earlier.

~TJ



Re: [IndexedDB] Should removeIndex/ObjectStore be renamed to match s/remove/delete/ elsewhere?

2010-10-19 Thread Tab Atkins Jr.
On Tue, Oct 19, 2010 at 3:39 PM, Jeremy Orlow jor...@chromium.org wrote:
 Jonas just checked in a change to replace .remove() with .delete() (amongst
 other changes we agreed upon a while ago).  In light of that, does it make
 sense for removeIndex and removeObjectStore to be renamed to deleteIndex and
 deleteObjectStore to match the new naming?  I don't care a whole lot, but it
 seems like that'd make things more consistent.

Yes.  All of the action verbs in the API should be consistent.
Anything else is gratuitous inconsistency that makes it hard to use
the API from memory.

~TJ



Re: createBlobURL

2010-10-25 Thread Tab Atkins Jr.
On Mon, Oct 25, 2010 at 4:48 PM, Jonas Sicking jo...@sicking.cc wrote:
 Like I said, I think creating an OM that covers all the cases here
 would create something very complex. I'd love to see a useful proposal
 for http://dev.w3.org/csswg/css3-images/.

It doesn't seem overly difficult.  Using the proposed Values API,
you'd do something like elem.style.values.backgroundImage.url =
[DOMURL goes here].

Then you'd just have to define the serialization of this to cssText,
which would probably just involve an opaque URL like about:url or
something similar.  It wouldn't roundtrip through a string, but that's
probably an acceptable penalty.

~TJ



Re: createBlobURL

2010-10-25 Thread Tab Atkins Jr.
On Mon, Oct 25, 2010 at 5:51 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Oct 25, 2010 at 5:04 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Oct 25, 2010 at 4:48 PM, Jonas Sicking jo...@sicking.cc wrote:
 Like I said, I think creating an OM that covers all the cases here
 would create something very complex. I'd love to see a useful proposal
 for http://dev.w3.org/csswg/css3-images/.

 It doesn't seem overly difficult.  Using the proposed Values API,
 you'd do something like elem.style.values.backgroundImage.url =
 [DOMURL goes here].

 That doesn't cover nearly all the ways you can use URLs as defined in
 http://dev.w3.org/csswg/css3-images/ which support multiple levels of
 fallback images, with snapping and resolution as well as gradients and
 fallback colors. And when used in a property like backgroundImage, you
 can have several combined instances of those. Consider:

 style="background-image: image(sun.svg, 'sun.png' snap 150dpi),
 image(wavy.svg, 'wavy.png' 150dpi, 'wavy.gif', radial-gradient(...))"

This would be part of the url interface, and would be accepted
anywhere a url is currently accepted.  Exposing the correct
interface of function arguments would be a job for the function
interface in the Values API, and is designed to be orthogonal.

~TJ



Re: createBlobURL

2010-10-25 Thread Tab Atkins Jr.
On Mon, Oct 25, 2010 at 5:59 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Oct 25, 2010 at 5:56 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Oct 25, 2010 at 5:51 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Oct 25, 2010 at 5:04 PM, Tab Atkins Jr. jackalm...@gmail.com 
 wrote:
 On Mon, Oct 25, 2010 at 4:48 PM, Jonas Sicking jo...@sicking.cc wrote:
 Like I said, I think creating an OM that covers all the cases here
 would create something very complex. I'd love to see a useful proposal
 for http://dev.w3.org/csswg/css3-images/.

 It doesn't seem overly difficult.  Using the proposed Values API,
 you'd do something like elem.style.values.backgroundImage.url =
 [DOMURL goes here].

 That doesn't cover nearly all the ways you can use URLs as defined in
 http://dev.w3.org/csswg/css3-images/ which support multiple levels of
 fallback images, with snapping and resolution as well as gradients and
 fallback colors. And when used in a property like backgroundImage, you
 can have several combined instances of those. Consider:

 style="background-image: image(sun.svg, 'sun.png' snap 150dpi),
 image(wavy.svg, 'wavy.png' 150dpi, 'wavy.gif', radial-gradient(...))"

 This would be part of the url interface, and would be accepted
 anywhere a url is currently accepted.  Exposing the correct
 interface of function arguments would be a job for the function
 interface in the Values API, and is designed to be orthogonal.

 Note that the syntax for images is significantly different from the
 syntax for urls. So I suspect you mean image rather than url
 above.

No, I meant url, which happens to be a type of image as well.  I
don't know what it would mean for me to mean image, as that covers
much more than what you can produce with a blob.


 However it still leaves my original statement unanswered:

 Like I said, I think creating an OM that covers all the cases here
 would create something very complex. I'd love to see a useful proposal
 for http://dev.w3.org/csswg/css3-images/

I outlined how it would work above, I thought.  Any property that can
take a url should, in the Values API, be able to take a url object
like we're describing here.  Any function that can take a url as an
argument should do the same.  The exact interface for exposing
function arguments hasn't been nailed down yet, but once you can do
so, using it should be the same as using an ordinary property which
just takes a url.

~TJ



Re: createBlobURL

2010-10-25 Thread Tab Atkins Jr.
On Mon, Oct 25, 2010 at 7:48 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Oct 25, 2010 at 6:10 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Oct 25, 2010 at 5:59 PM, Jonas Sicking jo...@sicking.cc wrote:
 However it still leaves my original statement unanswered:

 Like I said, I think creating an OM that covers all the cases here
 would create something very complex. I'd love to see a useful proposal
 for http://dev.w3.org/csswg/css3-images/

 I outlined how it would work above, I thought.  Any property that can
 take a url should, in the Values API, be able to take a url object
 like we're describing here.  Any function that can take a url as an
 argument should do the same.  The exact interface for exposing
 function arguments hasn't been nailed down yet, but once you can do
 so, using it should be the same as using an ordinary property which
 just takes a url.

 The question is, how do you use a DOMURL together with all the other
 features of image. Using the createObjectURL proposal I can do:

 elem.style.backgroundImage =
 "image('" + createObjectURL(file1) + "' snap 150dpi), " +
 "image('" + createObjectURL(file2) + "' 150dpi, radial-gradient(...))";

 How would I do the equivalent if createObjectURL isn't available?

*Definitely* not with the current CSSOM.  String concatenation is
disgusting API design in the first place.

Anne hasn't proposed a way to handle functions and their arguments yet
in the Values API, so I can only speculate.  Particularly, initially
setting a new function is a bit of a mystery.  I could think of a few
possible ways to do it, though.

1)
elem.style.values.backgroundImage.function = new
CSSFunctionImage([file1, 'snap', {dpi:150}], [file2, {dpi:150}], new
CSSFunctionRadialGradient(...));

2)
elem.style.backgroundImage = "image(url(404) snap 150dpi, url(404)
150dpi, radial-gradient(...))";
elem.style.values.backgroundImage.function.a[0].url = file1;
elem.style.values.backgroundImage.function.a[1].url = file2;

Both of these are off the top of my head, so don't read too much into
them.  The point is just to illustrate that the issue is solvable.

~TJ



Re: Replacing WebSQL with a Relational Data Model.

2010-10-26 Thread Tab Atkins Jr.
On Tue, Oct 26, 2010 at 12:04 PM, Keean Schupke ke...@fry-it.com wrote:
 Take Firefox for example, it implements IndexedDB using SQLite apparently.
 So implementing a relational API if we have to talk to IndexedDB that means
 we have to convert from the relational data model to an object model and
 then back to a relational model for SQLite. So what I would like to do is
 punch through that excess layer in the middle and have the relational API
 talk directly to SQLite in the browser implementation. How could you argue
 that having an unnecessary middle layer is a good thing?

The SQLite back-end used by Firefox's implementation of IndexedDB (and
Chrome's, for the moment) is unnecessary; at least in Chrome's case,
we used a SQLite backend only because it was expedient and the code
was there.  We'll be changing it to a better backend in the future,
and I suspect that Firefox will do the same in time.

The middle layer isn't unnecessary, *it's the whole point*.  The
back-end shouldn't ever be exposed directly - you don't want your code
to break if we drop the SQLite backend and switch to a direct
b-tree-based backend.

~TJ



Re: [XHR2] why have an asBlob attribute at all?

2010-10-29 Thread Tab Atkins Jr.
On Fri, Oct 29, 2010 at 4:08 AM, Anne van Kesteren ann...@opera.com wrote:
 On Fri, 29 Oct 2010 07:55:58 +0200, David Flanagan da...@davidflanagan.com
 wrote:

 I doubt I understand all the implementation issues.  But if there really
 is some reason to have this blob/non-blob decision point before calling
 send(), can I suggest that instead of confusing the XHR API with it, it be
 moved into a separate BlobHttpRequest interface that has only reponseBlob
 and does not even define responseText, etc.

 Brainstorming here. We could choose to always expose responseArrayBuffer and
 keep it together with responseText and responseXML. And for applications
 that are worried about memory usage or care about very large files we could
 have BlobXMLHttpRequest similar to AnonXMLHttpRequest. We'd abstract some
 things out from XMLHttpRequest so BlobXMLHttpRequest does not have the other
 response* members and so that AnonXMLHttpRequest does not need
 withCredentials and the fourth and fifth parameter to open().

Could we, um, not include the word XML in any new things?
BlobHttpRequest seems much less silly.

~TJ



Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-11-03 Thread Tab Atkins Jr.
On Tue, Nov 2, 2010 at 9:16 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 11/2/10 11:35 PM, Jonas Sicking wrote:

 So your concern is that jQuery will update to use the new API before
 browsers implement it. And then once browsers do implement it and
 start honoring the .responseType by making various existing properties
 throw, things will fail?

 No, my concern is that browsers will implement this, and then sites that
 haven't updated their jquery, and probably never plan to do it, will start
 using the new stuff browsers have implemented.

In this particular case, if someone is using jQuery to do their XHR,
they will basically never touch the native XHR object.  Native XHR
sucks pretty badly, which is why $.get, $.post, and generally $.ajax
exist.

So, there's little chance that authors will be trying to use the new
features with old jQuery, because it's impossible without hacking down
into the native object.

~TJ



Re: [IndexedDB] Behavior of IDBObjectStore.get() and IDBObjectStore.delete() when record doesn't exist

2010-11-08 Thread Tab Atkins Jr.
On Mon, Nov 8, 2010 at 8:24 AM, Jonas Sicking jo...@sicking.cc wrote:
 Hi All,

 One of the things we discussed at TPAC was the fact that
 IDBObjectStore.get() and IDBObjectStore.delete() currently fire an
 error event if no record with the supplied key exists.

 Especially for .delete() this seems suboptimal as the author wanted
 the entry with the given key removed anyway. A better alternative here
 seems to be to return (through a success event) true or false to
 indicate if a record was actually removed.

 For IDBObjectStore.get() it also seems like it will create an error
 event in situations which aren't unexpected at all. For example
 checking for the existence of certain information, or getting
 information if it's there, but using some type of default if it's not.
 An obvious choice here is to simply return (through a success event)
 undefined if no entry is found. The downside with this is that you
 can't tell the lack of an entry apart from an entry stored with the
 value undefined.

 However it seemed more rare to want to tell those apart (you can
 generally store something other than undefined), than to end up in
 situations where you'd want to get() something which possibly didn't
 exist. Additionally, you can still use openCursor() to tell the two
 apart if really desired.

 I've for now checked in this change [1], but please speak up if you
 think this is a bad idea for whatever reason.

In general I'd disagree with you on get(), and point to basically all
hash-table implementations which all give a way of telling whether you
got a result or not, but the fact that javascript has false, null,
*and* undefined makes me okay with this.  I believe it's sufficient to
use 'undefined' as the flag for "there was nothing for this key in the
objectstore", and just tell authors "don't put undefined in an
objectstore; use false or null instead."
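The resulting author-side pattern is checking for `undefined` and substituting a default. A sketch against the async API as discussed (exact event/member names are assumptions, not the final spec):

```javascript
// get() with a fallback default: an `undefined` result means
// "no record stored under this key", so substitute the default.
function getWithDefault(objectStore, key, defaultValue, callback) {
  const request = objectStore.get(key);
  request.onsuccess = function (event) {
    const result = event.target.result;
    // undefined is the "nothing stored" flag; authors should store
    // false or null (never undefined) so this test stays unambiguous.
    callback(result === undefined ? defaultValue : result);
  };
}
```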

~TJ



Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-11-09 Thread Tab Atkins Jr.
On Tue, Nov 9, 2010 at 11:54 AM, Chris Rogers crog...@google.com wrote:
 Hi David,
 Sorry for the delayed response.  I think the idea of BinaryHttpRequest is a
 reasonable one.  As you point out, it simply side-steps any potential
 performance and compatibility issues.  Are you imagining that the API is
 effectively the same as XMLHttpRequest, except without the text and XML
 part?
 How do other people feel about David's proposal?

I'm in favor of a new constructor.  It seems silly to first hack
ourselves into a corner by extending an API designed for text or XML,
then try to hack our way out of the problems that causes.  A new
object that does what's needed seems like the cleanest and most
correct solution to the problem.

~TJ



Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-11-10 Thread Tab Atkins Jr.
On Tue, Nov 9, 2010 at 12:03 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Nov 9, 2010 at 11:54 AM, Chris Rogers crog...@google.com wrote:
 Hi David,
 Sorry for the delayed response.  I think the idea of BinaryHttpRequest is a
 reasonable one.  As you point out, it simply side-steps any potential
 performance and compatibility issues.  Are you imagining that the API is
 effectively the same as XMLHttpRequest, except without the text and XML
 part?
 How do other people feel about David's proposal?

 I'm in favor of a new constructor.  It seems silly to first hack
 ourselves into a corner by extending an API designed for text or XML,
 then try to hack our way out of the problems that causes.  A new
 object that does what's needed seems like the cleanest and most
 correct solution to the problem.

After discussion with Anne and James, I retract my support for a new
constructor.  I'm in favor of .responseType.

Specifically, .responseType would take values like "" (for legacy
treatment) / "text" / "document" / "arraybuffer" / "blob" / etc.  If
the value is "", then .responseText and .responseXML are filled
appropriately, while .response is empty.  Otherwise, .responseText and
.responseXML are empty (or throw or something), while .response
contains the value in the chosen format.  .responseType must be set at
some appropriately early time; after the response is received, changes
to .responseType are ignored or throw.
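The proposed accessor rules can be illustrated with a tiny mock (purely an illustration of the behavior described above, not a real XHR implementation):

```javascript
// Mock of the proposed rules: with responseType "" the legacy field is
// populated; otherwise only .response is, in the chosen format.
function makeResponse(responseType, rawText) {
  if (responseType === "") {
    return { responseText: rawText, response: "" }; // legacy behavior
  }
  let response;
  switch (responseType) {
    case "text":
      response = rawText;
      break;
    case "arraybuffer":
      response = new TextEncoder().encode(rawText).buffer;
      break;
    default:
      throw new Error("type not covered by this sketch");
  }
  return { responseText: "", response }; // legacy field left empty
}

console.log(makeResponse("", "hi").responseText); // "hi"
console.log(makeResponse("text", "hi").response); // "hi"
console.log(makeResponse("arraybuffer", "hi").response instanceof ArrayBuffer); // true
```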

~TJ



Re: [Bug 11270] New: Interaction between in-line keys and key generators

2010-11-10 Thread Tab Atkins Jr.
On Wed, Nov 10, 2010 at 1:43 PM, Pablo Castro
pablo.cas...@microsoft.com wrote:

 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
 Behalf Of bugzi...@jessica.w3.org
 Sent: Monday, November 08, 2010 5:07 PM

 So what happens if trying save in an object store which has the following
 keypath, the following value. (The generated key is 4):

 foo.bar
 { foo: {} }

 Here the resulting object is clearly { foo: { bar: 4 } }

 But what about

 foo.bar
 { foo: { bar: 10 } }

 Does this use the value 10 rather than generate a new key, does it throw an
 exception or does it store the value { foo: { bar: 4 } }?

 I suspect that all options are somewhat arbitrary here. I'll just propose 
 that we error out to ensure that nobody has the wrong expectations about the 
 implementation preserving the initial value. I would be open to other options 
 except silently overwriting the initial value with a generated one, as that's 
 likely to confuse folks.

It's relatively common for me to need to supply a manual value for an
id field that's automatically generated when working with databases,
and I don't see any particular reason that my situation would change
if using IndexedDB.  So I think that a manually-supplied key should be
kept.


 What happens if the property is missing several parents, such as

 foo.bar.baz
 { zip: {} }

 Does this throw or does it store { zip: {}, foo: { bar: { baz: 4 } } }

 We should just complete the object with all the missing parents.

Agreed.
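"Complete the object with all the missing parents" can be sketched as a key-path setter that creates intermediate objects on the way down (an illustration of the described behavior, not spec text):

```javascript
// Set a generated key at a dotted key path, creating any missing
// parent objects along the way.
function setAtKeyPath(obj, keyPath, value) {
  const parts = keyPath.split(".");
  let node = obj;
  for (const part of parts.slice(0, -1)) {
    if (typeof node[part] !== "object" || node[part] === null) {
      node[part] = {}; // create the missing parent
    }
    node = node[part];
  }
  node[parts[parts.length - 1]] = value;
  return obj;
}

console.log(JSON.stringify(setAtKeyPath({ zip: {} }, "foo.bar.baz", 4)));
// {"zip":{},"foo":{"bar":{"baz":4}}}
```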


 If we end up allowing array indexes in key paths (like foo[1].bar) what 
 does
 the following keypath/object result in?

 I think we can live without array indexing in keys for this round, it's 
 probably best to just leave them out and only allow paths.

Agreed.

~TJ



Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-11-10 Thread Tab Atkins Jr.
On Wed, Nov 10, 2010 at 2:05 PM, Chris Rogers crog...@google.com wrote:

 After discussion with Anne and James, I retract my support for a new
 constructor.  I'm in favor of .responseType.

 Specifically, .responseType would take values like  (for legacy
 treatment) / text / document / arraybuffer / blob / etc.  If
 the value is , then .responseText and .responseXML are filled
 appropriately, while .response is empty.  Otherwise, .responseText and
 .responseXML are empty (or throw or something), while .response
 contains the value in the chosen format.  .responseType must be set at
 some appropriately early time; after the response is received, changes
 to .responseType are ignored or throw.

 ~TJ

 So you prefer that .responseType take a string value as opposed to an
 integer enum value?  Darin Fisher had the idea that introspection of the
 supported values would be easier as an enum.

Yes, I think using an enum would be *extremely* verbose, particularly
given this particular API's name.  I don't want to see or type code
like:

myXHR.responseType = XMLHttpResponse.RESPONSETYPE_ARRAYBUFFER;

This is much better:

myXHR.responseType = "arraybuffer";

~TJ



Re: [Bug 11270] New: Interaction between in-line keys and key generators

2010-11-10 Thread Tab Atkins Jr.
On Wed, Nov 10, 2010 at 2:07 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Nov 10, 2010 at 1:50 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Wed, Nov 10, 2010 at 1:43 PM, Pablo Castro
 pablo.cas...@microsoft.com wrote:

 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] 
 On Behalf Of bugzi...@jessica.w3.org
 Sent: Monday, November 08, 2010 5:07 PM

 So what happens if trying save in an object store which has the following
 keypath, the following value. (The generated key is 4):

 foo.bar
 { foo: {} }

 Here the resulting object is clearly { foo: { bar: 4 } }

 But what about

 foo.bar
 { foo: { bar: 10 } }

 Does this use the value 10 rather than generate a new key, does it throw 
 an
 exception or does it store the value { foo: { bar: 4 } }?

 I suspect that all options are somewhat arbitrary here. I'll just propose 
 that we error out to ensure that nobody has the wrong expectations about 
 the implementation preserving the initial value. I would be open to other 
 options except silently overwriting the initial value with a generated one, 
 as that's likely to confuse folks.

 It's relatively common for me to need to supply a manual value for an
 id field that's automatically generated when working with databases,
 and I don't see any particular reason that my situation would change
 if using IndexedDB.  So I think that a manually-supplied key should be
 kept.

 I'm fine with either solution here. My database experience is too weak
 to have strong opinions on this matter.

 What do databases usually do with columns that use autoincrement but a
 value is still supplied? My recollection is that that is generally
 allowed?

I can only speak from my experience with mySQL, which is generally
very permissive, but which has very sensible behavior here imo.

You are allowed to insert values manually into an AUTO_INCREMENT
column.  The supplied value is stored as normal.  If the value was
larger than the current autoincrement value, the value is increased so
that the next auto-numbered row will have an id one higher than the
row you just inserted.

That is, given the following inserts:

INSERT INTO row (val) VALUES (1);
INSERT INTO row (id, val) VALUES (5, 2);
INSERT INTO row (val) VALUES (3);

The table will contain [{id:1, val:1}, {id:5, val:2}, {id:6, val:3}].

If you have uniqueness constraints on the field, of course, those are
also used.  Basically, AUTO_INCREMENT just alters your INSERT before
it hits the db if there's a missing value; otherwise the query is
treated exactly as normal.
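That behavior can be simulated in a few lines (a simplification for illustration; real engines differ in edge cases like uniqueness violations):

```javascript
// Simulation of MySQL-style AUTO_INCREMENT: a missing id is filled in
// from the counter; a manual id is stored as-is and bumps the counter
// past itself.
function makeTable() {
  let nextId = 1;
  const rows = [];
  return {
    insert({ id, val }) {
      if (id === undefined) id = nextId;   // fill in the missing key
      rows.push({ id, val });
      if (id >= nextId) nextId = id + 1;   // bump past any manual value
    },
    rows,
  };
}

const t = makeTable();
t.insert({ val: 1 });
t.insert({ id: 5, val: 2 });
t.insert({ val: 3 });
console.log(t.rows); // [{id:1,val:1},{id:5,val:2},{id:6,val:3}]
```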

~TJ



Re: Updates to FileAPI

2010-11-11 Thread Tab Atkins Jr.
On Thu, Nov 11, 2010 at 1:28 AM, Anne van Kesteren ann...@opera.com wrote:
 On Thu, 11 Nov 2010 08:43:21 +0100, Arun Ranganathan
 aranganat...@mozilla.com wrote:

 Jian Li is right.  I'm fixing this in the editor's draft.

 Why does lastModified even return a DOMString? Can it not just return a
 Date? That seems much nicer.

Probably because WebIDL doesn't (didn't?) have a date type.  That's a
silly reason in the first place, and heycam is fixing (has fixed?) it
in the second place.

~TJ



Re: Updates to FileAPI

2010-11-12 Thread Tab Atkins Jr.
On Fri, Nov 12, 2010 at 3:05 AM, Anne van Kesteren ann...@opera.com wrote:
 On Thu, 11 Nov 2010 17:33:04 +0100, Arun Ranganathan
 aranganat...@mozilla.com wrote:

 I agree that a readonly Date object returned for lastModified is one way
 to go, but considered it overkill for the feature.  If you think a Date
 object provides greater utility to simply get at the lastModified data, I'm
 entirely amenable to putting that in the editor's draft.

 It depends on what the use cases are I suppose. But if the last modified
 date is going to be displayed somehow having a Date object seems more
 flexible.

Plus if you're going to do any actual work with it - there's no sense
parsing a date string just so you check if the file was modified more
than a week ago, when you could do it directly with a Date.
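The week-ago check is a one-liner once lastModified is a Date (function and field names here are illustrative, not the File API's):

```javascript
// "Was this file modified within the last week?" — direct arithmetic
// on Date objects, no string parsing needed.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
function modifiedWithinLastWeek(lastModified, now = new Date()) {
  return now.getTime() - lastModified.getTime() <= WEEK_MS;
}

const now = new Date("2010-11-12T00:00:00Z");
console.log(modifiedWithinLastWeek(new Date("2010-11-10T00:00:00Z"), now)); // true
console.log(modifiedWithinLastWeek(new Date("2010-10-01T00:00:00Z"), now)); // false
```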

~TJ



Re: Discussion of File API at TPAC in Lyon

2010-11-12 Thread Tab Atkins Jr.
On Fri, Nov 12, 2010 at 3:47 PM, Jonas Sicking jo...@sicking.cc wrote:
 Maybe using a global object is better since we don't really want these
 functions to appear on documents created using XMLHttpRequest,
 DOMParser, etc.

 Quick, someone suggest a name, whoever comes up with one first wins a
 beer for next TPAC :)

I think that whoever suggested URL already wins that beer.  ^_^

~TJ



Re: Discussion of File API at TPAC in Lyon

2010-11-12 Thread Tab Atkins Jr.
On Fri, Nov 12, 2010 at 5:54 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Nov 12, 2010 at 5:18 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Fri, Nov 12, 2010 at 3:47 PM, Jonas Sicking jo...@sicking.cc wrote:
 Maybe using a global object is better since we don't really want these
 functions to appear on documents created using XMLHttpRequest,
 DOMParser, etc.

 Quick, someone suggest a name, whoever comes up with one first wins a
 beer for next TPAC :)

 I think that whoever suggested URL already wins that beer.  ^_^

 I guess me and Anne will have to split it then, since he proposed
 using the URL constructor, and I said that I didn't like using the
 constructor but suggested putting the functions on the URL interface
 object. Though it's quite possible that someone beat me to that
 proposal, in which case they better speak up or lose a beer forever
 :-)

 The downside of using URL though is that both Firefox and IE, and I
 think Chrome too, seems to be ready to ship
 createObjectURL/revokeObjectURL very soon, much sooner than the URL
 object will be fully specified. That means that if we set up the URL
 interface object for createObjectURL/revokeObjectURL, then it'll be
 harder to feature detect support for the real URL object.

Only marginally.  There'll be properties on URL that can be
existence-tested for in the future.

~TJ



Re: requestAnimationFrame

2010-11-16 Thread Tab Atkins Jr.
On Tue, Nov 16, 2010 at 10:52 AM, Gregg Tavares (wrk) g...@google.com wrote:
 On Mon, Nov 15, 2010 at 7:24 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
 Now, when animation is happening on a separate compositor thread that
 guarantee has to be relaxed a bit. But we'll still try to meet it on a
 best-effort basis --- i.e. we'll run the JS animations once per composited
 frame, if the JS can keep up.

 So you're saying that there's no guarantee that requestAnimationFrame will
 actually keep things in sync?

Right; if the browser is trying to paint animation frames every 20ms,
and two functions have both registered themselves for the next frame,
but the first function takes 50ms to run, then of course the second
one won't get to run at the same time.  It'll be delayed until the 3rd
frame after or so.
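The back-of-envelope arithmetic behind "the 3rd frame after or so" (a simplification of real scheduler behavior, assuming callbacks run back-to-back within the frame budget):

```javascript
// With a 20ms frame budget, a 50ms callback pushes the next queued
// callback back by ceil(50 / 20) = 3 frames.
function framesDelayed(callbackMs, frameBudgetMs) {
  return Math.ceil(callbackMs / frameBudgetMs);
}
console.log(framesDelayed(50, 20)); // 3
```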

~TJ



Re: [XHR2] responseType / response / overrideMimeType proposal

2010-11-29 Thread Tab Atkins Jr.
On Mon, Nov 29, 2010 at 3:00 PM, Chris Rogers crog...@google.com wrote:
 Anne, for what it's worth, in my initial implementation in WebKit, I've
 allowed .responseText to be accessed (without throwing) if .responseType ==
 "text".
 Likewise, .responseXML can be accessed (without throwing) if .responseType
 == "document"
 I don't have a strong opinion either way.  But it wasn't hard for us to
 implement that way.

IIRC, in our current experimental implementation accessing
.responseText and .responseXML *never* throw based on .responseType -
they're just empty if .responseType is wrong for them.

~TJ



Re: [IndexedDB] Compound and multiple keys

2010-12-01 Thread Tab Atkins Jr.
Disclaimer: all of my db experience is with SQL.

I prefer option A.  It's simple and easy.  Option B requires you to
potentially duplicate information into an array to use as a key, which
I don't like.

That said, I don't have much experience with out-of-line keys.  Can we
combine A & B such that in-line keys are A and out-of-line keys are B?
 That seems to be intuitive.

To answer your specific questions, I've never used a compound key with
variable numbers of columns.  (Disclaimer: I'm strongly in the
synthetic-key camp, so I don't really use compound keys anyway.  But
I've never seen an instance where I would have wanted to use a
variable number of columns, were I to index the table with a compound
key.)

I can't distinguish your second question from the first.

For your third question, the closest analogue in SQL to an array is a
SET.  I can't tell whether or not SETs can be used as keys.

~TJ



Re: Structured clone in WebStorage

2010-12-02 Thread Tab Atkins Jr.
On Thu, Dec 2, 2010 at 5:45 AM, Arthur Barstow art.bars...@nokia.com wrote:
 On Nov/29/2010 9:59 AM, ext Adrian Bateman wrote:
 On Wednesday, November 24, 2010 3:01 AM, Jeremy Orlow wrote:
 For over a year now, the WebStorage spec has stipulated that
 Local/SessionStorage store and retrieve objects per the structured
 clone algorithm rather than strings.  And yet there isn't a single
 implementation that has implemented this.  I've talked to people in
 the know from several of the other major browsers and, although no one
 is super against implementing it (including us), no one has it on any
 of their (even internal) roadmaps.  It's just not a high enough
 priority for anyone at the moment.  I feel pretty strongly that we
 should _at least_ put in some non-normative note that no browser
 vendor is currently planning on implementing this feature.  Or, better
 yet, just remove it from the spec until support starts emerging.

 I agree. We have no plans to support this in the near future either.
 At the very least, I think this should be noted as a feature at risk
 in the Call for Implementations [1].

 I don't have a strong preference for removing this feature or marking it as
 a Feature At Risk when the Candidate is published.

 It would be good to get feedback from other implementers (Maciej?, Jonas?,
 Anne?). If no one plans to implement it, perhaps it should just be removed.

I won't be the person implementing it, but fwiw I highly value having
structured clones actually work.  Any time I talk about localStorage
or similar, I get people asking about storing non-string data, and not
wanting to have to futz around with rolling their own serialization.
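To show what that futzing looks like today, here is a sketch of the hand-rolled serialization authors are stuck with under string-only storage; a plain object stands in for localStorage so the example is self-contained:

```javascript
// With string-only storage, authors must JSON-encode values by hand.
const fakeLocalStorage = {};

function setItem(key, value) {
  fakeLocalStorage[key] = JSON.stringify(value); // everything becomes a string
}

function getItem(key) {
  return JSON.parse(fakeLocalStorage[key]);
}

setItem("prefs", { theme: "dark", fontSize: 14 });
const prefs = getItem("prefs"); // { theme: "dark", fontSize: 14 }
```

Structured clone would make the stringify/parse pair unnecessary, and would also preserve types that JSON silently mangles (Dates become strings, for instance), which is exactly what people keep asking for.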

~TJ



Re: Call for Editors for Server-sent Events, Web Storage, and Web Workers

2010-12-13 Thread Tab Atkins Jr.
On Mon, Dec 13, 2010 at 2:33 PM, Doug Schepers schep...@w3.org wrote:
 Hi, Ian-
 Ian Hickson wrote (on 12/13/10 4:24 PM):
 On Mon, 13 Dec 2010, Doug Schepers wrote:

  This is an active call for editors for the Server-sent Events [1], Web
  Storage [2], and Web Workers [3] specifications.  If you are interested
  in becoming an editor, with all the rights and responsibilities that go
  along with that, please respond on this thread or email us directly at
  team-weba...@w3.org.

 That's kinda funny since those drafts already all have an active editor.

 That's why I was explicit that we are looking for co-editors.  I hope that
 you are willing to work with other editors.


  We appreciate and acknowledge the work the current editor, Ian Hickson,
  has put into these specs, but he seems to have indicated that he does
  not wish to be the one to drive them forward (which is understandable,
  given his other commitments, such as the HTML5 spec).

 I have done no such thing. I've only said I'm not interested in doing the
 TR/ work.

 Ian, the Technical Report work is what W3C does.  You stated that you aren't
 interested in TR work [1], and that you are fine with having someone take
 the draft and regularly publish a REC snapshot of it for patent policy
 purposes [2]... and that's what an editor does.  I'm not sure what other
 way to move forward.  (And to be honest, the tone of your emails does not
 inspire confidence in your willingness to work with W3C's framework.)

 I'm not playing political games, and I'm not trying to insult you... I have
 been asked to move these specs along more rapidly, and I think that's a
 reasonable request.  Our expectation is that the specs will reach a stable
 state more quickly with an additional editor who can dedicate themselves
 more exclusively to the task.

 It may be that no-one is interested or has the time, or that a volunteer
 doesn't have the right skills to manage the task, in which case we have no
 conflict; if it happens that we do find someone to help out, then we can
 discuss the distribution of work.

 I'm not trying to shut you out of the process, and I respect any feedback
 you have on the subject.

 [1] http://lists.w3.org/Archives/Public/public-webapps/2010OctDec/0865.html
 [2] http://lists.w3.org/Archives/Public/public-webapps/2010OctDec/0866.html


I, too, thought that your email was stating that the aforementioned
documents had *no* editors, and that you were saying that Ian wasn't
willing to work on them.  If the idea is merely that you would like
co-editors who are willing to do the job of occasionally pushing TR
copies, that could have been communicated *much* better.

For example, an editor does *far* more than just publish snapshots and
deal with comments; I definitely don't have time to do all the work
that being an editor would entail.  You aren't asking for someone to
do all that though, you're just asking for someone to occasionally do
a bit of administrative work.  I have the bandwidth to help with that
if necessary.

~TJ



Re: Call for Editors for Server-sent Events, Web Storage, and Web Workers

2010-12-13 Thread Tab Atkins Jr.
On Mon, Dec 13, 2010 at 3:33 PM, Doug Schepers schep...@w3.org wrote:
 Hi, Ian-

 I'm sorry if it wasn't clear that we hope to keep you on as co-editor, if
 you are willing and able.

 I simply don't have time (nor, frankly, am I interested) in having a
 political or philosophical debate about what an editor is or isn't, or what
 makes a spec stable, or whether W3C is structured in the right way to meet
 any given aim.  That conversation would distract and detract from the
 pragmatic goal of finding additional co-editors for these specs.

 We are not looking for someone to do mere secretarial work, we are looking
 for people with a stated interest to work within the W3C process to move
 these specs along the W3C Recommendation track at a timely pace.  Helping
 coordinate test suites is part of that, as is making changes to the spec
 based on requirements, implementation experience, and working group
 decisions.


 So, I repeat: anyone interested in helping co-edit these specs, please
 contact the chairs or myself, or say so on this list.

Dude, it's not a philosophical argument.  It really is important to
frame your request appropriately.  You aren't looking for someone to
edit the spec, you're looking for someone to push snapshots and do a
little bit of other work.  "Secretarial" is a good adjective.  Very
few people have the time, expertise, or willingness to do the former.
Many more can do the latter.

Fiddling about with the definition of editor is a distraction that
just makes people immediately skip the rest of the request, because
they know that they're not interested in picking the specs up as
editors.  I did that initially, and only gave it a second look when
Ian rephrased your request in more succinct and correct terms.

And, like I said, I have enough bandwidth to do this.

~TJ



Re: Call for Editors for Server-sent Events, Web Storage, and Web Workers

2010-12-13 Thread Tab Atkins Jr.
On Mon, Dec 13, 2010 at 3:42 PM, Doug Schepers schep...@w3.org wrote:
 But we are looking for more than someone to just push TR copies, we want
 someone who (like Ian) understands the issues, and knows how to help drive
 progress through consensus and technical expertise, and who can dedicate
 themselves to the task.

Can we get a bullet-point listing of the responsibilities for the
desired position?  I've gone back and reread the OP, and I don't
understand what exactly you're asking for.  I'm sure the
responsibilities are hidden there, but the wordiness makes my eyes
slide right over them.

~TJ



Re: XBL2: First Thoughts and Use Cases

2010-12-13 Thread Tab Atkins Jr.
On Mon, Dec 13, 2010 at 5:16 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Mon, Dec 13, 2010 at 5:12 AM, Dimitri Glazkov dglaz...@google.com
 wrote:

  We definitely have use-cases that require the shadow DOM to be
  dynamically updated when an element that expands to a template
  instance has its subtree changed. Almost every application that
  combines dynamic DOM modification (e.g. editing) with templates
  needs this. So you do need to record how instances were created.

 Can you give a more specific example?


 Suppose I use XBL2 to define <fancycontainer>, a container with
 elaborate styling that I can't do with CSS alone. Changes to the
 children of a <fancycontainer> need to be reflected in the shadow DOM
 tree built for <fancycontainer>, otherwise dynamic changes in the
 presence of <fancycontainer> are just broken. For example, adding a
 child to the container would need to find the associated template
 instance and insert the child into the right place in the instance.

Ah, you're thinking about changes to the normal DOM.  We're afraid of
changes to the template.  Different story.

To be more specific, if we assume something like the following
(handwavey syntax):

<element name="x-fancycontainer">
  <template>
    <div id="one">
      <div id="two">
        <div id="three">
          <content selector="*"/>
        </div>
      </div>
    </div>
  </template>
</element>

<x-fancycontainer>
  <span>foo</span>
</x-fancycontainer>

Then there's no problem.  You don't need the templates to be live to
make child changes work.  You just need to maintain some record that
any normal-DOM elements which match * should appear as children of
the shadow node #three in the final flattened tree.  appendChild()'ing
new elements to the x-fancycontainer will appropriately wire the
elements into the shadow tree.  This sort of selector-node map can be
divined from the template and copied into a separate data structure,
just like the actual shadow nodes can just be cloned out of the
template into separate live DOM.  No linkage back to the original
template is required.
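A toy version of that selector-node map, to make the idea concrete. The template is consulted once to build the map and then discarded; all names here (outputPorts, matches, assignChild) are illustrative, not from any spec:

```javascript
// The map divined from the template: which selector routes normal-DOM
// children to which shadow node in the flattened tree.
const outputPorts = [
  { selector: "*", shadowNodeId: "three" }, // from the content element above
];

// Trivial stand-in for real selector matching.
function matches(child, selector) {
  return selector === "*" || child.tagName === selector;
}

// When a normal-DOM child is appended later, the map (not the
// template) decides where it lands in the flattened tree.
function assignChild(child, ports) {
  for (const port of ports) {
    if (matches(child, port.selector)) return port.shadowNodeId;
  }
  return null; // unmatched children don't appear in the flattened tree
}
```

Under this sketch, appendChild()'ing a new span to the component just means running assignChild() against the map; no live reference back to the template is ever consulted.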

We're just afraid of, say, attaching event handlers or data-*
attributes or whatever to shadow nodes, and then having the nodes get
destroyed and recreated underneath us when the template changes.  An
element shouldn't destroy itself unless the author explicitly tells it
to.  XBL does try to be careful to destroy as little as possible, but
it shouldn't destroy *anything* unless explicitly requested.

~TJ



Re: XBL2: First Thoughts and Use Cases

2010-12-13 Thread Tab Atkins Jr.
On Mon, Dec 13, 2010 at 9:11 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/13/10 5:46 PM, Tab Atkins Jr. wrote:

 Ah, you're thinking about changes to the normal DOM.  We're afraid of
 changes to the template.

 I think roc explicitly said that he thinks the XBL2 spec's section on this
 seems ... dispensable.

 I agree with him, for what it's worth.

Then we're all in agreement.  ^_^  The rules that templates set up for
assigning normal DOM nodes to places in the final flattened tree
should stick around somehow even if we don't retain a reference to the
template itself, so that adding children to the element afterwards has
the same effect as parsing them in the original HTML.

~TJ



Re: XBL2: First Thoughts and Use Cases

2010-12-14 Thread Tab Atkins Jr.
On Mon, Dec 13, 2010 at 10:33 PM, Robert O'Callahan
rob...@ocallahan.org wrote:
 On Tue, Dec 14, 2010 at 2:46 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
 Then there's no problem.  You don't need the templates to be live to
 make child changes work.  You just need to maintain some record that
 any normal-DOM elements which match * should appear as children of
 the shadow node #three in the final flattened tree.  appendChild()'ing
 new elements to the x-fancycontainer will appropriately wire the
 elements into the shadow tree.  This sort of selector-node map can be
 divined from the template and copied into a separate data structure,
 just like the actual shadow nodes can just be cloned out of the
 template into separate live DOM.  No linkage back to the original
 template is required.

 Sure, but you also have to handle the includes attribute and the
 attributes attribute, so in fact you need to know a fair bit about the
 template to handle dynamic changes to the bound document. You might decide
 it's easier to just hold a reference to the template itself.

 But yeah, we're agreeing.

Begging the question.  ^_^

All of the information from the template can be duplicated in
appropriate data structures on the element itself, like Dimitri
explains.  This allows us to treat the template solely as a stamp,
used only at initialization and then thrown away.

This gains us a few things.  For one, you now have a simpler, more
static model of how things work.  There's no action-at-a-distance
where changes to the template late in the page lifecycle can affect
elements created during the original page parse; once an element is
created with the appropriate information, it stays that way forever,
unless the author explicitly monkeys around with it.  For two, it
naturally exposes all the magical template abilities to plain
javascript, allowing everything to be manipulated by script
after-the-fact, or even done entirely through script if that is, for
whatever reason, easier than writing a template into a page.  I think
this is A Good Thing(tm).  In general, I don't think we should be
adding new magical features to the platform without ensuring they can
be handled in script as well.

Looking just at the problem itself, it's an open question as to
whether it would be simpler to hold a reference to the template or
just create the appropriate data structures out of the template.
Likely, you'll be doing the latter in C++ anyway, so pushing them out
into js as well feels pretty natural.  But with the other added
benefits that you get from making everything happen out in the open,
I think the decision is a lot clearer.

~TJ



Re: XBL2: First Thoughts and Use Cases

2010-12-14 Thread Tab Atkins Jr.
On Tue, Dec 14, 2010 at 11:23 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/14/10 11:16 AM, Dimitri Glazkov wrote:

 This is interesting. Can you give an example? I am wondering if you
 and Tab are talking about the same thing. What sorts of problems?

 The issue we've run into is that the shadow DOM tree can get mutated,
 which makes the actual DOM get out of sync with the data structures
 that represent insertion points (what I think XBL2 calls output ports)
 and the like.  After this, adding normal DOM children to the bound
 element at best puts them in the wrong place in the shadow DOM; at
 worst we've had exploitable crash issues we had to fix.

Hmm.  I'm not well-versed enough in XBL1 to understand what all the
difficulties are, but what we're envisioning is pretty simple and
shouldn't lead to many problems.

Given a template with some output ports, it's instantiated by cloning
the shadow DOM and then setting up a map of selectors to shadow nodes to
represent the output ports.

If you mutate the shadow DOM without paying attention to the
outputPorts map, there are three possibilities for each port:

1. It points to a shadow node that wasn't mutated.  No change.

2. It points to a shadow node that was moved.  Everything currently
attached to that shadow node, and any new elements added to the
component which match the selector, will show up wherever the shadow
node was moved to.

3. It points to a shadow node that was removed.  Existing normal nodes
which were pointing to that shadow node now don't show up at all in
the final flattened tree (they lose their attachment, unless you ask
for them to be reattached).  New elements that get added and which
match the selector can either ignore the selector (because we know
that port is invalid) or just explicitly get put nowhere in the final
flattened tree.  Either option would be fine with me.


 Now if the shadow DOM can only be mutated by the binding itself, then it's
 possible to just avoid those problems in the binding script or restrict the
 things that script can do.  But if the shadow DOM is exposed to the page the
 bound element is in, then the implementation needs to handle arbitrary
 mutations _somehow_, since you can't rely on things outside the binding
 playing nice with the binding.  Or, of course, restrict what _that_ script
 can do with the shadow DOM, but that has more potential for weird breakage
 if the binding changes out from under the scripts that are trying to poke at
 it.

All of the cases I outlined above can be run into when you're mutating
a live template as well.  Are there additional cases I'm missing that
you have problems with?  Are they perhaps a result of having both a
mutable shadow and a live template?

~TJ



Re: Rename XBL2 to something without X, B, or L?

2010-12-14 Thread Tab Atkins Jr.
On Tue, Dec 14, 2010 at 1:24 PM, Dimitri Glazkov dglaz...@google.com wrote:
 Dear all,

 Looking at the use cases and the problems the current XBL2 spec is
 trying to address, I think it might be a good idea to rename it into
 something that is less legacy-bound? Hixie already cleverly disguised
 the X as [X]engamous in the latest draft, and if this spec is to
 become part of HTML, it probably should lose an 'L'. As for 'B',
 describing what XBL2 aims to do as 'bindings' ain't super-accurate.

 The way I look at it, the problems we're trying to solve are:

 a) templating --  for astoundingly fast creation of DOM chunks using
 declarative syntax;
 b) shadow DOM -- for maximum-pleasure encapsulation and leak-free
 component abstraction of DOM chunks;
 c) binding -- for joy-filled extension and decoration of DOM elements.

 Describing all these as just Binding just feels wrong. Web
 Components perhaps or something along these lines?

 Who's with me? :)

I'm partial to Web Component Model.  This lends a good name to the
things that use it (components), and is pretty clear I think.

~TJ



Re: [widgets] Storage keys and ECMAScript incompatibility?

2010-12-15 Thread Tab Atkins Jr.
On Wed, Dec 15, 2010 at 5:29 AM, Scott Wilson
scott.bradley.wil...@gmail.com wrote:
 We've come across an issue with storage keys in Widget preferences; namely
 that the Web Storage spec [1] states that:
 Keys are strings. Any string (including the empty string) is a valid key.
 Values can be any data type supported by the structured clone algorithm.
 However, common guidance on JavaScript states that:
 Variable names must begin with a letter or the underscore character
 ECMAScript[3] follows the Unicode identifier syntax[4], which defines
 variable names as starting with:
 Characters having the Unicode General_Category of uppercase letters (Lu),
 lowercase letters (Ll), titlecase letters (Lt), modifier letters (Lm), other
 letters (Lo), letter numbers (Nl), minus Pattern_Syntax
 and Pattern_White_Space code points, plus stability extensions
 So we get into problems using keys that begin with digits, which are allowed
 as far as I can tell in WebStorage and Widgets, but not in ECMAScript, and
 things like widgets.preferences.12345=xyz throw exceptions.

timeless got it.  Only a subset of possible keys can be used in the
dot syntax.  Everything else can be used in the array notation
instead.  This is perfectly fine.
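A quick illustration of the dot-versus-bracket distinction (the `preferences` object here is just a stand-in, not the actual widget API):

```javascript
// Any string is a valid *key*; only identifier-like keys can use dot
// syntax.  Bracket notation handles everything else.
const preferences = {};

preferences["12345"] = "xyz"; // fine: bracket notation takes any string
preferences.volume = 11;      // fine: "volume" is a valid identifier

// `preferences.12345 = "xyz"` would be a SyntaxError at parse time,
// which is why keys beginning with digits must use brackets.
```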

~TJ



Fwd: XBL2: First Thoughts and Use Cases

2010-12-15 Thread Tab Atkins Jr.
On Tue, Dec 14, 2010 at 10:32 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/14/10 10:08 PM, Tab Atkins Jr. wrote:

 Hm, good point.  So then, no, there has to be an element in the shadow
 DOM that represents an output port, which is then *replaced* with the
 appropriate normal-DOM children in the final flattened tree.

 So just to make sure we're on the same page... are you thinking in terms of
 there being separate DOM nodes in the template, in the shadow DOM and in the
 final flattened tree?

Yes to the first two.  Maybe to the last - the final flattened tree is
just what's handed to CSS as the element-tree.  There aren't really
DOM nodes there, or at least it doesn't matter whether or not there
are.

(Events and such don't work on the final flattened tree, they work on
the DOM augmented with shadow DOMs, in such a way that the existence
of shadow DOMs isn't revealed to elements that don't need to know
about them.)


 <content>
  <div>
    <span></span>
    <children/>
    <span></span>
  </div>
 </content>

 And then you remove the first span.

 So that in this case there would be a span element in the shadow DOM and a
 different span element in the flattened tree?

Subject to what I said above, maybe?


 Ah, ok.  Given what I said above (shadow node representing the port,
 which is replaced in the final flattened tree), then this is trivial.
 Removing the first span would just change the shadow to be:

 <div>
   <outputport/>
   <span></span>
 </div>

 OK; how would it change the flattened tree?  Or am I misunderstanding your
 conceptual model?

The final flattened tree wouldn't have the first original first span,
since it's not in the DOM anymore.  It would just look like:

<div>
  ...any normal-DOM elements associated with the output port...
  <span></span>
</div>

Hopefully this is the obvious answer.


 ...exactly as expected, since you're just mutating a DOM, and output
 ports are real DOM nodes.

 Should they be, though?  Should .childNodes.length on the parent of an
 output port in the flattened tree count the output port?

Sure - from the perspective of the shadow node, it has some shadow
children, which may include output ports.  The shadow doesn't directly
know whether its output port has any normal-DOM elements associated
with it, or how many, though this is something you should be able to
easily query with script (possibly a property on the output port
returning a NodeList of normal-DOM elements associated with it).


 The way Gecko's implementation works (if one can call it that) is that
 there is the template DOM and then the shadow DOM.  The shadow DOM is
 created by cloning the template DOM, more or less.  Output ports are
 kept track of on the template DOM.  When you insert a node as a child
 under the bind element, you find the right port in the template DOM,
 then try to find the corresponding location in the (possibly mutated)
 shadow DOM.  This clearly doesn't work very well!

 Ah, so it *is* an issue with combining mutable shadows with live
 templates!

 No.  The template is not live in the sense that it never mutates. It's a
 completely static DOM.

Oh, gotcha.  Well, still, the problem arises from the (cloned)
template DOM and the shadow DOM being separate things that can drift
out of sync.  That's not what happens in our idea - the shadow is
cloned from the template, and then it's the only source of truth.

~TJ



Re: Fwd: XBL2: First Thoughts and Use Cases

2010-12-15 Thread Tab Atkins Jr.
On Wed, Dec 15, 2010 at 10:19 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/15/10 7:51 AM, Tab Atkins Jr. wrote:
 (Events and such don't work on the final flattened tree

 Sort of.  Hit testing clearly needs to work on the layout structure
 generated from the final flattened tree, so event target determination works
 on the flattened tree, while event propagation works on the shadow DOMs.

 What worries me is that if we bake this conceptual assumption about the
 shadow DOM nodes being distinct from the flattened tree elements that gives
 us the freedom to write a spec that in fact requires both to be represented
 by distinct objects, increasing the memory and complexity needed to
 implement.  More on this below.

True.  We need to dive into event handling a bit more and make sure
we're being consistent.  I suspect we are, but I need to make sure so
I can talk a consistent story.


 Should they be, though?  Should .childNodes.length on the parent of an
 output port in the flattened tree count the output port?

 Sure - from the perspective of the shadow node, it has some shadow
 children, which may include output ports.

 So should the output port nodes then be exposed to methods manipulating the
 shadow DOM?  Should it be ok to move output ports around in the shadow tree?
  If so, why?

 My preference, fwiw, would be that output ports are not present as DOM nodes
 in the shadow DOM.  That significantly reduces the complexity of specifying
 the behavior, I think.

Yes, output ports can be moved.  I don't have any particular use-case
for it, but under the current conceptual model for how output ports
work, it's simpler to allow it than to disallow it, because output
ports are just elements.

I think that having output ports be elements is a good and simple
answer, because we want output ports to be insertion points, not
containers.  Other answers are either incompatible (for example,
having a map of selectors to shadow nodes, which makes the pointed-to
shadow node a container) or more complicated (trying to match the
current shadow DOM to the template DOM to find out where the insertion
point should be).


 No.  The template is not live in the sense that it never mutates. It's
 a
 completely static DOM.

 Oh, gotcha.  Well, still, the problem arises from the (cloned)
 template DOM and the shadow DOM being separate things that can drift
 out of sync.  That's not what happens in our idea - the shadow is
 cloned from the template, and then it's the only source of truth.

 So here's the thing.  XBL1 was originally designed as a reusable component
 model with the idea that the components would actually be reused, with
 possibly many (tens of thousands) of instantiations of a given template.
  Which means that memory usage for each instantiation is a concern, which is
 why as much as possible is delegated to the shared state in the template.

 At least in Gecko's case, we still use XBL1 in this way, and those design
 goals would apply to XBL2 from our point of view.  It sounds like you have
 entirely different design goals, right?

Sounds like it.  We're approaching the problem from the angle of "Every
major javascript framework creates its own non-interoperable component
framework.  How can we make a lingua franca that would allow them all
to talk the same language?"  We want a jQuery component and a MooTools
component to work nicely together, rather than each having their own
entirely separate notion of what a component is, how to manage its
lifecycle, etc.

Under this model, existing components already expose all their DOM
separately every time, as real live DOM nodes in the document, so
instantiating fresh shadow for each instance of a component is no
worse.  Encapsulating it in shadow trees restores some sanity to the
DOM, and allows some optimizations (like not attempting to match
normal selectors against component-internal nodes, or
component-internal selectors against the rest of the page).

(Elaborating for the viewers at home, what I mean by "sanity" is the
nice hiding of inconsequential DOM that exists only for display and
interaction purposes.  For example, if you made <input type=range> in
normal HTML, you'd use a nice chunk of DOM structure for it.  The
details of exactly what the DOM is, though, are unimportant.  All you
need to know is that there's a slider input, and some relevant knobs
are exposed as attributes.  You don't want a rule from elsewhere in the
page accidentally styling the grabber for the slider just because it
happens to match "div > div" or something, *particularly* if different
browsers use different DOM structures for the slider input.)

~TJ



Re: Fwd: XBL2: First Thoughts and Use Cases

2010-12-15 Thread Tab Atkins Jr.
On Wed, Dec 15, 2010 at 11:14 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/15/10 10:51 AM, Tab Atkins Jr. wrote:
 Yes, output ports can be moved.  I don't have any particular use-case
 for it, but under the current conceptual model for how output ports
 work, it's simpler to allow it than to disallow it, because output
 ports are just elements.

 It significantly complicates implementation; when an output port is moved
 you have to find all the elements in the flattened tree that came through
 the output port and move them to different places (note that they don't all
 end up in the same place, in general).

If all you're doing is moving the output port, why wouldn't all the
associated normal-DOM elements end up in the same place?  Mutating the
output port would obviously cause changes, and the final box-tree for
CSS can indeed be changed in non-trivial ways, but I'm not immediately
seeing any reason why the final flattened tree would be changed in any
extraordinary way.


 I think that having output ports be elements is a good and simple
 answer, because we want output ports to be insertion points, not
 containers.

 Sure.  But them being insertion points can happen without being elements.
  For example, an insertion point can be tracked conceptually as a collapsed
 range (e.g. similar to the way a caret works in text controls; that too is
 an insertion point).

True, but having them be anything other than elements complicates the
handling of shadow DOM mutations.  I don't think there's a
non-arbitrary answer to what happens if the shadow tree contains
*only* an output port (as a collapsed range) and then you append a
child to the shadow tree.  Does the range go before or after the node?
Is there any way to make this obvious to an author?

I'm not wedded to the output ports are elements in the shadow DOM
idea, but I think it's a pretty strong idea.


 At least in Gecko's case, we still use XBL1 in this way, and those design
 goals would apply to XBL2 from our point of view.  It sounds like you
 have
 entirely different design goals, right?

 Sounds like it.

 OK, so given contradictory design goals, where do we go from here?

Hmm, good question.  To start, I don't think I fully understand the
value of the situation you outline as a design goal.  What sort of
situation do you envision where you want to optimize producing tens of
thousands of components on a single page?

In the long term, if our use-cases truly are contradictory or
incompatible, then we can decide if it's worthwhile to approach each
case independently with different solutions.  We need to look at
use-cases first, though, so we can decide exactly what problems we're
trying to solve.


 Under this model, existing components already expose all their DOM
 separately every time, as real live DOM nodes in the document, so
 instantiating fresh shadow for each instance of a component is no
 worse.

 Sure.  And Gecko instantiates a fresh shadow tree copy for each instance.
  However you're suggesting also instantiating a fresh copy of various
 metadata, whose size can easily dwarf the size of the shadow tree itself.

I don't think I agree with that characterization.  The necessary
metadata isn't very large:

1. A list of output ports.

2. For each output port, a list of which normal-DOM descendants of the
component are associated with that port.

3. A list of attribute forwards (a map from name to node/name).

4. A list of pseudos (a map from idents to shadow nodes).

5. Other stuff?

This is a few NodeLists and a few maps, comparable in size to a small
DOM tree I'd think.  Am I missing something?
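Sketching that metadata as a plain structure may help size it up; every name and value below is illustrative, not from any spec:

```javascript
// Rough per-instance metadata, per the numbered list above.  It's a
// few small lists and maps, comparable in size to a small DOM tree.
function makeInstanceMetadata() {
  return {
    // (1) and (2): output ports, each tracking its assigned
    // normal-DOM nodes.
    outputPorts: [
      { selector: "*", assignedNodes: [] },
    ],
    // (3): attribute forwards: component attribute -> shadow
    // node/attribute pair.
    attributeForwards: new Map([
      ["label", { node: "shadow-caption", attr: "title" }],
    ]),
    // (4): pseudos: pseudo-element ident -> shadow node.
    pseudos: new Map([
      ["slider-thumb", "shadow-thumb"],
    ]),
  };
}

const meta = makeInstanceMetadata();
```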

~TJ



Re: XBL2: First Thoughts and Use Cases

2010-12-15 Thread Tab Atkins Jr.
On Wed, Dec 15, 2010 at 1:18 PM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 14 Dec 2010, Boris Zbarsky wrote:

 So that in this case there would be a span element in the shadow DOM and
 a different span element in the flattened tree?

 As XBL2 is specced currently, the nodes in the explicit DOM and in the
 shadow DOM are the same nodes as in the final flattened tree, except that
 certain elements in the shadow tree don't appear in the final flattened
 tree (the root template and the insertion point content elements, in
 particular; also the element used for inheritance insertion).

 The example in this section, while initially rather perplexing, is
 probably the quickest way of visualising this:

   
 http://dev.w3.org/cvsweb/~checkout~/2006/xbl2/Overview.html?content-type=text/html;%20charset=utf-8#the-final-flattened-tree

 The key is just that each element in the final flattened tree is _also_ in
 a DOM somewhere. It's the same elements, they just have two sets of tree
 pointers (parent, children, siblings, etc). Selectors and events work in
 XBL2 as specified work on a carefully chosen hybrid of these trees.

As far as I know (and I've been in the center of the discussions over
here, so hopefully I know pretty far), we agree with this design in
XBL2.  We have some nits to pick with precisely how shadows are
constructed and flattened, but otherwise, yeah, basically the same
deal.

~TJ



Re: Fwd: XBL2: First Thoughts and Use Cases

2010-12-16 Thread Tab Atkins Jr.
On Thu, Dec 16, 2010 at 10:40 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/15/10 11:29 AM, Dimitri Glazkov wrote:

 That seems like an implementation detail. Metadata can be shared and
 cloned as needed, just like styles in CSS.

 Sort of.  It would need to be cloned as soon as the shadow tree is mutated,
 right?  That seems like very fragile behavior from a web author point of
 view, where it's easy to deoptimize without realizing it.

At least we can produce simple advice on how to definitely avoid
deoptimizing - stick with the declarative syntax and don't mutate the
shadow.

With luck, enough use-cases will be solvable with the declarative
syntax that this will be an acceptable restriction.

~TJ



Re: Fwd: XBL2: First Thoughts and Use Cases

2010-12-16 Thread Tab Atkins Jr.
On Thu, Dec 16, 2010 at 1:33 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/16/10 1:00 PM, Dimitri Glazkov wrote:

 I agree that it's going to be difficult to get this right, but
 semi-live templates (if you change it here, it will reflect on all
 instances, but if you change it here, it won't) seem even more
 fragile.

 Sure.  I'm proposing that templates be completely dead.  I'm also proposing
 that, for a first cut, shadow trees be completely dead (in the "will throw
 exception if you try to add or remove nodes" sense), unless we can figure
 out how to efficiently implement live shadow trees.

Hmm.  Olli just said that shadow mutations are common in XBL1.  I'm
somewhat loath to make it automatically dead.

On the other hand, there are lots of use-cases where dead shadows are
perfectly fine, so having some declarative way to differentiate
between whether the shadow needs to be live or dead might work.

For example, adding resize handles to an image doesn't require a live
shadow.  The handles can be static; they'll need listeners registered
on them, but that's it.  Same with video controls, or select
substructure.

It sounds like it's fine for the shadow to mutate, so long as nodes
aren't added/created/moved.  For example, I can twiddle attributes on
a shadow node without requiring the more expensive map all the
metadata out step, right?  The idea is just that we can, as an
optimization, keep all the metadata on a central shared object, so
that any time I, say, add a normal-DOM node to a component, I can just
go check that central data object to see where to forward the node?
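Roughly, I'm imagining something copy-on-write shaped. A sketch of the idea (the `forwards` field and the method names here are illustrative, not from any spec):

```javascript
// Instances share one metadata object until a structural shadow mutation,
// at which point the instance takes a private clone (copy-on-write).
// Attribute twiddling never touches metadata, so it stays on the fast path.
function makeInstance(sharedMeta) {
  return {
    meta: sharedMeta,
    ownsMeta: false,
    mutateShadow(change) {
      if (!this.ownsMeta) {
        this.meta = { ...this.meta };  // deoptimize: take a private copy
        this.ownsMeta = true;
      }
      Object.assign(this.meta, change);
    },
  };
}
```

The point is that the clone cost is only paid by instances that actually mutate their shadow structure.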

~TJ



Re: Hash functions

2010-12-21 Thread Tab Atkins Jr.
On Mon, Dec 20, 2010 at 5:49 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/20/10 7:42 PM, Glenn Maynard wrote:

 Has a hash functions API been considered, so browsers can expose, for
 example, a native SHA-1 implementation?  Doing this in JS is possible,
 but painfully slow, even with current JS implementations.

 Before we go further into this, can we quantify painfully slow so we have
 some idea of the magnitude of the problem?

 Using the testcase at https://bugzilla.mozilla.org/attachment.cgi?id=487844
 but modifying the input string to be 768 chars, I see a current js
 implementation on modern laptop hardware take around 7 seconds to hash it
 50,000 times.

I get similar times for an MD5 implementation I found and chopped down.


 So I guess the question is how much data we want to be pushing through the
 hash function and what throughput we expect, and whether we think JS engines
 simply won't get there or will take too long to do so.

Notice that all three of the OP's use-cases were based on checksumming
files.  I don't know how reading in a Blob and then hashing it would
compare to just hashing an equivalent string, but I suspect it would
have a decent perf hit.  This seems like a pretty useful ability in
general, enough so that it's probably worthwhile to build it in
directly as a Blob.checksum() function or something.
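To make the cost concrete: even a very simple checksum is a tight per-byte loop in JS. Here's a toy Adler-32 (a real but weak checksum, shown only to illustrate the shape of the work, not as a suggestion for what Blob.checksum() should use); a native implementation gets to run this without per-byte interpreter/JIT overhead:

```javascript
// Adler-32: two running sums over every byte, modulo 65521.
function adler32(bytes) {
  const MOD = 65521;
  let a = 1, b = 0;
  for (const byte of bytes) {
    a = (a + byte) % MOD;
    b = (b + a) % MOD;
  }
  return ((b << 16) | a) >>> 0;  // high 16 bits: b, low 16 bits: a
}
```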

I still think it may be useful for the security use-case as well,
where you explicitly want a slow hash to begin with.  If JS imposes a
slowdown on top of that, it could render a good hash too slow to
actually use in practice.  Plus, you have to depend on the hash
implementation you pulled off the web or hacked together yourself,
which you probably didn't manually verify before starting to use.

~TJ



Re: Updates to FileAPI

2010-12-21 Thread Tab Atkins Jr.
On Tue, Dec 21, 2010 at 11:31 AM, Arun Ranganathan a...@mozilla.com wrote:
 There are more rigid conformance requirements around lastModifiedDate.

 http://dev.w3.org/2006/webapi/FileAPI/#dfn-lastModifiedDate


The last modified date of the file; on getting, this MUST return a
Date object [HTML5] with the last modified date on disk. On getting,
user agents MUST create a new Date object with the last modified date
on disk; a different Date object MUST be returned each time. On
getting, if user agents cannot make this information available, they
MUST return null; on getting, even if the user agent could make this
information available on previous gets, if it cannot make this
information available on the current access it MUST return null.


This is worded really confusingly - there are four "on getting"
clauses, and two of the phrases just duplicate information expressed
in previous phrases.  Can we get something clearer, like this:


The last modified date of the file.  On getting, if user agents can
make this information available, this MUST return a fresh Date object
initialized to the last modified date of the file; otherwise, this
MUST return null.


?
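For clarity, the getter behavior being required is this, sketched with a plain object standing in for File (`fileInfo.mtime` is an illustrative stand-in for the on-disk modification date, not the real File API internals):

```javascript
// Each access returns a fresh Date object (never the same object twice),
// or null when the modification time can't be determined.
function makeFile(fileInfo) {
  return {
    get lastModifiedDate() {
      return fileInfo.mtime == null ? null : new Date(fileInfo.mtime);
    },
  };
}
```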

~TJ



Re: Web workers: synchronously handling events

2010-12-28 Thread Tab Atkins Jr.
On Sun, Dec 26, 2010 at 4:29 PM, Glenn Maynard gl...@zewt.org wrote:
 Havn't been able to find this in the spec: is there a way to allow
 processing messages synchronously during a number-crunching worker
 thread?

Yes, by pausing every once in a while with setTimeout and letting the
event loop spin.

Doing anything else would break javascript's appearance of single-threadedness.
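The pattern looks roughly like this (a sketch; nothing worker-specific about it, the same shape works in a page or a worker):

```javascript
// Process the data in chunks, yielding to the event loop between chunks
// via setTimeout(0) so queued events (e.g. onmessage) get handled.
function sumChunked(data, chunkSize, done) {
  let i = 0, total = 0;
  function step() {
    const end = Math.min(i + chunkSize, data.length);
    for (; i < end; i++) total += data[i];
    if (i < data.length) {
      setTimeout(step, 0);  // pending events run before the next chunk
    } else {
      done(total);
    }
  }
  step();
}
```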

I agree that it's not particularly nice to write your algorithms like
this, but it's already familiar to any js dev who uses any algorithm
with significant running time.  If we were to fix this, it would need
to be done at the language level, because there are language-level
issues to be solved that can't be hacked around by a specialized solution.

~TJ



Re: [Bug 11606] New: wanted: awareness of non-persistent web storage

2010-12-28 Thread Tab Atkins Jr.
On Mon, Dec 27, 2010 at 8:43 PM, Glenn Maynard gl...@zewt.org wrote:
 On Mon, Dec 27, 2010 at 10:55 PM, Drew Wilson atwil...@google.com wrote:
 FWIW, the Chrome team has come down pretty hard on the side of not ever
 leaking to apps that the user is in incognito mode, for precisely the
 reasons described previously. Incognito mode loses much of its utility if
 pages are able to screen for it and block access.

 A similar argument can be made for ad blockers, and in my opinion much
 more convincingly: ad blockers very directly (even measurably) mean
 sites make less money.  Yet, in my years of using ABP, I've never once
 encountered in the wild a site that refused to work because of it,
 despite the fact that they're trivial to detect.

You haven't looked widely enough.  There was a fad for a little while
of doing precisely that - hiding the content if the page detected that
an adblocker was in use, and showing an explanation of why the content
was hidden.  This fad died out, though, because it's pretty rude and
most users don't know how to turn off their adblockers anyway.

Note, though, that turning off your adblocker doesn't really open you
up to privacy violations.  Switching out of incognito (when you don't
really understand the distinction in the first place, and just want
things to work) does.


 If ad blockers had been designed to hide their activity from pages,
 the end result would have been much worse.  Images would have to be
 marked visibility: hidden rather than display: none, since the changes
 in layout are detectable.  A huge amount of bandwidth would be wasted,
 since the server can check to see that a banner is actually being
 downloaded.

 This just has the feel of those theoretical problems that are easy to
 argue for, but are unlikely to ever actually surface.

I agree that making adblockers undetectable would have been a huge
problem, and almost certainly not worth the trouble.  On the other
hand, making incognito mode undetectable is very easy - just act like
a normal, fresh invocation of the browser, then silently throw away
all the data you've stored at the end of the session.  The page has no
way to tell you apart from any other new user.


 I do think there's a user education burden that isn't entirely being met
 yet, though - the Chrome documentation doesn't really talk about local
 storage, for example. But I don't think that pushing this responsibility
 onto individual web applications is the right solution.

 My experience suggests that most users will never know the difference
 between local and server-side storage, and probably don't want to;
 most designs that require that much user education don't work.  The
 most likely end result is ignoring the issue: let a few people lose
 data, and if they complain, tell them it's your fault for using
 incognito mode, and your browser's fault for preventing us from
 warning you.  Not ideal, but pushing the blame onto the browser is
 likely to be the path of least resistance.

I agree that it's the path of least resistance.  I also believe it's
the best solution overall.

~TJ



Re: [chromium-html5] LocalStorage inside Worker

2011-01-11 Thread Tab Atkins Jr.
On Tue, Jan 11, 2011 at 2:37 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Jan 11, 2011 at 2:11 PM, Keean Schupke ke...@fry-it.com wrote:
 would:
 withNamedStorage('x', function(store) {...});
 make more sense from a naming point of view?

 I have a different association for 'with', especially in context of
 JavaScript, so I prefer 'get'. But others feel free to express an
 opinion.

In the context of other languages with similar constructs (request a
resource which is available within the body of the construct), the
with[resource] naming scheme is pretty common and well-known.  I
personally like it.
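The general shape of such a construct, sketched generically (withNamedStorage would presumably follow this pattern, likely asynchronously; the names here are just for illustration):

```javascript
// Acquire a resource, expose it only inside the callback,
// and release it afterwards even if the body throws.
function withResource(acquire, release, body) {
  const res = acquire();
  try {
    return body(res);
  } finally {
    release(res);
  }
}
```

The nice property is that the resource's lifetime is scoped to the callback, so callers can't accidentally hold onto it.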

~TJ



Re: [IndexedDB] Compound and multiple keys

2011-01-20 Thread Tab Atkins Jr.
On Thu, Jan 20, 2011 at 10:12 AM, Keean Schupke ke...@fry-it.com wrote:
 Compound primary keys are commonly used afaik.

Indeed.  It's one of the common themes in the debate between natural
and synthetic keys.

~TJ



Re: several messages

2011-02-24 Thread Tab Atkins Jr.
On Thu, Feb 24, 2011 at 5:46 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 24 Feb 2011, Arthur Barstow wrote:

 Given the information below, I think it would be useful to move this
 spec to a test-ready state. That is, publish it as a Last Call Working
 Draft now and if there are known issues, document them in the Status of
 the Document Section. Then, after a fixed review period, if no
 substantial changes are agreed, the spec can be moved to Candidate
 Recommendation and work on a test suite can begin. Naturally, if major
 changes are agreed, the spec will need to return to Working Draft.

 On Thu, 24 Feb 2011, Arthur Barstow wrote:

 Given the information below, I think it would be useful to move this
 spec to a test-ready state. That is, publish it as a Last Call Working
 Draft now and if there are known issues, document them in the Status of
 the Document Section. Then, after a fixed review period, if no
 substantial changes are agreed, the spec can be moved to Candidate
 Recommendation and work on a test suite can begin. Naturally, if major
 changes are agreed, the spec will need to return to Working Draft.

 I'll defer to Tab for these.

I'll do the necessary edits for publishing tomorrow.

~TJ



Re: Moving XBL et al. forward

2011-03-09 Thread Tab Atkins Jr.
This email is written as the position of several Chrome engineers
working in this problem area at Google, though not Google's official
position.

On Wed, Mar 9, 2011 at 6:14 AM, Arthur Barstow art.bars...@nokia.com wrote:
 * What is the latest implementation status of the XBL2 CR [XBL2-CR] and
 Hixie's September 2010 version [XBL-ED] (previously referred to as
 XBL2-cutdown)?

Chrome does not implement either form of XBL2.


 * Which members of WebApps want to continue with the XML-based version of
 XBL2 as codified in the XBL2 CR? If you are in this group, what firm
 commitments can you make to push the spec along the REC track? Would you
 object to the Forms WG taking over this spec?

We object to continuing with XBL2.  The original XBL2 was flawed, and
the cutdown version of XBL2 is incomplete and still too complex.  I'm
not sure if we would object, per se, to the Forms WG taking over XBL2,
but we would consider it wasted effort that would not result in us
implementing it in Chrome/Webkit.


 * Which members of WebApps want to continue with the non-XML version as
 Hixie created last September? If you are in this group, what firm
 commitments can you make to push this version along the REC track
 (especially implementation)?

We do not wish to work on either version of XBL2.


 * Should the WG pursue Dimitri Glazkov's Component Model proposal
 [Component]? If yes, who is willing to commit to work on that spec?

We believe that the Component Model proposal should be pursued.
Dimitri Glazkov volunteers to edit the spec.

~TJ



Re: Moving XBL et al. forward

2011-03-09 Thread Tab Atkins Jr.
(off-list)

On Wed, Mar 9, 2011 at 1:25 PM, Cameron McCormack c...@mcc.id.au wrote:
  <svg …>
    <star cx="100" cy="100" points="5"/>
  </svg>

<svg>
  <x-star cx="100" cy="100" points="5"/>
</svg>

~TJ



Re: Moving XBL et al. forward

2011-03-10 Thread Tab Atkins Jr.
On Thu, Mar 10, 2011 at 1:51 PM, Daniel Glazman
daniel.glaz...@disruptive-innovations.com wrote:
 Le 10/03/11 16:46, Cameron McCormack a écrit :

 We should think of XBL as being a DOM-based thing, rather than an XML-
 based thing.  Then we can have HTML syntax for the cases where
 everything is within a text/html document, and XML syntax for the cases
 like the ones I brought up, where you might be purely within an SVG
 document.

 I disagree. If you do that, the HTML serialization of a binding won't
 be usable in a user agent having no knowledge of HTML.

The HTML serialization of an ordinary web page isn't usable in a user
agent having no knowledge of HTML, either.  Why is this different?

~TJ



Re: Moving XBL et al. forward

2011-03-10 Thread Tab Atkins Jr.
On Thu, Mar 10, 2011 at 2:39 PM, Daniel Glazman
daniel.glaz...@disruptive-innovations.com wrote:
 Le 10/03/11 16:55, Tab Atkins Jr. a écrit :
 The HTML serialization of an ordinary web page isn't usable in a user
 agent having no knowledge of HTML, either.  Why is this different?

 Do you have different serializations for another helper technology
 called CSS ? No. Why should it be different here?

Languages whose syntax is *significantly* different from HTML/XML,
like CSS or WebVTT, don't run into the dual representation issue
because, well, attempting to represent them in HTML would be a ton of
work and would result in something fairly unrecognizable.

As Cameron noted, however, it seems to be useful and accepted to
expose XML/HTML languages in both an XML and an HTML serialization, as
the two languages are very close to each other and the differences are
relatively minor.  Those minor differences, unfortunately, tend to
cause authors quite a lot of problems when they're currently using one
and try to use the other, so allowing an author to use whichever they
prefer is a good thing.

We now expose an HTML serialization of SVG and MathML embedded in
HTML.  Similarly, Component Model in HTML will have an HTML
serialization, but it's easy to imagine it also having an XML
serialization for use directly in SVG or similar.

~TJ



Re: API for matrix manipulation

2011-03-15 Thread Tab Atkins Jr.
On Tue, Mar 15, 2011 at 5:00 PM, Chris Marrin cmar...@apple.com wrote:
 I think it would be nice to unify the classes somehow. But that might be 
 difficult since SVG and CSS are (necessarily) separate specs. But maybe one 
 of the API gurus has a solution?

We just discussed this on Monday at the FXTF telcon.  Sounds like
people are, in general, okay with just using a 4x4 matrix, though
there are some possible implementation issues with devices that can't
do 3d at all.  (It was suggested that they can simply do a 2d
projection, which is simple.)

~TJ



Re: [WebSQL] Any future plans, or has IndexedDB replaced WebSQL?

2011-04-04 Thread Tab Atkins Jr.
On Mon, Apr 4, 2011 at 8:07 AM, Joran Greef jo...@ronomon.com wrote:
 On 04 Apr 2011, at 4:39 PM, Jonas Sicking wrote:
 Hence it would still be the case that we would be relying on the
 SQLite developers to maintain a stable SQL interpretation...

 SQLite has a fantastic track record of maintaining backwards compatibility.

 IndexedDB has as yet no track record, no consistent implementations, no 
 widespread deployment,

It's new.


 only measurably poor performance

Ironically, the poor performance is because it's using sqlite as a
backing-store in the current implementation.  That's being fixed by
replacing sqlite.


 and a lukewarm indexing and querying API.

Kinda the point, in that the power/complexity of SQL confuses a huge
number of developers, who end up coding something which doesn't
actually use the relational model in any significant way, but still
pays the cost of it in syntax.

(I found normalization forms and similar things completely trivial
when I was learning SQL, but for some reason almost every codebase
I've looked at has a horribly-structured db.  As far as I can tell,
the majority of developers just hack SQL into being a linear object
store and do the rest in their application code.  We can reduce the
friction here by actually giving them a linear object store, which is
what IndexedDB is.)

~TJ



Re: [WebSQL] Any future plans, or has IndexedDB replaced WebSQL?

2011-04-06 Thread Tab Atkins Jr.
On Wed, Apr 6, 2011 at 10:14 AM, Shawn Wilsher sdwi...@mozilla.com wrote:
 On 4/6/2011 9:44 AM, Joran Greef wrote:
 We only need one fixed version of SQLite to be shipped across Chrome,
 Safari, Opera, Firefox and IE. That in itself would represent a tremendous
 goal for IndexedDB to target and to try and achieve. When it actually does,
 and surpasses the fixed version of SQLite, those developers requiring the
 raw performance and reliability of SQLite could then switch over.

 I don't believe any browser vendor would be interested in shipping two
 different version of SQLite (one for internal use, and one for the web).  I
 can say, with certainty, that Mozilla is not.

In addition, as previously stated, the near certainty that there is,
hidden somewhere in the code, some security bugs (there are *always*
security bugs) means that browsers can not/will not ship a single
fixed version.

When a security bug is encountered, either the browsers update to a
new version of sqlite (if it's already been fixed), thus potentially
breaking sites, or they patch sqlite and then upgrade to the patched
version, thus potentially breaking sites, or they fork sqlite and
patch the error only in their forked version, still potentially
breaking sites but also forking the project.  The only thing that is
*not* a valid possibility is the browsers staying on the single fixed
version, thus continuing to expose their users to the security bug.

~TJ



Re: CfC: publish new Working Draft of Indexed Database API; deadline April 16

2011-04-09 Thread Tab Atkins Jr.
On Sat, Apr 9, 2011 at 4:22 AM, Arthur Barstow art.bars...@nokia.com wrote:
 The Editors of the Indexed Database API would like to publish a new Working
 Draft of their spec and this is a Call for Consensus to do so:

  http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html

 If one agrees with this proposal, it: a) indicates support for publishing a
 new WD; and b) does not necessarily indicate support for the contents of the
 WD.

 If you have any comments or concerns about this proposal, please send them
 to public-webapps by April 16 at the latest.

 As with all of our CfCs, positive response is preferred and encouraged and
 silence will be assumed to be agreement with the proposal.

I support publishing this document.

~TJ



Re: Reminder: RfC: Last Call Working Draft of Web Workers; deadline April 21

2011-04-20 Thread Tab Atkins Jr.
On Wed, Apr 20, 2011 at 12:47 PM, Travis Leithead
travis.leith...@microsoft.com wrote:
 (This time before the deadline :)

 Microsoft has the following additional feedback for this Last Call of Web 
 Workers.

 We are concerned about the privacy implications we discovered when reviewing 
 the current web workers editor's draft in its treatment of shared workers 
 [1]. Specifically, the spec as currently written allows for 3rd party content 
 to use shared workers to connect and share (broker) information between 
 top-level domains as well as make resource requests on behalf of all 
 connections. For example, a user may visit a site A.com which hosts a 3rd 
 party iframe of domain 3rdParty.com which initially creates a shared 
 worker. Then, the user (from a different page/window) opens a web site 
 B.com which also hosts a 3rd party iframe of domain 3rdParty.com, which 
 (per the spec text below, and as confirmed several browser's implementations) 
 should be able to connect to the same shared worker. The end user only sees 
 domains A.com and B.com in his or her browser window, but can have 
 information collected about those pages by way of the third party connected 
 shared worker.

 Here's the relevant spec text:

 SharedWorker constructor steps:
 7.5. If name is not the empty string and there exists a 
 SharedWorkerGlobalScope object whose closing flag is false, whose name 
 attribute is exactly equal to name, and whose location attribute represents 
 an absolute URL with the same origin as scriptURL, then let worker global 
 scope be that SharedWorkerGlobalScope object.

 Given our current position on privacy and privacy technologies in IE9 [2], we 
 will not be able to implement shared workers as described above.

 We believe it is appropriate to limit the scenarios in which connections to 
 existing shared workers are allowed. We propose that connections should only 
 be established to existing shared workers when *top-level* domains match 
 (rather than when the location attribute represents an absolute URL with the 
 same origin as scriptURL). By limiting sharing to top-level domains, privacy 
 decisions can be made on behalf of the top-level page (from the user's point 
 of view) with scoped impact to the functionality of the 3rd party iframe.

 [1] 
 http://dev.w3.org/html5/workers/#shared-workers-and-the-sharedworker-interface
 [2] http://www.w3.org/2011/track-privacy/papers/microsoft-bateman.pdf

Please correct me if I'm missing something, but I don't see any new
privacy-leak vectors here.  Without Shared Workers, 3rdparty.com can
just hold open a communication channel to its server and shuttle
information between the iframes on A.com and B.com that way.

~TJ



Re: Reminder: RfC: Last Call Working Draft of Web Workers; deadline April 21

2011-04-20 Thread Tab Atkins Jr.
On Wed, Apr 20, 2011 at 3:13 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Apr 20, 2011 at 12:54 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Please correct me if I'm missing something, but I don't see any new
 privacy-leak vectors here.  Without Shared Workers, 3rdparty.com can
 just hold open a communication channel to its server and shuttle
 information between the iframes on A.com and B.com that way.

 Not if the user disables third-party cookies (or cookies completely), right?

No, what I described is independent of cookies.  You just have to use
basic long-polling techniques, so the iframe on A.com sends a message
to the server, and the server then passes that message to the iframe
on B.com.

As Drew mentions, cookies are another way to pass this information
around, as are multiple other shared-in-a-domain information sources.

~TJ



Re: Reminder: RfC: Last Call Working Draft of Web Workers; deadline April 21

2011-04-20 Thread Tab Atkins Jr.
On Wed, Apr 20, 2011 at 3:41 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Apr 20, 2011 at 3:19 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Wed, Apr 20, 2011 at 3:13 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Apr 20, 2011 at 12:54 PM, Tab Atkins Jr. jackalm...@gmail.com 
 wrote:
 Please correct me if I'm missing something, but I don't see any new
 privacy-leak vectors here.  Without Shared Workers, 3rdparty.com can
 just hold open a communication channel to its server and shuttle
 information between the iframes on A.com and B.com that way.

 Not if the user disables third-party cookies (or cookies completely), right?

 No, what I described is independent of cookies.  You just have to use
 basic long-polling techniques, so the iframe on A.com sends a message
 to the server, and the server then passes that message to the iframe
 on B.com.

 But how does the server know to pair the two incoming connections and
 forward data between them? If 50 users visit these sites, all the
 server sees is 100 incoming connections with no idea which are coming
 from the same user.

True, you need some side-channel to link the two iframes for a
particular client.  You can use something simple like one of the
*other* within-domain communication mediums (cookies, localStorage,
etc.) to share a uniqueid, or you can just pull information out of the
client, which the two windows are likely to share, and use that as the
identifier.  We already know that you can fingerprint a large
percentage of users with only a handful of information sources
available to JS.
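A sketch of that side-channel idea - two frames independently derive the same identifier from client properties they both can read (the property names and the toy hash here are purely illustrative):

```javascript
// Combine a few client-visible properties and hash them into a short id.
// Both iframes compute this independently and send it to the server,
// which pairs up connections reporting the same value.
function fingerprint(env) {
  const raw = [env.userAgent, env.screen, env.timezone, env.language].join("|");
  let h = 0;
  for (const ch of raw) h = (Math.imul(h, 31) + ch.charCodeAt(0)) >>> 0;
  return h.toString(16);
}
```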

~TJ



Re: Model-driven Views

2011-04-26 Thread Tab Atkins Jr.
On Mon, Apr 25, 2011 at 9:14 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 4/22/11 8:35 PM, Rafael Weinstein wrote:
 Myself and a few other chromium folks have been working on a design
 for a formalized separation between View and Model in the browser,
 with needs of web applications being the primary motivator.

 Our ideas are implemented as an experimental Javascript library:
 https://code.google.com/p/mdv/ and the basic design is described here:
 http://mdv.googlecode.com/svn/trunk/docs/design_intro.html.

 The interesting thing to me is that the DOM is what's meant to be the model
 originally, as far as I can tell, with the CSS presentation being the
 view

 I guess we ended up with too much view leakage through the model so we're
 adding another layer of model, eh?

There's always multiple layers of model in any non-trivial system.  ^_^

In this case, the original DOM as model is valid in the sense of the
page as a more-or-less static document, where it's the canonical
source of information.  With an app, though, the data canonically
lives in Javascript, with the DOM being relegated to being used to
display the data and allow user interaction.  MDV is one possibility
for making this relationship cleaner and simpler.
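In miniature, the inversion looks like this - the JS object is canonical, and the "view" is just a projection of it (a string stands in for DOM here; a real MDV implementation would observe mutations instead of requiring an explicit setter):

```javascript
// The model (plain JS data) is the source of truth; render() projects it
// into the view.  set() mutates the model and re-renders the projection.
function bindModel(model, render) {
  const bound = {
    view: render(model),
    set(key, value) {
      model[key] = value;
      bound.view = render(model);
    },
  };
  return bound;
}
```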

~TJ



Re: Model-driven Views

2011-04-27 Thread Tab Atkins Jr.
On Wed, Apr 27, 2011 at 11:47 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 04/27/2011 09:13 PM, Erik Arvidsson wrote:
  1. We want <template> to be able to contain arbitrary content that is
 used as the content of the instances that are created from it. Now
 assume that the template element contains some active content such as
  <audio autoplay>, <script> or simply just an image. We don't want the
 audio to start playing, we don't want the script to run inside the
 template element and we don't want the image to be requested at this
 point.

  <audio> works even if it is not in a document, and so
  does <img>.
 But I see the problem you're trying to avoid.

Yeah, we basically just want the actual nodes inside of <template> to
be "dead", since their only purpose is to be cloned when you create
real DOM from the template.  How precisely this is accomplished is
more-or-less irrelevant at this point.

~TJ



Re: Server-Sent Event types

2011-04-28 Thread Tab Atkins Jr.
On Wed, Apr 27, 2011 at 11:26 PM, Brett Zamir bret...@gmail.com wrote:
 I am a newcomer to the Server-Sent Events spec, so my apologies if I am
 covering old ground.

 While I can understand that Server-Sent Events may be intending to start off
 simple, I wonder whether there is some reason a formal mechanism was not
 adopted to at least allow the specification of event types. I think such a
 convention would have a number of important benefits.

After reading the entire email, I still can't tell what you mean by
'event types'.  I can only assume that you mean something like what
the spec already allows by having the author send an "event: foo"
line, which makes the next batch of data be dispatched as a "foo"
event instead of a "message" event.
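Concretely, the mechanism works like this - a simplified parser for one message block (the real spec's field parsing has a few more rules, e.g. stripping only a single leading space from values):

```javascript
// "event:" sets the type the client dispatches; if absent, it's "message".
// Multiple "data:" lines are joined with newlines.
function parseSSEMessage(block) {
  let type = "message";
  const data = [];
  for (const line of block.split("\n")) {
    if (line.startsWith("event:")) type = line.slice(6).trim();
    else if (line.startsWith("data:")) data.push(line.slice(5).trim());
  }
  return { type, data: data.join("\n") };
}
```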

Is this what you're attempting to do?

~TJ



Re: Mouse Lock

2011-06-20 Thread Tab Atkins Jr.
On Mon, Jun 20, 2011 at 10:18 AM, Adam Barth w...@adambarth.com wrote:
 So it sounds like we don't have a security model but we're hoping UA
 implementors can dream one up by combining enough heuristics.

A model which I suggested privately, and which I believe others have
suggested publicly, is this:

1. While fullscreen is enabled, you can lock the mouse to the
fullscreened element without a prompt or persistent message.  A
temporary message may still be shown.  The lock is automatically
released if the user exits fullscreen.

2. During a user-initiated click, you can lock the mouse to the target
or an ancestor without a permissions prompt, but with a persistent
message, either as an overlay or in the browser's chrome.

3. Otherwise, any attempt to lock the mouse triggers a permissions
prompt, and while the lock is active a persistent message is shown.

These wouldn't be normative, of course, because different platforms
may have different permissions models, but they seem like a good
outline for balancing user safety with author convenience/lack of user
annoyance.

~TJ



Re: Mouse Lock

2011-06-20 Thread Tab Atkins Jr.
On Mon, Jun 20, 2011 at 12:18 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Jun 20, 2011 at 11:17 AM, Adam Barth w...@adambarth.com wrote:
 On Mon, Jun 20, 2011 at 10:48 AM, Tab Atkins Jr. jackalm...@gmail.com 
 wrote:
 On Mon, Jun 20, 2011 at 10:18 AM, Adam Barth w...@adambarth.com wrote:
 So it sounds like we don't have a security model but we're hoping UA
 implementors can dream one up by combining enough heuristics.

 A model which I suggested privately, and which I believe others have
 suggested publicly, is this:

 1. While fullscreen is enabled, you can lock the mouse to the
 fullscreened element without a prompt or persistent message.  A
 temporary message may still be shown.  The lock is automatically
 released if the user exits fullscreen.

 ^^^ This part sounds solid.

 Why do you need to lock the mouse when you're in full-screen mode? The
 mouse can't go outside the element anyway, right?

 Or is this to handle the multi-monitor scenario where a fullscreen'ed
 element might just cover one monitor.

Multi-monitor is one reason.  Another is that, even in single-monitor
fullscreen, some browsers pop up an overlay when the mouse hits the
top of the screen.  Mouselocking would allow browsers to keep doing
that in the normal case (it's useful) while making it not happen
during games.


 2. During a user-initiated click, you can lock the mouse to the target
 or an ancestor without a permissions prompt, but with a persistent
 message, either as an overlay or in the browser's chrome.

  ^^^ That also sounds reasonable too.  There's some subtlety to make sure
  the message is actually visible to the user, especially in desktop
  situations where one window can overlay another.  It's probably also
 useful to instruct the user how to release the lock.

 Hmm.. I'm less comfortable with this I think. It's always very easy to
 get the user to click somewhere on a page, so this effectively means
 that it's very easy for any page to lock the mouse.

Yes, which is why I suggest a persistent message be shown with
instructions on how to release the lock.  Thus the user is aware that
the website is being hostile and knows how to stop it long enough to
get away (clicking Back or closing the tab).

~TJ



Re: Mouse Lock

2011-06-20 Thread Tab Atkins Jr.
On Mon, Jun 20, 2011 at 1:06 PM, Glenn Maynard gl...@zewt.org wrote:
 On Thu, Jun 16, 2011 at 6:21 PM, Vincent Scheib sch...@google.com wrote:
 - Mousemove event gains .deltaX .deltaY members, always valid, not just
 during mouse lock.

 Is this implementable?

 First-person games typically implement delta mouse movement by hiding
 the mouse cursor, warping the invisible cursor to the center of the
 screen when it moves, and monitoring the distance of mouse movement
 from the center of the screen to calculate deltas.  I don't think
 Windows provides a way to retrieve delta mouse movements that doesn't
 clip when the mouse reaches the edge of the screen.  I'm not sure
 about other environments.

In a non-mouselock situation, mouse events stop being fired anyway
when the mouse goes outside of the window, so you don't have to worry
about the delta information.

In a mouselock situation, the browser can do precisely what you
describe to keep the mouse from leaving the window.
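Glenn's clipping concern can be seen in how a page derives deltas today, without any lock: only successive event positions are available, so deltas stop at the window edge. A minimal sketch (the helper name is mine, not from any proposal):

```javascript
// Without mouse lock, a page can only derive deltas from successive
// event positions; events stop at the window edge, so deltas clip there.
let last = null;

function deltaFromMove(ev) {
  // ev is anything carrying clientX/clientY, e.g. a mousemove event.
  const dx = last ? ev.clientX - last.x : 0;
  const dy = last ? ev.clientY - last.y : 0;
  last = { x: ev.clientX, y: ev.clientY };
  return { dx, dy };
}
```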

~TJ



Re: Mouse Lock

2011-06-20 Thread Tab Atkins Jr.
On Mon, Jun 20, 2011 at 1:19 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 06/20/2011 10:18 PM, Jonas Sicking wrote:
 On Mon, Jun 20, 2011 at 11:17 AM, Adam Barth w...@adambarth.com wrote:
 On Mon, Jun 20, 2011 at 10:48 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 2. During a user-initiated click, you can lock the mouse to the target
 or an ancestor without a permissions prompt, but with a persistent
 message, either as an overlay or in the browser's chrome.

 ^^^ That also sounds reasonable.  There's some subtlety to make sure
 the message is actually visible to the user, especially in desktop
 situations where one window can overlay another.  It's probably also
 useful to instruct the user how to release the lock.

 Hmm... I'm less comfortable with this, I think. It's always very easy to
 get the user to click somewhere on a page, so this effectively means
 that it's very easy for any page to lock the mouse.

 Yeah. Mouse could be locked on mousedown, but it should be automatically
 released on mouseup.
 That is the way set/releaseCapture works in Firefox.

 Other cases should need explicit permission from user.

The use-case is non-fullscreen games and similar, where you'd prefer
to lock the mouse as soon as the user clicks into the game.  Minecraft
is the first example that pops into my head that works like this -
it's windowed, and mouselocks you as soon as you click on it.

Requiring a permissions prompt in this case would be suboptimal, as it
would mean the user needs to click at least two things to actually
play the game.  (The game itself, and then the permissions prompt.)

Releasing on mouseup isn't useful for mouselock.  The sorts of things
that you can do on a drag like that are captured by the mousecapture
concept, which is distinct.

~TJ



Re: Mouse Lock

2011-06-20 Thread Tab Atkins Jr.
On Mon, Jun 20, 2011 at 3:03 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 06/21/2011 12:25 AM, Tab Atkins Jr. wrote:
 The use-case is non-fullscreen games and similar, where you'd prefer
 to lock the mouse as soon as the user clicks into the game.  Minecraft
 is the first example that pops into my head that works like this -
 it's windowed, and mouselocks you as soon as you click on it.

 And how would the user unlock when some evil site locks the mouse?
 Could you give some concrete example about
  It's probably also useful to instruct the user how to release the lock.

I'm assuming that the browser reserves some logical key (like Esc) for
releasing things like this, and communicates this in the overlay
message.

~TJ



Re: Mouse Lock

2011-06-20 Thread Tab Atkins Jr.
On Mon, Jun 20, 2011 at 3:26 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 06/21/2011 01:08 AM, Tab Atkins Jr. wrote:
 On Mon, Jun 20, 2011 at 3:03 PM, Olli Pettay olli.pet...@helsinki.fi
  wrote:
 On 06/21/2011 12:25 AM, Tab Atkins Jr. wrote:
 The use-case is non-fullscreen games and similar, where you'd prefer
 to lock the mouse as soon as the user clicks into the game.  Minecraft
 is the first example that pops into my head that works like this -
 it's windowed, and mouselocks you as soon as you click on it.

 And how would the user unlock when some evil site locks the mouse?
 Could you give some concrete example about
  It's probably also useful to instruct the user how to release the
 lock.

 I'm assuming that the browser reserves some logical key (like Esc) for
 releasing things like this, and communicates this in the overlay
 message.

 And what if the web page moves focus to some other browser window, so that
 Esc is fired there? Or what if the web page moves the window outside the
 screen, so that the user can't actually see the message about how to
 unlock the mouse?

How is a webpage able to do either of those things?

~TJ



Re: Mouse Lock

2011-06-22 Thread Tab Atkins Jr.
On Wed, Jun 22, 2011 at 2:54 AM, Glenn Maynard gl...@zewt.org wrote:
 Unrelated, another detail: if most implementations are going to need to warp
 the mouse cursor to do this, the other mouse event coordinates should always
 be 0 (or null).  Otherwise, implementations on platforms which don't need to
 warp the cursor may still fill these in, causing incompatibilities.  Events
 like mouseover should probably be suppressed, too.  At that point, it's
 probably cleaner to stop firing *all* mouse movement events entirely, as if
 the mouse isn't moving, and to use a separate mousedelta event when locked
 which only has deltaX and deltaY.

I had this thought initially, but didn't pursue it.  Now that you
bring it up again, though, I think I agree.
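Assuming the separate mousedelta event Glenn sketches (the event name and fields are his proposal, not a shipped API), a game would consume it by accumulating deltas into a camera, never needing a cursor position:

```javascript
// Hypothetical consumer of the proposed mousedelta event: accumulate
// deltas into camera angles; no cursor position is ever used.
const camera = { yaw: 0, pitch: 0 };
const SENSITIVITY = 0.005; // radians per pixel of mouse travel

function onMouseDelta(ev) {
  camera.yaw += ev.deltaX * SENSITIVITY;
  // Clamp pitch so the view cannot flip past straight up/down.
  const p = camera.pitch + ev.deltaY * SENSITIVITY;
  camera.pitch = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, p));
}
```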

~TJ



Re: Mouse Lock

2011-06-22 Thread Tab Atkins Jr.
On Wed, Jun 22, 2011 at 8:02 AM, Ryosuke Niwa rn...@webkit.org wrote:
 On Mon, Jun 20, 2011 at 10:48 AM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:

 2. During a user-initiated click, you can lock the mouse to the target
 or an ancestor without a permissions prompt, but with a persistent
 message, either as an overlay or in the browser's chrome.

 Does this mean that the website is forced to release the lock before
 mouse-up happens?  Or did you just mean that a website can start the lock
 during a click?

The latter; the former is useful, but in a different way (it's for
letting people drag something, like a scrollbar, without having to
stay precisely on the element).


 If it's the latter, I have a problem with that.  I know many people who
 select text on a page just to read it.  They obviously don't expect their
 cursors to be tracked/locked by a website.

We can't do anything about cursors being tracked - that's allowed
already by the existing mouse events.

I don't expect authors to lock the mouse arbitrarily with this ability
unless they're being malicious.  The intent is to allow non-fullscreen
games to lock the mouse when the user clicks the "Start Game" button
or similar.  (For the malicious authors, see below.)

 Also, once my mouse is locked, how do I free it?

That was covered in the paragraph you quoted, though cursorily.  If
the mouse is locked in this way, the browser should show a persistent
message (either in its chrome or as an overlay) saying something like
"Your mouse cursor is being hidden by the webpage.  Press Esc to show
the cursor."

This shouldn't be too annoying for the games case, but should allow
users, even clueless ones, to know when a site is being malicious and
how to fix it.  Once they get their cursor back, they can just leave
that page.

~TJ



Re: Mouse Lock

2011-06-22 Thread Tab Atkins Jr.
On Wed, Jun 22, 2011 at 8:27 AM, Ryosuke Niwa rn...@webkit.org wrote:
 On Wed, Jun 22, 2011 at 8:17 AM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:

  Also, once my mouse is locked, how do I free it?

 That was covered in the paragraph you quoted, though cursorily.  If
 the mouse is locked in this way, the browser should show a persistent
 message (either in its chrome or as an overlay) saying something like
 "Your mouse cursor is being hidden by the webpage.  Press Esc to show
 the cursor."

 This shouldn't be too annoying for the games case, but should allow
 users, even clueless ones, to know when a site is being malicious and
 how to fix it.  Once they get their cursor back, they can just leave
 that page.

 This might just be a UI problem but I can assure you many users won't be
 able to find such a message or won't be able to understand what it means.
  e.g. I know quite a few people who don't know what a "mouse cursor" or "Esc" is.

"I have trapped your mouse cursor in this box: [picture of mouse cursor]."

...that would actually be pretty funny.  Someone should do that.

~TJ



Re: Mutation events replacement

2011-07-08 Thread Tab Atkins Jr.
On Fri, Jul 8, 2011 at 6:55 AM, Sean Hogan shogu...@westnet.com.au wrote:
 On 8/07/11 10:21 PM, Sean Hogan wrote:
 - ARIA support in JS libs currently involves updating aria-attributes to
 be appropriate to behavior the lib is implementing. Attribute mutation
 listeners would allow an inverse approach - behaviors being triggered off
 changes to aria-attributes.

 As has been mentioned, listening for attribute mutations is horrendously
 inefficient, because your handler has to receive every mutation even if it's
 only interested in one attribute.

This is a limitation of current mutation events.  We don't have to
repeat this mistake.  Allowing a script to listen for changes to a
specific attribute is an obvious piece of low-hanging fruit.

~TJ



Re: Element.create(): a proposal for more convenient element creation

2011-08-01 Thread Tab Atkins Jr.
On Mon, Aug 1, 2011 at 7:05 PM, Charles Pritchard ch...@jumis.com wrote:
 Can we have it 'inherit' a parent namespace, and have chaining properties?

 Element.create('div').create('svg').create('g').create('rect', {title: 'An 
 svg rectangle in an HTML div'});

Ooh, so .create is defined both on Element (defaults to HTML
namespace, just creates an element) and on Element.prototype (defaults
to namespace of the element, inserts as a child)?  That's pretty
interesting.  Presumably the new element gets inserted as a last child
of the parent.

I like it.
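A DOM-free sketch of that chaining behavior (plain objects stand in for Elements; the names and semantics are my reading of the idea, not a spec):

```javascript
// create() on a node inherits the parent's namespace unless one is
// given, appends the new node as a last child, and returns the child
// so calls chain downward through the tree.
function makeNode(tag, ns) {
  return {
    tag,
    ns,
    children: [],
    create(childTag, childNs) {
      const child = makeNode(childTag, childNs || this.ns);
      this.children.push(child);
      return child;
    },
  };
}
```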

~TJ



Re: Element.create(): a proposal for more convenient element creation

2011-08-02 Thread Tab Atkins Jr.
On Tue, Aug 2, 2011 at 12:36 AM, Jonas Sicking jo...@sicking.cc wrote:
 I'm not sure if it's better to include the children as a var-args
 list, or as an array. Certainly when typing things normally var-args
 saves you the [ and ], but when coding, if you've built the child
 list dynamically and have an array, you have to make awkward .apply
 calls.

Read again - the idea is to auto-expand arrays.

(I don't have much of a preference between just use an array and
use varargs, but expand arrays.  I agree that using only varargs
without expansion would be bad.)
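The "varargs, but expand arrays" behavior could look like this (the helper name is mine, not from the proposal):

```javascript
// Flatten a children varargs list, expanding any nested arrays so a
// dynamically built child array can be passed without .apply gymnastics.
function flattenChildren(children) {
  const out = [];
  for (const child of children) {
    if (Array.isArray(child)) {
      out.push(...flattenChildren(child));
    } else {
      out.push(child);
    }
  }
  return out;
}
```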

~TJ



Re: Element.create(): a proposal for more convenient element creation

2011-08-02 Thread Tab Atkins Jr.
On Tue, Aug 2, 2011 at 9:48 AM, Aryeh Gregor a...@aryeh.name wrote:
 On Mon, Aug 1, 2011 at 9:33 PM, Maciej Stachowiak m...@apple.com wrote:
 In an IRC discussion with Ian Hickson and Tab Atkins, we came up with the
 following idea for convenient element creation:
 Element.create(tagName, attributeMap, children…)
    Creates an element with the specified tag, attributes, and children.

 How does this compare to popular JS helper libraries like jQuery?  It
 would be useful to know what convenience APIs authors are using now
 before introducing our own.

jQuery's element creation is basically driven by innerHTML.  That is,
to create an element, you just make a call like $('<p class=foo>').
I doubt that's a pattern we actually want to copy, as it's kinda
dirty, and inconvenient in some cases.  (For example, to apply a bag
of properties as attributes, you have to first create the element,
then call attr() on it.  You can't pass the attrs as an initial arg
without string-building.)

Prototype's element creation is almost identical to what is proposed
here, except it uses something that looks like a constructor.  You
create an element with new Element('p',{class:'foo'}).  You can't
set children as part of the initial call; they have to be appended in
later calls.

MooTools is basically identical to Prototype, except that you can
additionally set listeners on the element during creation by using a
magical "events" property in the attribute bag, which takes an object
of event names and functions.  This would be nice to look into adding.

Dojo uses a factory method fairly similar to what's proposed (with the
same name, even - Dojo.create()).  Its first two arguments are the
tagname and an attribute bag, same as the proposal.  Its next two
arguments are used to set a parent node and offset within that parent
node, for automatic DOM insertion after creation.  I don't think it's
valuable to have this in the constructor, though the facilities that
the libraries offer for easier DOM insertion should definitely be
looked at separately.

I think those are the major libraries to pay attention to.  It looks
like jQuery's model is probably not something we want to emulate,
while the other three libraries are almost identical to this proposal.
 The one thing I suggest looking into is the ability to set listeners
on an element during creation, like MooTools allows.
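If listener registration were added the MooTools way, the attribute bag would need its magic property separated out before attributes are set. A sketch (the property name "events" follows MooTools; the helper itself is hypothetical):

```javascript
// Split a MooTools-style bag into plain attributes and listeners.
// The "events" property maps event names to handler functions.
function splitAttrBag(bag) {
  const { events = {}, ...attrs } = bag;
  return { attrs, events };
}
```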

~TJ



Re: Element.create(): a proposal for more convenient element creation

2011-08-02 Thread Tab Atkins Jr.
On Tue, Aug 2, 2011 at 11:26 AM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Aug 2, 2011 at 2:18 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 MooTools is basically identical to Prototype, except that you can
 additionally set listeners on the element during creation by using a
 magical "events" property in the attribute bag, which takes an object
 of event names and functions.  This would be nice to look into adding.

 Is this much better than just saying e.g. Element.create("a", {href:
 "http://link", onclick: function(e) { ... } }, "link")?

Hmm, is everything exposed as on* attributes now?  If so, then yeah,
just do that; no need to mess around with a magic property in the
attributes bag.

~TJ



Re: Element.create(): a proposal for more convenient element creation

2011-08-08 Thread Tab Atkins Jr.
On Sat, Aug 6, 2011 at 9:05 AM, Dominic Cooney domin...@google.com wrote:
 Third, is the order of attributes significant for XML namespace
 declarations? eg does this:
 <x xmlns:foo=… foo:bar=… />
 mean the same thing as
 <x foo:bar=… xmlns:foo=… />
 ? If not, including namespaces in the attribute dictionary is fraught,
 because the iteration order of properties is undefined.

The order is unimportant when setting them via markup, but important
when setting them via successive setAttribute calls.  I'd prefer that
the attribute bag be handled like markup attributes, where xmlns
attributes are handled early so that later attributes fall into the
correct namespace.
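Handling the bag "like markup attributes" could mean applying xmlns declarations before everything else, regardless of property order. A sketch of that ordering step (the helper name is mine):

```javascript
// Order an attribute bag so xmlns declarations come first, letting
// later prefixed attributes resolve against the right namespace.
function orderAttributes(attrs) {
  const entries = Object.entries(attrs);
  const isXmlns = ([name]) => name === 'xmlns' || name.startsWith('xmlns:');
  return [...entries.filter(isXmlns), ...entries.filter((e) => !isXmlns(e))];
}
```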

~TJ



Re: Element.create(): a proposal for more convenient element creation

2011-08-08 Thread Tab Atkins Jr.
On Mon, Aug 8, 2011 at 1:17 AM, Jonas Sicking jo...@sicking.cc wrote:
 Is there a reason to support namespaced attributes at all? They are
 extremely rare, especially on the web.

 Ideally I'd like to deprecate them, but I suspect that's not doable.
 But I see no reason to support them in new APIs.

SVG requires namespaced attributes for xlink, at least.  We're
planning to get rid of that in SVG2, but for now it would be
necessary.

We could, of course, just say Too bad, don't write things that need
the xlink namespace, and wait for SVG2 to get rid of them.  I don't
think this would be very bad.

~TJ



Re: Mouse Lock

2011-08-12 Thread Tab Atkins Jr.
On Fri, Aug 12, 2011 at 1:19 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Aug 12, 2011 at 9:53 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Thu, Aug 11, 2011 at 10:14 PM, Robert O'Callahan
 rob...@ocallahan.org wrote:
 If your implementation had to warp the mouse cursor on Windows to get
 accurate delta information, the mouse position in the existing mouse
 events would no longer be very meaningful and a new event type seemed
 more logical. But assuming Klaas is right, we no longer need to worry
 about this. It seems we can unconditionally add delta information to
 existing mouse events. So I withdraw that comment.

 I suspect that, while locked, we still don't actually want to expose
 the various x and y properties for the mouse.  I agree with Vincent
 that the *other* mouseevent properties are all useful, though, and
 that the delta properties are really useful in non-mouselock
 situations.

 We should just zero all the position information.  Even if we can
 switch all OSes to a delta mode, the position will be arbitrary and
 meaningless.  This seems easier than making a new type of mouse event
 that exposes all of normal mouse events except the position, and
 ensuring that the two stay in sync when we add new info.

 If we expose delta information in all mouse events, which seems like
 it could be a good idea, then what is the usecase for the success
 callback for mouselock?

 I was under the impression that that was so that the page could start
 treating mousemove events differently, but if all mousemove events
 have deltas, then that won't be needed, no?

No, it's still definitely needed.  You can't do an FPS with non-locked
deltas; the user will end up moving their mouse off the screen.

The use-cases for delta-without-mouselock are pretty separate from
those for delta-within-mouselock.

~TJ


