Re: Persistent Storage vs. Database

2013-03-07 Thread Kyle Huey
On Thu, Mar 7, 2013 at 10:20 PM, Andrew Fedoniouk  wrote:

> Physical commit (write) of objects to storage happens on either
> a) GC cycle or b) on explicit storage.commit() call or on c) VM shutdown.
>

Persisting data off a GC cycle (via finalizers or something else) is a
pretty well known antipattern.[0]

> At least it is easier than http://www.w3.org/TR/IndexedDB/ :)


Easier doesn't necessarily mean better.  LocalStorage is certainly easier
to use than any async storage system ;-)

 - Kyle

[0] e.g.
http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx


Persistent Storage vs. Database

2013-03-07 Thread Andrew Fedoniouk
I am not sure whether my approach to data persistence in Sciter/TIScript is
applicable to the JS engines used in browsers, but I'll try to explain it in
the hope that the idea can be useful at least to some extent.

Data persistence in TIScript [1,2] is defined by two objects [classes]:

- Storage - the data storage, and
- Storage.Index - index, persistable ordered key/value map.

Each Storage instance has a property named `root`.

Any object that is reachable from this root is persistent - it will
survive VM shutdown.

Example:

var storage = ...;

storage.root = {
   one: 1,
   two: [ 1,2,3 ],
   three: { a:"A", b: "B", c: "C"  }
};
This initializes the persistent storage with the structure above.

After this, in the current or any subsequent script session, accessing
storage.root will return that object. From that object all its
descendants are reachable as usual.
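
For illustration, a later session might then look roughly like this (how the
storage object itself is obtained is elided above, so Storage.open() below is
only a placeholder):

var storage = Storage.open("data.db"); // placeholder - however the storage is obtained
var root = storage.root;               // loads the persisted object graph on demand

root.two.push(4);                      // mutate a persisted array
root.three.d = "D";                    // ...and a persisted object

storage.commit();                      // or let a GC cycle / VM shutdown write it out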

A physical commit (write) of objects to storage happens on either
a) a GC cycle, b) an explicit storage.commit() call, or c) VM shutdown.

Objects from storage are loaded on demand - when they are accessed.

The storage therefore provides a transparent (to the JS programmer) set of
operations. Persistent objects are accessed in exactly the same way as other
objects in the script heap.

The only limitation: not all objects in the script heap can be made
persistent this way - only the "JSON" subset (array, object, number, bool,
string and null), plus Dates and Indexes, for obvious reasons.

This persistence scheme uses a minimal number of abstractions and
has the full set of operations needed for accessing and storing structured
data. At least it is easier than http://www.w3.org/TR/IndexedDB/ :)

My apologies if all of the above is not exactly mainstream.

Persistence in TIScript is based on Konstantin Knizhnik's DyBase [3].

[1] TIScript high level definition:
http://www.codeproject.com/Articles/33662/TIScript-language-a-gentle-extension-of-JavaScript
[2] TIScript source code: https://code.google.com/p/tiscript/
[3] DyBase http://www.garret.ru/dybase.html

-- 
Andrew Fedoniouk.

http://terrainformatica.com



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Bronislav Klučka


On 7.3.2013 19:54, Scott González wrote:
> Who is killing anything?

Hi, given
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0676.html
I've misunderstood your point as advocating against Shadow altogether.

My concerns are twofold:
The 1st one is ideological: you do not touch internals; it's not good
practice. But I appreciate that good practice is not a dogma, and from time
to time it is a good thing to break it.
The 2nd one is practical: not having to care about the internals, so that I
do not break them by accident from the outside. If the only way to work with
internals is by an explicit request for the internals and then working with
them, without the ability to breach the barrier accidentally (that is,
without an explicit request directly on the shadow host), this concern is
satisfied, and yes, there will be no clashes except for control naming.


Brona




Re: Streams and Blobs

2013-03-07 Thread Jonas Sicking
On Thu, Mar 7, 2013 at 4:42 PM, Glenn Maynard  wrote:
> The alternative argument is that XHR should represent the data source,
> reading data from the network and pushing it to Stream.

I think this is the approach I'd take. At least in Gecko this would
allow the XHR code to generally do the same thing it does today with
regards to actions taken on incoming network data. The only thing we'd
do differently is which consumer to send the data to. We already have
several such consumers which are used to implement the different
.responseType modes, so adding another one fits right in with that
model.

From an author's point of view it also means that the XHR object behaves
consistently for all .responseTypes. I.e. the same set of events are
fired and the XHR object goes through the same set of states. The only
difference is in how the data is consumed.

The XHR object and the stream would still be separate objects. So
aborting the XHR object would not affect the behavior of the Stream
object, other than that it would stop further data from becoming
readable through the stream. Possibly the stream would also indicate
some sort of "abort" error once the end was reached.
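
A rough sketch of how that might look to authors (the "stream" responseType
value and the shape of the Stream object are placeholders here, not settled
API):

var xhr = new XMLHttpRequest();
xhr.open("GET", "/big-resource");
xhr.responseType = "stream";     // placeholder: selects the Stream consumer
xhr.send();

xhr.onreadystatechange = function() {
  // Same states and events as any other responseType; only the consumer differs.
  if (xhr.readyState === xhr.LOADING) {
    var stream = xhr.response;   // a Stream object, separate from the XHR
    // Calling xhr.abort() later would only stop more data from flowing into
    // `stream`; the stream object itself stays readable up to that point.
  }
};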

/ Jonas



Re: Streams and Blobs

2013-03-07 Thread Glenn Maynard
If we decide to do streaming with the Streams API, the StreamReader API, at
least, seems to need some work.  In particular, it seems designed with only
binary protocols that specify block sizes in mind.  It can't handle textual
protocols at all.  For example, it couldn't be used to parse a keepalive
HTTP stream, since you don't know the size of the headers in advance.  You
want to read the data as quickly as possible, whenever new data becomes
available.  Parsing Event Source has the same problem.

Put in socket terms, StreamReader only lets you do a blocking (in the
socket sense) read() of a fixed size.  It doesn't let you set O_NONBLOCK,
monitor the socket (select()) and read data as it becomes available.
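
To make that concrete, a text-protocol consumer wants something shaped
roughly like this; the onavailable event, decode() and handleHeaders() are
hypothetical, not part of the current StreamReader draft:

var buffered = "";
// Hypothetical: the reader notifies us whenever new data arrives, instead of
// completing a single fixed-size read().
reader.onavailable = function(chunk) {
  buffered += decode(chunk);                 // append whatever bytes arrived
  var end;
  while ((end = buffered.indexOf("\r\n\r\n")) !== -1) {  // end of HTTP headers
    handleHeaders(buffered.slice(0, end));
    buffered = buffered.slice(end + 4);
  }
};
// A fixed-size, "blocking" read can't express this, because the header length
// isn't known until after the headers have been read.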

StreamBuilder, used to source data from script to native, makes me nervous,
since unless it's done carefully it seems like it would expose a lot of
implementation details.  For example, the I/O block size used by
HTMLImageElement's network fetch could be exposed as a side-effect of when
"thresholdreached" is fired, which could lead to websites that only work
with the block size of a particular browser.  Also, what happens if a
synchronous API (eg. sync XHR) reads from a URL that's sourced from a
StreamBuilder?  That would deadlock.  It could also happen cross-thread,
with two XHRs each reading a URL sourced from the other.


On Thu, Mar 7, 2013 at 3:37 AM, Jonas Sicking  wrote:

> This seems awkward and not very future proof. Surely other things than
> HTTP requests can generate streams of data. For example the TCPSocket
> API seems like a good candidate of something that can generate a
> stream. Likewise WebSocket could do the same for large message frames.
>
> Other potential sources of data streams is obviously local file data,
> and simply generating content using javascript.
>
> So I think it makes a lot more sense to create the concept of Stream
> separate from the XHR object.


Most of the thread has been about whether to use the Stream API, which
already exists (in spec form) and is used by XHR (also in spec form--I
don't think any of this is in production):
https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

If we do use that API or one like it, another question raised is how XHR
and the Stream object interact.  I think the Stream object created by XHR
should represent the actual stream, with the Stream being the data source.
XHR is just a factory for the Stream object.  This way, XHR gets out of the
picture immediately and a lot of complexity goes away.  For example, if you
load an image from a Stream created by XHR, both XHR and HTMLImageElement
fire onload.  Which one is fired first?  Are they fired synchronously, or
can other things happen in between them?  They're on different task
sources, so how do you ensure ordering at all?  What about onerror?  What
if you .abort XHR during XHR's 100% .progress event, after the image has
loaded the data but before it's fired onload?  The same applies to various
events in every API that might receive a Stream.

The alternative argument is that XHR should represent the data source,
reading data from the network and pushing it to Stream.

An important difference between a stream and a blob is that you can
> read the contents of a Blob multiple times, while a stream is
> optimized for lower resource use by throwing out data as soon as it
> has been consumed. I think both are needed in the web platform.
>

For the streaming-to-script case (not to native), Blobs can be used as the
API while still allowing the data to be discarded; just use multiple blobs
that happen to share a backing store.  That's what my "clearResponse()"
proposal does.  It handles both the "expose the whole response
incrementally" and the "expose data in chunks, discarding as you go" cases.

If we want the functionality of moz-blob and friends directly on XHR (even
if we also have a Stream API and they're redundant with it), I think the
clearResponse approach is clearer (the exact place where the response data
is emptied is explicit), gives better usability (you can choose when and if
to clear it), and doesn't overload responseType with a second axis.

> But this difference is important to consider with regards to
> connecting a Stream to a <video> or <audio> element. Users generally
> expect to be able to rewind a media element which would mean that if
> you can connect a Stream to a media element, the element would need to
> buffer the full stream.
>
> But this isn't an issue that we need to tackle right now. What I think
> the first thing to do is to create a Stream primitive, figure out
> what API would go directly on it, or what new interfaces need to be
> created to allow getting the data out of it, and a way to get XHR to
> produce such a primitive.
>

Just to sum up the earlier discussion: If you only need to do simple
fetches (a GET, with no custom headers and so on), then you don't need any
of this; all you need is to hand the URL to the API, as with <img>.
That's simple and robust:

Re: File API: Blob.type

2013-03-07 Thread Glenn Maynard
As an aside, I'd recommend minimizing normative dependencies on RFC2046.
Like many RFCs it's an old, unclear spec.

On Thu, Mar 7, 2013 at 12:35 PM, Arun Ranganathan 
wrote:
> At some point there was a draft that specified *strict* parsing for
> compliance with RFC2046, including tokenization ("/") and eliminating
> non-ASCII cruft.  But we scrapped that because bugs in all major browser
> projects showed that this spec. text was callously ignored.  And I didn't
> want to spec. fiction, so we went with the current model for Blob.type,
> which is, as Anne points out, pretty lax.

Chrome, at least, throws on new Blob([], {type: "漢字"}), as well as
lowercasing the string.

> I'm in favor of introducing stricter rules for Blob.type, and I'm also in
> favor of allowing charset params; Glenn's example of  'if(blob.type ==
> "text/plain")' will break, but I don't think we should be encouraging
> strict equality comparisons on blob.type (and in fact, should *discourage*
> it as a practice).
>
> Glenn: I think that introducing a separate interface for other parameters
> actually takes away from the elegance of a simple Blob.type.  The RFC
> doesn't separate them, and I'm not sure we should either.  My reading of
> the RFC is that parameters *are an intrinsic part of* the MIME type.

A couple points:

- I disagree that we should discourage comparing against Blob.type, but
ultimately it's such an obvious use of the property, people will do it
whether it's encouraged or not.  I'd never give it a second thought, since
that appears to be its very purpose.  Web APIs should be designed
defensively around how people will actually use the API, not how we wish
they would.  Unless lots of Blob.type values actually include parameters,
code will break unexpectedly when it ends up encountering one.
- The RFC defines a protocol ("Content-Type"), not a JavaScript API, and
good protocols are rarely good APIs.  Having Blob.type be the literal value
of a Content-Type header isn't an elegant API.  You shouldn't need to do
parsing of a string value to extract "text/plain", and you shouldn't have
to do serialization to get "text/plain; charset=UTF-8".

(My reading of RFC2046 is different, but either way I don't think the
intent of that RFC should determine the design of this API, at least on
this point.  It's a spec designed with completely different goals than a
JavaScript API.)
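
To make the usability point concrete: if parameters ride along in .type,
every comparison turns into a small parser (illustrative only, assuming the
current lax model that keeps a charset parameter):

function mimeTypeOf(blob) {
  // strip any parameters and normalize case before comparing
  return blob.type.split(";")[0].trim().toLowerCase();
}

var blob = new Blob(["hello"], { type: "text/plain; charset=UTF-8" });
console.log(blob.type === "text/plain");        // false - the obvious check breaks
console.log(mimeTypeOf(blob) === "text/plain"); // true - but now everyone must parse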


On Thu, Mar 7, 2013 at 2:02 PM, Alexey Proskuryakov  wrote:

> The current File API spec seems to have a mismatch between type in
> BlobPropertyBag, and type as Blob attribute. The latter declaratively
> states that the type is an ASCII lower case string. As mentioned by Glenn
> before, WebKit interpreted this by raising an exception in constructor for
> non-ASCII input, and lowercasing the string. I think that this is a
> reasonable reading of the spec. I'd be fine with raising exceptions for
> invalid types more eagerly.
>

With the file API spec as currently written, there's no normative text
saying to throw an exception, so WebKit's interpretation is incorrect, but
it's simple to fix.  In 7.1 (Constructors), add a step that says "If the
type member of the options argument is set, and contains any Unicode
codepoints less than U+0020 or greater than U+007E, throw a SyntaxError
exception and abort these steps."

(WebKit actually only throws outside of [0,0x7F].  This language throws
outside of [0x20,0x7E], excluding control characters.)

I'd suggest importing WebKit's lowercasing of .type, too, in the same place.
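
A non-normative sketch of what that constructor step plus the lowercasing
amounts to (the function name is made up for illustration):

function normalizeBlobType(type) {
  for (var i = 0; i < type.length; i++) {
    var c = type.charCodeAt(i);
    // outside U+0020..U+007E: reject (per the proposed step, a SyntaxError)
    if (c < 0x20 || c > 0x7E) {
      throw new Error("SyntaxError: invalid character in Blob type");
    }
  }
  return type.toLowerCase();   // WebKit-style lowercasing
}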

-- 
Glenn Maynard


Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-07 Thread Glenn Maynard
On Wed, Mar 6, 2013 at 1:02 PM, Ian Fette (イアンフェッティ) wrote:

> I seem to recall we contemplated people writing libraries on top of IDB
> from the beginning. I'm not sure why this is a bad thing.
>

Expecting libraries providing higher-level abstractions is fine, but it's
bad if an API is inconvenient to use directly for common cases.  For
example, it's natural to expect people to use a game engine library
wrapping Canvas to write a game, but Canvas itself is easy to use directly
most of the time, for lots of use cases.
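
(For context, here is roughly what reading a single value takes with raw IDB
today; the database, store and key names are made up:)

var open = indexedDB.open("app", 1);
open.onupgradeneeded = function() {
  // first run: create the object store
  open.result.createObjectStore("settings");
};
open.onsuccess = function() {
  var db = open.result;
  var tx = db.transaction("settings", "readonly");
  var get = tx.objectStore("settings").get("theme");
  get.onsuccess = function() {
    console.log(get.result);
  };
};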

The only API on the platform that I regularly use which I honestly find
unreasonable to use without a wrapper of some kind is cookies, which is one
of the worst APIs we've got.  Other than that, I can't think of any web API
that I actually need a wrapper for.  This is very good, since it means
everyone else reading my code already understands the APIs I'm using.
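
(Case in point: even reading one cookie means hand-parsing document.cookie;
the cookie name below is just an example:)

function getCookie(name) {
  var pairs = document.cookie.split("; ");
  for (var i = 0; i < pairs.length; i++) {
    var eq = pairs[i].indexOf("=");
    if (pairs[i].slice(0, eq) === name) {
      return decodeURIComponent(pairs[i].slice(eq + 1));
    }
  }
  return null;
}

var theme = getCookie("theme");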

> We originally shipped "web sql" / sqlite, which was a familiar interface
> for many and relatively easy to use, but had a sufficiently large API
> surface area that no one felt they wanted to document the whole thing such
> that we could have an inter-operable standard. (Yes, I'm simplifying a bit.)
>

(Not to get sidetracked on this, but this seems oversimplified to the point
of being confusing.
http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/0025.html)

> As a result, we came up with an approach of "What are the fundamental
> primitives that we need?", spec'd that out, and shipped it. We had
> discussions at the time that we expected library authors to produce
> abstraction layers that made IDB easier to use, as the "fundamental
> primitives" approach was not necessarily intended to produce an API that
> was as straightforward and easy to use as what we were trying to replace.
> If that's now what is happening, that seems like a good thing, not a
> failure.
>

It's fine to not try to be as simple to use as localStorage.  That's not an
attainable goal; it's not a database in any practical sense and never tried
to be.

But if we've added a new API to the platform that typical developers
wouldn't want to use directly without any wrapper library, we've made an
error.

-- 
Glenn Maynard


[webcomponents]: First stab at the Web Components spec

2013-03-07 Thread Dimitri Glazkov
Hello fellow web-appanauts,

The day you've been waiting for has finally arrived (or not, depending
on the type of day you've been waiting for).

Here's a first rough draft of the Web Components spec:

https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/components/index.html

This spec looks really small, and I am really hoping to keep it that way.

Thanks to Ian's excellent HTML spec (which allowed plugging most of
the existing behaviors straight in) and delegating most of the heavy
lifting to other specs under the Web Components umbrella, there's very
little left in the actual bell cap.

Things missing:
* somehow processing  elements when loading components.
* unfixed bugs
* examples and nicer intro

Please look over it. I look forward to your eagle-eyed insights in the
form of bugs and emails.


:DG<



[webcomponents]: HTMLElementElement missing a primitive

2013-03-07 Thread Scott Miles
Currently, if I document.register something, it's my job to supply a
complete prototype.

For HTMLElementElement on the other hand, I supply a tag name to extend,
and the prototype containing the extensions, and the system works out the
complete prototype.

However, this ability of HTMLElementElement to construct a complete
prototype from a tag-name is not provided by any imperative API.

As I see it, there are three main choices:

1. HTMLElementElement is recast as a declarative form of document.register,
in which case it would have no 'extends' attribute, and you need to make
your own (complete) prototype.

2. We make a new API for 'construct prototype from a tag-name to extend and
a set of extensions' (roughly sketched below).

3. Make document.register work like HTMLElementElement does now (it takes a
tag-name and partial prototype).
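
To illustrate option 2, a rough sketch of the missing primitive (the names
here are made up, not proposed spec text):

// Build a complete prototype from a tag name to extend plus a set of extensions.
function makePrototypeFor(tagName, extensions) {
  // An instance of the tag gives us the right native prototype to extend.
  var base = Object.getPrototypeOf(document.createElement(tagName));
  var proto = Object.create(base);
  Object.getOwnPropertyNames(extensions).forEach(function(key) {
    Object.defineProperty(proto, key,
        Object.getOwnPropertyDescriptor(extensions, key));
  });
  return proto;
}

// Hypothetical usage with document.register:
// document.register("x-fancy-button",
//     { prototype: makePrototypeFor("button", { fancy: function() {} }) });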

Am I making sense? WDYT?

Scott


Re: File API for Review

2013-03-07 Thread Arun Ranganathan
Anne,


> On Wed, Feb 6, 2013 at 7:58 PM, Arun Ranganathan 
> wrote:
> > 3. Progress events have been clarified.
> 
> You're still using the old IDL syntax for event handlers.
> 


Fixed.



> I think we should rename URI to URL. That's what everyone is
> converging on.


Fixed.


> I'm also not convinced that leaving what exactly to return in the HTTP
> scenario open to implementors is a good thing. We've been through such
> things before and learned that handwaving is bad. Let's just pick
> something.


Just to be clear, are you referring to the 500 Error Condition for Blob URLs?  
If so, the only handwaving is about the text of the error message.  I'm happy 
to tighten even this.



> Just like HTML, CSS, etc. this specification should defer to
> http://encoding.spec.whatwg.org/ for its encoding related
> requirements.


I fully agree that what we've currently got, which favors a "heuristic guessing 
model" for encoding, and forces UTF-8 in a void, isn't sufficient.  AND I agree 
that the encoding spec. is much more detailed.  But what exactly does deferring 
to it entail?

Right now, the specification encourages user agents to get encoding from:

1. The encoding parameter supplied with the readAsText.
2. A byte order detection heuristic, if 1. is missing.
3. The charset component of Blob.type, if provided and if 1. and 2. yield no 
result.
4. Just use utf-8 if 1, 2, and 3 yield no result.
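
A rough sketch of that selection order (not normative; detectBOM() and
isValidEncodingLabel() are hypothetical helpers):

function pickEncoding(requestedEncoding, blob) {
  if (requestedEncoding && isValidEncodingLabel(requestedEncoding))
    return requestedEncoding;                       // 1. explicit parameter
  var bom = detectBOM(blob);                        // 2. byte order mark sniffing
  if (bom)
    return bom;
  var m = /;\s*charset=([^;]+)/i.exec(blob.type);   // 3. charset from Blob.type
  if (m && isValidEncodingLabel(m[1]))
    return m[1];
  return "utf-8";                                   // 4. fallback
}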

Under the encoding spec., it returns failure if encoding isn't valid, and it 
returns failure if the BOM check fails.  So should the spec. say something 
about throwing?


> 
> I don't think we should throw for limitations on URL length. We always
> leave undefined lengths unaddressed in specifications, including with
> regards to how to handle them.


OK.

-- A*



Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-07 Thread pira...@gmail.com
+1 to be able to use easily the default API without requiring third party
libraries.

Sent from my Android cell phone, please forgive the lack of format on the
text, and my fat thumbs :-P
On 07/03/2013 21:30, "Shwetank Dixit" wrote:

> On Thu, 07 Mar 2013 13:01:20 +0100, Alex Russell 
> wrote:
>
>  On Wednesday, March 6, 2013, Ian Fette (イアンフェッティ) wrote:
>>
>>  I seem to recall we contemplated people writing libraries on top of IDB
>>> from the beginning. I'm not sure why this is a bad thing.
>>>
>>>
>> It's not bad as an assumption, but it can quickly turn into an excuse for
>> API design malpractice because it often leads to the (mistaken) assumption
>> that user-provided code is as cheap as browser-provided API. Given that
>> users pull their libraries from the network more often than from disk (and
>> must parse/compile, etc.), the incentives of these two API-providers could
>> not be more different. That's why it's critical that API designers try to
>> forestall the need for libraries for as long as possible when it comes to
>> web features.
>>
> +1
>
> Libraries are important in many areas, but the goal should be to have a
> spec which doesn't *require* it. It should be easy to understand and
> implement without it. I would rather learn the spec and write a few lines
> of code and have it run - rather than learn the spec, then learn a library,
> and then use that library in every required page (increasing my bandwidth
> costs and the costs to my users who are accessing my site on mobile, often
> on limited data plans). The former option should be the design goal
> whenever possible.
>
> Also, Alec's points were spot on.
>
>
>>
>>  We originally shipped "web sql" / sqlite, which was a familiar interface
>>> for many and relatively easy to use, but had a sufficiently large API
>>> surface area that no one felt they wanted to document the whole thing such
>>> that we could have an inter-operable standard. (Yes, I'm simplifying a bit.)
>>>
>>>
>> Yeah, I recall that the SQLite semantics were the big obstacle.
>>
>>
>>  As a result, we came up with an approach of "What are the fundamental
>>> primitives that we need?", spec'd that out, and shipped it. We had
>>> discussions at the time that we expected library authors to produce
>>> abstraction layers that made IDB easier to use, as the "fundamental
>>> primitives" approach was not necessarily intended to produce an API that
>>> was as straightforward and easy to use as what we were trying to replace.
>>> If that's now what is happening, that seems like a good thing, not a
>>> failure.
>>>
>>>
>> It's fine in the short run to provide just the low-level stuff and work up
>> to the high-level things -- but only when you can't predict what the
>> high-level needs will be. Assuming that's what the WG's view was, you're
>> right; feature not bug, although there's now more work to do.
>>
>> Anyhow, IDB is incredibly high-level in many places and primitive in
>> others. ISTM that it's not easy to get a handle on it's intended level of
>> abstraction.
>>
>>
>>  On Wed, Mar 6, 2013 at 10:14 AM, Alec Flett wrote:
>>>
>>> My primary takeaway from both working on IDB and working with IDB for some
>>> demo apps is that IDB has just the right amount of complexity for really
>>> large, robust database use.. but for a "welcome to noSQL in the browser" it
>>> is way too complicated.
>>>
>>> Specifically:
>>>
>>>    1. *versioning* - The reason this exists in IDB is to guarantee a
>>>    schema (read: a fixed set of objectStores + indexes) for a given set of
>>>    operations.  Versioning should be optional. And if versioning is optional,
>>>    so should *opening* - the only reason you need to "open" a database is
>>>    so that you have a handle to a versioned database. You can *almost* implement
>>>    versioning in JS if you really care about it...(either keep an explicit
>>>    key, or auto-detect the state of the schema) its one of those cases where
>>>    80% of versioning is dirt simple  and the complicated stuff is really about
>>>    maintaining version changes across multiply-opened windows. (i.e. one
>>>    window opens an idb, the next window opens it and changes the schema, the
>>>    first window *may* need to know that and be able to adapt without
>>>    breaking any in-flight transactions) -
>>>    2. *transactions* - Also should be optional. Vital to complex apps,
>>>    but totally not necessary for many.. there should be a default transaction,
>>>    like db.objectStore("foo").get("bar")
>>>    3. *transaction scoping* - even when you do want transactions, the api
>>>    is just too verbose and repetitive for "get one key from one object store"
>>>    - db.transaction("foo").objectStore("foo").get("bar") - there should be
>>>    implicit (lightweight) transactions like db.objectStore("foo").get("bar")
>>>    4. *forced versioning* - when versioning is optional, it should be

Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-07 Thread Shwetank Dixit
On Thu, 07 Mar 2013 13:01:20 +0100, Alex Russell wrote:

> On Wednesday, March 6, 2013, Ian Fette (イアンフェッティ) wrote:
>
>> I seem to recall we contemplated people writing libraries on top of IDB
>> from the beginning. I'm not sure why this is a bad thing.
>
> It's not bad as an assumption, but it can quickly turn into an excuse for
> API design malpractice because it often leads to the (mistaken) assumption
> that user-provided code is as cheap as browser-provided API. Given that
> users pull their libraries from the network more often than from disk (and
> must parse/compile, etc.), the incentives of these two API-providers could
> not be more different. That's why it's critical that API designers try to
> forestall the need for libraries for as long as possible when it comes to
> web features.

+1

Libraries are important in many areas, but the goal should be to have a  
spec which doesn't *require* it. It should be easy to understand and  
implement without it. I would rather learn the spec and write a few lines  
of code and have it run - rather than learn the spec, then learn a  
library, and then use that library in every required page (increasing my  
bandwidth costs and the costs to my users who are accessing my site on  
mobile, often on limited data plans). The former option should be the  
design goal whenever possible.


Also, Alec's points were spot on.





>> We originally shipped "web sql" / sqlite, which was a familiar interface
>> for many and relatively easy to use, but had a sufficiently large API
>> surface area that no one felt they wanted to document the whole thing such
>> that we could have an inter-operable standard. (Yes, I'm simplifying a bit.)
>
> Yeah, I recall that the SQLite semantics were the big obstacle.
>
>> As a result, we came up with an approach of "What are the fundamental
>> primitives that we need?", spec'd that out, and shipped it. We had
>> discussions at the time that we expected library authors to produce
>> abstraction layers that made IDB easier to use, as the "fundamental
>> primitives" approach was not necessarily intended to produce an API that
>> was as straightforward and easy to use as what we were trying to replace.
>> If that's now what is happening, that seems like a good thing, not a
>> failure.
>
> It's fine in the short run to provide just the low-level stuff and work up
> to the high-level things -- but only when you can't predict what the
> high-level needs will be. Assuming that's what the WG's view was, you're
> right; feature not bug, although there's now more work to do.
>
> Anyhow, IDB is incredibly high-level in many places and primitive in
> others. ISTM that it's not easy to get a handle on it's intended level of
> abstraction.
>
> On Wed, Mar 6, 2013 at 10:14 AM, Alec Flett wrote:


>> My primary takeaway from both working on IDB and working with IDB for some
>> demo apps is that IDB has just the right amount of complexity for really
>> large, robust database use.. but for a "welcome to noSQL in the browser" it
>> is way too complicated.
>>
>> Specifically:
>>
>>    1. *versioning* - The reason this exists in IDB is to guarantee a
>>    schema (read: a fixed set of objectStores + indexes) for a given set of
>>    operations.  Versioning should be optional. And if versioning is optional,
>>    so should *opening* - the only reason you need to "open" a database is
>>    so that you have a handle to a versioned database. You can *almost* implement
>>    versioning in JS if you really care about it...(either keep an explicit
>>    key, or auto-detect the state of the schema) its one of those cases where
>>    80% of versioning is dirt simple  and the complicated stuff is really about
>>    maintaining version changes across multiply-opened windows. (i.e. one
>>    window opens an idb, the next window opens it and changes the schema, the
>>    first window *may* need to know that and be able to adapt without
>>    breaking any in-flight transactions) -
>>    2. *transactions* - Also should be optional. Vital to complex apps,
>>    but totally not necessary for many.. there should be a default transaction,
>>    like db.objectStore("foo").get("bar")
>>    3. *transaction scoping* - even when you do want transactions, the api
>>    is just too verbose and repetitive for "get one key from one object store"
>>    - db.transaction("foo").objectStore("foo").get("bar") - there should be
>>    implicit (lightweight) transactions like db.objectStore("foo").get("bar")
>>    4. *forced versioning* - when versioning is optional, it should be
>>    then possible to change the schema during a regular transaction. Yes, this
>>    is a lot of rope but this is actually for much more complex apps, rather
>>    than simple ones. In particular, it's not uncommon for more complex
>>    database systems to dynamically create indexes based on observed behavior
>>    of the API, or observed data (i.e. when data with a particular key becomes
>>    prevalent, generate an index for it) and then dynamically use them if
>>    present. At the moment you have to do a manual 

Re: File API: Blob.type

2013-03-07 Thread Alexey Proskuryakov

The current File API spec seems to have a mismatch between type in 
BlobPropertyBag, and type as Blob attribute. The latter declaratively states 
that the type is an ASCII lower case string. As mentioned by Glenn before, 
WebKit interpreted this by raising an exception in constructor for non-ASCII 
input, and lowercasing the string. I think that this is a reasonable reading of 
the spec. I'd be fine with raising exceptions for invalid types more eagerly.

This is the text in question:

(1)
> type, a DOMString which corresponds to the Blob object's type attribute. If 
> not the empty string, user agents must treat it as an RFC2616 media-type 
> [RFC2616], and as an opaque string that can be ignored if it is an invalid 
> media-type. This value must be used as the Content-Type header when 
> dereferencing a Blob URI.
> 


(2)
> type
> The ASCII-encoded string in lower case representing the media type of the 
> Blob, expressed as an RFC2046 MIME type [RFC2046]. On getting, conforming 
> user agents must return the MIME type of the Blob, if it is known. If 
> conforming user agents cannot determine the media type of the Blob, they must 
> return the empty string. A string is a valid MIME type if it matches the 
> media-type token defined in section 3.7 "Media Types" of RFC 2616 [RFC2616]. 
> If not the empty string, user agents must treat it as an RFC2616 media-type 
> [RFC2616], and as an opaque string that can be ignored if it is an invalid 
> media-type. This value must be used as the Content-Type header when 
> dereferencing a Blob URI.


It would be helpful to have the terminology corrected, and to have this 
generally clarified - for example, validity is mentioned here, but seems to be 
unused.

It seems pretty clear from normative text that charset parameter is supposed to 
work. A non-normative example supports that too. I agree with Arun that this 
seems best to keep as is.
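
(For concreteness, the behavior I read the current text as intending - the
charset parameter survives into the Content-Type of the blob: response:)

var blob = new Blob(["héllo"], { type: "text/plain; charset=utf-8" });
var url = URL.createObjectURL(blob);
// Dereferencing `url` is then expected to yield:
//   Content-Type: text/plain; charset=utf-8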

However,  is about a different 
case - it's about posting multipart form data that has Blob elements with 
invalid media-types. I'm not even sure which spec is in charge of this behavior 
- I don't think that anything anywhere says that Blob.type affects media-type 
of posted multipart data, even though that's obviously the intention. 
XMLHttpRequest spec defers to HTML, which defers to RFC2388, which mentions 
files "returned via filling out a form", but not Blobs (which is no surprise 
given its age).

Making Blobs only hold valid media-types would solve practical issues, but it 
would be helpful to know what formally defines multipart data serialization 
with blobs.

We also previously had 
 for sending 
non-multipart data. Back then, we determined that "Content-Type: " should be 
sent when the value is invalid. I'm no longer sure if that's right. For this 
case, XMLHttpRequest authoritatively defines the behavior, although heavily 
leaning on File API to decide when the type attribute is empty:

> If the object's type attribute is not the empty string let mime type be its 
> value.


Note that "mime type" is then directly used as default media-type for 
Content-Type header, but it's not parsed to set encoding variable. The encoding 
could be needed to update a charset in author provided Content-Type header 
field in later steps of the algorithm. This is probably not right, as Blob 
should know its encoding better than code that sets header fields on an 
XMLHttpRequest object.

- WBR, Alexey Proskuryakov




Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Dimitri Glazkov
On Thu, Mar 7, 2013 at 11:00 AM, Boris Zbarsky  wrote:
> We're talking about both, in general.  Until this conversation started at
> least one implementor was planning to ship exposed-by-default with no way to
> not expose, as far as I can tell.
>
> I _think_, but am not sure, that this is no longer in the plans.  At least I
> hope so.  There's been no definitive statement.

Chrome is indeed shipping a prefixed implementation of
exposed-by-default shadow trees as of M25: http://jsfiddle.net/h5S9V/
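
(Roughly what that experiment exposes - prefixed, experimental API in M25,
so the details may differ from the fiddle:)

var host = document.createElement("div");
var root = host.webkitCreateShadowRoot();      // author-created shadow tree
root.textContent = "hello";
console.log(host.webkitShadowRoot === root);   // exposed by default: true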

Please don't let this stop the discussion. Prefixed implementations
are meant to be experiments, unless they stick around for too long.

:DG<



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Boris Zbarsky

On 3/7/13 1:54 PM, Dave Methvin wrote:

Another example, let's say Disqus created a webcomponent to show
discussions related to content. I want to use that on my page but
enhance it with a bozo/spam filter, fully understanding that it will
require knowledge of Disqus webcomponent internals.


But you want to continue linking to the version hosted on the Disqus 
server instead of hosting it yourself and modifying as desired, presumably?


Because if you're hosting yourself you can certainly just make a slight 
modification to opt into not hiding the implementation if you want, right?


-Boris



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Boris Zbarsky

On 3/7/13 1:54 PM, Scott González wrote:

We're talking about a default value, not what functionality is or isn't
available.


We're talking about both, in general.  Until this conversation started 
at least one implementor was planning to ship exposed-by-default with no 
way to not expose, as far as I can tell.


I _think_, but am not sure, that this is no longer in the plans.  At 
least I hope so.  There's been no definitive statement.


-Boris



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Elliott Sprehn
On Thu, Mar 7, 2013 at 9:55 AM, Bronislav Klučka <
bronislav.klu...@bauglir.com> wrote:

>
> ...
>
> I do not mean to sound cocky here, but I'd really like to know how many
> people here are used to languages that can separate internals and
> externals, because if you are simply not used to it, you simply cannot see
> the benefits and all goes to "I'm used to play with internals of controls",


I think you'll find everyone in this discussion has used a wide variety of
systems from XUL to Cocoa to Swing to MFC and many more.

I think it's important to note that all these native platforms support
walking the hierarchy as well.

Cocoa has [NSView subviews], Windows has FindWindowEx/EnumChildWindows,
Swing has getComponents(), ...

I'm struggling to think of a widely used UI platform that _doesn't_ give
you access. Sure, there's encapsulation (Shadow DOM has that too), but they
all still give you an accessor to get down into the components.

...
>
> From my JS/HTML control experience?
> * I want all my tables to look a certain way - boom, jQuery datepicker breaks
> down, tinyMCE breaks down
> * I want all my tables to have an option for exporting data - boom, jQuery
> datepicker breaks down, tinyMCE breaks down
> * I switch from content-box to border-box - pretty much every 3rd party
> control breaks down
> * I want to autogenerate a table of contents (page menu links) from headings
> in the article, f*ck, some stupid plugin gets involved
> that's like the last week's experience
> ...


Private shadows are not necessary to address any of the issues you cite.
Indeed all of these issues are already fixed with the current design by way
of scoped styles, resetting style inheritance, and shadows being separate
trees you don't accidentally fall into.

I think this is really the compelling argument. We solved the major issues
already, and none of the other very successful platforms (ex. Cocoa,
Android, etc.) needs to be so heavy handed as to prevent you from walking
the tree if you choose.

- E


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Dave Methvin
 wrote:

> As for the <input type="date"> example: This isn't arbitrary 3rd party
> scripts coming and crippling your DOM in unexpected ways. This is you as
> the developer of the site saying the native experience is too limiting and
> then opting in to a different UI. This is also not global, change the world
> behavior, this is on a per-element basis.
>

Something like this is an example of where judicious breaking of the seals
can make a big difference. It would be a shame if all sorts of useful
components were trapped in opaque boxes with no way for the enclosing pages
to enhance or examine them. That seems counter to what the web has been
about since its inception, awesome stuff that forms the kernel of all sorts
of innovative mashups.

Another example, let's say Disqus created a webcomponent to show
discussions related to content. I want to use that on my page but enhance
it with a bozo/spam filter, fully understanding that it will require
knowledge of Disqus webcomponent internals. Yes it may break. But my
alternative is to make a feature request to Disqus, hope they approve, and
wait for an implementation if it's even *possible*. For example, I may be
using information only I have to do the filtering, and I don't want to
share it with Disqus.

So sure, put up a big warning that says "CAUTION: The edges of this sign
are sharp!" but don't prevent people from getting to the internals of
webcomponents at all. That puts the web at the mercy of the implementer
almost as surely as closed source does.


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Scott González
On Thu, Mar 7, 2013 at 1:37 PM, Bronislav Klučka <
bronislav.klu...@bauglir.com> wrote:

> Your questions about things like should scripts use closures are just
> derailing the conversation. I'm honestly not sure it's worth replying to
> any of your points. But to clarify some points I think are relevant:
>
>
> You are right, because someone is trying to kill important (from my point
> of view) technology: being able to actually create reusable UI libraries.
>

We're talking about a default value, not what functionality is or isn't
available. I don't see who is trying to kill your important technologies.

> But regardless of how clever people work on it, it's not self contained, it
> leaks where? in DOM/CSS and it collides.
>

This is the nature of the web. There does not exist technology that
prevents this. We're discussing that technology right now. But
specifically, we're only discussing the default behavior: Do we default to
how the web has always worked or do we break tradition?

>  It's not possible to expose portions of a DOM. So, if you want any
> customization at the DOM level, it's all or nothing. You can't expect to
> expose a JS API on top of a web component that is small and nice to work
> with and provide the flexibility of having control over the DOM. You can
> document that your web component provides some hierarchical structure and
> uses classes in a specific way. Then users can make modifications, for
> example, injecting additional markup, without breaking the structure or
> semantics of the existing web component. I'm not advocating for total
> anarchy.
>
> But that is exactly my point. I do not want to expose the whole DOM and
> then make programmers read tons of docs about internals because they leak.
>

So don't. Opt-in to having your DOM be private.


> I cannot imagine having an app using 50 different controls/components from 4
> vendors and having to figure out how to make them not clash
>

Why are they clashing? Web components are self-contained. The only
collisions that would exist are either 2 vendors creating the same custom
name or a script that isn't even a web component reaching into a web
component. The former is impossible to avoid, the latter is what we're
discussing (usefulness of being able to dive into a shadow root).


> If it's safe to modify DOM, I make it public. If it's not, then it is not,
> then do not touch it.
>

Again, this is all or nothing. If you want it private, you can do that.
Nobody is saying this shouldn't be an option.

>  As for the <input type="date"> example: This isn't arbitrary 3rd party
> scripts coming and crippling your DOM in unexpected ways. This is you as
> the developer of the site saying the native experience is too limiting and
> then opting in to a different UI. This is also not global, change the world
> behavior, this is on a per-element basis.
>
> well... if the 3rd party control is not fitting to your scenario, don't
> use it, or rewrite it (if you have the permission).
>

Again, you're completely missing the point. I WANT the 3rd party control,
but I also want the semantics of <input type="date">. Today it's one or the
other. In the future, web components will allow you to have both (as
explained by Dimitri).

My JS example may seem like distraction to you, but it's actually the same
> point here yet again. If you find JS class that is almost there,  you have
> 3 choices: rewrite it, throw it away and find another or write your own.
>

Those options suck. Seriously. By the way, option 1 and option 3 are the
same.


> Yes, you are the developer of the site, so you can choose what you
> want/can use. It's not mandatory for you to use input[type="date"]
> containing shadow. Pick another, write your own.
>

Again, the web platform is providing functionality which I cannot use then.
We need to empower developers to leverage the technology built into
browsers. I should probably take the time to dig through the archives and
find threads about why making everything a black box sucks for web
developers.


> If it's wrong technology for you, do not use it. But why killing it
> altogether for anyone? Because someone else wrote something you cannot
> modify?
>

Who is killing anything?


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Bronislav Klučka

  
  

On 7.3.2013 19:08, Scott González wrote:
> This just seems like a big rant.

may seem a bit harsh, but I'm trying to compress my point of view,
it's not like we can have an hours-long discussion about pros and cons.
i mean no harm :)


> Your questions about things like should scripts use closures are just
> derailing the conversation. I'm honestly not sure it's worth replying to
> any of your points. But to clarify some points I think are relevant:

You are right, because someone is trying to kill important (from my
point of view) technology: being able to actually create reusable UI
libraries. You may think we have those, we do not... Again, from my
experience with other languages... I have approx. 200 controls
installed in my Delphi from approx. 12 vendors. I can use them in
any combination... seamlessly, without caring how those are written
(unless a pig wrote them). You cannot do that on the Web; don't get me
wrong, I admire jQuery, it's like the alphabet of JS programming. But
regardless of how clever the people working on it are, it's not self
contained; it leaks (where? into DOM/CSS) and it collides.


  

  

> It's not possible to expose portions of a DOM. So, if you want any
> customization at the DOM level, it's all or nothing. You can't expect to
> expose a JS API on top of a web component that is small and nice to work
> with and provide the flexibility of having control over the DOM. You can
> document that your web component provides some hierarchical structure and
> uses classes in a specific way. Then users can make modifications, for
> example, injecting additional markup, without breaking the structure or
> semantics of the existing web component. I'm not advocating for total
> anarchy.

But that is exactly my point. I do not want to expose the whole DOM
and then make programmers read tons of docs about internals because
they leak. I cannot imagine having an app using 50 different
controls/components from 4 vendors and having to figure out how to
make them not clash.
If it's safe to modify DOM, I make it public. If it's not, then it
is not, then do not touch it.

  


> As for the <input type="date"> example: This isn't arbitrary 3rd party
> scripts coming and crippling your DOM in unexpected ways. This is you as
> the developer of the site saying the native experience is too limiting and
> then opting in to a different UI. This is also not global, change-the-world
> behavior, this is on a per-element basis.

well... if the 3rd party control does not fit your scenario,
don't use it, or rewrite it (if you have the permission). My JS
example may seem like a distraction to you, but it's actually the same
point here yet again. If you find a JS class that is almost there,
you have 3 choices: rewrite it, throw it away and find another, or
write your own.
Yes, you are the developer of the site, so you can choose what you
want/can use. It's not mandatory for you to use the input[type="date"]
containing shadow. Pick another, write your own.

If it's the wrong technology for you, do not use it. But why kill it
altogether for everyone? Because someone else wrote something you
cannot modify?

B.





  


  
  

On Thu, Mar 7, 2013 at 12:55 PM, Bronislav Klučka wrote:

  On 7.3.2013 17:51, Scott González wrote:

    On Wed, Mar 6, 2013 at 3:00 PM, Boris Zbarsky wrote:

      On 3/6/13 1:31 PM, Scott González wrote:

        but we feel the pros of exposing internals outweigh the cons.

      When you say "exposing internals" here, which one of the following
      do you mean:

      1)  Exposing internals always.
      2)  Exposing internals by default, with a way to opt into not exposing.
      3)  Not exposing internals by default, with a way to opt into exposing.

    I was replying in the context of jQuery, in which we expose most
    internals always. There is no option to have jQuery hide it's internals.

      And what do you

Re: File API: Blob.type

2013-03-07 Thread Arun Ranganathan
On Mar 6, 2013, at 7:42 PM, Glenn Maynard wrote: 

On Wed, Mar 6, 2013 at 8:29 AM, Anne van Kesteren  wrote: 
On Wed, Mar 6, 2013 at 2:21 PM, Glenn Maynard  wrote: 
> Blob.type is a MIME type, not a Content-Type header. It's a string of 
> codepoints, not a series of bytes. XHR is a protocol-level API, so maybe it 
> makes sense there, but it doesn't make sense for Blob. 

>> It's a Content-Type header value and should have those restrictions. 

>>> It's not a Content-Type header, it's a MIME type. That's part of a 
>>> Content-Type header, 
>>> but they're not the same thing. 

In fact, the intent is that the value of Blob.type is reflected in the 
Content-Type, and that setting Blob.type means that when fetching that Blob as 
a blob: you'll get the value of Blob.type in the Content-Type header. This 
model *did* allow for charset params -- it always has (perhaps not advertised, 
but it always has). 

At some point there was a draft that specified *strict* parsing for compliance 
with RFC2046, including tokenization ("/") and eliminating non-ASCII cruft. But 
we scrapped that because bugs in all major browser projects showed that this 
spec. text was callously ignored. And I didn't want to spec. fiction, so we 
went with the current model for Blob.type, which is, as Anne points out, pretty 
lax. 

>>That doesn't make sense. Blob.type isn't a string of bytes, it's a string of 
>>Unicode codepoints that happens 
>> to be restricted to the ASCII range. Applying WebKit's validity checks 
>> (with the addition of disallowing nonprintable characters) will make it have 
>> the restrictions you want; 
>> ByteString has nothing to do with this. 

I'm in favor of introducing stricter rules for Blob.type, and I'm also in favor 
of allowing charset params; Glenn's example of 'if(blob.type == "text/plain")' 
will break, but I don't think we should be encouraging strict equality 
comparisons on blob.type (and in fact, should *discourage* it as a practice). 

But I'm not sure about why we'd choose ByteString in lieu of being strict with 
what characters are allowed within DOMString. Anne, can you shed some light on 
this? And of course we should eliminate CR + LF as a possibility at constructor 
invocation time, possibly by throwing. 

Glenn: I think that introducing a separate interface for other parameters 
actually takes away from the elegance of a simple Blob.type. The RFC doesn't 
separate them, and I'm not sure we should either. My reading of the RFC is that 
parameters *are an intrinsic part of* the MIME type. 

-- A* 

- Original Message -

> On Wed, Mar 6, 2013 at 8:29 AM, Anne van Kesteren < ann...@annevk.nl
> > wrote:

> > On Wed, Mar 6, 2013 at 2:21 PM, Glenn Maynard < gl...@zewt.org >
> > wrote:
> 
> > > Blob.type is a MIME type, not a Content-Type header. It's a
> > > string
> > > of
> 
> > > codepoints, not a series of bytes. XHR is a protocol-level API,
> > > so
> > > maybe it
> 
> > > makes sense there, but it doesn't make sense for Blob.
> 

> > It's a Content-Type header value and should have those
> > restrictions.
> 

> It's not a Content-Type header, it's a MIME type. That's part of a
> Content-Type header, but they're not the same thing.

> But String vs. ByteString has nothing to do with the restrictions
> applied to it.

> > Making it a ByteString plus additional restrictions will make it do
> > as
> 
> > required.
> 

> That doesn't make sense. Blob.type isn't a string of bytes, it's a
> string of Unicode codepoints that happens to be restricted to the
> ASCII range. Applying WebKit's validity checks (with the addition of
> disallowing nonprintable characters) will make it have the
> restrictions you want; ByteString has nothing to do with this.

> On Wed, Mar 6, 2013 at 11:47 AM, Darin Fisher < da...@chromium.org >
> wrote:

> > So the intent is to allow specifying attributes like "charset"?
> > That
> > sounds useful.
> 
> I don't think so. This isn't very well-defined by RFC2046 (it seems
> vague about the relationship of parameters to MIME types), but I'm
> pretty sure Blob.type is meant to be only a MIME type, not a MIME
> type plus content-type parameters. Also, it would lead to a poor
> API: you could no longer simply say 'if(blob.type == "text/plain")';
> you'd have to parse it out yourself (which I expect nobody is
> actually doing).

> Other parameters should have a separate interface, eg.
> blob.typeParameters.charset = "UTF-8", if we want that.

> --
> Glenn Maynard


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Scott González
This just seems like a big rant. Your questions about things like should
scripts use closures are just derailing the conversation. I'm honestly not
sure it's worth replying to any of your points. But to clarify some points
I think are relevant:

It's not possible to expose portions of a DOM. So, if you want any
customization at the DOM level, it's all or nothing. You can't expect to
expose a JS API on top of a web component that is small and nice to work
with and provide the flexibility of having control over the DOM. You can
document that your web component provides some hierarchical structure and
uses classes in a specific way. Then users can make modifications, for
example, injecting additional markup, without breaking the structure or
semantics of the existing web component. I'm not advocating for total
anarchy.

As for the <input type="date"> example: This isn't arbitrary 3rd party
scripts coming and crippling your DOM in unexpected ways. This is you as
the developer of the site saying the native experience is too limiting and
then opting in to a different UI. This is also not global, change the world
behavior, this is on a per-element basis.



On Thu, Mar 7, 2013 at 12:55 PM, Bronislav Klučka <
bronislav.klu...@bauglir.com> wrote:

>
> On 7.3.2013 17:51, Scott González wrote:
>
>  On Wed, Mar 6, 2013 at 3:00 PM, Boris Zbarsky <bzbar...@mit.edu> wrote:
>>
>> On 3/6/13 1:31 PM, Scott González wrote:
>>
>> but we feel the pros of exposing internals outweigh the cons.
>>
>>
>> When you say "exposing internals" here, which one of the following
>> do you mean:
>>
>> 1)  Exposing internals always.
>> 2)  Exposing internals by default, with a way to opt into not
>> exposing.
>> 3)  Not exposing internals by default, with a way to opt into
>> exposing.
>>
>>
>> I was replying in the context of jQuery, in which we expose most
>> internals always. There is no option to have jQuery hide it's internals.
>>
>> And what do you feel the pros are of whichever one you're talking
>> about compared to the items after it on the list, just so we're on
>> the same page?
>>
>>
>> In terms of web components, I'm not sure I (or anyone else on the jQuery
>> team) have too strong of an opinion on the default. However, I can say that
>> I find it extremely annoying that I can't reach into the Shadow DOM for new
>> input types and just kill everything. I want <input type="date"> to render
>> as <input type="text"> because native HTML will likely never be as flexible
>> as custom JS components. Obviously I'd prefer a standard, and web
>> components are supposed to solve this. But in the meantime, we're provided
>> with useful semantics and validation that go unused if you want the
>> flexibility of a JS date picker.
>>
>> As someone building JS components, I see the benefit of having the
>> internals exposed to me so I can do as I please. I also recognize the pain
>> of maintaining code that reaches into internals. As someone who cares about
>> the future of the web, I see the very real danger of this becoming
>> widespread and ending up in the situation Boris wants us to avoid.
>>
>
> I do not mean to sound cocky here, but I'd really like to know how many
> people here are used to languages that can separate internals and
> externals, because if you are simply not used to it, you simply cannot see
> the benefits and all goes to "I'm used to play with internals of controls",
> regardless how wrong it is. I mean it's like discussion of introducing
> private properties of an class and someone complaining that he/she is used
> to touch everything he/she desires. I find it hard to explain how wrong
> that simply is. You do not touch anything else than external API provided
> by opaque blackbox. As someone building components in several languages not
> being able to hide certain things is scary to me.
>
> I mean why do we wrap JS code in anonymous functions?
> (function(){
> //place code here
> })();
> well.. are you advocating in jQuery not to do that? Because someone wants
> to play with internals? are you advocating that it does not matter what can
> anyone change in jQuery internals or what can leak from jQuery internals to
> outside space? - "well he/she likes to play, let him/her, and if anything
> brakes down, his/hers fault, and by the way he/she have to be especially
> careful not to use variable names we are using already , functions as
> well... but he/she gets used to it" - I mean where is the difference here?
> Just open jQuery code... bunch of internal variables... why are those not
> public? Again, how can you advocate for non-private DOM while working in
> script that is whole private... just exposing the necessary.
>> And thanks for making my point, if my control depends on <input type=date>
> I do not want some external script to cripple it!
> I opened jQuery datepicker source code (you seem to like it, I cannot tell
> you how glad I am for Chrome date picker that I can finally get rid of jQ
> datepicker [and jQ altogether])... so many pr

Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Bronislav Klučka


On 7.3.2013 17:51, Scott González wrote:
On Wed, Mar 6, 2013 at 3:00 PM, Boris Zbarsky  wrote:


On 3/6/13 1:31 PM, Scott González wrote:

but we feel the pros of exposing internals outweigh the cons.


When you say "exposing internals" here, which one of the following
do you mean:

1)  Exposing internals always.
2)  Exposing internals by default, with a way to opt into not
exposing.
3)  Not exposing internals by default, with a way to opt into
exposing.


I was replying in the context of jQuery, in which we expose most 
internals always. There is no option to have jQuery hide its internals.


And what do you feel the pros are of whichever one you're talking
about compared to the items after it on the list, just so we're on
the same page?


In terms of web components, I'm not sure I (or anyone else on the 
jQuery team) have too strong of an opinion on the default. However, I 
can say that I find it extremely annoying that I can't reach into the 
Shadow DOM for new input types and just kill everything. I want <input type="date"> to render as <input type="text"> because native HTML will 
likely never be as flexible as custom JS components. Obviously I'd 
prefer a standard, and web components are supposed to solve this. But 
in the meantime, we're provided with useful semantics and validation 
that go unused if you want the flexibility of a JS date picker.


As someone building JS components, I see the benefit of having the 
internals exposed to me so I can do as I please. I also recognize the 
pain of maintaining code that reaches into internals. As someone who 
cares about the future of the web, I see the very real danger of this 
becoming widespread and ending up in the situation Boris wants us to 
avoid.


I do not mean to sound cocky here, but I'd really like to know how many 
people here are used to languages that can separate internals from 
externals, because if you are simply not used to it, you cannot see the 
benefits, and everything comes down to "I'm used to playing with the 
internals of controls", regardless of how wrong that is. It is like a 
discussion about introducing private properties on a class, with someone 
complaining that he/she is used to touching everything he/she desires. I 
find it hard to explain how wrong that simply is. You do not touch 
anything other than the external API provided by an opaque black box. As 
someone building components in several languages, not being able to hide 
certain things is scary to me.


I mean, why do we wrap JS code in anonymous functions?
(function(){
    // place code here
})();
Well... are you advocating that jQuery not do that? Because someone wants 
to play with internals? Are you advocating that it does not matter what 
anyone can change in jQuery's internals, or what can leak from jQuery's 
internals into the outside space? - "well, he/she likes to play, so let 
him/her, and if anything breaks down it is his/her fault, and by the way 
he/she has to be especially careful not to use the variable names and 
functions we are already using... but he/she gets used to it" - I mean, 
where is the difference here? Just open the jQuery code... a bunch of 
internal variables... why are those not public? Again, how can you 
advocate for a non-private DOM while working in a script that is entirely 
private, exposing just the necessary?
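
For what it's worth, a minimal sketch of the same idea in plain JavaScript - names are illustrative only, not taken from any real library - where the wrapper keeps its internals private and hands out nothing but the intended API:

var datepicker = (function () {
    // internal state: invisible to, and untouchable by, outside code
    var openCount = 0;

    function render(input) {
        openCount++;
        // ... build the picker UI for `input` here ...
    }

    // only this object becomes public
    return {
        attach: function (input) { render(input); }
    };
})();

datepicker.attach(document.querySelector("input"));
// datepicker.openCount is undefined - the internal counter cannot be reached
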
And thanks for making my point: if my control depends on <input type=date>, I do not want some external script to cripple it!
I opened the jQuery datepicker source code (you seem to like it; I cannot 
tell you how glad I am for the Chrome date picker, so I can finally get rid 
of the jQuery datepicker [and jQuery altogether])... so many private 
variables... should I repeat myself? Where is the difference? Apparently 
even the jQuery team does not care whether programmers want to play with 
everything, but it does care about the fact that you can plug jQuery in 
anywhere and it will not be broken by outside scripts, and the outside 
space will not be broken by jQuery.
And that's what I'm expecting from Shadow DOM: controls you insert and 
they just work, no side effects whatsoever.


From my JS/HTML control experience?
* I want all my tables to look a certain way - boom, the jQuery datepicker 
breaks, tinyMCE breaks
* I want all my tables to have an option for exporting data - boom, the 
jQuery datepicker breaks, tinyMCE breaks
* I switch from content-box to border-box - pretty much every 3rd-party 
control breaks
* I want to autogenerate a table of contents (page menu links) from the 
headings in an article - and some stupid plugin gets involved

That's just last week's experience.
I mean, why do I have to spend time learning the *internals* of controls in 
HTML? Why do I have to check which class names a control is using so that I 
do not break things? And so on.


And I can go on and on and on... I mean, there are people who see clear 
boundaries as a benefit for their work. Why not give them this option 
(considering HTML/JS controls are just about the only major 
language-constructed controls

Re: Web Storage's Normative References and PR / REC

2013-03-07 Thread Philippe Le Hegaret
On Thu, 2013-03-07 at 12:04 -0500, Arthur Barstow wrote:
> > Hope this helps,
> 
> The above was helpful but I'm wondering about WebStorage's normative 
> reference to DOMCore WD. If we do the same type of evaluation and 
> testing for DOMCore that is needed for HTML5, will that be sufficient to 
> move the spec to REC?

We can certainly advocate for it. Since DOMCore is also in WebApps, that
should make the case easier,

Philippe





Re: Web Storage's Normative References and PR / REC

2013-03-07 Thread Anne van Kesteren
On Thu, Mar 7, 2013 at 5:04 PM, Arthur Barstow  wrote:
> The above was helpful but I'm wondering about WebStorage's normative
> reference to DOMCore WD. If we do the same type of evaluation and testing
> for DOMCore that is needed for HTML5, will that be sufficient to move the
> spec to REC?

HTML depends on DOM too. There shouldn't be a difference.


-- 
http://annevankesteren.nl/



Re: Web Storage's Normative References and PR / REC

2013-03-07 Thread Arthur Barstow

On 3/7/13 8:52 AM, ext Philippe Le Hegaret wrote:

On Thu, 2013-03-07 at 07:28 -0500, Arthur Barstow wrote:

Yves, Philippe,

WebApps agreed via [CfC] to publish a Proposed Recommendation of Web
Storage [CR] (implementation report is [ImplReport]). The CR has three
normative W3C references that are not yet Recommendations: DOMCore WD,
HTML5 CR and WebIDL CR. As such, we need you to clarify the implications
of these references re publishing a Web Storage PR and REC.

As I understand it, the Consortium's Process Document is actually silent
regarding maturity level of normative references. However, the Team
enforces - with some very specific exceptions - a reference policy via
"transition rules" ([TransRules]), in particular:

[[
Note: In general, documents do not advance to Recommendation with
normative references to W3C specifications that are not yet Recommendations.
]]

I _think_ the various processes and policies permit the Web Storage PR
to be published with the normative references in their current status.
Is this true?

I believe that you are indeed correct.


OK, then I will send a transition request for PR.


However, for the REC to be published, we can either wait until all of
the normative references are PRs themselves or we can ask the Director
for "exceptions". In case we want to pursue this later exception route,
would you please explain, what exactly the group would need to do for
each of these references?

The goal is to demonstrate that the materials referenced are stable and
any change to those references won't have an impact on the
recommendations.

For HTML5, by demonstrating that the concepts and features from HTML5
that are used are stable. One can do so by evaluating the references
and providing HTML5 tests for the features.


I do recall HTML5 being one of the specs that was granted an exception 
to the references policy (with the proviso you state above) but I don't 
think WebApps has done this "evaluating". If you are aware of any 
PRs/RECs that have done that evaluation/testing, please provide a URL (or 
URLs) to the documentation.


WebApps - if you are willing to lead or help with this `evaluating and 
testing`, please let us know.



For WebIDL, the Web Applications Working Group advised the Director
that, by providing idlharness.js tests and demonstrating their support,
it would be enough to demonstrate that the WebIDL syntax used in the
specifications was stable and well understood.


[Oh right ;-).]

WebApps - if you are willing to lead or help with the idlharness 
testing, please let us know. FYI, a couple of RECs have already done this:


Navigation Timing 




High Resolution Timing 



Hope this helps,


The above was helpful but I'm wondering about WebStorage's normative 
reference to DOMCore WD. If we do the same type of evaluation and 
testing for DOMCore that is needed for HTML5, will that be sufficient to 
move the spec to REC?


-Thanks, ArtB




Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Dimitri Glazkov
On Thu, Mar 7, 2013 at 8:55 AM, Boris Zbarsky  wrote:

> Chances are that behavior would remain for the foreseeable future even if
> page-provided components expose their internals, from what I understand...
> So that's a somewhat orthogonal discussion, sadly.  :(

I agree, it's very unlikely the UA shadow trees will ever be public.
However, Shadow DOM (the spec) does allow you to completely override
the UA shadow tree by just creating a new shadow tree on top of it.
The current implementation in WebKit is not quite there yet, but I hope it
will be soon.
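
(For illustration only - a hedged sketch of that override, assuming the createShadowRoot()/webkitCreateShadowRoot() entry point from the spec drafts of this period; the markup and class name are made up:)

// Per the spec, an author-created shadow root takes precedence over the UA's
// own shadow tree, so the element renders from this markup instead of the
// built-in date picker.
var input = document.querySelector('input[type=date]');
var createRoot = input.createShadowRoot || input.webkitCreateShadowRoot;
var root = createRoot.call(input);
root.innerHTML = '<span class="my-datepicker">custom date UI goes here</span>';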

:DG<



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Boris Zbarsky

On 3/7/13 11:51 AM, Scott González wrote:

I was replying in the context of jQuery, in which we expose most
internals always. There is no option to have jQuery hide its internals.


Yes, but you explicitly said there are pros to exposing the internals. 
I'd like to understand what those pros are in your context, and whether 
they're explicitly tied to the fact that internals are always exposed 
(which you're forced into right now) or whether the pros are just to do 
with the _ability_ to expose internals as desired.


That is, what are the specific pros?


In terms of web components, I'm not sure I (or anyone else on the jQuery
team) have too strong of an opinion on the default. However, I can say
that I find it extremely annoying that I can't reach into the Shadow DOM
for new input types and just kill everything.


Chances are that behavior would remain for the foreseeable future even 
if page-provided components expose their internals, from what I 
understand...  So that's a somewhat orthogonal discussion, sadly.  :(


-Boris



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Scott González
On Wed, Mar 6, 2013 at 3:00 PM, Boris Zbarsky  wrote:

> On 3/6/13 1:31 PM, Scott González wrote:
>
>> but we feel the pros of exposing internals outweigh the cons.
>>
>
> When you say "exposing internals" here, which one of the following do you
> mean:
>
> 1)  Exposing internals always.
> 2)  Exposing internals by default, with a way to opt into not exposing.
> 3)  Not exposing internals by default, with a way to opt into exposing.
>

I was replying in the context of jQuery, in which we expose most internals
always. There is no option to have jQuery hide its internals.


> And what do you feel the pros are of whichever one you're talking about
> compared to the items after it on the list, just so we're on the same page?


In terms of web components, I'm not sure I (or anyone else on the jQuery
team) have too strong of an opinion on the default. However, I can say that
I find it extremely annoying that I can't reach into the Shadow DOM for new
input types and just kill everything. I want <input type="date"> to render
as <input type="text"> because native HTML will likely never be as flexible
as custom JS components. Obviously I'd prefer a standard, and web
components are supposed to solve this. But in the meantime, we're provided
with useful semantics and validation that go unused if you want the
flexibility of a JS date picker.

As someone building JS components, I see the benefit of having the
internals exposed to me so I can do as I please. I also recognize the pain
of maintaining code that reaches into internals. As someone who cares about
the future of the web, I see the very real danger of this becoming
widespread and ending up in the situation Boris wants us to avoid.
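
(As an aside, a hedged sketch of the workaround being described - it assumes jQuery UI's datepicker() is loaded, and the selector is purely illustrative:)

// Opt out of the UA's date UI and let a script component draw the calendar.
$('input[type=date]').each(function () {
    this.type = 'text';       // downgrade to a plain text input
    $(this).datepicker();     // attach the JS date picker instead
});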


Re: Web Storage's Normative References and PR / REC

2013-03-07 Thread Philippe Le Hegaret
On Thu, 2013-03-07 at 07:28 -0500, Arthur Barstow wrote:
> Yves, Philippe,
> 
> WebApps agreed via [CfC] to publish a Proposed Recommendation of Web 
> Storage [CR] (implementation report is [ImplReport]). The CR has three 
> normative W3C references that are not yet Recommendations: DOMCore WD, 
> HTML5 CR and WebIDL CR. As such, we need you to clarify the implications 
> of these references re publishing a Web Storage PR and REC.
> 
> As I understand it, the Consortium's Process Document is actually silent 
> regarding maturity level of normative references. However, the Team 
> enforces - with some very specific exceptions - a reference policy via 
> "transition rules" ([TransRules]), in particular:
> 
> [[
> Note: In general, documents do not advance to Recommendation with 
> normative references to W3C specifications that are not yet Recommendations.
> ]]
> 
> I _think_ the various processes and policies permit the Web Storage PR 
> to be published with the normative references in their current status. 
> Is this true?

I believe that you are indeed correct.

> However, for the REC to be published, we can either wait until all of 
> the normative references are PRs themselves or we can ask the Director 
> for "exceptions". In case we want to pursue this later exception route, 
> would you please explain, what exactly the group would need to do for 
> each of these references?

The goal is to demonstrate that the materials referenced are stable and
any change to those references won't have an impact on the
recommendations.

For HTML5, by demonstrating that the concepts and features from HTML5
that are used are stable. One can do so by evaluating the references
and providing HTML5 tests for the features.

For WebIDL, the Web Applications Working Group advised the Director
that, by providing idlharness.js tests and demonstrating their support,
it would be enough to demonstrate that the WebIDL syntax used in the
specifications was stable and well understood.
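
(For concreteness, a minimal sketch of what such an idlharness.js test looks like - the IDL fragment and tested object below are illustrative, not the actual Web Storage test suite:)

// Assumes testharness.js and idlharness.js are loaded, as in the W3C test suites.
var idl_array = new IdlArray();
idl_array.add_idls(
    'interface Storage {' +
    '  readonly attribute unsigned long length;' +
    '  DOMString? key(unsigned long index);' +
    '  DOMString? getItem(DOMString key);' +
    '  void setItem(DOMString key, DOMString value);' +
    '  void removeItem(DOMString key);' +
    '  void clear();' +
    '};');
idl_array.add_objects({ Storage: ['window.localStorage'] });
idl_array.test();  // generates one test per interface member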

Hope this helps,

Philippe




Web Storage's Normative References and PR / REC

2013-03-07 Thread Arthur Barstow

Yves, Philippe,

WebApps agreed via [CfC] to publish a Proposed Recommendation of Web 
Storage [CR] (implementation report is [ImplReport]). The CR has three 
normative W3C references that are not yet Recommendations: DOMCore WD, 
HTML5 CR and WebIDL CR. As such, we need you to clarify the implications 
of these references re publishing a Web Storage PR and REC.


As I understand it, the Consortium's Process Document is actually silent 
regarding maturity level of normative references. However, the Team 
enforces - with some very specific exceptions - a reference policy via 
"transition rules" ([TransRules]), in particular:


[[
Note: In general, documents do not advance to Recommendation with 
normative references to W3C specifications that are not yet Recommendations.

]]

I _think_ the various processes and policies permit the Web Storage PR 
to be published with the normative references in their current status. 
Is this true?


However, for the REC to be published, we can either wait until all of 
the normative references are PRs themselves or we can ask the Director 
for "exceptions". In case we want to pursue this later exception route, 
would you please explain, what exactly the group would need to do for 
each of these references?


-Thanks, AB

[CfC] 


[CR] 
[ImplReport] 
[TransRules] <http://services.w3.org/xslt?xmlfile=http://www.w3.org/2005/08/01-transitions.html&xslfile=http://www.w3.org/2005/08/transitions.xsl&docstatus=pr-tr>




Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-07 Thread Alex Russell
On Wednesday, March 6, 2013, Ian Fette (イアンフェッティ) wrote:

> I seem to recall we contemplated people writing libraries on top of IDB
> from the beginning. I'm not sure why this is a bad thing.
>

It's not bad as an assumption, but it can quickly turn into an excuse for
API design malpractice because it often leads to the (mistaken) assumption
that user-provided code is as cheap as browser-provided API. Given that
users pull their libraries from the network more often than from disk (and
must parse/compile, etc.), the incentives of these two API-providers could
not be more different. That's why it's critical that API designers try to
forestall the need for libraries for as long as possible when it comes to
web features.


> We originally shipped "web sql" / sqlite, which was a familiar interface
> for many and relatively easy to use, but had a sufficiently large API
> surface area that no one felt they wanted to document the whole thing such
> that we could have an inter-operable standard. (Yes, I'm simplifying a bit.)
>

Yeah, I recall that the SQLite semantics were the big obstacle.


> As a result, we came up with an approach of "What are the fundamental
> primitives that we need?", spec'd that out, and shipped it. We had
> discussions at the time that we expected library authors to produce
> abstraction layers that made IDB easier to use, as the "fundamental
> primitives" approach was not necessarily intended to produce an API that
> was as straightforward and easy to use as what we were trying to replace.
> If that's now what is happening, that seems like a good thing, not a
> failure.
>

It's fine in the short run to provide just the low-level stuff and work up
to the high-level things -- but only when you can't predict what the
high-level needs will be. Assuming that's what the WG's view was, you're
right; feature not bug, although there's now more work to do.

Anyhow, IDB is incredibly high-level in many places and primitive in
others. ISTM that it's not easy to get a handle on its intended level of
abstraction.


> On Wed, Mar 6, 2013 at 10:14 AM, Alec Flett wrote:
>
> My primary takeaway from both working on IDB and working with IDB for some
> demo apps is that IDB has just the right amount of complexity for really
> large, robust database use, but for a "welcome to NoSQL in the browser" it
> is way too complicated.
>
> Specifically:
>
>1. *versioning* - The reason this exists in IDB is to guarantee a
>schema (read: a fixed set of objectStores + indexes) for a given set of
>operations.  Versioning should be optional. And if versioning is optional,
>so should *opening* - the only reason you need to "open" a database is
>so that you have a handle to a versioned database. You can *almost* 
> implement
>versioning in JS if you really care about it...(either keep an explicit
>key, or auto-detect the state of the schema) - it's one of those cases where
>80% of versioning is dirt simple and the complicated stuff is really about
>maintaining version changes across multiply-opened windows. (i.e. one
>window opens an idb, the next window opens it and changes the schema, the
>first window *may* need to know that and be able to adapt without
>breaking any in-flight transactions) -
>2. *transactions* - Also should be optional. Vital to complex apps,
>but totally not necessary for many... there should be a default transaction,
>like db.objectStore("foo").get("bar")
>3. *transaction scoping* - even when you do want transactions, the API
>is just too verbose and repetitive for "get one key from one object store"
>- db.transaction("foo").objectStore("foo").get("bar") - there should be
>implicit (lightweight) transactions like db.objectStore("foo").get("bar")
>(see the sketch after this list for both forms spelled out)
>4. *forced versioning* - when versioning is optional, it should then be
>possible to change the schema during a regular transaction. Yes, this
>is a lot of rope but this is actually for much more complex apps, rather
>than simple ones. In particular, it's not uncommon for more complex
>database systems to dynamically create indexes based on observed behavior
>of the API, or observed data (i.e. when data with a particular key becomes
>prevalent, generate an index for it) and then dynamically use them if
>present. At the moment you have to do a manual close/open/version change to
>dynamically bump up the version - effectively rendering fixed-value
>versions moot (i.e. the schema for version 23 in my browser may look
>totally different than the schema for version 23 in your browser) and
>drastically complicating all your code (because if you try to close/open
>while transactions are in flight, they will be aborted - so you have to
>temporarily pause all new transactions, wait for all in-flight transactions
>to finish, do a close/open, then start running all pending/paused
>transactions.) This last case MIGHT be as simple as ad
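
(To make item 3 concrete - a hedged sketch; the database, store, and key names are placeholders, and the shorthand at the end is the hypothetical form being asked for, not something in the spec:)

// "Get one key from one object store" with IndexedDB as currently specced:
var open = indexedDB.open("mydb", 1);
open.onupgradeneeded = function () {
    open.result.createObjectStore("foo");
};
open.onsuccess = function () {
    var db = open.result;
    var req = db.transaction("foo").objectStore("foo").get("bar");
    req.onsuccess = function () {
        console.log(req.result);
    };
};

// The implicit, single-operation transaction proposed above would instead be
// something like (hypothetical):
// db.objectStore("foo").get("bar").onsuccess = ...;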

Re: [shadow-dom] Counters and list item counting

2013-03-07 Thread Andrei Bucur
Hello,

I want to clarify a certain situation (the example markup was stripped by the 
archive; per the rendering options below, it is an ordered list containing 
items A and C, with items X and Y supplied from a shadow tree):

How is this case supposed to be rendered?

1. A
2. 1. X
   2. Y
3. C

or

1. A
2,3. X
4. Y
5. C

Basically, do we want the shadow root to become the counting root for the <li>s 
inside the shadow, or do we let them go through the upper boundary and use the 
<ol> instead?
I would vote for the first rendering, as it seems to better respect the shadow 
encapsulation. If so, it also means we need to prevent the propagation of the 
type, reversed, etc. attributes of the parent <ol> to the shadow <li>s, right?

Thanks,
Andrei.

On Feb 19, 2013, at 9:20 PM, Elliott Sprehn  wrote:

> Currently in WebKit list item counting is done on the render tree, but we are 
> looking at making it use the DOM instead so that ordered lists work properly 
> in regions. This raises an interesting question about if they should use the 
> composed shadow tree, or the original tree.
> 
> ex.
> [example markup stripped by the archive: the light-DOM usage of an x-widget 
> host containing a list item]
> 
> inside x-widget:
> 
> [shadow markup stripped by the archive: the tree into which that list item 
> is projected]
> 
> What's the count on that projected list item?
> 
> This also raises questions of how counters interact with shadows. Should 
> counters work on the projected DOM or the original DOM?
> 
> We're leaning towards the original DOM since otherwise counters become 
> difficult to work with when they're reprojected deeper and deeper down a 
> component hierarchy.
> 
> - E




Re: Streams and Blobs

2013-03-07 Thread Jonas Sicking
On Tue, Feb 26, 2013 at 2:56 AM, Anne van Kesteren  wrote:
> So currently Mozilla has these extensions to XMLHttpRequest:
>
>  * moz-blob
>  * moz-chunked-text
>  * moz-chunked-arraybuffer
>
> The first offers incremental read. The latter two offer chunked read
> (data can be discarded as soon as it's read).
>
> There's also Microsoft's Streams API which I added to the
> XMLHttpRequest draft at some point. SreamReader offers incremental
> read, but only from the beginning of the stream, which makes it
> nothing more than a Blob which can grow in size over time.
>
> The advantage the Streams API seems to have over moz-blob is that you
> do not need to create a new object to read from each time there's
> fresh data. The disadvantage is that that's only a minor advantage and
> there's a whole lot of new API that comes with it.
>
> Did I miss something?
>
> I'm kinda leaning towards adding incremental Blob and chunked
> ArrayBuffer support and removing the Streams API. I can see use for
> Stream construction going forward to generate a request entity body
> that increases over time, but nobody is there yet.
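
(For reference, a hedged sketch of how the chunked Gecko extension described above is consumed - the URL is illustrative and the exact event semantics are my assumption, not something stated in this thread:)

var received = 0;
var xhr = new XMLHttpRequest();
xhr.open("GET", "/large-resource");
xhr.responseType = "moz-chunked-arraybuffer";    // Gecko-only extension
xhr.onprogress = function () {
    // xhr.response holds only the bytes that arrived since the previous
    // progress event, so each chunk can be processed and then discarded.
    var chunk = xhr.response;                    // an ArrayBuffer
    received += chunk.byteLength;
};
xhr.onload = function () {
    console.log("done, streamed " + received + " bytes in chunks");
};
xhr.send();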

I think the API that Gecko is exposing for "streams" on XHR is a good
start for a feature set. However the problem is that the API that we
have marries the concept of streaming data directly with the XHR
object. I.e. if we want to enable an API that accepts streaming data, say
something FileWriter-like, with Gecko's current design this API would
have to take an XHR object. I.e. there would have to be something like
FileWriter.write(myXHR).

This seems awkward and not very future-proof. Surely things other than
HTTP requests can generate streams of data. For example the TCPSocket
API seems like a good candidate of something that can generate a
stream. Likewise WebSocket could do the same for large message frames.

Other potential sources of data streams is obviously local file data,
and simply generating content using javascript.

So I think it makes a lot more sense to create the concept of Stream
separate from the XHR object. A good start for what to expose on the
Stream object is likely the three extensions you list above, in
addition to simply receiving the full stream contents as a Blob (and
maybe ArrayBuffer). Though I would expect that list to change pretty
quickly once we start looking at it.

An important difference between a stream and a blob is that you can
read the contents of a Blob multiple times, while a stream is
optimized for lower resource use by throwing out data as soon as it
has been consumed. I think both are needed in the web platform.

But this difference is important to consider with regards to
connecting a Stream to a  or  element. Users generally
expect to be able to rewind a media element which would mean that if
you can connect a Stream to a media element, the element would need to
buffer the full stream.

But this isn't an issue that we need to tackle right now. What I think
the first thing to do is to create a Stream primitive, figure out
what API would go directly on it, or what new interfaces need to be
created to allow getting the data out of it, and a way to get XHR to
produce such a primitive.

> (Also, in retrospect, I think we should have made Blobs be able to
> increase in size over time and not have the synchronous size
> getter...)

This is something that I go back and forth on a lot, i.e. whether it was a
mistake to make Blob.size synchronous or not. It's certainly adding a
lot of implementation complexity, and I'm not sure how much it
benefits authors.

But I think this ship has sailed. And I also think that it's an
orthogonal question to Streams. Either way I think we need the Stream
primitive in order to model a data stream that can only be consumed
once.

> I've also heard requests for give me the last bit of data that was
> transferred (rather than data since last read), for real-time audio. I
> think we should probably leave that use case to WebRTC.

Agreed.

/ Jonas