Re: Minutes of Shadow DOM meeting

2015-04-25 Thread Bjoern Hoehrmann
* cha...@yandex-team.ru wrote:
We'll post a summary - there is most of one at
https://docs.google.com/spreadsheets/d/1hnCoaJTXkfSSHD5spISJ76nqbDcOVNMamgByiz3QWLA/edit?pli=1#gid=0
 

Perhaps a document in some kind of open format would be a better medium
than some proprietary application with unclear stability policies that
does not work in my browser.

The minutes (thanks to Taylor Savage for a great scribing job) are at 
http://www.w3.org/2015/04/25-webapps-minutes.html

That contains just a few lines. Looks like the decade-old UTC day change
bug is still plaguing the minutes generation tool.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de



Re: CORS performance

2015-02-19 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
We most likely can consider the content-type header as *not* custom.
I was one of the people way back when that pointed out that there's a
theoretical chance that allowing arbitrary content-type headers could
cause security issues. But it seems highly theoretical.

I suspect that the mozilla security team would be fine with allowing
arbitrary content-types to be POSTed though. Worth asking. I can't
speak for other browser vendors of course.

I think the situation might well be worse now than it was when we first
started discussing what is now CORS. In any case, this would be an
experiment that cannot easily be undone, browser vendors would not pay
the bill if there are actually large scale security vulnerabilities
opened up by such a change, and I do not really see notable benefits in
conducting such an experiment.
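For readers unfamiliar with the rule under discussion: only three
whitelisted content types avoid triggering a pre-flight, and the check
is mechanical. A minimal, non-normative Python sketch of that check
(function name is ours, for illustration):

```python
def is_simple_content_type(value: str) -> bool:
    """Return True if a Content-Type header value would not, by itself,
    trigger a CORS pre-flight: only three whitelisted types qualify.
    The parameters (e.g. charset) are ignored for this check."""
    essence = value.split(";", 1)[0].strip().lower()
    return essence in {
        "application/x-www-form-urlencoded",
        "multipart/form-data",
        "text/plain",
    }
```

Anything else, e.g. application/json, counts as a custom content type
and forces the pre-flight the thread is debating.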
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: CORS performance proposal

2015-02-19 Thread Bjoern Hoehrmann
* Martin Thomson wrote:
On 20 February 2015 at 00:29, Anne van Kesteren ann...@annevk.nl wrote:
   Access-Control-Allow-Origin-Wide-Cache: [origin]

This has serious implications for server deployments that host
mutually distrustful applications.  Now, these servers are already
pretty well hosed from other directions, but I don't believe that
there is any pre-existing case where a header field set in a request
to /x could affect future requests to /y.

An alternative would be to use /.well-known for site wide policies.

The proposal is to use `OPTIONS * HTTP/1.1` not `OPTIONS /x HTTP/1.1`.
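To make the distinction concrete: a server-wide `OPTIONS *` request
has the asterisk, not a path, as its request target, and off-the-shelf
HTTP stacks do parse it. A small self-contained sketch using Python's
standard library (handler and header name are illustrative):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # For a server-wide request the parsed target is "*";
        # for a resource-specific pre-flight it would be e.g. "/x".
        self.send_response(204)
        self.send_header("X-Request-Target", self.path)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "*")  # request line: OPTIONS * HTTP/1.1
reply = conn.getresponse()
print(reply.status, reply.getheader("X-Request-Target"))  # 204 *
server.shutdown()
```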
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: CORS performance proposal

2015-02-19 Thread Bjoern Hoehrmann
* Martin Thomson wrote:
On 20 February 2015 at 11:39, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 The proposal is to use `OPTIONS * HTTP/1.1` not `OPTIONS /x HTTP/1.1`.

I missed that.  In which case I'd point out that `OPTIONS *` is very
poorly supported.  Some people (myself included) want it to die a
flaming death.

Evidence for "poorly supported" would certainly be helpful (web hosting
packages without TLS support, for instance, do not count, though).
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: CORS performance

2015-02-17 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
With the recent introduction of CSP pinning, I was wondering whether
something like CORS pinning would be feasible. A way for a server to
declare that it speaks CORS across an entire origin.

The CORS preflight in effect is a rather complicated way for the
server to announce that it can handle CORS. We made it rather tricky
to avoid footgun scenarios, but I'm wondering whether that is still
the right tradeoff.

Something like:

  CORS: max-age=31415926; allow-origin=*; allow-credentials=true;
allow-headers=*; allow-methods=*; expose-headers=*

Individual resources should not be able to declare policy for the whole
server; HTTP/1.1 rather has `OPTIONS *` for that, which would require a
new kind of pre-flight request. And if the whole server is fine with
cross-origin requests, I am not sure there is much of a point trying to
lock it down by restricting request headers or methods. I suppose
something like this could be implemented, but I don't think "CORS
pinning" is quite the right analogy.
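For illustration only, the proposed header is a plain semicolon-
separated list of directives, so consuming it would be trivial; a
minimal sketch of a parser for the hypothetical `CORS:` header above
(the header and function name are the proposal's, not a shipped API):

```python
def parse_cors_pinning(value: str) -> dict:
    """Parse the hypothetical `CORS:` response header from the proposal
    into a dict of directives, e.g. 'max-age=31415926; allow-origin=*'
    -> {'max-age': '31415926', 'allow-origin': '*'}."""
    directives = {}
    for part in value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, val = part.partition("=")
        directives[name.strip().lower()] = val.strip()
    return directives
```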
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: CORS performance

2015-02-17 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
On Tue, Feb 17, 2015 at 8:18 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 Individual resources should not be able to declare policy for the whole
 server, ...

With HSTS we gave up on that.

Well, HSTS essentially removes communication options, while the intent
of CORS is to add communication options. I don't think you can compare
them like that. HSTS is more like a redirect and misconfiguration may
result in denial of service, while CORS misconfiguration can have more
far-reaching consequences like exposing user information.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: Allow custom headers (Websocket API)

2015-02-05 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
On Thu, Feb 5, 2015 at 2:48 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 A Websocket connection is established by making a HTTP Upgrade request,
 and the protocol is HTTP unless and until the connection is upgraded.

Sure, but the server can get away with supporting a very limited
subset of HTTP, no? Anyway, perhaps a combination of a CORS preflight
followed by the HTTP Upgrade that then includes the headers is the
answer, would probably be best to ask some WebSocket library
developers what they think.

I think that is the most obvious solution, yes. And no, I do not think
you can support less HTTP for Websockets than what you need for minimal
web servers (but a minimal web server does not do much beyond message
parsing, you also do not need much more to support Websockets); either
way, this should not be a problem because you can already reference any
Websocket endpoint with img and XMLHttpRequest and whatever else, so
it's unlikely there are notable risks there.
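Since the handshake is plain HTTP, the extra work a minimal server
needs for Websockets really is small; for instance, the accept value a
server must return to complete the Upgrade is one SHA-1 plus base64, as
sketched below (test vector from RFC 6455, section 1.3):

```python
import base64
import hashlib

# Fixed GUID specified by RFC 6455 for the opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value a server sends in its
    101 Switching Protocols response to complete the HTTP Upgrade."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```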
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: Allow custom headers (Websocket API)

2015-02-05 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
On Thu, Feb 5, 2015 at 2:29 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 It seems to me that pre-flight requests would happen prior to opening
 a Websocket connection, i.e. before requirements of the Websocket
 protocol apply, so this would have to be covered by the API
 specification instead. I do not really see why the Websocket
 on-the-wire protocol would have to be changed here.

Wouldn't that require the endpoint to support two protocols? That
sounds suboptimal.

A Websocket connection is established by making a HTTP Upgrade request,
and the protocol is HTTP unless and until the connection is upgraded.
Websocket endpoints already have to be robust against XHR pre-flight
requests and other HTTP requests, and custom headers would be an
opt-in feature anyway, so it's not obvious that there would be any problem.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: Shadow tree style isolation primitive

2015-02-05 Thread Bjoern Hoehrmann
* Dimitri Glazkov wrote:
Shadow DOM and Web Components seem to have what I call the Unicorn
Syndrome. There's a set of specs that works, proven by at least one
browser implementation and the use in the wild. It's got warts
(compromises) and some of those warts are quite ugly. Those warts weren't
there in the beginning -- they are a result of a hard, multi-year slog of
trying to make a complete system that doesn't fall over in edge cases, and
compromising. A lot.

So there's a temptation to make new proposals (unicorns) that are
wart-free, but incomplete or not well-thought-out. Don't get me wrong. I
like new ideas. What I would like to avoid is judging a workhorse against a
unicorn. Armed with your imagination, unicorn will always win.

There has never been much of a consensus on the problems that need to be
solved, so it is not really surprising that a consensus-solution is not
forthcoming; instead we have continuous scope creep and eternal delays.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: Defining a constructor for Element and friends

2015-01-13 Thread Bjoern Hoehrmann
* Domenic Denicola wrote:
From: Bjoern Hoehrmann [mailto:derhoe...@gmx.net] 
 I know that this is a major concern to you, but my impression is that 
 few if any other people regard that as anything more than "nice to 
 have", especially if you equate "explaining" with having a public API 
 for it.

How do you propose having a private constructor API?

How do you propose instances of the objects even existing at all, if
there is no constructor that creates them?

This is one of those "only makes sense to a C++ programmer" things.

I think it is misleading to describe something as a design goal if it
is not widely accepted as a design goal, and my impression is that this
is not widely accepted as a design goal. I also think it is entirely
normal to deal with objects you have no way of creating on your own.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: Defining a constructor for Element and friends

2015-01-13 Thread Bjoern Hoehrmann
* Domenic Denicola wrote:
That kind of breaks the design goal that we be able to explain how 
everything you see in the DOM was constructed. How did the parser (or 
document.createElement(NS)) create a HTMLUnknownElement, if the 
constructor for HTMLUnknownElement doesn't work?

I know that this is a major concern to you, but my impression is that
few if any other people regard that as anything more than "nice to
have", especially if you equate "explaining" with having a public API
for it.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: [editing] Responsive Input Terminology

2014-12-11 Thread Bjoern Hoehrmann
* Ben Peters wrote:
There has been a lot of debate [1][2] about the correct name for device 
independent events [3] as a concept*. We have considered Intention 
Events, Command Events, and Action Events among others. I believe we now 
have a good name for them- Responsive Input Events. The reason for this 
name is that it is the corollary to Responsive Layout: for input instead 
of output. Together these two concepts can help form the basis of 
Responsive Design going forward. 

Responsive Layout responds to geometric changes in the environment or,
if you will, adapts to different geometric environments. I do not really
see how device independent events respond or adapt. They are independent
of their environment already. Instead of Responsive (Input Events), it
is possible that some people read it as (Responsive Input) Events, but
I do not really see how the input responds or adapts either. The input
is what it is, and does not really interact with anything on its own.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: I-D Action: draft-lnageleisen-http-chunked-progress-00.txt

2014-04-08 Thread Bjoern Hoehrmann
* internet-dra...@ietf.org wrote:
Abstract:
   This document describes Chunked Progress, an extension to
   Transfer-Encoding: Chunked as defined in RFC2616 [RFC2616].  Chunked
   Progress introduces a backwards-compatible, RFC2616 compliant method
   to notify the client of transfer advancement in situations where the
   server has knowledge of progress but cannot know the resource size
   ahead of time.

http://tools.ietf.org/html/draft-lnageleisen-http-chunked-progress-00

FYI. Ideas like this have been discussed in the past on the HTTP WG
list, and the Webapps list in contexts related to progress events.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [request] Download Event for HTMLAnchorElement

2014-03-25 Thread Bjoern Hoehrmann
* Si Robertson wrote:
The problem that this new event would solve is this - when using a
temporary object URL (blob) for the file data, e.g. programmatically
generated content, there is currently no way of knowing when that file data
has been written to disk, therefore there is no reliable way of knowing
when it is safe to revoke the object URL in order to release resources.

That something has been written to disk does not make destroying data
safe. It is not unusual, for instance, to expect that data can be saved
more than once, and invalidating such expectations can lead to
catastrophic data loss. I think release-after-first-save-action is not
a pattern to be encouraged, at least not without secondary safeguards.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: On starting WebWorkers with blob: URLs...

2014-03-17 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
On Fri, Mar 14, 2014 at 10:40 PM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 14 Mar 2014, Arun Ranganathan wrote:
 http://dev.w3.org/2006/webapi/FileAPI/#originOfBlobURL

 LGTM. Assuming that UAs implement this, that makes Workers automatically
 support blob: URLs, too.

I don't think this is the way we should go about this. I don't
understand why a blob URL would have an origin. Simply fetching a blob
URL will work and the response won't be tainted and therefore it
should work. Trying to couple origins with strings seems like a bad
idea.

The way RFC 6454 defines the concept, an origin is a string derived from
a resource identifier and one would expect 'blob' identifiers to have a
globally unique identifier as their origin, so having one is fine. But
it seems that, as proposed here, it would not be possible to derive the
origin based only on the 'blob' identifier; you rather need specific
knowledge about individual identifiers so that `blob:A` and `blob:B`
end up as two same-origin blobs. I think that is incompatible with the
concept as it's
defined in RFC 6454, but then again, given the other problems with that,
https://mailarchive.ietf.org/arch/msg/websec/ln55YxWM-uRNdxRLu9-7YH9mN2k
the idea that URLs have origins may be the actual problem and we may
have to make some bigger conceptual changes to explain the rules.
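For illustration of the RFC 6454 string-derivation view: if the origin
is serialized into the blob identifier itself (the shape browsers later
converged on, e.g. blob:https://example.com/some-uuid), the origin can
again be recovered from the string alone. A hypothetical sketch, with
"null" standing in for a globally unique identifier:

```python
from urllib.parse import urlsplit

def blob_url_origin(url: str) -> str:
    """Sketch: recover an origin from a 'blob:' URL *if* the inner URL
    embeds one (e.g. blob:https://example.com/some-uuid); otherwise
    fall back to a globally unique ("null") origin."""
    scheme, _, rest = url.partition(":")
    if scheme != "blob":
        raise ValueError("not a blob URL")
    inner = urlsplit(rest)
    if inner.scheme and inner.netloc:
        return f"{inner.scheme}://{inner.netloc}"
    return "null"  # stand-in for a globally unique identifier

print(blob_url_origin("blob:https://example.com/550e8400-e29b"))
# https://example.com
```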
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [Bug 24823] New: [ServiceWorker]: MAY NOT is not defined in RFC 2119

2014-02-26 Thread Bjoern Hoehrmann
* bugzi...@jessica.w3.org wrote:
The section "Worker Script Caching" uses the term "MAY NOT", which is
not defined in RFC 2119.  I'm assuming this is intended to be "MUST
NOT" or maybe "SHOULD NOT".

If an agent MAY $x then it also MAY not $x. It is possible that the
author meant "must not" or "should not" in this specific instance, but
in general such a reading would be incorrect. Of course, specifications
should not use constructs like "may not".
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-15 Thread Bjoern Hoehrmann
* Alex Russell wrote:
So you've written off the massive coordination costs of adding a uniform to
all code across all of Google and, on that basis, have suggested there
isn't really a problem? ISTM that it would be a multi-month (year?) project
to go patch every project in google3 and then wait for them to all deploy
new code.

Perhaps you can imagine a simpler/faster way to do it that doesn't include
getting owners-LGTMs from nearly every part of google3 and submitting tests
in nearly every part of the tree?

This is a vendor-neutral international standards development forum, not
some Google product development mailing list.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-13 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
On Thu, Feb 13, 2014 at 12:04 AM, Alex Russell slightly...@google.com wrote:
 Until we can agree on this, Type 2 feels like an attractive nuisance and, on
 reflection, one that I think we should punt to compilers like caja in the
 interim. If toolkits need it, I'd like to understand those use-cases from
 experience.

I think Maciej explains fairly well in
http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/1364.html
why it's good to have. Also, Type 2 can be used for built-in elements,
which I thought was one of the things we are trying to solve here.

The desire to retrofit built-ins into cross-browser component technology
has not been very helpful to deliver component technology into the hands
of authors.

I also note that "encapsulation against deliberate access" would make it
quite difficult to automate components for testing and other purposes;
in many cases you would be unable to make a reduced test case that shows
some defect in a third party component that others can load in their web
browser without difficulty; and automation tools would need a privileged
API to break the encapsulation.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-13 Thread Bjoern Hoehrmann
* Maciej Stachowiak wrote:
Type 2 is not meant to be a security mechanism. It is meant to be an 
encapsulation mechanism. Let me give a comparison. Many JavaScript 
programmers choose to use closures as a way to store private data for 
objects. That is an encapsulation mechanism. It is not, in itself, a 
hard security mechanism. If the caller can hook your global environment, 
and for example modify commonly used Object methods, then they may force 
a leak of your data. But that does not mean using closures for data 
hiding is a "fig-leaf" or "attractive nuisance". It's simply taking 
access to internals out of the toolset of common and convenient things, 
thereby reducing the chance of a caller inadvertently coming to depend 
on implementation details.

An analogy for the above would be a programming environment where your
access to a private data member via `example.private_member` is denied
but you can still use the runtime environment's reflection API to get
to it. In a sense, marking a data member private is then just
hard-to-miss documentation of the API contract. I think that is
sensible, but
it is also quite far away from your description of Type 2

  Encapsulation against deliberate access - no API is provided which
  lets code outside the component poke at the shadow DOM. Only internals
  that the component chooses to expose are exposed.

since the reflection API is being provided in my analogous example, and
it leaves out mention of common and convenient which are central to
your description above. I can easily agree that use classes maybe? is
an insufficient answer to requests for this kinder, gentler version of
Type 2.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Extending Mutation Observers to address use cases of

2014-02-11 Thread Bjoern Hoehrmann
* Olli Pettay wrote:
We could add some scheduling thing to mutation observers. By default 
we'd use microtask, since that tends to be good for various performance 
reasons, but normal tasks or nanotasks could be possible too.

This sounds like adding a switch that would dynamically invalidate
assumptions mutation observers might make, which sounds like a bad
idea. Could you elaborate?
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: I need some guidance.

2014-01-20 Thread Bjoern Hoehrmann
* a...@flyingsoft.phatcode.net wrote:
Please refer to the following bug report: 
http://code.google.com/p/chromium/issues/detail?id=336292

In summary, all Webkit-derived browsers (excluding Safari 5.1.7 on 
Windows) do not do in-process (in-instance?) caching when the header is 
expired. Firefox, IE11 (but not IE10, I think), and Safari 5.1.7 do.

I take it this is about ordinary HTTP caching behavior, not about, say,
appcache, correct? Also, it seems the issue is that you tell browsers
not to cache a resource, and then expect it to be cached anyway. Could
you elaborate on that? In any case, a better forum for the problem might
be a group specialising in HTTP caching questions, e.g. if you want to
know what the HTTP specification has to say on this situation.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [manifest] HTTP-based solution for loading manifests

2013-12-11 Thread Bjoern Hoehrmann
* Julian Reschke wrote:
On 2013-12-11 19:59, Marcos Caceres wrote:
 https://github.com/w3c/manifest/issues/98#issuecomment-30293586

I see the comment but I have no idea what he's talking about.

The spec is generic; and the IANA registry 
(http://www.iana.org/assignments/link-relations/link-relations.xhtml) 
has a stylesheet entry.

Firefox implements this for CSS and XSLT, Opera (classic) did for CSS.

It seems pretty clear to me that he is disagreeing with the argument
made earlier in the ... erm, flat list of comments, that because some
web browsers do not support `Link` for stylesheets, it should not be
used for anything at all.
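For reference, the `Link` header under discussion is easy to consume;
a minimal, non-normative Python parser (a sketch, not the full RFC 8288
ABNF; it ignores quoted commas and extension quirks):

```python
import re

def parse_link_header(value: str):
    """Parse an HTTP `Link` header value into (target, params) tuples,
    e.g. '</s.css>; rel=stylesheet' -> [('/s.css', {'rel': 'stylesheet'})].
    Minimal sketch; does not handle every corner of the ABNF."""
    links = []
    # Split on commas that start a new <target>, not commas inside params.
    for part in re.split(r",(?=\s*<)", value):
        m = re.match(r"\s*<([^>]*)>\s*(.*)", part)
        if not m:
            continue
        target, rest = m.group(1), m.group(2)
        params = {}
        for p in rest.split(";"):
            p = p.strip()
            if not p or "=" not in p:
                continue
            k, v = p.split("=", 1)
            params[k.strip().lower()] = v.strip().strip('"')
        links.append((target, params))
    return links

print(parse_link_header('</styles.css>; rel=stylesheet, </app.webmanifest>; rel="manifest"'))
# first entry: ('/styles.css', {'rel': 'stylesheet'})
```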
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [webcomponents] HTML Imports

2013-12-04 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
On Wed, Dec 4, 2013 at 9:21 AM, Brian Di Palma off...@gmail.com wrote:
 I would say though that I get the feeling that Web Components seems a
 specification that seems really pushed/rushed and I worry that might
 lead to some poor design decisions whose side effects will be felt by
 developers in the future.

I very much share this sentiment.

"The growth of HTML with scripting as an application platform has
exploded recently. One limiting factor of this growth is that there is
no way to formalize the services that an HTML application can provide,
or to allow them to be reused as components in another HTML page or
application." -- http://www.w3.org/TR/NOTE-HTMLComponents

That was 15 years ago. You might be able to appreciate that this may be
a case where the past is a lot longer than the future, and in another
15 years Web Components as currently proposed will have moved to
museums that exhibit them as an important evolutionary step that
finally gave web developers some kind of robust re-usable component
technology.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [webcomponents] HTML Imports

2013-12-04 Thread Bjoern Hoehrmann
* Brian Di Palma wrote:
Neither did I mean it to be taken to mean "This work is rushed". I said,

"I get the feeling that Web Components seems a specification that
seems really pushed/rushed",

by that I meant it seemed as if the current spec is being pushed as
fast as possible toward standardization.

As far as I can tell, the Web Components proponents have been very clear
in the past that they want something very soon, including that they are
willing to live with issues that cannot be solved soon. So, sure, they
are pushing this.

I was not commenting on the amount of time put into making the spec
but more the amount of time given to interested parties to digest,
implement, and comment on it.

I believe the general sentiment in the Working Group is that feedback
coming from people developing applications running in web browsers is
extremely sought after. They do not, however, have much of an interest
in creating an environment where such people can easily and do gladly
provide such feedback when it would be most useful. Ordinarily there
would be mandatory procedures Working Groups and Working Group
participants are held to that should provide such an environment.

But as you can see in a nearby thread, it's already too much to ask of
the Chairs that they make sure Apple and Mozilla have finished their
final review of the Custom Elements draft and are satisfied that all
their comments have been addressed before considering it ready for Last
Call. That has destroyed Last Call as a synchronisation mechanism, you
cannot use it to prioritise reviews because you do not know whether any
given Last Call will have one, two, or six more following it. That
results in late comments and late changes which make temporal planning
impossible. People get frustrated that things take so long, that they
cannot keep up with the pace, stuff falls through the cracks, and so on.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: RfC: LCWD of Custom Elements; deadline November 21

2013-12-04 Thread Bjoern Hoehrmann
* Ryosuke Niwa wrote:
Now we know that there has been an effort to decouple the various Web
Components features and specifications, and the Custom Elements
specification was going to the Last Call on its own.

Unfortunately, we didn't know about this until fairly recently, which
is why our thorough review of these specifications did not happen until
mid-September (by which time this spec had already reached the Last
Call).

To ensure fair application of section 3.5 of the W3C Process document in
the future, I would like to note that I consider this a failure to meet
obligations as per section 6.2.1.7 of the current W3C Process document.
This also demonstrates that the CfC process fails to meet requirements
under section 3.3 of the current W3C Process document in matters related
to section 7 of the same.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [HTML Imports]: Sync, async, -ish?

2013-12-03 Thread Bjoern Hoehrmann
* Bryan McQuade wrote:
Steve Souders wrote another nice post about this topic:
http://www.stevesouders.com/blog/2013/11/26/performance-and-custom-elements/which
I recommend reading (read the comments too).

That should be

  http://www.stevesouders.com/blog/2013/11/26/performance-and-custom-elements/

See

  http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0702.html

Your mail client has issues.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Regarding: Making the W3C Web SQL Database Specification Active

2013-09-27 Thread Bjoern Hoehrmann
* Michael Fitchett wrote:
Since lack of definition is the issue, I would like to recommend a remedy.
I know SQL experts and great documentation writers who I would gladly hire
to further define the Web SQL Database specification and fill in the
missing SQL definition. Is this something that would be possible to help
revive the specification and get the remaining vendors on board?

(Having a good specification of the SQL syntax and semantics supported
by current versions of SQLite 3.x that can easily be forked, including
for the purposes of WebSQL, would be nice to have regardless of
whether that makes any web browser vendor want to implement WebSQL,
especially if it can be done in cooperation with the SQLite developers.)
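As an example of why such a specification would need care: SQLite's
dialect has behaviour many SQL readers would not expect, such as type
affinity, where a column's declared type is a preference rather than a
constraint. A quick demonstration with Python's stdlib sqlite3 module:

```python
import sqlite3

# SQLite's "type affinity" lets a column declared INTEGER store text --
# exactly the kind of dialect detail a WebSQL-style spec would have to
# pin down precisely.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")
con.execute("INSERT INTO t VALUES (1)")
con.execute("INSERT INTO t VALUES ('not a number')")  # succeeds!
rows = [row[0] for row in con.execute("SELECT n FROM t ORDER BY rowid")]
print(rows)  # [1, 'not a number']
```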
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [webcomponents]: Changing names of custom element callbacks

2013-07-13 Thread Bjoern Hoehrmann
* Steve Orvell wrote:
These callbacks specifically mean the element has entered or left the
*document*.

We felt that entered/leftDocument was better than
insertedInto/removedFromDocument but the key bit is *Document. This has
caused enough confusion in discussion that being explicit seems justified.

I would assume that the "entering" and "leaving" of an element with
respect to a Document has to do with the .ownerDocument attribute and
not the .parentNode chain.
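The distinction is observable in essentially any DOM implementation; a
small sketch with Python's stdlib minidom (the element name is made
up): .ownerDocument is set at creation time, while .parentNode only
becomes non-null once the node is actually inserted.

```python
from xml.dom.minidom import Document

doc = Document()
el = doc.createElement("x-widget")
# Created but not inserted: it already "belongs to" the document...
print(el.ownerDocument is doc, el.parentNode)  # True None
# ...but only insertion gives it a place in the tree.
doc.appendChild(el)
print(el.parentNode is doc)  # True
```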
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: jar protocol (was: ZIP archive API?)

2013-05-07 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
On Tue, May 7, 2013 at 7:29 AM, Robin Berjon ro...@w3.org wrote:
 This isn't very different from JAR but it does have the property of more
 easily enabling a transition. To give an example, say that the page at
 http://berjon.com/ contains:

 link rel=bundle href=bundle.wrap

 and

 img src=bundle.wrap/img/dahut.png alt=a dahut

You need a new URL scheme here. Otherwise the URL will be parsed
relative to the node's base URL.

Robin seems to address that in the parts of his mail you didn't quote.



Re: jar protocol (was: ZIP archive API?)

2013-05-07 Thread Bjoern Hoehrmann
* Robin Berjon wrote:
I wonder if we couldn't have a mechanism that would not require a 
separate URI scheme. Just throwing this against the wall, might be daft:

We add a new link relationship: bundle (archive is taken, bikeshed 
later). The href points to the archive, and there can be as many as 
needed. The resolved absolute URL for this is added to a list of bundles 
(there is no requirement on when this gets fetched, UAs can do so 
immediately or on first use depending on what they wish to optimise for).

After that, whenever there is a fetch for a resource the URL of which is 
a prefix match for this bundle the content is obtained from the bundle.

There have been many proposals over the years that would allow for some-
thing like this, http://www.w3.org/TR/DataCache/ for instance, allows to
intercept certain requests to aid in supporting offline applications,
and `registerProtocolHandler` combined with `web+`-schemes go into a si-
milar direction. Those seem more worthwhile to explore to me than your
one-trick-strawman.

Also, it is not clear to me that avoiding a special scheme is a useful
design constraint (not to mention that bundling is something the com-
puter is supposed to do for me, so I would want to get that out of my
face). But I can see value in a more generic feature that allows me to
implement and reference IO objects as I see fit, which would provide for
bundling features.

This means no URL scheme to be supported by everyone, [...]

Well, `rel='bundle'` would have to be supported by everyone, because
past critical mass there would be too many "nobody noticed the fallback
is not working until now" cases, so that seems rather uninteresting in
the longer term.
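The prefix-match dispatch discussed in this thread can be sketched in a few lines; the function and the in-memory bundle map below are illustrative assumptions, not part of any proposal:

```javascript
// Sketch of prefix-match lookup: a fetch for a URL that begins with a
// registered bundle URL plus "/" is answered from that bundle's entries.
// All names and the bundle-map shape are made up for illustration.
function resolveFromBundles(url, bundles) {
  for (const [bundleUrl, entries] of Object.entries(bundles)) {
    const prefix = bundleUrl + "/";
    if (url.startsWith(prefix)) {
      const path = url.slice(prefix.length);
      // Prefix matched: serve from the bundle, or report a missing entry.
      return path in entries ? entries[path] : null;
    }
  }
  return undefined; // no bundle matched; fall back to a normal fetch
}
```

The `null` branch is one way to read the fallback concern above: once a bundle is registered, a missing entry no longer reaches the network.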



Re: ZIP archive API?

2013-05-07 Thread Bjoern Hoehrmann
* Florian Bösch wrote:
It can be implemented by a JS library, but the three reasons to let the
browser provide it are Convenience, speed and integration.

Convenience is the first reason, since browsers by and large already
have complete bindings to compression algorithms and archive formats,
letting the browser simply expose the software it already ships makes good
sense rather than requiring every JS user to supply his own version.

Speed may not matter too much on some platforms, but it matters a great deal
on underpowered devices such as mobiles.

Integration is where the support for archives goes beyond being an API,
where URLs (to link.href, script.src, img.src, iframe.src, audio.src,
video.src, css url(), etc.) could point into an archive. This cannot be
done in JS.

If we all agreed that such functionality should be provided by libraries
rather than browser cores, and browser vendors could be expected to make
things convenient, fast, and well-integrated, then using libraries would
be convenient, libraries would be fast, and they would integrate well.



Re: Fetch: HTTP authentication and CORS

2013-05-04 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
On May 4, 2013 1:29 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Fri, May 3, 2013 at 7:00 PM, Jonas Sicking jo...@sicking.cc wrote:
  We also don't reuse keep-alive http connections.

 Are we talking about persistent connections as per
 http://tools.ietf.org/html/rfc2616#section-8.1 or the obsolete
 HTTP/1.0 feature?

In the sense of the keep-alive header. I'm not sure, but I think it was
defined in HTTP 1.1.

It's extremely unlikely that the `Keep-Alive` header is special here.
It rather seems to me you meant "We also don't reuse http connections."
An HTTP connection has to be persistent, has to be kept alive, in order
for it to be re-used, and how or why a connection is kept alive does,
most probably, not affect whether Firefox will re-use it in your sense
above. And no, HTTP/1.1 as defined in RFC 2616 does not use the `Keep-
Alive` header.
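A rough sketch of the distinction being drawn here, simplifying RFC 2616's rules (function name and signature are illustrative): whether a connection can be re-used depends on the protocol version and the Connection header, not on the `Keep-Alive` header itself.

```javascript
// Simplified HTTP/1.x connection persistence: HTTP/1.1 is persistent by
// default unless "Connection: close"; HTTP/1.0 is persistent only with
// an explicit "Connection: keep-alive". Illustrative only.
function isPersistent(httpVersion, connectionHeader) {
  const tokens = (connectionHeader || "")
    .toLowerCase()
    .split(",")
    .map(t => t.trim());
  if (tokens.includes("close")) return false;
  if (httpVersion === "1.1") return true;
  return tokens.includes("keep-alive"); // HTTP/1.0 opt-in
}
```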



Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-17 Thread Bjoern Hoehrmann
* Rick Waldron wrote:
On Tue, Apr 16, 2013 at 9:51 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 * Rick Waldron wrote:
 If I want to make a new button to put in the document, the first thing my
 JS programming experience tells me:
 
   new Button();

 And if you read code like `new A();` your programming experience would
 probably tell you that you are looking at machine-generated code.

I'm not sure what your own experience is, but I completely disagree.

I think it is easy to agree with your analogy above. My purpose was to
offer reasons why it is a bad analogy that does not hold when you take
into account various other constraints and problems. For the specific
example, I think it is unreasonable for humans to define single-letter
global names in a shared namespace, and even more unreasonable for some
standards organisation to do so. With `A` in particular, there is also
the problem that `<a>` might be HTML or it might be SVG, so mapping
`new Button()` to `<button>` is not an analogy that works all the time.

 And between

   new HTMLButtonElement();

 and

   new Element('button');

 I don't see why anyone would want the former in an environment where you
cannot rely on `HTMLHGroupElement` existing (the `<hgroup>` element had
 been proposed, and is currently withdrawn, or not, depending on where
 you get your news from).

The latter is indeed much nicer to look at than the former, but Element
is higher than HTMLButtonElement, so how would Element know that an
argument with the value "button" indicated that an HTMLButtonElement should
be allocated and initialized? Some kind of nodeName => constructor map, I
suppose...? (thinking out loud)

As above, `new Element('a')` does not indicate whether you want an HTML
`<a>` element or an SVG `<a>` element. When parsing strings there is,
in essence, such a map, but there is more context than just the name.
That may well be a design error, perhaps HTML and SVG should never
have been separate namespaces.
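The nodeName-to-constructor map Rick thinks out loud about would need the namespace as part of the key to resolve cases like `a`. A minimal sketch, with hypothetical names (the two namespace URIs are the real HTML and SVG ones):

```javascript
// Hypothetical (namespace, localName) -> constructor-name map, showing
// why the local name alone cannot disambiguate HTML "a" from SVG "a".
const HTML_NS = "http://www.w3.org/1999/xhtml";
const SVG_NS = "http://www.w3.org/2000/svg";
const constructors = new Map([
  [HTML_NS + " a", "HTMLAnchorElement"],
  [SVG_NS + " a", "SVGAElement"],
  [HTML_NS + " button", "HTMLButtonElement"],
]);
function constructorFor(namespaceURI, localName) {
  return constructors.get(namespaceURI + " " + localName);
}
```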

 in contrast to, if you will,

   var button = new Button();
   button.ownerDocument.example(...);

I would expect this:

  var button = new HTMLButtonElement();
  button.ownerDocument === null; // true

  document.body.appendChild(button);

  button.ownerDocument === document; // true

Indeed. But browser vendors do not think like that.



Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-16 Thread Bjoern Hoehrmann
* Rick Waldron wrote:
Of course, but we'd also eat scraps from the trash if that was the only
edible food left on earth. document.createElement() is and has always been
the wrong way—the numbers shown in those graphs are grossly skewed by a
complete lack of any real alternative.

If I want to make a new button to put in the document, the first thing my
JS programming experience tells me:

  new Button();

And if you read code like `new A();` your programming experience would
probably tell you that you are looking at machine-generated code. And if
you read `new Time();` you would have no idea whether this creates some
`new Date();`-like object, or throws an exception because the browser you
try to run that code on does not support the `<time />` element yet or
anymore (the element was proposed, withdrawn, and then proposed again)
and if it's something like

  var font = new Font("Arial 12pt");
  canvas.drawText("Hello World!", font);

The idea that you are constructing `<font />` elements probably wouldn't
cross your mind much. And between

  new HTMLButtonElement();

and

  new Element('button');

I don't see why anyone would want the former in an environment where you
cannot rely on `HTMLHGroupElement` existing (the `<hgroup>` element had
been proposed, and is currently withdrawn, or not, depending on where
you get your news from). Furthermore, there actually are a number of
dependencies to take into account, like in

  var agent = new XMLHttpRequest();
  ...
  agent.open('GET', 'example');

Should that fail because the code does not say where to get `example`
from, or should it succeed by picking up some base reference magically
from the environment (and which one, is `example` relative to from the
script code, or the document the code has been transcluded into, and
when is that decision made as code moves across global objects, and so
on)? Same question for `new Element('a')`, if the object exposes some
method to obtain the absolute value of the `href` attribute in some
way.

But I live in the bad old days (assuming my children won't have to use
garbage APIs to program the web) and my reality is still here:

  document.createElement("button");

That very clearly binds the return value to `document` so you actually
can do

  var button = document.createElement("button");
  ...
  button.ownerDocument.example(...);

in contrast to, if you will,

  var button = new Button();
  button.ownerDocument.example(...);

where `button.ownerDocument` could only have a Document value if there
is some dependency on global state that your own code did not create.
I would expect that code to fail because the ownerDocument has not been
specified, and even if I would expect that particular code to succeed,
I would be unable to tell what would happen if `example` was invoked in
some other way, especially when `example` comes from another global.



Re: Fixing appcache: a proposal to get us started

2013-03-26 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
There has been a lot of debating about fixing appcache. Last year
mozilla got a few people together mostly with the goal of
understanding what the actual problems were. The notes from that
meeting are available at [1].

(I take it the fixing-appcache mailing list,
http://www.w3.org/community/fixing-appcache/, has since been closed in
favour of discussion here.)



Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-14 Thread Bjoern Hoehrmann
* Alex Russell wrote:
My *first* approach to this annoyance would be to start adding some async
primitives to the platform that don't suck so hard; e.g., Futures/Promises.
Saying that you should do something does not imply that doubling up on API
surface area for a corner-case is the right solution.

http://lists.w3.org/Archives/Public/www-archive/2008Jul/0009.html was my
first approach. Workers of course have it much easier, they just need
a single waiting primitive to make an asynchronous API synchronous. I've
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1016.html
argued against duplicating APIs for workers as proposed here, but so far
without much success, it would seem...
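The Futures/Promises direction mentioned above can be illustrated by wrapping a callback-style request object, loosely modeled on IndexedDB's onsuccess/onerror pattern; the `request` shape here is an assumption for the sketch, not a proposed API:

```javascript
// Sketch: adapting an IndexedDB-style request (onsuccess/onerror
// callbacks, .result/.error fields) to a Promise.
function promisify(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```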



FYI: JSON mailing list and BoF

2013-02-18 Thread Bjoern Hoehrmann
* Joe Hildebrand (jhildebr) wrote:
We're planning on doing a BoF in Orlando to discuss starting up a JSON
working group.  The BoF is currently planned for Monday afternoon at 1300
in Caribbean 6.  A very preliminary version of a charter can be found here:

http://trac.tools.ietf.org/wg/appsawg/trac/wiki/JSON

But obviously we'll need to build consensus on what it should actually
contain.  Please discuss on the j...@ietf.org mailing list:

https://www.ietf.org/mailman/listinfo/json

(http://www.ietf.org/mail-archive/web/apps-discuss/current/msg08912.html)



FYI: Possible RFC 6455 (WebSocket) throttling erratum

2013-02-14 Thread Bjoern Hoehrmann
Hi,

  http://www.ietf.org/mail-archive/web/hybi/current/msg09970.html is re
http://www.ietf.org/mail-archive/web/hybi/current/msg09961.html about an
ambiguity in RFC 6455 regarding how implementations are to limit con-
current WebSocket connections. A particular point that has been raised
is http://www.ietf.org/mail-archive/web/hybi/current/msg09971.html that
the requirement should perhaps apply only to web browsers, which would
mean the WebSocket API specification might be a better place for the re-
quirement.

regards,



Re: Shadow DOM: events that are stopped

2013-02-07 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
Instead of having a fixed list of events that are stopped, maybe
instead we can pass a flag to the dispatch algorithm with respect to
whether or not the event being dispatched should exit the shadow
boundary it started in, if any. That way you can have your own private
event handling in the shadow tree and for components implemented by
the user agent they can implement certain user actions as private to
the shadow tree as well, but if I want I could still dispatch a
synthetic scroll event that goes through the boundary.

Such behavioral oddities would have to be exposed on the event objects
anyway because authors would otherwise have a hard time debugging this,
much like event objects expose whether they bubble. I assume the current
list is only a band-aid for discussion.
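Anne's per-event flag could look roughly like the toy dispatch model below; the node shape and the `crossesBoundary` name are invented for illustration:

```javascript
// Toy propagation-path builder: instead of a fixed list of stopped event
// types, each dispatch carries a flag saying whether the path may leave
// the shadow tree it started in. Illustrative only.
function propagationPath(target, crossesBoundary) {
  const path = [];
  for (let node = target; node; node = node.parent) {
    path.push(node.name);
    if (node.isShadowRoot && !crossesBoundary) break; // stop at the boundary
  }
  return path;
}
```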



Re: [Clipboard API] Add a flag to indicate paste-as-text to beforepaste and paste events

2012-08-23 Thread Bjoern Hoehrmann
* Ryosuke Niwa wrote:
*Proposal*
Add a boolean flag to beforepaste and paste events indicating whether the
user had intended to paste as text or rich text. (pasteAsText IDL
attribute?)

There are many more intents here than "text" and "not text", like they
may mean to paste vector paths to preserve font characteristics, or they
may have an SVG image on the clipboard and wish to paste the code as text
or as a pre-rendered bitmap; in any case, "rich text" isn't the opposite
of "text", so it seems incorrect to use a boolean attribute for this.
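The objection above suggests an enumerated intent rather than a boolean; a sketch with made-up intent names, not a proposed IDL:

```javascript
// Paste intent modeled as an enumerated value, so intents beyond
// "text"/"rich text" (vector paths, bitmaps, ...) can be represented.
const PASTE_INTENTS = new Set(["text", "rich-text", "vector-paths", "bitmap"]);
function describePasteIntent(intent) {
  if (!PASTE_INTENTS.has(intent)) {
    throw new RangeError("unknown paste intent: " + intent);
  }
  return "paste as " + intent;
}
```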



Re: Should MutationObservers be able to observe work done by the HTML parser?

2012-06-20 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
I can't think of any cases where you would *not* want these to fire
for parser mutations.

For example if you are building an XBL-like widget library which uses
the DOM under a node to affect behavior or rendering of some other
object. If you attach the widget before the node is fully parsed you
still need to know about modifications that happen due to parsing.

And you would typically want to attach early to avoid "cannot click
button while page is loading"-style issues in such a scenario. I also
note that making page load a special case likely means authors have
to write special code to handle it, and that does not seem desirable.
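A toy model of the argument: an observer that records parser-driven and script-driven mutations through the same queue, so consuming code needs no page-load special case (entirely illustrative, not the MutationObserver API):

```javascript
// Both mutation sources feed one record queue; takeRecords drains it.
function makeObserver() {
  const records = [];
  return {
    notify(source, nodeName) { records.push({ source, nodeName }); },
    takeRecords() { return records.splice(0, records.length); },
  };
}
```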



Re: Updates to Selectors API

2012-06-14 Thread Bjoern Hoehrmann
* Lachlan Hunt wrote:
At this stage, we should be able to publish v1 as a revised CR, or 
possibly move it up to PR.  We can also publish v2 as a new WD.

It does not seem that additional implementation experience is required
to make sure no major changes are needed, so, Proposed Recommendation.



Re: www-dom vs public-webapps WAS: [DOM4] Mutation algorithm imposed order on document children

2012-06-13 Thread Bjoern Hoehrmann
* Ojan Vafai wrote:
This confusion seems to come up a lot since DOM is part of public-webapps
but uses a separate mailing list. Maybe it's time to reconsider that
decision? It's the editors of the specs who have the largest say here IMO.

The confusion is not going to go away by changing the proper mailing
list again; the case is a good example, since the commenter references
the right document and that document says to post to www-dom, but he
sent it elsewhere. Others will assume www-dom is the right list for
various reasons and so you will end up with discussions on both lists.

The main thing that "let's use some other list" does where participants
are not very well synchronized is annoying people with "You posted to
the wrong list!" mails. An option might be to merge them, but that may
be a first for the W3C, so it's unclear that the infrastructure would
support this.



Re: [webcomponents] HTML Parsing and the template element

2012-06-11 Thread Bjoern Hoehrmann
* Rafael Weinstein wrote:
I think looking at this as whether we are breaking the correspondance
between source and DOM may not be helpful -- because it's likely to
be a matter of opinion. I'd like to suggest that we look at more
precise issues.

There are several axes of presence for elements WRT to a Document:

-serialization: do the elements appear in the serialization of the
Document, as delivered to the client and if the client re-serializes
via innerHTML, etc...
-DOM traversal: do the elements appear via traversing the document's
childNodes hierarchy
-querySelector*, get*by*, etc: are the element's returned via various
document-level query mechanisms
-CSS: are the element's considered for matching any present or future
document-level selectors

And one might take the position that all of these should be defined in
terms of what you call DOM traversal, making them all the same, with-
in the confines of a DOM View, a concept that has fallen out of favour.

The goal of the template element is this: the page author would like
a declarative mechanism to author DOM fragments which are not in use
as of page construction, but are readily available to be used when
needed. Further, the author would like to be able to declare the
fragments inline, at the location in the document where they should be
placed, if & when they are needed.

Thus, template requires that its contents be present for only
serialization, and not for DOM traversal, querySelector*/etc..., or
CSS.

I do not see the "thus" here.



Re: [Process] Publishing use cases and requirements as official docs

2012-06-06 Thread Bjoern Hoehrmann
* Tobie Langel wrote:
Hi,

(Starting a new thread by replying to a mail and then changing the
subject and quoted text is not a good idea; just start a new mail.)

I recently stumbled upon a number of use case and requirements docs (such
as MediaStream Capture Scenarios[1] or HTML Speech XG[2]) that were
published as official-looking W3C documents (for whatever that means, at
least, it's not a page on a Wiki).

Only documents under http://www.w3.org/TR/ are official publications
as far as Working Group's Technical Reports go. The documents above
should follow policy http://www.w3.org/2005/03/28-editor-style.html 
for unpublished drafts, like not using Working Draft branding, but
currently don't.



Re: [Process] Publishing use cases and requirements as official docs

2012-06-06 Thread Bjoern Hoehrmann
* Tobie Langel wrote:
On Jun 6, 2012, at 8:46 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 Only documents under http://www.w3.org/TR/ are official publications
 as far as Working Group's Technical Reports go.

Can't WG release notes?

Working Groups can publish Working Group Notes as Technical Reports; they
would go under http://www.w3.org/TR/ as well. And they can publish postings
on a blog or publish some position statement on a mailing list and so on;
my point was mainly that if an address is not under http://www.w3.org/TR,
odds are you have stumbled on something that's long since been forgotten,
and links and dates and other things in and on them might be misleading.

(The same is sometimes true for documents under http://www.w3.org/TR but
there you should at least be able to follow the latest version links to
discover the current status of the work, if that has been published re-
cently.)



Re: [webcomponents] Custom Elements Spec

2012-05-08 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
I don't think that's really the argument. The argument is about
whether the long tail is going to be accessible (even if only a little
bit) or not at all.

That is, do we get

<select is="restricted-color-picker"><option value="Red"><option
value="Blue"></select>

styled as a restricted color picker or

<restricted-color-picker options="red blue"/>

styled as a restricted color picker but with no fallback semantics whatsoever.

Humans capable of writing the latter should never write the former.



Re: CfC: Add warnings to old DOM specifications; deadline April 18

2012-04-04 Thread Bjoern Hoehrmann
* Arthur Barstow wrote:
Ms2ger's (Mozilla) proposed text is in the following document and this is
a Call for Consensus to agree on this text. If this CfC passes, the text
will be added to the top of the Recommendations as was done with [D2V]:

   
 http://lists.w3.org/Archives/Public/public-webapps/2012AprJun/att-0044/warnings.html

DOM Level 3 Core essentially replaced DOM Level 2 Core for the features
relevant to both specifications, especially as far as unversioned imple-
mentations are concerned. Since making such status update edits is not
currently a widely adopted W3C practise, the pointer should go to the
DOM Level 3 Core specification, to avoid that people make assumptions a-
bout the absence of such notices in other specifications. At least this
would have to be explained in the note.

It does not seem like the people interested in DOM4 actually mean to
make a specification that could meaningfully supersede DOM Level 3
Core for all its current users; there are no Java bindings for instance.
As such I would regard the note as proposed as rather misleading.

For DOM Level 3 Load and Save the statement "have not seen adoption in
web browsers" is misleading, and Working Groups cannot declare certain
Recommendations as obsolete and no longer recommended. The Process for
that is to formally Rescind them, and that is what the group should do
if it wants to communicate anything more than that the specifications
are no longer being maintained. I note, again, that there are legal pro-
blems with rescinding Recommendations informally as proposed.

All the notes should reference formal publications of the W3C, not un-
published documents. The reason for that should be quite obvious, you
want to be able to counter any criticism of things in the editor drafts
by pointing out they are just that. That defense will not work very well
if referencing editor drafts becomes ubiquitous, even in formal settings
like the SotD. The press for instance might feel justified to report on
bad proposals in editors' drafts, and report them as the real deal with
support by member organizations if the formally published documents are
hard to find or are often out of date. Similarily, courts may find that
promoting informal documents in this manner is a form of deception. If
X, Y, and Z are in the Working Group, X puts something into an editor's
draft, Y implements it and Z sues Y over it, Y might argue they assumed
Z will commit to RF licensing, and Z might argue they never knew about
what X put into the draft because it was never formally published. Who
will win the argument if there is confusion about the standing of these
editor drafts? And who will pay for the damages to the W3C's reputation
this might cause, whatever the outcome? We avoid this kind of stuff by
simply accepting that referencing the formally published works is the
right, normal, proper, expected thing to do. If people want to change
that, they will have to talk to whoever will suffer the consequences. As
that is not the Working Group as such, it's the wrong forum to propose
this kind of change. Hence the first sentence of this paragraph.



Re: (aside) MIME type

2012-02-21 Thread Bjoern Hoehrmann
* Mark Baker wrote:
I wish they did, consistently. See RFC 4288 (just "media type") and
the registry itself ("MIME media type")
http://www.iana.org/assignments/media-types/index.html.  Plus
they're still routinely referred to as MIME types in many IETF
contexts, including the ietf-types list!

Changing the name on the IANA page is a matter of doing as asked in the
footer, namely mailing the webmaster about it. I did that some time ago
for some broken link I think, worked very nicely.



Re: CG for Speech JavaScript API

2012-01-31 Thread Bjoern Hoehrmann
* Glen Shires wrote:
We at Google propose the formation of a new Community Group to pursue a
JavaScript Speech API. Specifically, we are proposing this Javascript API
[1], which enables web developers to incorporate speech recognition and
synthesis into their web pages, and supports the majority of use-cases in
the Speech Incubator Group's Final Report [2]. This API enables developers
to use scripting to generate text-to-speech output and to use speech
recognition as an input for forms, continuous dictation and control. For
this first specification, we believe this simplified subset API will
accelerate implementation, interoperability testing, standardization and
ultimately developer adoption.

Looking at the HTML Speech Incubator Group Final Report, there is a propo-
sal for a <reco> element. Let's say the Community Group adopts this idea
and several browser vendors implement it. Is the assumption that Mozilla
would implement a <mozReco> element while Microsoft would implement some
<msReco> element if they choose to adopt this, or would they agree on an
<experimentalReco> element? Or would they implement a <reco> element? If
they implement plain <reco>, there is not much room for a Working Group,
where this might be standardized in the future, to make major changes,
meaning they would be mostly rubber-stamping the Community Group output.



Re: Obsolescence notices on old specifications, again

2012-01-24 Thread Bjoern Hoehrmann
* Glenn Adams wrote:
That doesn't really work for me. What would work for me is something like:

Although DOM Level 2 continues to be subject to Errata Management
(http://www.w3.org/2005/10/Process-20051014/tr.html#errata), it is no
longer being actively maintained. Content authors and implementers are
encouraged to consider the use of newer formulations of the Document
Object Model, including DOM4 (http://www.w3.org/TR/dom/), which is
currently in process for Advancing a Technical Report to Recommendation
(http://www.w3.org/2005/10/Process-20051014/tr.html#rec-advance).

The point is to say something along the lines of "If this document
contains errors, or text that is often misunderstood, do not expect
corrections or clarifications to appear here or in the associated
errata document, you are more likely to find them $elsewhere". The
W3C Process requires Working Groups to keep the errata document up
to date and to keep their Recommendations up to date by applying
errata to the Recommendations and publishing them through the PER
process. That is Errata Management as far as I would understand
the term, and the Working Group wishes to convey they won't do so.

The document would be subject to Errata Management only in so far
as that publishing such a note would not remove the option for the
Working Group to change its mind, but that is not useful information
for people the note would be addressed to: if the group did change
its mind, it can just update the note, new readers would get up to
date information, and people who read the note long ago would require
a note at $elsewhere to learn about new developments at the old lo-
cation. That the Working Group might change its mind they would al-
ready know due to the note being there in the first place.



Re: Obsolescence notices on old specifications, again

2012-01-23 Thread Bjoern Hoehrmann
* Ms2ger wrote:
The recent message to www-dom about DOM2HTML [1] made me realize that we 
still haven't added warnings to obsolete DOM specifications to hopefully 
avoid that people use them as a reference.

If you want to say more than that the specifications are no longer being
maintained and which newer specifications might contain more recent
definitions for the features covered, you will have to create a process for
that first (it would require Advisory Committee review for instance, as
otherwise you are likely to create unnecessary drama).

I propose that we add a pointer to the contemporary specification to the 
following specifications:

* DOM 2 Core (DOM4)
* DOM 2 Views (HTML)
* DOM 2 Events (D3E)
* DOM 2 Style (CSSOM)
* DOM 2 Traversal and Range (DOM4)
* DOM 2 HTML (HTML)
* DOM 3 Core (DOM4)

As far as I am aware, CSSOM is an unmaintained and incomplete set of
ideas, and what of it reflects actually implemented behavior and what
may change tomorrow is anyone's best guess, so that is clearly not a
suitable replacement.

DOM4 fails to define many widely implemented features and makes many
backwards-incompatible changes. I don't see how someone who wants to
implement the DOM in Java would use the draft rather than the
Recommendations as a starting point, especially not now, as it's rather
unclear how much consensus there is behind all the various proposed
changes outside a rather narrow group of people around the draft's
editors. It's a stretch to call it a specification at all; it's more
like a reference implementation in an ad-hoc assembly language that's
not very useful for people who are not Web browser DOM implementation
maintainers.

If you want to know if setAttributeNS changes the namespace prefix, you
go to the DOM Level 3 Core specification, which will tell you "If an
attribute with the same local name and namespace URI is already present
on the element, its prefix is changed ..." as the second sentence; you
don't go to DOM4 and debug the code there to combine the facts that it
sets `prefix` to a certain value in step #4, does not change it in
steps 5-9, and then does not use the value in step #10, into the
conclusion that, unless you missed some subtlety in the code or the many
definitions it relies on, it does not change it. And a reviewer is
unlikely to even notice the proposal would change the behavior.

The proposal pretends that .createElement creates elements in the XHTML
namespace. That is not what all current browsers do, it's not what non-
browser implementations currently do, and probably not what they should
do or are likely to do in the future. So how does that help people who
do not already know about DOM4 and would not use DOM Level 2 Core as a
reference anyway?

For DOM Level 2 HTML the proposed alternative is indeed better at least
in some regards like coverage, so a pointer would make some sense.

and a recommendation against implementing the following specifications:

* DOM 3 Load and Save
* DOM 3 Validation

You will have to use the Rescinding process for that, and this would
require a legal analysis of the impact of rescinding the Recommendations
(Validation does not seem to indicate under which Patent Policy it has
been produced, and Load and Save was produced under the transitional
rules; under the 2004 Patent Policy rescinding a Recommendation has
implications on licensing requirements, and it is not clear to me whether
people who wish to implement either specification in a year from now to
replace some legacy product with backwards-compatibility would be worse
off if the documents were rescinded).

I also note that you have made no argument why these should be rescinded
beyond perhaps that web browser developers might not want to implement
it currently if they haven't already done so. That is not something to
capture in the status of these documents.



Re: [XHR] responseType json

2012-01-06 Thread Bjoern Hoehrmann
* Jarred Nicholls wrote:
This is an editor's draft of a spec, it's not a recommendation, so it's
hardly a violation of anything.  This is a 2-way street, and often times
it's the spec that needs to change, not the implementation.  The point is,
there needs to be a very compelling reason to breach the contract of a
media type's existing spec that would yield inconsistent results from the
rest of the web platform layers, and involve taking away functionality that
is working perfectly fine and can handle all the legit content that's
already out there (as rare as it might be).

You have yet to explain how you propose WebKit should behave, and it is
rather unclear to me whether the proposed behavior is in line with the
existing HTTP, MIME, and JSON specifications. An HTTP response with

  Content-Type: application/json;charset=iso-8859-15

for instance must not be treated as ISO-8859-15 encoded as there is no
charset parameter for the application/json media type, and there is no
other reason to treat it as ISO-8859-15, so it's either an error, or
you silently ignore the unrecognized parameter.
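As a concrete sketch of that point (the helper name and return values below are invented for illustration, not taken from any specification): a recipient that follows the application/json registration has no charset parameter to honour, so either it treats the header as an error or it silently drops the parameter.

```javascript
// Sketch: split a Content-Type value into type and parameters and
// decide what a conforming recipient may do with a charset parameter
// on application/json.
function jsonCharsetHandling(contentType) {
  var parts = contentType.split(";").map(function (s) { return s.trim(); });
  var params = {};
  parts.slice(1).forEach(function (p) {
    var eq = p.indexOf("=");
    if (eq > 0) params[p.slice(0, eq).toLowerCase()] = p.slice(eq + 1);
  });
  if (parts[0].toLowerCase() !== "application/json") return "not-json";
  // application/json registers no charset parameter, so the recipient
  // must not switch decoders because of it: error out, or ignore it.
  return "charset" in params ? "error-or-ignore" : "ok";
}
```

So `jsonCharsetHandling("application/json;charset=iso-8859-15")` yields "error-or-ignore"; what it must not yield is "decode as ISO-8859-15".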



Re: [XHR] upload progress events (successfully uploaded)

2011-12-23 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
I plan on making these two changes:

* Instead of saying "If the request entity body has been successfully  
uploaded" I will say "If the request entity body has been fully  
transmitted" to make it clear we do not need to wait for the response. I  
think that was the original intent, I just did not know how to word it  
properly. I think "fully transmitted" should be clear, but if anyone has a  
better idea that would be more than welcome.

With HTTP over TCP you would need a response on the TCP level to tell
whether the server actually received the upload completely, and
conceptually you want to deliver the whole Request message successfully;
the entity body is enveloped in the Request message. I am not sure why
you would even discuss HTTP Request messages separately from Response
messages for progress events, but in any case, if you want it to be
clear that this is about HTTP message delivery on the transport level,
and not about HTTP application level success, you should make this
more explicit than through subtle choice of words.

Something like "when the Request message has been delivered, even if
the server has not yet begun transmitting a Response message, and
regardless of the status code ..." does not leave much room for
interpretation.



Re: [XHR] responseType json

2011-12-05 Thread Bjoern Hoehrmann
* Glenn Adams wrote:
What do you mean by "treat content that clearly is UTF-32 as
UTF-16-encoded"? Do you mean interpreting it as a sequence of unsigned
shorts? That would be a direct violation of the semantics of UTF-32, would
it not?

Consider you have

  ...
  Content-Type: example/example;charset=utf-32

  FF FE 00 00 ...

Some would like to treat this as a UTF-16 encoded document starting with
U+0000 after the Unicode signature, even though it clearly is UTF-32.
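To make the ambiguity concrete, here is a sketch of signature sniffing (the function is mine, not from any implementation): a detector that checks the four-octet UTF-32 signatures before the two-octet UTF-16 ones classifies FF FE 00 00 as UTF-32, while one that stops after two octets reports UTF-16 followed by a NUL character.

```javascript
// Sketch: classify a byte sequence by its Unicode signature, testing
// the longer UTF-32 forms before the UTF-16 forms so that
// FF FE 00 00 comes out as UTF-32LE rather than UTF-16LE.
function sniffSignature(b) {
  if (b.length >= 4 && b[0] === 0x00 && b[1] === 0x00 &&
      b[2] === 0xFE && b[3] === 0xFF) return "UTF-32BE";
  if (b.length >= 4 && b[0] === 0xFF && b[1] === 0xFE &&
      b[2] === 0x00 && b[3] === 0x00) return "UTF-32LE";
  if (b.length >= 3 && b[0] === 0xEF && b[1] === 0xBB &&
      b[2] === 0xBF) return "UTF-8";
  if (b.length >= 2 && b[0] === 0xFE && b[1] === 0xFF) return "UTF-16BE";
  if (b.length >= 2 && b[0] === 0xFF && b[1] === 0xFE) return "UTF-16LE";
  return "none";
}
```

The ordering of the tests is the whole point: reverse the UTF-32 and UTF-16 checks and you get the disagreement between components described above.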



Re: [XHR] responseType json

2011-12-04 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
I tied it to UTF-8 to further the fight on encoding proliferation and  
encourage developers to always use that encoding.

The fight here is for standards. You know, you read the specification,
create some content, and then that content works in all implementations
that claim to implement the specification as you would assume based on
reading the specification. You want to know how JSON content is handled
by reading the JSON specification, and not the documentation for each
and every JSON processor.

That said, there are a number of media types by now that use the +json
convention, which is not actually defined anywhere authoritatively, and it
is common to use other media types than application/json for JSON
content, like application/javascript, and the rules there vary. Should it
be possible to use the UTF-8 Unicode Signature? Types differ on that and
it seems likely that implementations do as well.

I did not reverse-engineer the current proposal, but my impression is it
would handle text and json differently with respect to the Unicode
signature. I do not think that would be particularly desirable if true.

Anyway, given that it's difficult to tell which rules apply without some
specification for +json and other things, I can't find much wrong with
forcing the encoding to be UTF-8, especially because the other options
that the JSON specification allows would result in a fatal error, which
would be the same if implementations tried to detect the encoding, but
then decided they do not support, say, UTF-32 encoded JSON. But it's not
clear to me that the Unicode signature should result in a fatal error,
if you ignore what the JSON specification says about encodings.
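For comparison, the detection the JSON specification (RFC 4627, section 3) describes can be sketched as follows (the function name is mine): since JSON text without a signature begins with two ASCII characters, the pattern of zero octets among the first four octets identifies the encoding form.

```javascript
// Sketch of RFC 4627 section 3 encoding detection:
//   00 00 00 xx -> UTF-32BE    00 xx 00 xx -> UTF-16BE
//   xx 00 00 00 -> UTF-32LE    xx 00 xx 00 -> UTF-16LE
//   otherwise   -> UTF-8
function detectJsonEncoding(b) {
  if (b.length < 4) return "UTF-8"; // too short to tell; assume UTF-8
  var z = b.slice(0, 4).map(function (x) { return x === 0; });
  if ( z[0] &&  z[1] &&  z[2] && !z[3]) return "UTF-32BE";
  if ( z[0] && !z[1] &&  z[2] && !z[3]) return "UTF-16BE";
  if (!z[0] &&  z[1] &&  z[2] &&  z[3]) return "UTF-32LE";
  if (!z[0] &&  z[1] && !z[2] &&  z[3]) return "UTF-16LE";
  return "UTF-8";
}
```

An implementation doing this and then rejecting what it detects as UTF-32 ends up in the same place as one that simply mandates UTF-8, which is part of why forcing UTF-8 is defensible.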



Re: [XHR] responseType json

2011-12-04 Thread Bjoern Hoehrmann
* Henri Sivonen wrote:
Browsers don't support UTF-32. It has no use cases as an interchange
encoding beyond writing evil test cases. Defining it as a valid
encoding is reprehensible.

If UTF-32 is bad, then it should be detected as such and be rejected.
The current idea, from what I can tell, is to ignore UTF-32 exists,
and treat content that clearly is UTF-32 as UTF-16-encoded, which is
much worse, as some components are likely to actually detect UTF-32,
they would disagree with other components, and that tends to cause
strange bugs and security issues. Thankfully, that is not a problem
in this particular case.



Re: XPath and Selectors are identical, and shouldn't be co-developed

2011-11-30 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
Disclaimer: I'm a CSSWG member, was previously a web developer by
trade (now am a webkit engineer/spec author), and was only vaguely
aware of XPath a week ago.

XPath support has been available in web browsers for over a decade. If you
have so far been unaware of it, that likely means you rarely if ever run
into problems where XPath offers a good solution, and have basically no
insight into the community that does. I, on the other hand, regularly run
into such problems.

Just this weekend I wrote an MP3 player. It hosts a web browser for the
user interface, to make it easy to adjust it for people who are familiar
with technologies implemented by web browsers. One feature it has is
navigation through the playlist using arrow keys. You have a track that is
currently selected; "up" should go to the previous track, "down" to the
next one, in playlist order.

While there were various initiatives to standardize focus navigation in
2006, if I recall correctly, they have not quite materialized yet, so it
takes scripting to implement this. Tracks in the playlist are table rows,
and so pressing "up" should move the cursor to the single node that is
the previous table row from the perspective of the current one. That is

  var previous = current.selectSingleNode("preceding::tr[1]")

as it has been available in Internet Explorer for over a decade. There
is no Selectors equivalent, and the JavaScript code for this is rather
involved. There is also no proposal for Selectors extensions, or
JavaScript or DOM or whatever extensions to make this considerably easier.
Yes, those could be implemented, but that argument is from the 1990s and it
does not really result in actual implementations.

Also note in particular that you can read the code above even without
knowing anything about XPath and yet you will fully understand from the
context it is used in what it does. It is how code should be written.
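For comparison, here is a sketch of the hand-written fallback without XPath support. To stay self-contained it only assumes nodes exposing nodeName, parentNode, previousSibling and childNodes, so real DOM elements would work too; both helper names are mine.

```javascript
// Nearest tr strictly before `node` in document order, excluding
// ancestors -- the by-hand equivalent of the one-line XPath query.
function previousTr(node) {
  function lastTr(n) { // last tr, in document order, in n's subtree
    for (var i = n.childNodes.length - 1; i >= 0; i--) {
      var hit = lastTr(n.childNodes[i]);
      if (hit) return hit;
    }
    return n.nodeName === "TR" ? n : null;
  }
  var cur = node;
  while (cur) {
    while (cur && !cur.previousSibling) cur = cur.parentNode; // climb
    if (!cur) return null;
    cur = cur.previousSibling;
    var hit = lastTr(cur);
    if (hit) return hit;
  }
  return null;
}

// Tiny stand-in for createElement/appendChild, so the function can
// be exercised outside a browser.
function el(name, children) {
  var n = { nodeName: name, childNodes: children || [],
            parentNode: null, previousSibling: null };
  n.childNodes.forEach(function (c, i) {
    c.parentNode = n;
    c.previousSibling = i > 0 ? n.childNodes[i - 1] : null;
  });
  return n;
}
```

In Internet Explorer the one-liner replaces all of this; in browsers with DOM Level 3 XPath, document.evaluate can evaluate the same expression.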

In the recent discussions about XPath, I keep hearing a particular
theme that I believe is untrue, namely that XPath and Selectors
address different use-cases.  For example, Liam recently sent an email
containing the following:

XPath selectors give a different way of looking at finding things than
CSS selectors and probably appeal in differing amounts to different
people.

XPath has different goals from CSS selectors, and there's not actually a
battle between them.

Neither of these are true.  The second could have been defended as
true several years ago, but not today.  I will defend my statement,
and then make the further argument that, due to the two being
identical, it is a bad idea to develop both of them.

If you are actually interested in discussing why some people have some
need for good XPath support, then it would be helpful if you could cut
down on the rhetoric. If you make this about what is "true", then you
are likely to create dynamics in the discussion that are unhelpful,
including that people assume you care mostly about revealing The Truth,
and don't actually care about views that differ from your own.

I note that I find the first quote quite accurate and important, but I
doubt you can actually appreciate how turning your thoughts into XPath
syntax is different from turning them into Selectors syntax with no
experience in actually using XPath, in addition to you being a man on a
mission. Add that you have chosen to express yourself in a manner that
actively discourages discourse, and I won't bother to try.

Both Selectors and XPath take some arbitrary notion of nodes with
certain aspects and a tree structure and, starting from the set of all
relevant nodes, repeatedly filter and transform the set until arriving
at a result.  They both do this in effectively identical ways; this
isn't like some concept of Turing equivalence, which can easily be
meaningless.

No.



Re: XPath and Selectors are identical, and shouldn't be co-developed

2011-11-30 Thread Bjoern Hoehrmann
* Yehuda Katz wrote:
Most people would accomplish that using jQuery. Something like:

var previous = $(current).closest("tr").prev()

I'm not exactly sure what `current` is in this case. What edge-cases are
you worried about when you say that the JavaScript is quite involved?

It is unlikely that your code is equivalent to the code I provided, and
sure enough, you can point out in all discussions about convenience APIs
that people could use a library. I don't see how that is relevant here.



Re: XPath and Selectors are identical, and shouldn't be co-developed

2011-11-30 Thread Bjoern Hoehrmann
* Yehuda Katz wrote:
Out of curiosity, I'd like to see the DOM in question, the starting element
and the element you were trying to select. I think how people do it in the
real world is actually relevant.

There is a table for each album and in an album each tbody/tr element is
a track in that album.



Re: XPath and find/findAll methods

2011-11-25 Thread Bjoern Hoehrmann
* Robin Berjon wrote:
If no one else steps up to it I can, but I was under the impression that
our good friends from Opera had a solution they could contribute — I 
would hope in the shape of an editor :)

I started it some years ago, but then figured if anybody brought it up
the selectors shirts would have yet another trollfest, followed by
namespace mobbing, and I am too young to watch only the re-runs. I'm happy
to contribute some tests though -- if the specification won't be a goto
assembly with eye-burning formatting and inhuman popup boxes...



Re: XPath and find/findAll methods

2011-11-22 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
I know you're being somewhat hostile because you like XPath and we're
essentially saying "ignore XPath, it's dead", but still, you're
arguing badly.

The web platform has a single selection syntax that has won without
question.

When Robin starts referring to himself in the third person, pretends to
represent some newspeak "web platform" and claims it's "without question"
that he is right, then you could probably say he is arguing badly. Such
sad attempts at manipulating the debate, and discouraging participation
by people who might disagree, usually come from elsewhere though.

If it lacks some abilities, extending it is almost
certainly better for both implementations and authors than pulling in
a completely different selection syntax that is *almost* identical in
functionality but happens to include those abilities that were
lacking.  If this were any other pair of technologies, I highly doubt
you'd be able to make yourself argue that having two gratuitously
different syntaxes that authors have to regularly switch between based
on the exact property they want, and which can't be used together in
any simple way, is a good situation for us to create.  That's almost a
textbook example of valuing spec authors over everyone else.

Selectors are even less expressive today than what was proposed at the
time Robin brought this issue up the first time on www-style, over a
decade ago, as far as document structure is concerned. The main thing the
CSS Working Group has done since was printing some "Selectors Fan" shirts.
I am not sure who that is valuing, but it's neither authors nor users.

Your argument about languages is interesting, of course. If you want to
set styles statically, you use CSS syntax, but if you want to do so
dynamically, you have to use JavaScript syntax once you leave the trivial
feature set of CSS syntax. Maybe using JavaScript syntax for both would
be better, so authors don't have to learn a whole new language? Authors
might actually agree, if they see future style sheets full of variables,
mixins, media queries, feature detection rules, plus their jQuery code
to fill the styling gaps, that JSSS should have been the way to go.



Re: XPath and find/findAll methods

2011-11-22 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
Come on, Robin, you're being unreasonable.  You and I both know
there's a huge difference between "new features require small amounts
of new syntax" and "oh look, it's an entirely different language".

I can't make heads or tails of the 'nth' syntax despite implementing it
a number of times, I always have to look up which direction '~' goes, I
can't imagine what `a! div` or `:matches(:scope a) > div` might match,
and `:has(a) > div` requires a good bit of meditation. If I go to the
draft, it tells me that `:matches(:scope a) > div` isn't even valid.

When I learned XPath, I already knew "." from file paths, it stands for
the context you are in, ".." is for the context one level above, "/" is
the root, parent and child are separated by "/", and I knew from many
programming languages that "+" is addition, "|" is union, "list[3]" refers
to an element in a list, "*" is a wildcard, and "[@price > 30]" is a
Where query.

It doesn't really matter that the languages are different, what matters
is how they differ, how they are constructed, what experience learners
may already have, what tools are available that could aid them, and so
on. The underlying argument that `:nth-last-of-type(+3n - 2)` is easier
because authors have used `#header` in the past is rather silly.

Robin, you're familiar with XPath.  You look at it and go "Oh, neat,
it has all the features I need!".  I'm not.  I look at it and go "Oh,
look, somebody took a subset of Selectors, gave it a different syntax
for some reason, and then added some new stuff.  I can't use the new
Selectors stuff with the new XPath stuff, though, and now I have to
remember two different syntaxes for the basic stuff.  I am a sad
tab.".  I am quite certain that my position is closer to that of the
average author than yours.

This is rather a matter of "I need to make this query but browsers I
care about don't all support making it with CSS selectors". I am glad
I can't use XPath either, because that option would make me sad.

I don't think there is much interest in browsers adopting XPath 2.0;
it would be a slow and buggy mess that would take many years to
stabilize across browsers and platforms, as is usual for everything they
attempt to implement. But you are not going to beef up selectors so
they could somehow compete with the expressiveness of XPath without
making the syntax even worse than it already is; and I rather doubt it
would be a good idea to have advanced selectors only for scripting.

For authors it would be much better to standardize the .selectNodes
and .selectSingleNode methods and then find scripting means to combine
them, like being able to compute the difference between two node sets
with a simple call. Microsoft, with LINQ to XML, language additions to
C#, and the generic IEnumerable extension methods have made a rather
palatable DOM and JavaScript alternative in this spirit, with XPath
support of course, but you wouldn't need it so often there. In some
ways, selectors there can look more like JSSS...



Re: Firefox bug: Worker load ignores Content-Type version parameter

2011-11-19 Thread Bjoern Hoehrmann
* Simon Pieters wrote:
Workers ignore the MIME type.

Yes, this thread was initially about the problems that causes.



Re: Synchronous postMessage for Workers?

2011-11-18 Thread Bjoern Hoehrmann
* Joshua Bell wrote:
Jonas and I were having an offline discussion regarding the synchronous
Indexed Database API and noting how clean and straightforward it will allow
Worker scripts to be. One general Worker issue we noted - independent of
IDB - was that there are cases where Worker scripts may need to fetch data
from the Window. This can be done today using bidirectional postMessage,
but of course this requires the Worker to then be coded in now common
asynchronous JavaScript fashion, with either a tangled mess of callbacks or
some sort of Promises/Futures library, which removes some of the benefits
of introducing sync APIs to Workers in the first place.

I note that you can overcome manually coded continuation passing style
by exposing the continuations. In JavaScript 1.7 Generators do that in
some limited fashion: a generator can yield control to the calling code,
which can then resume the generator at its discretion, allowing you to
implement Wait functions that wait for whatever you want them waiting
for, for however long you might want them to wait for it. Some years ago
I made http://lists.w3.org/Archives/Public/www-archive/2008Jul/0009.html
as a quick example of using generators in this manner.

The generator there essentially schedules resource loads and yields
control until the resource finishes loading, in which case the event
handler would resume the generator. A more realistic example would
schedule all the resources needed to proceed and watch out for load
failures and so on, but it demonstrates that cooperative multitasking
is quite feasible.
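A stripped-down sketch of that driver loop, using ES6 function* generators rather than the JavaScript 1.7 generators discussed above; the names and the results table are invented for illustration.

```javascript
// Sketch: the task yields the name of the operation it is waiting
// for; the driver resumes it with the result. The task body reads
// sequentially even though control leaves it at every yield.
function run(task, results) {
  var gen = task();
  var r = gen.next();
  while (!r.done) {
    // A real driver would return to the event loop here and call
    // next() from the completion handler of the yielded operation.
    r = gen.next(results[r.value]);
  }
  return r.value;
}

// Usage: reads top to bottom, no callback nesting.
var total = run(function* () {
  var a = yield "load-a"; // looks blocking; only this task waits
  var b = yield "load-b";
  return a + b;
}, { "load-a": 1, "load-b": 2 });
```

The generator never blocks the thread; it merely suspends itself, which is what makes this a cooperative rather than a preemptive or truly blocking scheme.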

Obviously, even if ES.next provides better support for this, it is a
good bit more complicated to use directly, and easier to get wrong, than
a dedicated solution with limited scope as you propose for this special
case, but adding blocking APIs here and there to address a few use cases
also doesn't strike me as the best approach.



Re: TAG Comment on

2011-11-15 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
On Tue, Nov 15, 2011 at 5:28 PM, Glenn Adams gl...@skynav.com wrote:
 Perhaps. But widely implemented does not necessarily imply widely used. In
 any case, support for or use of a feature of a WD or CR does not imply it
 must be present in REC.

Use of a feature does, in fact, imply that, unless there are *very*
good reasons why not.  Specs and implementations advance together, and
both constrain the other.

Well, they advance from Working Draft to Working Draft, and by the time
there is a Call for Implementations it is too late to make changes, as
the implementations have already been shipping for years. The Last Call
is meant to avoid that, providing an opportunity to build a consensus
even with people and organizations that cannot follow the day-to-day
working group and implementation progress to prioritize their reviews.
Which the Last Calls relevant to this thread obviously do not provide.



Re: What type should .findAll return

2011-11-11 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
Could you point me to an explanation of what [[Class]] represents in
ecmascript?  It's a little hard to search for.

http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf
Section 8.6.2. for instance.



Re: Enable compression of a blob to .zip file

2011-10-30 Thread Bjoern Hoehrmann
* Cameron McCormack wrote:
On 30/10/11 10:54 AM, Charles Pritchard wrote:
 One reason I've needed inflate is for svgz support. Browser vendors have
 consistently left bugs and/or ignored the spec for handling svgz files.
 SVG is really intended to be deflated.

All major browsers have support for gzipped SVG documents through 
correct Content-Encoding headers, as far as I'm aware.  gzipped SVG 
documents served as image/svg+xml without Content-Encoding:gzip should 
be rejected, according to the spec.

Then he probably means file system files and not HTTP files, and support
there has indeed been spotty in the past.



Re: QSA, the problem with :scope, and naming

2011-10-25 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
Did you not understand my example?  el.find("+ foo, + bar") feels
really weird and I don't like it.  I'm okay with a single selector
starting with a combinator, like el.find("+ foo"), but not a selector
list.

Allowing "+ foo" but not "+ foo, + bar" would be really weird.



Re: QSA, the problem with :scope, and naming

2011-10-25 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
On Tue, Oct 25, 2011 at 4:56 PM, Ojan Vafai o...@chromium.org wrote:
 On Tue, Oct 25, 2011 at 4:44 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 * Tab Atkins Jr. wrote:
 Did you not understand my example?  el.find("+ foo, + bar") feels
 really weird and I don't like it.  I'm okay with a single selector
 starting with a combinator, like el.find("+ foo"), but not a selector
 list.

 Allowing "+ foo" but not "+ foo, + bar" would be really weird.

 Tab, what specifically is weird about el.find("+ foo, + bar")?

Seeing a combinator immediately after a comma just seems weird to me.

A list of abbreviated selectors is a more intuitive concept than a
list of selectors where the first and only the first selector may be
abbreviated. List of type versus special case and arbitrary limit.
If one abbreviated selector isn't weird, then two shouldn't be either
if two selectors aren't weird on their own.



Re: QSA, the problem with :scope, and naming

2011-10-20 Thread Bjoern Hoehrmann
* Alex Russell wrote:
I strongly agree that it should be an Array *type*, but I think just
returning a plain Array is the wrong resolution to our NodeList
problem. WebIDL should specify that DOM List types *are* Array types.
It's insane that we even have a NodeList type which isn't a real array
at all.

It is quite normal to consider lists and arrays to be different things.
In Perl for instance you can use list operations like `grep` on arrays,
but you cannot use array operations like `push` on lists. For JavaScript
programmers it actually seems common to confuse the two, like with

  var node_list = document.getElementsByTagName('example');
  for (var ix = 0; ix < node_list.length; ++ix)
    node_list[ix].parentNode.removeChild(node_list[ix]);

which would remove all the children if node_list were an array like any
other. Pretending node lists are arrays in nomenclature would likely add
to that.
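To illustrate outside a browser (a plain object stands in for the live NodeList here; nothing below is a real DOM API, and all names are mine): the forward loop above drops every other element, while snapshotting into a true array first removes everything, as an array-minded programmer would expect.

```javascript
// Minimal stand-in for a live collection: length and item() always
// reflect the current backing array, as a live NodeList reflects
// the document.
function liveList(backing) {
  return {
    get length() { return backing.length; },
    item: function (i) { return backing[i]; }
  };
}

// The loop from the mail: removing while iterating forward skips
// every other entry, because the list shrinks under the loop.
function removeAllNaive(backing) {
  var list = liveList(backing);
  for (var ix = 0; ix < list.length; ++ix) {
    var n = list.item(ix);
    backing.splice(backing.indexOf(n), 1); // stands in for removeChild
  }
}

// The fix: copy the live list into a real array first, then remove.
function removeAllSnapshot(backing) {
  var list = liveList(backing), copy = [];
  for (var i = 0; i < list.length; i++) copy.push(list.item(i));
  copy.forEach(function (n) {
    backing.splice(backing.indexOf(n), 1);
  });
}
```

Running removeAllNaive on ["p", "q", "r", "s"] leaves ["q", "s"] behind; removeAllSnapshot empties the list.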



Re: [DOM4] XML lang

2011-10-05 Thread Bjoern Hoehrmann
* Marcos Caceres wrote:
1. I need to find elements of a particular type/name that are in a
particular language (in tree order), so that I can extract that
information to display to a user.

  .selectNodes("//type[lang('language')]")

2. I need to check what the language of an element is (if any),
without walking up the tree to look for an xml:lang attribute.
Walking the tree is expensive, specially when XML says that xml:lang
value is inherited by default.

  .selectSingleNode("ancestor-or-self::*[@xml:lang][1]/@xml:lang")

Where these methods are not supported you can use DOM Level 3 XPath,
and in the first case you can use the CSS Selectors API as well.

//using BCP 47 [lookup] algorithm 
var listOfElements = document.getElementsByLang("en-us"); 

If you really want to write code, you might want to check out Silver-
light which has an object model not stuck in the mid-1990s. There you
can use the lazy LINQ to XML methods, the latter case would be e.g.

  .AncestorsAndSelf().Attributes(XNamespace.Xml + "lang").First()

I note that your getElementsByLang method does not actually do what
you want as it does not filter by element type, you would end up with
possibly an enumeration of all elements in the document and would then
have to filter them all by element type. Similarily, if you filter by
element type and then filter by comparing the language as in

listOfElements[1].lang == "en"; 

you might end up with rather inefficient code: the implementation may
for instance redundantly walk the tree for each element. That's also
true when evaluating query language expressions, but it's easier for
the implementation to recognize you want to filter by both than if it
has to comprehend your custom filtering code. Also note problems such
as comparing "en-us" and "en". It may well be wiser to have a testing
method like .isLanguage('en') instead or in addition, but even then
it's reasonable to expect authors to use this incorrectly and perform
their own substring matches and so on.
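A testing method of the kind suggested might amount to a BCP 47 style
prefix check; this sketch (hypothetical name, simplified matching, not
any proposed API) shows why it beats naive string comparison:

```javascript
// Does language tag 'tag' fall within language range 'range'?
// 'en-US' matches 'en', but 'en' does not match 'en-us'; a plain
// equality test gets both of these wrong.
function isLanguage(tag, range) {
  tag = tag.toLowerCase();
  range = range.toLowerCase();
  return tag === range || tag.indexOf(range + '-') === 0;
}
```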

In principle there is a point in exposing some of this at a low level
like the DOM as there may actually be many sources for an element's
language, but having support for this in the query languages, there is
not much to be gained by supporting additional APIs for it.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Adding Web Intents to the Webapps WG deliverables

2011-09-20 Thread Bjoern Hoehrmann
* Ian Fette wrote:
I don't get it. The overhead of getting all the other browsers to join the
WG you mention is just as high, especially when there's no indication that a
number of browsers intend to join that group. I don't think it's a random
process question, I think it's rather fundamental issue. If we agree that
the right way forward is to create a new WG for each API [...]

The idea is to record in charters what working groups are expected to
deliver to allow proper review, planning, allocation of resources, and
so on. A coarse characterization will make those things difficult, and
a very narrow characterization will make taking up new work difficult.
Surprising new requirements come with more overhead than things that
have been anticipated well in advance; but that's not one WG per API.

I note that these processes are not designed to minimize hurdles for a
handful of browser vendors, but for broad consensus and high quality
deliverables that arrive reasonably predictably. That requires parties
to synchronize. Removing synchronization pressure leads to what we have
with the Web Applications Working Group, which after six years has a
XMLHttpRequest test suite consisting of nothing but "There is a good
chance a test suite for XMLHttpRequest will be placed around here" and
no XMLHttpRequest specification to show.

There are drafts and submissions, but it would seem people regard this
state of affairs as good enough and so there is little pressure to pull
things together and push out a proper release. And I note the same goes
for earlier stages as well, the process being largely unpredictable as
it is driven by when one or more of a handful people feel like it, it
becomes very difficult for out-of-the-loop parties to meaningfully en-
gage in the process. If you mostly want to sanity-check reasonably com-
plete documents, which should you review? The first Last Call? Second?
Third? Fourth? Fifth Last Call? Third Candidate Recommendation? When can
we expect a reasonable test suite to verify everything is interoperable
enough to remove vendor prefixes? Seven years? Oh there is a couple of
Webkit specific tests somewhere in the source tree, we think this is
good enough so we just throw it out there with no proper test suite?

It's not a matter of "this has more overhead than that", but a matter
of what we, all of us, want to get out of the standards process, and
what of that can reasonably be achieved within constraints we have. It
may be for instance that we are moving too fast, and should not work
at this point on Web Intents and instead use what limited resources
there are to finish some of the pending work first. It may also be we
don't care about this silly business with Recommendations and Tests so
this can easily be added. Or we might find that we need to invest more
resources so we can do more things in parallel. Or something else. But
without an understanding what the process should produce and support,
which we quite clearly lack, at least collectively, there is no point
in arguing that process A is better than process B.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [DOM4] Remove Node.isSameNode

2011-09-09 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
It's a completely useless function. It just implements the equality
operator. I believe most languages have an equality operator already.

It's quite normal for object models to not guarantee that the equality
operator works for object identity comparison, COM being a prime example
where this is only guaranteed for IUnknown pointers. Leading to issues
like http://www.mail-archive.com/mozilla-xpcom@mozilla.org/msg05045.html
but that's life. It is also useful to have this as function available in
environments where the object identity operator is not available as a
function. In JavaScript, if function arity is ignored as is typical,

  [].some.call(nodelist, node.isSameNode, node)

can be used to check whether a node is in a node list, the equivalent

  [].some.call(nodelist, function(elem) { return elem === node; })

is a good bit worse if you want to use this as condition in an if block
as you would run past line length restrictions easily and would have to
put this on several lines. Using proxying wrappers is quite a common
practice http://code.activestate.com/lists/python-list/399236/ so I do
not see why everybody should spend resources removing this method. It's
widely implemented, used, solves an actual problem, has a spec and test
cases, I am not aware of bugs in widely used implementations, if all DOM
methods were like that we'd all be rejoicing.
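The arity point can be checked with a minimal stand-in (plain objects,
not a real DOM; only the isSameNode shape from the message is assumed):

```javascript
// A node whose isSameNode is just identity, as the spec defines it.
function Node(name) { this.name = name; }
Node.prototype.isSameNode = function (other) { return this === other; };

var a = new Node('a'), b = new Node('b'), c = new Node('c');
var nodelist = { 0: a, 1: b, length: 2 };  // array-like, like a NodeList

// Method form from the message: 'a' is passed as the thisArg, and
// Array.prototype.some ignores the extra (index, array) arguments.
var found = [].some.call(nodelist, a.isSameNode, a);

// Equivalent closure form it replaces:
var found2 = [].some.call(nodelist, function (elem) { return elem === a; });

// A node not in the list is not found.
var missing = [].some.call(nodelist, c.isSameNode, c);
```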
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Reference to the HTML specification

2011-09-06 Thread Bjoern Hoehrmann
* Julian Reschke wrote:
I do see that it's a problem when people use outdated specs; but maybe 
the problem is not the being dated, but how they are published. As far 
as I can tell, there's not nearly as much confusion on the IETF side of 
things, where Internet Drafts actually come with an Expiration Date.

Publication and document status procedures are meant to support working
towards a goal, like to publish the XMLHttpRequest proposal as a Recom-
mendation. But there is no expectation that the XMLHttpRequest proposal
will be published as a Recommendation within some reasonable timeframe. 
There is nothing surprising about people getting confused by this.

You wouldn't find much support within the IETF to make a XMLHttpRequest
Working Group chartered with nothing but to fiddle with some draft over
the course of five years with no timeline or expectation to produce RFCs
or test suites or anything else, so you can't really compare the two.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: From-Origin FPWD

2011-07-31 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
   http://www.w3.org/TR/from-origin/

The proposed `From-Origin` header conveys a subset of the information
that is already available through the Referer header. As it is, it is
very rare for the Referer header, or corresponding interfaces that are
available to scripts, to be absent and the draft does not even attempt
to argue how the reasons for the Referer header's absence don't apply
to `From-Origin`. It also lacks a description of the problem domain.

As an example, it is unclear that there are important use cases that
require telling a.example.com apart from b.example.com (assuming that
example.com is a public suffix while the subdomains are not). That's
important if one were to design the mechanism where it's easy for a
site to verify if the from-origin is acceptable, but hard to learn
the actual from-origin when it is not.

(Consider as a trivial example you send the MD5 of the from-origin
instead of the actual value: verification would be easy, but learning
the actual value would require to go through a possibly long list of
possible origins; of course, brute force against MD5 is trivial, and
if you use the whole origin you couldn't check for *.example.com,
without trying all possibilities for the wildcard, but cryptography
offers other options, and knowing constraints is important to find and
evaluate them.)

Similarily, it is unclear whether there is actually an important need
to know more than same 'public suffix' if Referer is there anyway
most of the time (you can obviously make up examples to illustrate a
difference, but there are usually design alternatives that render such
examples unimportant).

Clearly Referer information would be available more often if there
had always been a policy to only strip path and query information in-
stead of stripping it entirely, and the only difference between such
a policy and the proposed header -- assuming the idea is that it'd be
available 'always' -- that there wasn't such a policy. But that is
about all there is to this proposal at the moment, and I don't find
that very convincing.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Mutation events replacement

2011-07-20 Thread Bjoern Hoehrmann
* Dave Raggett wrote:
Perhaps we need to distinguish auto generated attributes from those that 
are set by markup or scripts. Could you please clarify for me the 
difference between the html style attribute and the one you are 
referring to?  My understanding is that the html style attribute is set 
via markup or scripts and *doesn't* reflect all of the computed style 
properties for this DOM node.

You can manipulate the style attribute using DOM Level 2 Style features
like the ElementCSSInlineStyle interface instead of setting the value
as a string as you would when using .setAttribute and similar features.

  <p>...</p>
  <script>
  onload = function() {
    var p = document.getElementsByTagName('p').item(0);
    p.style.margin = '0';
    alert(p.getAttribute('style'))
  }
  </script>

This would alert something like `margin-top: 0px; margin-right: 0px;
margin-bottom: 0px; margin-left: 0px` or `margin: 0px;`.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Mutation events replacement

2011-07-20 Thread Bjoern Hoehrmann
* Boris Zbarsky wrote:
It's pretty common to have situations where lots (10-20) of properties 
are set in inline style, especially in cases where the inline style is 
being changed via CSS2Properties from script (think animations and the 
like, where the objects being animated tend to have width, height, 
various margin/border/padding/background properties, top, left, etc all 
set).  Those are precisely the cases that are most performance-sensitive 
and where the overhead of serializing the style attribute on every 
mutation is highest due to the large number of properties set.

Depending on the design of the mutation notification system and what
level of complexity people find palatable, it would naturally also be
possible to serialize lazily and additionally limit when the values
are available (further style changes made by the listener could in-
validate the information and you'd get an exception on access, for
instance). So the information being available as part of the API does
not necessarily imply performance problems.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Mutation events replacement

2011-07-20 Thread Bjoern Hoehrmann
* Boris Zbarsky wrote:
On 7/20/11 2:19 PM, Bjoern Hoehrmann wrote:
 Depending on the design of the mutation notification system and what
 level of complexity people find palatable, it would naturally also be
 possible to serialize lazily

The only way to do that is to make sure the pre-mutation data is kept 
somewhere.  Doing that is _expensive_.  We (Gecko) have been there, done 
that, and moved away from it.

Simple example: you get a notification whenever a script could observe
the .getAttribute value changes, and you get it before the change is
applied. Then you have all the data you need without expending effort on
that: you have the old state directly, and you know what change you're
about to make; the serialization code would just have to be able to pro-
duce a string assuming certain changes were made (which may be easy or
hard depending on implementation details).
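The "notify before applying" idea can be sketched with a toy attribute
object (hypothetical shapes throughout, not a proposed API):

```javascript
// The listener is called while this.value is still the old
// serialization and is also handed the pending new value, so the
// implementation never has to copy and retain pre-mutation state.
function Attr(value) { this.value = value; this.listeners = []; }
Attr.prototype.set = function (newValue) {
  var self = this;
  // Notify first: self.value is the old state, for free.
  this.listeners.forEach(function (fn) { fn(self.value, newValue); });
  this.value = newValue;  // apply the change afterwards
};

var attr = new Attr('margin: 1px');
var seen = [];
attr.listeners.push(function (oldV, newV) { seen.push([oldV, newV]); });
attr.set('margin: 0px');
```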

Not a suggestion, but with the idea being to re-design the system from
scratch, it does seem important to understand that the cost here is not
coming from offering old and new values while notifying about changes,
but from the combination of doing that and other design decisions like
notifying after applying changes, allowing notifications to trigger new
changes, and so on. We got here from confusion about why it's expensive.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [websockets] Reminder: review Web Socket Protocol v10; deadline July 25

2011-07-19 Thread Bjoern Hoehrmann
* Arthur Barstow wrote:
A reminder to review the Web Socket Protocol v10 spec by July 25:

   https://datatracker.ietf.org/doc/draft-ietf-hybi-thewebsocketprotocol/
   http://www.ietf.org/id/draft-ietf-hybi-thewebsocketprotocol-10.txt

Individual WG members are encouraged to provide individual feedback 
directly to the Hybi WG:

   h...@ietf.org

http://www.ietf.org/mail-archive/web/hybi/current/msg07725.html as is
customary, the IESG asks for comments to be sent to i...@ietf.org, or
exceptionally to i...@ietf.org.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: XHR using user and password parameters

2011-07-11 Thread Bjoern Hoehrmann
* Hallvord R. M. Steen wrote:
Many implementations don't send the Authorize: header even if the script  
supplies user name and password, unless they have seen a 401 response.  
This seems a bit counter-intuitive to authors - if they supply a user name  
and a password, why isn't the browser actually sending it to the server? I  
think it would be simpler to author for if we sent Authorize: whenever a  
user name and password is supplied. Are there any particular reason we  
don't? Would it be seen as violating the HTTP standard's text about 401  
and Authorize: if we did spec something like that?

You need to know the authentication method in order to form the header,
you don't know whether it's Basic or Digest or some other method, and if
you did, you might still need information from the server such as the
realm. So, you need to make a failing request first, unless you limit
yourself to Basic authentication.
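Basic is indeed the one case where the header can be formed up front, as
this sketch shows (hypothetical helper name; even here the realm remains
unknown, and Digest additionally needs a nonce from the 401 response):

```javascript
// Form an Authorization header for Basic authentication without a
// prior challenge: base64 of "user:password", per HTTP Basic auth.
function basicAuthHeader(user, password) {
  var raw = user + ':' + password;
  // btoa exists in browsers and modern Node; fall back to Buffer.
  var encoded = (typeof btoa === 'function')
    ? btoa(raw)
    : Buffer.from(raw, 'binary').toString('base64');
  return 'Basic ' + encoded;
}
```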
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [websockets] IETF HyBi current status and next steps

2011-07-11 Thread Bjoern Hoehrmann
* Arthur Barstow wrote:
Is there a deadline for protocol comments?

Based on the e-mail below, it appears the deadline is July 25. Please 
clarify.

The IESG is asking for comments by that date. There will be some time
between the date and the IESG making its decision, and there will be
some time between the IESG's decision and publication by the RFC Editor,
but it's certainly best to send comments within the review period.

Also, for those of us not familiar with IETF process, what is the 
relationship between the IETF's LC review and v10's Expires: January 
12, 2012?

There is no relationship beyond the fact that all internet drafts have
an expiry time.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Test suites and RFC2119

2011-07-10 Thread Bjoern Hoehrmann
* Aryeh Gregor wrote:
The difference is that if you have "must" requirements that are
specific to a single conformance class, you can write a test suite and
expect every implementation in that class to pass it.  For "should"
requirements, you're saying it's okay to violate it, so test suites
don't make a lot of sense.

And if I make an implementation that does not fit in any of the classes
I can just argue that the specification did not anticipate the class my
implementation falls in. You would have to explain how arguing about a
missing conformance class is better than arguing about whether "should"
level requirements have been met. With your model you would have more
clarity, but you would also be more wrong, and require more effort to
make things right, in addition to inhibiting innovation. I think that's
a very difficult argument to make and we have "should" because of that.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Publishing From-Origin Proposal as FPWD

2011-07-05 Thread Bjoern Hoehrmann
* Marcos Caceres wrote:
On Tue, Jul 5, 2011 at 5:50 PM, Hill, Brad bh...@paypal-inc.com wrote:
 I feel that the goals of this draft are either inconsistent with the
 basic architecture of the web, cannot be meaningfully accomplished
 by the proposed mechanism, or both, and I haven't seen any discussion
 of these concerns yet.

I note that the Web Applications Working Group's Charter, if Brad Hill
is a member, does require the rest of the Working Group to duly consider
his points before moving on without consensus. If not, then the group is
not required to wait with publication, but not discussing the points in
a timely manner, without an argument how publication is urgent in some
way, does not inspire confidence that the arguments will be heard and
duly handled.

Publication will enable wider discussion - particularly wrt the issues
you have raised. Not publishing it is tantamount to saying "I OBJECT
TO PROGRESS!". If you are correct, more people will see it and the
proposal will be shot down. Otherwise, other opinions will flourish
that may sway your position (or a new perspective will emerge all
together). In any case, calling for a spec not to be published, no
matter how bad it is, is not the right way to do this. Publishing a
spec is just a formality which can lead to discussion.

The more invested people are into something, the less likely they are
to cut their losses; by doing things, you frame the discussion in favour
of doing more. You get people to think more about how something can be
fixed rather than thinking about whether to abandon the work, or use a
very different approach. If you just propose an idea to me, we can talk
about it more freely than if you had already invested a lot of effort
on implementing the idea and asked me to review the idea after the fact.

(~ "Die normative Kraft des Faktischen", the normative force of the factual)

Realizing something is a bad idea early is therefore very important and
not objecting to progress. Not wasting time on bad ideas is certainly
progress, even if only indirectly as you'd work on other things instead.
As such it is quite important to react timely to design critique with
care and detail. Psychologically, if you press ahead, you communicate
that you care more about moving on than discussing details, which is
likely to turn away the people more interested in details and quality;
and the same is of course true for draft of genuinely bad quality.

Which is just to say this is actually an important matter; sometimes it
is best to go ahead and put your ideas into practice whatever others may
be saying, other times it turns out that you should have listened more.
That is why we allow people to block actions, not necessarily progress,
but only up to the point where arguments have been duly considered. And
here we have yet to do that. Until that happens, short of someone making
the case for urgency, I would agree the group should not publish and
talk about this instead.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Reminder: RfC: Last Call Working Draft of Web Workers; deadline April 21

2011-04-21 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
Please correct me if I'm missing something, but I don't see any new
privacy-leak vectors here.  Without Shared Workers, 3rdparty.com can
just hold open a communication channel to its server and shuttle
information between the iframes on A.com and B.com that way.

That does not seem to be the right way to think about privacy problems.
We know that you can, in some sense, create cookies that are difficult
to delete through conventional means, like Evercookie does, but that's
not really relevant when discussing adding a .cookieLifetime(long) me-
thod that does the same things. For one thing, the former method relies
on very many old and complicated methods with known design flaws, the
other would be a new feature that accomplishes this easily by design.

(You would also seem to be mistaken; holding a connection does not help
if the two iframes cannot share the connection, and traditionally they
cannot do that reliably; the problem is rather a matter of one iframe
generating or obtaining a secret and getting the other iframe to learn
that same secret. As has been noted in the thread, that is possible to
some degree, but that is not much of a metric to judge a design.)
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [IndexedDB] Spec changes for international language support

2011-02-18 Thread Bjoern Hoehrmann
* Pablo Castro wrote:
We discussed international language support last time at the TPAC and I
said I'd propose spec text for it. Please find the patch below, the
changes mirror exactly the proposal described in the bug we have for
tracking this: http://www.w3.org/Bugs/Public/show_bug.cgi?id=9903

You should anticipate objections to that; collation is not a property of
language, for instance, for "de-de" you typically have dictionary sorting
and phone book sorting (and of course you have "de-de", "de-ch", and so
on, so "de" alone would be rather meaningless). So far the W3C and the
IETF have used resource identifiers to specify collations (see XPath 2.0
and RFC 4790) where the IETF allows shorthands like "i;ascii-casemap".

I do understand that Microsoft uses an extension of language tags for
the `CultureInfo` in the .NET Framework, where, say, `de-DE_phoneb` is
used to refer to german phone book sorting, but BCP 47 does not allow
for that, neither could you devise a language tag to define something
like "i;ascii-casemap" (which simply defines A-Z = a-z).

I would expect that if browsers offer collations, there would be an in-
terface for that so you can use them in other places, as such it might
be wiser to accept something other than a language identifier string. As
above, URIs, or RFC 4790 values plus URIs, or, in anticipation of some
such interface, some other object, might be a better choice. And the
method and attribute should probably not use language in their names.

I also note that collation often involves equivalence testing, but it
is not clear from your proposal whether that is the case here. It might
also be a good idea to clearly spell out interoperability expectations;
if two implementations support some collation, will they behave the same
for any and all inputs as far as collation is concerned, or should one
be prepared for slight differences among implementations?
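The point about language tags underdetermining collations can be
illustrated with today's API (Intl.Collator, which postdates this
thread and is not the proposal under discussion):

```javascript
// A bare language tag is not enough to pick a collation: a Unicode
// extension keyword selects the variant, e.g. German phone book
// ordering, where "ü" sorts like "ue" rather than like "u".
const dictionary = new Intl.Collator('de-DE');
const phonebook  = new Intl.Collator('de-DE-u-co-phonebk');

// Under dictionary (and root) rules, "ü" differs from "u" only at a
// secondary level, so "Muller" sorts before "Müller".
const cmp = dictionary.compare('Muller', 'Müller');
```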
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Structured clone in WebStorage

2010-12-02 Thread Bjoern Hoehrmann
* Tab Atkins Jr. wrote:
I won't be the person implementing it, but fwiw I highly value having
structured clones actually work.  Any time I talk about localStorage
or similar, I get people asking about storing non-string data, and not
wanting to have to futz around with rolling their own serialization.

For most who'd consider rolling their own, JSON.stringify would seem to
be a viable alternative.
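The JSON.stringify route can be sketched against a string-only store
('store' here is a plain object standing in for localStorage):

```javascript
// Wrap a string-only store with JSON serialization, so structured
// values round-trip without a hand-rolled format.
var store = {};  // localStorage-like: string keys, string values

function setItem(key, value) { store[key] = JSON.stringify(value); }
function getItem(key) { return JSON.parse(store[key]); }

setItem('prefs', { theme: 'dark', size: 14 });
var prefs = getItem('prefs');  // structured value comes back intact
```

Values that JSON cannot represent (Dates, cycles, Blobs) are of course
exactly where structured clone goes beyond this approach.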
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [ProgressEvents] How to deal with compressed transfer encodings

2010-11-23 Thread Bjoern Hoehrmann
* Anne van Kesteren wrote:
On Tue, 23 Nov 2010 22:41:00 +0100, Jonas Sicking jo...@sicking.cc wrote:

 A) Set total to 0, and loaded to the number of decompressed bytes
 downloaded so far
 B) Set total to the contents of the Content-Length header and
 loaded the number of compressed bytes downloaded so far
 C) Like A, but also expose a percentage downloaded which is based on
 the compressed data

When compression does not come into play they will only match for certain  
encoding / byte streams anyway. E.g. for a UTF-8 encoded character stream  
with characters that take up more than one byte they will not match. I  
think it should be B.

That is what the draft already requires, if by "compressed" Jonas means
you remove all transfer encodings but retain the content encodings, and
you set .total to zero if the total length is not specified. (There are
even more layers of compression to consider if you don't speak plain
HTTP but, say, HTTP over TLS, since TLS has its own compression layer;
that would be removed as well under the current draft.)
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [ProgressEvents] How to deal with compressed transfer encodings

2010-11-23 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
How should ProgressEvents deal with compressed transfer encodings? The
problem is that the Content-Length header (if I understand things
correctly) contains the encoded number of bytes, so we don't have
access to the total number of bytes which will be exposed to the user
until it's all downloaded. I can see several solutions:

Well, you have some information, you encode that using a media type,
then you possibly encode that using a content encoding, and then you
possibly encode that using a transfer encoding. HTTP uses transfer
encodings for both message framing (chunked) and transformations,
they are property of the transfer, while content encodings are part
of the content.

I would suggest asking this question in terms of what .loaded should
be when the download has finished. Should that be how much data has
been received after the header, or how much data has been received
except for framing information, or what the content developer thinks
the size is, or how many bytes you will ultimately feed to, say, the
HTML parser.

That would be respectively the length of the message body, the length
of the message body after removing the chunked transfer encoding, the
length of the entity body, and the length of the entity body after
removing content encodings. Note that you can apply compression as
both content encoding and as transfer encoding, although the latter
is only supported by good HTTP implementations, like Opera's, but hey,
https://bugzilla.mozilla.org/show_bug.cgi?id=68517 isn't ten years old
yet.

I note that the draft actually defines this already, and I am pretty
sure we discussed this already back in the day.

B seems spec-wise the simplest, but at least gecko doesn't expose the
compressed number of bytes downloaded, not sure about other HTTP
libraries. It also has the downside that .loaded doesn't match
.responseText.length

Well, to get to the length of the content in terms of UTF-16 code
units you have to remove transfer encodings, content encodings, and
transcode from whatever character encoding the content is in to said
UTF-16 code units, that's yet another layer and not a useful one in
most cases here.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: requestAnimationFrame

2010-11-23 Thread Bjoern Hoehrmann
* Robert O'Callahan wrote:
"Couldn't" is never an issue here. You can always use setTimeout as a
backstop to ensure that your end-of-animation code runs even if
requestAnimationFrame never fires a callback. My concern is that authors are
likely to forget to do this even if they need to, because testing scenarios
are unlikely to test switching to another tab in the middle of the
animation.

The question was about what if you are guaranteed that your script is run
if the tab resumes, but not necessarily before that. In that case the
code will always run, unless you did something crazy to detect whether
to run the end of animation code (for instance, instead of checking if
you are past 20 seconds, you check for whether you are between 20s and
30s to make your animation stop after 20s).
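The fragile stop test described parenthetically can be sketched as two
predicates (hypothetical names, for illustration only):

```javascript
// Fragile: only stops if the callback happens to fire inside the
// 20s..30s window; a tab that resumes later never runs the stop code.
function stopWindowed(elapsedSeconds) {
  return elapsedSeconds >= 20 && elapsedSeconds < 30;
}

// Robust: any callback after the deadline stops the animation, even
// if the tab was suspended and resumes long past it.
function stopThreshold(elapsedSeconds) {
  return elapsedSeconds >= 20;
}
```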

That seems much less a concern than that the model here is unclear. If
the model is "you call this and then the browser will magically do one
thing or another, we won't tell you the details" then people would have
a rougher time writing proper code than if the model was, say, "Upon
each call the callback must completely redraw everything and must not
depend on state modifications made from within the callback in any way."

It seems to me that end-of-animation problems are just another case of
state transition problems, you might as well change something between
5s and 10s and depend on that having been executed at 13s (say you may
add a new element between 5s and 10s and fade it out again from 13s.)
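The pattern under discussion can be sketched as follows. This is a hypothetical illustration, not anyone's proposed API: every frame is redrawn purely from the current time (so a skipped callback costs nothing), and a setTimeout backstop makes the end-of-animation code run even if no further animation callbacks fire in a hidden tab. All names here are made up.

```javascript
const DURATION_MS = 10000;

// Pure helper: fraction of the animation completed, clamped to [0, 1].
function progressAt(now, start) {
  return Math.min((now - start) / DURATION_MS, 1);
}

function animate(draw, onDone) {
  const start = Date.now();
  let finished = false;
  function finish() {
    if (!finished) { finished = true; onDone(); } // run cleanup exactly once
  }
  function frame() {
    const p = progressAt(Date.now(), start);
    draw(p);                      // redraw everything from the current time
    if (p >= 1) return finish();  // past the end time: clean up, unregister
    requestAnimationFrame(frame);
  }
  if (typeof requestAnimationFrame === "function") {
    requestAnimationFrame(frame); // browser-only; guarded for other hosts
  }
  setTimeout(finish, DURATION_MS); // backstop for throttled or hidden tabs
}
```

Because `draw` depends only on elapsed time, it does not matter whether the browser fires one callback or a hundred between two points in time; only the shared-state cases discussed above remain problematic.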

Personally I am already horrified by the hoops one has to jump through
if one tries to synchronize audio playback and mozRequestAnimationFrame.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [cors] 27 July 2010 CORS feedback

2010-11-23 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
other person: Hmm.. we might want to disable cross-site posting for
forms some day, so is it such a good idea that cors enables it?
me: If we do disable it for forms we'll just disable it for cors too.
So much content will break for forms that the cors breakage won't be
what we're concerned about.
other person: Yeah, true.

At the point where browser vendors actually disable cross site form
posts it won't break a lot of sites, since browser vendors are not in
the habit of making changes that break a lot of sites. At best we'd
have a vendor like Microsoft less concerned with having only one code
path for everything who'd disable them in certain modes or based on
certain headers or something like that, so they will slowly be phased
out, alongside efforts to change major sites and educating developers.

If not doing cross-site posts without authorization is a goal, teaching
authors that it's fine to make cross-site posts without authorization
undermines that goal. It means more work for everyone to get to a point
where browser vendors would even have this discussion. What you are
saying amounts to telling authors "Hey, here is a new way to do
cross-site posts; by the way, if you use this, we are planning on
breaking your site and thousands of others." That's not very reasonable.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [cors] 27 July 2010 CORS feedback

2010-11-23 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
On Tue, Nov 23, 2010 at 7:36 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 At the point where browser vendors actually disable cross site form
 posts it won't break a lot of sites, since browser vendors are not in
 the habit of making changes that break a lot of sites.

This sounds like a very hypothetical world.

Could you give a couple of examples of changes that have recently been
made to the implementation you are working on that made a whole lot of
web sites dysfunctional (such as people being unable to submit a form)
where the people making the decision to break those sites were clearly
aware of the impact, that would compare to disabling cross site form
posts right now?
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: requestAnimationFrame

2010-11-19 Thread Bjoern Hoehrmann
* Ojan Vafai wrote:
On Fri, Nov 19, 2010 at 2:54 PM, Cameron McCormack c...@mcc.id.au wrote:
 Darin Fisher:
 I can imagine a situation where you have an animation that goes for,
 say, 10 seconds, and once the animation finishes something else happens.
 The 1 second maximum period seems useful in this case, because you might
 make the tab visible again for a long time, but you expect the
 “something else” to happen.  It’s pretty natural to do the checking for
 whether the animation has gone past its end time within the callback.

What's an actual example where you might want this that couldn't just wait
until the tab was visible again? This use case doesn't seem very common. As
you say, it's also probably not well met due to throttling.

Well, if every event handler has "if time is past my end time, clean up
and unregister the handler" logic, and you do not care about possible
wait times when resuming the tab (for instance, the cleanup may load
additional resources from the network which one might expect to load
in the background), and find shared state manipulation far fetched
(say, on disposal the handler may change cookies or local storage to
indicate, say, you've already watched a full-screen ad on the page),
then the only thing that could leak out of tabs, as far as my browser
is concerned, is audio.

To make up a simple case, you might have a visual progress indicator
where the end time is determined by some network activity: when it
finishes you get an audible "done" indicator, and you bind the "done"
logic to the animation handler, not the network handler. Obviously
that does not address your question, since "couldn't" never applies
here; you could always just use setTimeout and setInterval and burn
cycles, or whatever else guarantees your script runs even when the tab
is in the background, and implement the logic as you see fit.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: requestAnimationFrame

2010-11-19 Thread Bjoern Hoehrmann
* Robert O'Callahan wrote:
Those are good goals, except I think we need to drill down into (c). Are
people changing stuff at 15Hz for crude performance tuning, or for some
other reason?

There are many kinds of animations where you cannot easily interpolate
between frames, so drawing in Ones requires a lot more effort than
drawing in Twos. Add to that the abysmal performance of graphics in
browsers and, sometimes worse, their plugins, and I would rather
wonder why some set the frame rate any higher (beyond "Oh, I thought
that's what PAL/NTSC does, I read that in a forum"). The default in
Flash is 12 frames per second, which is your typical film rate in Twos.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: [Bug 11351] New: [IndexedDB] Should we have a maximum key size (or something like that)?

2010-11-19 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
The question is in part where the limit for ridiculous goes. 1K keys
are sort of ridiculous, though I'm sure it happens.

By ridiculous I mean that common systems would run out of memory. That
is different among systems, and I would expect developers to consider it
up to an order of magnitude, but not beyond that. Clearly, to me, a DB
system should not fail because I want to store 100 keys à 100 KB.

 Note that, since JavaScript does not offer key-value dictionaries for
 complex keys, and now that JSON.stringify is widely implemented, it's
 quite common for people to emulate proper dictionaries by using that to
 work around this particular JavaScript limitation. Which would likely
 extend to more persistent forms of storage.

I don't understand what you mean here.

I am saying that it's quite natural to want string keys that are much,
much longer than someone might envision string keys to be, mainly
because their notion of string keys is different from the key lengths
you get from serializing arbitrary objects.
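A small illustration (not from the original mail) of the emulation described above: since JavaScript objects only take string keys, complex keys are commonly serialized with JSON.stringify, which is exactly where unexpectedly long keys come from.

```javascript
// Emulate a dictionary keyed by arbitrary structured values.
const dict = Object.create(null); // no prototype, so no inherited keys

function put(key, value) {
  dict[JSON.stringify(key)] = value; // the serialized object IS the key
}
function get(key) {
  return dict[JSON.stringify(key)];
}

put({ user: 42, tags: ["a", "b"], range: [0, 100] }, "hit");
console.log(get({ user: 42, tags: ["a", "b"], range: [0, 100] })); // "hit"

// The key length grows with the object, easily past any budget
// someone set with short identifier-like "string keys" in mind:
console.log(JSON.stringify({ user: 42, tags: ["a", "b"], range: [0, 100] }).length);
```

Note that this relies on property insertion order being the same on every lookup; it is a common idiom, not a robust canonical serialization.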
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: requestAnimationFrame

2010-11-15 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
On Mon, Nov 15, 2010 at 5:01 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 The frame rate is a number in the SWF header that cannot be set to an
 "as fast as possible" value.

Ah, so that also means that different animations can't run with
different frame rates?

That's the basic model and there are dependencies on that, for instance,
you can ask the player to move to a specific frame, which would break if
the frame rate changes unexpectedly. There may be ways to get a higher
actual rate through forced redraws and by dynamically changing the rate,
which more recent versions permit.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: Making non-cookie requests to another domain... possible DoS attack by forcing session expiration?

2010-11-10 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
 It was brought up by Billy Hoffman (http://zoompf.com) that some web
 applications have very sensitive sessions and they are set up to expire the
 session (ie, log the person out) if a request is received that has no
 session cookie header in it, etc. The assertion was that this type of thing
 would be a potential DoS attack vector, by allowing an unrelated website to
 include a hidden <img rel=anonymous> request in their markup that made a
 request to a site known to log out on such non-cookie requests, and thus
 effectively logging users out of the app without their control/knowledge.

How will they know which session to expire given that no cookies are
sent, and so they can't tell who the request is coming from?

You can expire the client-side part of the session without knowing which
session it is, so long as the browser reads the Set-Cookie header in the
response. You could simply respond with an expired Set-Cookie header to
any request without a Cookie header. The server-side part of the session
would remain active, of course, but that makes no difference to users.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-11-10 Thread Bjoern Hoehrmann
* David Flanagan wrote:
Is this a fair summary of this thread?

Chris (Apple) worries that having to support both responseText and 
responseArrayBuffer will be memory inefficient because implementations 
will end up with both representations in memory.

James (Google) worries that synchronously reading bytes from the browser 
cache on demand when responseArrayBuffer is accessed will be too 
time-inefficient.

In most cases you do not need to store the bytes in order to get them
back, you can just apply the character encoding scheme used to decode
the bytes to the string and you'll have the original byte string, so
long as the character encoding scheme is bijective, which is true for
most of the relevant schemes like UTF-8 and UTF-16. You'd just need a
flag that tells you when that is not possible, like with UTF-8 encoded
strings that are not well-formed, and for encodings like UTF-7.
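The optimization can be sketched as follows. This is an illustration of the idea, not any implementation's actual internals; it uses TextDecoder's `fatal` option to detect byte streams that would not survive the round trip, which is the "flag" mentioned above.

```javascript
// Keep only the decoded string plus a round-trip flag; regenerate the
// original bytes on demand by re-encoding.
function decodeUtf8(bytes) {
  try {
    // fatal: true makes decoding throw on ill-formed UTF-8 instead of
    // silently substituting U+FFFD.
    const text = new TextDecoder("utf-8", { fatal: true }).decode(bytes);
    return { text, roundTrips: true };   // re-encoding recovers the bytes
  } catch (e) {
    const text = new TextDecoder("utf-8").decode(bytes); // lossy decode
    return { text, roundTrips: false };  // original bytes must be kept
  }
}

const good = new TextEncoder().encode("héllo");
const r = decodeUtf8(good);
// Re-encoding the decoded string yields the original byte string:
const back = new TextEncoder().encode(r.text);
console.log(r.roundTrips && back.length === good.length); // true
```

Only streams where `roundTrips` is false would need the raw bytes retained alongside the string.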
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-11-10 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
 In most cases you do not need to store the bytes in order to get them
 back, you can just apply the character encoding scheme used to decode
 the bytes to the string and you'll have the original byte string, so
 long as the character encoding scheme is bijective, which is true for
 most of the relevant schemes like UTF-8 and UTF-16.

It's not true for UTF-8/UTF-16 if the original streams contain illegal
surrogates, right? We usually convert those to the replacement
character in Firefox, which is an information-destroying operation.
(I'm not sure if the stream converters do this, but they should)

Yes, I noted that in my message as an example. But using problematic
character encodings and having character encoding errors is becoming
less and less common, so that isn't much of a concern.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-11-10 Thread Bjoern Hoehrmann
* Boris Zbarsky wrote:
On 11/10/10 4:39 PM, Bjoern Hoehrmann wrote:
 In most cases you do not need to store the bytes in order to get them
 back, you can just apply the character encoding scheme used to decode
 the bytes to the string and you'll have the original byte string, so
 long as the character encoding scheme is bijective, which is true for
 most of the relevant schemes like UTF-8 and UTF-16.

Neither of those is bijective.

In particular, both encoding schemes are not surjective as functions 
from Unicode strings onto byte streams (that is, there are such things 
as invalid byte sequences for both of them).  Therefore they can't 
possibly be bijective.  Specifically, invalid byte sequences typically 
lead to U+FFFD ending up in the Unicode string no matter what the 
particular values of the invalid bytes were.

 like with UTF-8 encoded strings that are not-wellformed

Right.  See above.  Note that most cases when the data is really desired 
as a byte array will in fact not be valid UTF-8.

The objection that I would expect is that your decoder does not inform
higher-level code whether there was an error in the stream, you do not
like to change its API, and scanning for the replacement character
unconditionally is objectionable too; or that it's hard to come up with
good and simple rules for when to dispose of redundant objects. That
you can't do this optimization in every last edge case, well, that does
not necessarily justify adding new XHR-like interfaces, or complicating
things for authors with new parameters and surprising behavior.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 


