Re: W3C's version of XMLHttpRequest should be abandoned

2015-08-06 Thread Robin Berjon

Hi Hallvord,

I don't have a specific opinion on where this should be done; speaking 
personally I certainly don't have an issue with XHR being at the WHATWG. 
Just some notes below in case they help.


On 06/08/2015 14:07 , Hallvord Reiar Michaelsen Steen wrote:

And that is mostly my fault. I intended to keep the W3C fork up to date
(at least up to a point), but at some point I attempted to simply apply
Git patches from Anne's edits to the WHATWG version, and it turned out
Git had problems applying them automatically for whatever reason -
apparently the versions were already so distinct that it wasn't
possible.


Yes, once differences grow too much, even if you make use of 
cherry-picking, at some point there isn't much that git (or diff/patch) 
can do to merge two documents that are too far apart.



Since then I haven't found time for doing the manual
cut-and-paste work required, and I therefore think it's probably better
to follow Anne's advice and drop the W3C version entirely in favour of
the WHATWG version. I still like the idea of having a stable spec
documenting the interoperable behaviour of XHR by a given point in time
- but I haven't been able to prioritise it and neither, apparently, have
the other two editors.


Depending on how involved the differences between L1 and the LS are, one 
option is to do this with code. If L1 is a subset and the subsetting 
doesn't require editing things mid-sentence (e.g. just dropping sections 
and a few odds and ends) then you can simply keep pulling the LS and 
apply code that filters out what you don't want.
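As a sketch of that filtering idea: pull the Living Standard HTML and mechanically drop the sections the snapshot doesn't include. The begin/end comment markers here are an assumption for illustration, not something the WHATWG source actually contains.

```javascript
// Drop named sections from a spec document, assuming each section is
// wrapped in comment markers (hypothetical convention):
//   <!-- begin-section: NAME --> ... <!-- end-section: NAME -->
function subsetSpec(html, droppedSections) {
  let out = html;
  for (const name of droppedSections) {
    const marker = new RegExp(
      `<!-- begin-section: ${name} -->[\\s\\S]*?<!-- end-section: ${name} -->`,
      "g"
    );
    out = out.replace(marker, "");
  }
  return out;
}
```

A build step like this can re-run on every pull of the LS, so the subset never rots the way a manual cut-and-paste fork does.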


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Exposing structured clone as an API?

2015-04-24 Thread Robin Berjon

On 24/04/2015 02:18 , Anne van Kesteren wrote:

On Thu, Apr 23, 2015 at 3:02 PM, Ted Mielczarek t...@mozilla.com wrote:

Has anyone ever proposed exposing the structured clone algorithm directly as
an API?


There has been some talk about moving structured cloning into
ECMAScript proper and exposing the primitives. But TC39 does not seem
particularly receptive unless it comes with a way for someone to
participate in the structured cloning algorithm with custom objects.


Does this have to be any more complicated than adding a toClone() 
convention matching the ones we already have?
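To illustrate the convention being suggested (entirely hypothetical, by analogy with `toJSON()`): before cloning, the algorithm would consult a `toClone()` hook so custom objects can decide what plain data they contribute.

```javascript
// Sketch of a hypothetical toClone() convention. No such hook exists;
// this just shows the shape, layered on the real structuredClone().
function cloneWithConvention(value) {
  if (value !== null && typeof value === "object" &&
      typeof value.toClone === "function") {
    return structuredClone(value.toClone());
  }
  return structuredClone(value);
}

class Footnote {
  constructor(text) {
    this.text = text;
    this.marker = Symbol("not-cloneable"); // symbols can't be structured-cloned
  }
  // Custom participation in cloning: expose only the plain data.
  toClone() { return { text: this.text }; }
}

const copy = cloneWithConvention(new Footnote("hi"));
```

Cloning the `Footnote` directly would throw a `DataCloneError` because of the symbol; with the hook, the object chooses a cloneable representation.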


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: template namespace attribute proposal

2015-03-12 Thread Robin Berjon

On 12/03/2015 11:07 , Anne van Kesteren wrote:

On Thu, Mar 12, 2015 at 4:32 AM, Benjamin Lesh bl...@netflix.com wrote:

What are your thoughts on this idea?


I think it would be more natural (HTML-parser-wise) if we
special-cased SVG elements, similar to how e.g. table elements are
special-cased today. A lot of template-parsing logic is set up so
that things work without special effort.


Or even go the extra mile and just slurp all SVG elements into the HTML 
namespace. There are a few name clashes, but we ought to be able to iron 
those out.


And ditto MathML.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: PSA: publishing new WD of URL spec

2014-09-11 Thread Robin Berjon

Hi Marcos,

On 11/09/2014 17:19 , Marcos Caceres wrote:

Only once I have clear answers to the following (and see actual proof).
I know you already addressed some of this in your previous email to
Domenic.


I will address your points below, but I will repeat what I told Domenic: 
I don't think progress can be made by talking about stuff in the 
abstract. I believe in iterated progress. To put it differently, I think 
this should be a living commitment to a better relationship and not some 
finalised thing before any action is taken.


Based on that I would like to get, and I think it is quite reasonable, 
agreement that we can go ahead and publish something better than what 
there was before (surely better than what *is* there) and iterate on 
that (as fast as possible) to get it all good.


Makes sense?


1. How will the spec be kept up to date? i.e., what technical means will
be put in place by the w3c to assure that the latest is always on TR.


As announced on spec-prod and discussed with CSS recently, Philippe has 
been working on an automated publisher. My understanding is that he 
hopes to have a prototype by TPAC, and to ship in early 2015 (likely 
with some guinea pigs having earlier access).


Please provide input to that project (in its own thread).


2. How will the W3C determine when a spec is ready for LC/CR?


Is there any reason to use anything other than tests + implementations?


3. How will the W3C cope with changes occurring to the living document
after CR? (See Boris' emails)


I have been advocating a software model for specs for so long that 
you're probably tired of hearing it; but I think we can apply the 
release/development branching here.



4. Will the W3C prevent search engines from finding the copy/pasted
document? Particularly any static snapshots.


Why would you restrict that to imported snapshots?

We're looking at blanket-preventing that for dated TR; anyone can add 
meta robots noindex to TR drafts. I'm certainly happy to do that for 
URL, DOM, and likely a bunch of others when they next get published.
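For reference, the noindex hint is a one-liner in the head of each dated draft:

```html
<!-- In the <head> of a dated TR snapshot -->
<meta name="robots" content="noindex">
```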



5. What indicators (e.g., the big red box) will be put into the spec to
indicate that the WHATWG version is the canonical version?


Do you want something better than the big red box?


6. Out of respect for both the Editor and the WHATWG as a standards
consortium, how will the W3C attribute authorship of the documents and
well as show that the document originates from the WHATWG?


So what's been done for DOM and URL has been to just list those editors. 
I'd be happy to remove the snapshotting editors but I think that's not 
possible *yet* if the original authors aren't on the WG.


Apart from that, it should be included in the SotD and in the big red box.

So?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Proposal for a credential management API.

2014-08-01 Thread Robin Berjon

Hi Mike,

On 31/07/2014 09:48 , Mike West wrote:

It's not clear to me that WebApps is the right venue from a process
perspective,
but this is almost certainly the right group of people to evaluate the
proposal.
Thanks in advance for your feedback, suggestions, and time. :)


As you know I think that a solution in this space is absolutely needed 
and I like your approach, I think it's on to the right set of use cases. 
There are some paper cuts with your proposal but nothing I've seen that 
can't be ironed out.


Concerning the process part, I'd like to only worry about that as much 
as needed, which shouldn't be a lot. We can work something out and come 
back to you with a solution to make this happen.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] Leading with ContentEditable=Minimal

2014-06-30 Thread Robin Berjon

On 30/06/2014 07:22 , Johannes Wilm wrote:

Another use case: Create a track changes function within an editor (like
https://github.com/NYTimes/ice ) that really should be run MVC in order
to keep the code somewhat readable. Currently ICE breaks whenever any of
the browser makers decide to change anything about contenteditable.


Oh yeah, anything involving tracking, OT, or anything temporal really, 
really can't use the markup as its model. I'm surprised ICE went that 
way, it must be terribly painful.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] Leading with ContentEditable=Minimal

2014-06-26 Thread Robin Berjon

On 24/06/2014 20:09 , Ben Peters wrote:

Works for me. Should I just scare up a draft? It is likely to be a pretty short
spec :)


I'm really looking forward to getting things sorted out! But I think
we may want to take a step back and make sure we all agree on the
problem, goals, and use cases, as Ryosuke has been pushing for. We
have several different proposed solutions. Does it make sense to very
clearly state all of this before we press on too quickly?


Sure, but this is just one of the moving parts we need, and I think it 
is well established that it is required. The existing contentEditable 
has many built-in behaviours that cannot be removed by browser vendors 
without breaking existing deployed code. This includes both native UI 
and default actions for many events.


It's a small spec, it's just what is needed in order to enable the 
baseline behaviour. The meat is elsewhere :) I was proposing to start 
putting it together not because it's hard but to get a bit of momentum 
going.



Problems:
* ContentEditable is too complex and buggy to be usable as-is
* ContentEditable does not easily enable the wide range of editing scenarios


Complex and buggy aren't necessarily show-stoppers. With hard work it 
should be possible to take the current Editing APIs draft and 
progressively iron out most of the kinks. It's difficult, but difficult 
things have been done before.


The main problem here is that even if we did that we still wouldn't have 
a usable system. And I would say that the issue isn't so much that it 
doesn't enable scenarios as that it works actively hard to make 
them hard to implement :)


Maybe this can be captured as "Does not support 
http://extensiblewebmanifesto.org/".



Goals:
* Make it easy to disable browser behavior in editing scenarios


I don't think that we should make it easy to disable behaviour; 
behaviour should be minimal and well-contained by default. Put 
differently, maybe this should be Editing behaviour should be opt-in 
rather than opt-out?



* Enumerate available actions in a given context before and after javascript 
adds/modifies behavior


I'm not sure I understand what that is?


Use Cases:
* Create a js framework that enables a WYSIWYG editor and works the same in all 
browsers with little browser-specific code


s/little/no/ ;-)


* Use a js framework to insert a customized editor into an email client
* Use a js framework to insert a customized editor into a blog
* Use a js framework to insert a customized editor into a wiki


Aren't those the same as the previous one?


* Create a document editor that uses an HTML editor as a frontend but a 
different document model as a backend


I don't know if we want to mention MVC and other such things? Perhaps 
just the general sanity of not using your rendering view as your model :)


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] selection across editing host boundaries

2014-06-24 Thread Robin Berjon
 to select or delete the 
non-editable content (direction previous).

B: a range containing non-editable. Presumably deleted.
C: same as B with some extra content on both sides. Presumably deleted.
D: empty range, the script can decide what makes most sense. (Stabbing 
the user in the face sounds good.)

E: empty range, the script decides which is best.

For F, F2, G, and an awful lot of other cases (dt/dd, td, etc.) I think 
we should take the minimalist approach: just produce a deletion event 
indicating its direction but with an empty range. Scripts can decide if 
they wish to merge elements, delete empty ones, outdent, etc.
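The minimalist approach can be sketched as pure logic (the event shape here is hypothetical — no such deletion event is specced):

```javascript
// Hypothetical deletion-event policy: an empty range at a structural
// boundary (dt/dd, td, etc.) just reports its direction, and the
// script picks the behaviour (merge, delete empty element, outdent...).
function deletionPolicy(evt) {
  if (evt.rangeIsEmpty) {
    return evt.direction === "backward" ? "merge-with-previous"
                                        : "merge-with-next";
  }
  return "delete-range"; // plain case: remove the selected content
}
```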



This is getting complicated enough (and I haven't mentioned other cases 
such as br, script, hr, td, img, video...) that I wonder if the Deletion 
event shouldn't have its own spec.


Other question: when there's a selection and an input event takes place, 
should it be preceded by a deletion event? I think we need to because 
the alternative is to have the input event handler have to perform its 
own logic equivalent to deletion, which would be painful. But it comes 
with its own interesting challenges.


Thoughts?



Use cases for this:

1. We use it for footnotes which we float off to the right of the text in
a <span class="footnote" contenteditable="false"><span><span
contenteditable="true">[FOOTNOTE TEXT]</span></span></span>. If one has
access to CSS regions, one can even float them to be under the text on
each page. The other <span class="footnote"> is used to draw the
footnote number in the main text. If one hits backspace behind it, the
entire footnote should disappear. As it is now, instead the back wall
of the footnote is removed, which means that the rest of the paragraph
is being added to the footnote.


A question for you: how would you like selection to work in this case, 
notably for copy/pasting? As a user, I would tend to expect that if I 
select from before the <sup>1</sup> to after it and copy, I would get a 
buffer containing the <sup>1</sup> but *not* the footnote content 
(otherwise you get the famed "PDF effect" with lots of junk in your 
buffer). It also looks visually weird if you have the footnote to the 
side of the page being selected. But with the logical document order you 
use, it would get selected. Do you make use of selection-preventing tricks?


These likely have their own amusing interactions with deletion: if you 
make the footnote non-selectable but wish to drop it when a selection 
encompassing it is deleted, you're facing a fun new challenge.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Editing with native UI

2014-06-24 Thread Robin Berjon

On 24/06/2014 00:38 , Ben Peters wrote:

Also, if the browser includes a bold command by default and I
don't support bolding and therefore cancel the event, the user who
has been relying on the native UI is getting the worst possible
experience: native controls that do nothing at all.



This doesn't seem like an insurmountable problem. We can provide a
way for sites to indicate that they support certain commands and not
others, similar to queryCommandEnabled(), which has a return value
that could be modified by javascript (perhaps by an event, say
QueryCommandEvent). Then the browser could choose not to show buttons
for commands that are disabled.


Yes, this is possible, but I see problems with it.

First, it has to be white-list based. An option to just disable things 
would not be resilient in the face of browser vendors adding stuff.


Second, it doesn't address the point made below about native UI exposing 
a likely non-natural subset of the commands I wish to support.


Third, I sense the sort of list that tends to acquire proprietary 
extensions over time (apple-touch-your-nose, ms-insert-clippy-wisdom, 
etc.) that leads developers to have to keep adding new values if they 
want a half-sane native UI that matches their usage (for reference, see 
the current favicon mess).


Finally, it feels an architecturally bad idea to require developers to 
specify the same information more than once. If my own toolbar has bold 
and italic, and I tell the browser that I want its native UI to expose 
bold and italic, then go ahead and add underlining to my toolbar I can 
easily forget to also tell the UA (granted, libraries can paper over 
that, but it becomes a tools-will-save-us situation).


Building on top of the infrastructure that HTML is providing to define 
menus and commands, we can get something that is:


  • White-list based;
  • Has the exact set of commands I wish to expose;
  • Does not lend itself to proprietary drift;
  • Is specified once.
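For illustration, markup using the menu/command machinery HTML was defining at the time might look like this (a sketch only; menuitem and the contextmenu attribute were experimental and never shipped broadly, and the handler names are placeholders):

```html
<menu type="context" id="editing-commands">
  <menuitem label="Bold" onclick="toggleBold()"></menuitem>
  <menuitem label="Italic" onclick="toggleItalic()"></menuitem>
</menu>
<div contenteditable contextmenu="editing-commands">…</div>
```

The point is that the one declaration drives both the page's UI and whatever native surface (context menu, accessibility tree) the UA wants to expose.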

Note that if what HTML 5.1 is currently doing in this area isn't good 
enough, we can (and should) definitely improve it. Right now it's not, 
to the best of my knowledge, implemented broadly enough that we can't 
change it.



Conversely, if I support something that the native UI does not
expose (say, superscripting) it makes for a weird UI in which
some things are exposed and others aren't.



Good point.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] Leading with ContentEditable=Minimal

2014-06-24 Thread Robin Berjon

On 23/06/2014 18:25 , Julie Parent wrote:

Well stated.  I like contentEditable=cursor.


Works for me. Should I just scare up a draft? It is likely to be a 
pretty short spec :)


--
Robin Berjon - http://berjon.com/ - @robinberjon



Editing TF and list

2014-06-23 Thread Robin Berjon

Hi all,

this email is to announce the creation of the task force working on 
editing, jointly between WebApps and HTML, based on the decision made 
previously[0].


The mailing list's address is public-editing...@w3.org and signing up is 
at mailto:public-editing-tf-requ...@w3.org?subject=subscribe.


The rationale for having this specific list is in the charter[1]. 
Feedback and suggestions are welcome on that document, which is informal 
and mostly meant to document the group's purpose.


I suggest that for a short while (~a week, mostly covering existing 
threads) people cross-post. After that, though, all discussion ought to 
have been redirected there.


Thanks!


[0] http://lists.w3.org/Archives/Public/public-webapps/2014AprJun/0842.html
[1] http://w3c.github.io/editing-explainer/tf-charter.html

--
Robin Berjon - http://berjon.com/ - @robinberjon



Disjoint ranges (was: contentEditable=minimal)

2014-06-23 Thread Robin Berjon

On 06/06/2014 18:52 , Ryosuke Niwa wrote:

On Jun 6, 2014, at 6:40 AM, Robin Berjon ro...@w3.org wrote:

On 05/06/2014 09:02 , Ryosuke Niwa wrote:

I agree visual selection of bidirectional text is a problem
worth solving but I don't think adding a generic multi-range
selection support to the degree Gecko does is the right
solution.


I'd be interested to hear how you propose to solve it in another
manner. Also note that that's not the only use case, there are
other possibilities for disjoint selections, e.g. a table
(naturally) or an editable surface with a non-editable island
inside.


Supporting disjoint range is probably necessary but adding the
ability to manipulate each range separately seems excessive because
that'll lead to selections with overlapping ranges, ranges in
completely different locations that are not visually disjoint, etc...
We might need to do something like exposing readonly multi-range
selection.


Readonly multiranges may be an option, but I can think of some issues 
(which perhaps we can address).


Several people have mentioned the use case in which a script wants to 
observe selection changes in order to ensure that selections conform to 
certain constraints. Consider the following:


  abc 2 defg
  ABC 1 defg

Let's imagine that the script wishes to constrain the selection to only 
that second line, and that the user clicks at 1 and drags towards 2. You'd 
want the script to constrain the range such that it just selects "ABC ". 
If you only cancel the selection change, presumably it doesn't select 
anything at all here (and I'm also presuming that with such a gesture 
you don't get a selection change event for each character in between the 
two drag points — that would be a lot).
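The constraint itself is simple; it's only being able to apply it that is at issue. As logic (real code would manipulate a Range inside a selectionchange handler; offsets here are simplified to positions in a single text run):

```javascript
// Clamp a selection's endpoints into an allowed region. The object
// shapes are illustrative, not a real Selection API.
function constrainSelection(sel, allowed) {
  const clamp = (n) => Math.min(Math.max(n, allowed.start), allowed.end);
  return { anchor: clamp(sel.anchor), focus: clamp(sel.focus) };
}
```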


What is weird in this scenario is that so long as the text is 
unidirectional you can manipulate the range, but the second there is a 
character in a different direction you can't. (And then again, *only* in 
browsers that support visually continuous selection across bidi 
boundaries — in others it would still work.)


I don't think that this variability is good; it is likely to surprise 
developers.


Another issue is that in some cases I believe that *visually* disjoint 
selections are the right thing to do. If you have an editing host that 
contains a readonly island, it should be possible for the author to make 
that island non-selectable so that you can select text from one side to 
the other but not the island itself. (Typically this enables the 
inlining of affordances.)


Reconsidering your objection, I wonder if it really is a problem? 
Overlapping ranges: well, it would be weird, but basically it strikes me 
as a "doctor, it hurts when I do this" problem, unless I'm missing 
something. Ranges in completely different locations that are not 
visually disjoint: well, if you do that, maybe you have a reason? Just 
because you can do something stupid with an API doesn't mean that it's a 
stupid API.



For starters, most of author scripts completely ignore all but
the first range, and applying editing operations to a multi-range
selection is a nightmare.


I don't disagree that it can be hard to handle, but I'm not sure
that that's indicative of anything. Most scripts only handle one
selection because AFAIK only Gecko ever supported more than one.


Given Gecko itself doesn't handle applying editing operations to
multiple ranges well from what I've heard, I'm not certain we can
expect web developers to get them right especially in the context
where disjoint multi-range selection is needed; e.g. bidirectional
text, exotic layout model.


I don't think that what is supported in any browser today in terms of 
contentEditable should be seen as a limitation on what Web developers 
can achieve. I'm very much certain that they can do better.


Thinking about this some more, I wonder if the problem is not less 
common than I initially thought, though. If you consider the following text:


  ltr rtl ltr

You definitely need multiranges while the selection is in progress if 
you want visual continuity:


  [ltr rt]l ltr

But when both ends of the selection are in text with the same direction, 
you can revert to having a single range:


  [ltr rtl ltr]

The problem being less common does not decrease the need to support 
it, but it does decrease the odds that people will shoot themselves in 
the foot over relatively common cases.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Composition, IME, etc.

2014-06-23 Thread Robin Berjon

On 06/06/2014 19:13 , Ryosuke Niwa wrote:

On Jun 6, 2014, at 7:24 AM, Robin Berjon ro...@w3.org wrote:

In order to handle them you have two basic options:

a) Let the browser handle them for you (possibly calling up some
platform functionality). This works as closely to user expectations
as a Web app can hope to get but how do you render it? If it
touches your DOM then you lose the indirection you need for
sensible editing; if it doesn't I don't know how you show it.

b) Provide the app with enough information to do the right thing.
This gives you the indirection, but doing the right thing can be
pretty hard.

I am still leaning towards (b) being the approach to follow, but
I'll admit that that's mostly because I can't see how to make (a)
actually work. If (b) is the way, then we need to make sure that
it's not so hard that everyone gets it wrong as soon as the input
is anything other than basic English.


I'm not convinced b is the right approach.


As I said though, it's better than (a) which is largely unusable.

That said, I have a proposal that improves on (b) and I believes 
addresses your concerns (essentially by merging both approaches into a 
single one).



If the browser doesn't know because the platform can't tell the
difference between Korean and Japanese (a problem with which
Unicode doesn't help) then there really isn't much that we can do
to help the Web app.


This predicates on using approach b.  I'm not convinced that that's
the right thing to do here.


No, it doesn't. If the browser has no clue whatsoever how to present 
composition then it can't offer the right UI itself any more than it can 
help the application do things well. I am merely ruling that situation, 
which you mentioned, out as unsolvable (by us).



However if the browser knows, it can provide the app with
information. I don't have enough expertise to know how much
information it needs to convey — if it's mostly style that can be
done (it might be unwieldy to handle but we can look at it).


The problem here is that we don't know if underlining is the only
difference input methods ever need.  We could imagine future new UI
paradigms would require other styling such as bolding text, enlarging
the text for easier readability while typing, etc...


I never said that the browser would only provide underlining 
information. I said it can convey *style*. If it knows that the specific 
composition being carried out requires bolding, then it could provide 
the matching CSS declaration. If there is an alien composition method 
that requires red blinking with a green top border, it could convey that.


Having said that, having the browser convey style information to the 
script with the expectation that the script would create the correct 
Range for the composition in progress and apply that style to it, even 
though possible, seems like a lot of hoops to jump through that are 
essentially guaranteed to be exactly the same in every single instance.


I think we can do better. It's a complicated-sounding solution but the 
problem is itself complex, and I *think* that it is doable and the best 
of all options I can think of.


To restate the problem:

  • We don't want the browser editing the DOM directly because that 
just creates madness
  • We want to enable any manner of text composition, from a broad 
array of options, while showing the best UI for the user.


These two requirements are at odds because rich, powerful composition 
that is great for the user *has* to rely on the browser, but the logical 
way for the browser to expose that is to use the DOM.


The idea to reconcile both is to use a shadow text insertion point. 
Basically, it is a small DOM tree injected as a shadow at the insertion 
point (with author styles applied to it). The browser can do *anything* 
it wants in there in order to create a correct editing UI. While 
composition is ongoing, the script still receives composition events but 
can safely just ignore them for the vast majority of cases (since you 
can't generally usefully validate composition in progress anyway). When 
the composition terminates, the input event contains the *text* content 
of the shadow DOM, which is reclaimed.
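The lifecycle above can be sketched as logic (the interface is hypothetical — nothing like it is specced; it just shows who owns what during composition):

```javascript
// Sketch of the proposed flow: while the browser composes inside the
// shadow insertion point, the app ignores intermediate updates; only
// the final text reaches the model when composition ends.
class ShadowInsertionPoint {
  constructor(model) {
    this.model = model;   // the app's own document model
    this.pending = "";    // rendered by the browser's shadow tree, not the app
  }
  onCompositionUpdate(text) {
    this.pending = text;  // safely ignorable for the vast majority of cases
  }
  onCompositionEnd(text) {
    this.model.push(text); // the input event carries the final *text* content
    this.pending = "";     // the shadow tree is reclaimed
  }
}
```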


I guess that the shadow text insertion point would participate in the 
tree in the same way that a pseudo-element does. (Yes, I realise this 
basically means magic.)


I believe this works well for the insertion of new text; I need to mull 
it over further to think about editing existing content (notably the 
case that happens in autocorrect, predictive, and I believe Kotoeri 
where you place a cursor mid-word and it will take into account what's 
before it but not after). But I think it's worth giving it some thought; 
particularly because I don't see how we can solve this problem properly 
otherwise.


This has the advantage that it is also a lot simpler to handle for authors.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] selection across editing host boundaries

2014-06-23 Thread Robin Berjon
 to the new public-editing-tf 
list which has much lower traffic.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Editing with native UI (was: [editing] CommandQuery Object and Event)

2014-06-23 Thread Robin Berjon

On 06/06/2014 18:39 , Ryosuke Niwa wrote:

On Jun 6, 2014, at 4:29 AM, Piotr Koszuliński
p.koszulin...@cksource.com wrote:

1. That we need any native UI related to cE at all. We don't. We
can display our own toolbars, with our own buttons, with our own
icons and implementing our own logic. So the easiest solution to
the problem with irrelevant native UI is to not display it at all.


You may not need native UI working at all in your app, but that
doesn't mean all other developers don't want it at all.  Furthermore,
enabled-ness of items in desktop browser's edit menu should reflect
the current state of the editor; otherwise, it would degrade the user
experience.

Furthermore, we shouldn't design our API only for existing platforms.
We need to make it so that new, completely different paradigm of UIs
and devices could be built using new API we design.

Another important use case for browsers to know the state of the
editor is for accessibility.  AT may, for example, want to enumerate
the list of commands available on the page for the user.


All of these are good points, but the fact remains that if a browser 
unilaterally decides to expose a new editing behaviour that I as an author 
don't know about, it could very easily break my script.


Also, if the browser includes a bold command by default and I don't 
support bolding and therefore cancel the event, the user who has been 
relying on the native UI is getting the worst possible experience: 
native controls that do nothing at all.


Conversely, if I support something that the native UI does not expose 
(say, superscripting) it makes for a weird UI in which some things are 
exposed and others aren't.


There is an option that:

  • Can be styled in the page according to author wishes.
  • Can interact with native controls.
  • Can integrate with accessibility.

It relies on using all the bits of new stuff in HTML: commands, contextMenu, 
and friends. I would *strongly* suggest that contentEditable=minimal 
should *only* have native UI based on things specified with this and not 
anything else by default. Native UI atop custom editing is really a 
recipe for breakage.


We can also make it smart and able to tap into higher-level intention 
events such as knowing the platform's localised shortcut for a given action.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] CommandEvent and contentEditable=minimal Explainer

2014-06-23 Thread Robin Berjon

On 17/06/2014 02:39 , Julie Parent wrote:

I certainly understand the concern that it would be impossible to
properly catch and cancel all events.  But I think that is somewhat the
point - it forces browser vendors to get these parts right.  All changes
to an editable dom must fire an event before the modifications are made,
and must be cancelable. Further, I'd say that if the number of events
you would need to preventDefault on grows larger than selection,
command, and maybe clipboard then that implies that we are not building
the right APIs.


Apart from other problems with building on top of contentEditable=true 
(notably that you keep getting the native browser UI, which is likely 
very wrong) I'd be really worried that we'd be painting ourselves into a 
corner with this approach.


If we realise a year or two from now that the design we picked isn't 
ideal and that we'd really need a new event type, the reliance on 
"prevent everything I don't know about" could severely constrain our 
options, and force us to shoehorn new functionality into existing events 
just to make sure we don't break existing content.


I don't think that the (limited) benefits are really worth it.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] Leading with ContentEditable=Minimal

2014-06-23 Thread Robin Berjon

On 17/06/2014 02:12 , Julie Parent wrote:

If Intention events are (temporarily) moved out of scope, I think this
leads us back to the question of what would contentEditable='minimal' do
exactly?  Enable collapsed selections and default handling of cursor
movement ... anything else?  If this is all it would do, then perhaps
what we really want is an explicit API to enable cursors?


The way I see it, that is indeed *all* it would do (and serve as a 
sanity flag so that browsers know how to handle this cleanly).


It *is* an explicit API to enable cursors. It has the advantage of 
reusing an existing name so that we don't have to worry about what 
happens when you specify both; and it's declarative because that's what 
you want for such a case (notably so that CSS can style what's editable 
cleanly).


We could rename it contentEditable=cursor if that's cleaner — the idea 
is the same (and I certainly won't argue bikeshedding :).


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: impact of new gTLDs

2014-06-12 Thread Robin Berjon

On 12/06/2014 12:02 , Anne van Kesteren wrote:

On Thu, Jun 12, 2014 at 11:14 AM, Akihiro Koike ko...@jprs.co.jp wrote:

We are a registry operator of a new generic top-level domain (gTLD).
Therefore we are interested in the impact of new gTLDs.
I'd like your thoughts on what kind of impact the appearance of new
gTLDs has on software implementation.


The continued growth of https://publicsuffix.org/ would be annoying.


Can you think of an alternative? Because it's looking like we're going 
to keep getting a lot of new TLDs.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] CommandEvent and contentEditable=minimal Explainer

2014-06-06 Thread Robin Berjon

On 28/05/2014 01:39 , Julie Parent wrote:

The discussion of which minimal default handling to include with
contenteditable=minimal makes me wonder if contentEditable=minimal
is necessary at all.  It quickly becomes a can of worms of *which*
default handling should be included, and it seems likely to not satisfy
every use case no matter which decisions are made.  However, minimal
is proposed as a building block because currently, disabling all default
functionality of contentEditable=true is difficult/impossible.  But
with CommandEvents, shouldn't contentEditable=minimal be equivalent to:

// Let editRegion be <div contentEditable id='editRegion'>

var editRegion = document.getElementById('editRegion');
editRegion.addEventListener('command', handleCommand);
function handleCommand(evt){
   evt.preventDefault();
}

No default actions would be taken, but selection events would still fire
and be handled.  There would be no ambiguity.  If implementing
contentEditable=minimal on top of CommandEvents could just be a few
lines of code, why complicate things by spec'ing another property?


I like the simplicity of this approach, but I have some concerns.

As Travis points out, this implies that all events have to be 
cancellable in this context. This can lead to problems (like we had with 
mutations); it can also have performance implications.


Another aspect that may be problematic is the case of UAs that provide a 
UI (like Safari Mobile) whenever they see contentEditable. I was hoping 
that contentEditable=minimal would serve as a sanity flag not to include 
that. In the above it's not possible.


I am not sure that we can have this work properly without specifying the 
default behaviour, which as we all know is a terrible mess. If you don't 
have reliable default behaviour, can you really rely on the browser to 
DTRT for the cases you don't wish to handle? Won't you end up having to 
use a library anyway?


Again, I'm all for the simplicity; I'm just worried about the snoring 
dragon.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: CommandEvent for user intentions

2014-06-06 Thread Robin Berjon

On 21/05/2014 20:51 , Ben Peters wrote:

I’m not sure an extra event type is necessary here though: why
not use beforeinput for the input events, and selectionchange for
selection events?  Ryosuke’s selection spec currently has a
placeholder for selectionchange, and seems like the right place
and timing to work this in?

My current thought is that Selection events should be used for
selection, and CommandEvent for things that would be in a toolbar or
context menu. I think the design should make it easy to create and
style toolbars based on the available commands and their state.


Right. I agree with the architecture you described at the beginning of 
the thread, but I was a bit worried about your usage of a select-all 
command event as an example.


There are many, many ways of affecting selection that vary across tools 
and locales, and representing all of them would IMHO be painful.


Do you ever need a select-all event? I would think that a selection 
change event that happens to give you a selection object containing 
everything might suffice? (Which sort of seems to be what you're saying 
here — hence checking where you stand.)


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-06-06 Thread Robin Berjon

On 05/06/2014 09:02 , Ryosuke Niwa wrote:

I agree visual selection of bidirectional text is a problem worth
solving but I don't think adding a generic multi-range selection
support to the degree Gecko does is the right solution.


I'd be interested to hear how you propose to solve it in another manner. 
Also note that that's not the only use case, there are other 
possibilities for disjoint selections, e.g. a table (naturally) or an 
editable surface with a non-editable island inside.



 For
starters, most author scripts completely ignore all but the first
range, and applying editing operations to a multi-range selection is
a nightmare.


I don't disagree that it can be hard to handle, but I'm not sure that 
that's indicative of anything. Most scripts only handle one selection 
because AFAIK only Gecko ever supported more than one.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-06-06 Thread Robin Berjon

On 02/06/2014 23:01 , Ben Peters wrote:

From: Robin Berjon [mailto:ro...@w3.org] I think that the latter is
better because it gives the library the computed range that matches
the operation, which as far as I can imagine is what you actually
want to check (e.g. check that the newRange does not contain
something unselectable, isn't outside a given boundary, etc.).

The former requires getting a lot of details right in the spec, and
those would become hard to handle at the script level. On some
platforms a triple click (or some key binding) can select the whole
line. This not only means that you need direction: "both" but also
that the script needs a notion of "line" that it has no access to
(unless the Selection API grants it). What makes up a word as a
step also varies a lot (e.g. I tend to get confused by what Office
apps think a word is as it doesn't match the platform's idea) and
there can be interesting interactions with language (e.g. is
passive-aggressive one word or two? What about co-operation?).

But maybe you have a use case for providing the information in that
way that I am not thinking of?


This seems like it's getting pretty close to the browser just doing
the selection. A browser would still have to figure out what the
selection should look like in the version you suggest. Instead, maybe
each site could decide what it thinks is a word (passive or
passive-aggressive). The line example is good, so maybe we should
have a 'line' level selection just like the 'word' level?


Yes, the way I see it the browser *always* figures out what the 
selection is; but the developer gets a chance to cancel (or modify) it.



Yes my understanding is that today you get both. I'm not arguing
against that as the events stand today, but when we talk about
'Intention Events' as an abstract type with certain properties like
commandName, I think you should only get one of those (paste or
command or beforeinput), and I'm suggesting that it should be paste
in this case.


Agreed, that's sanity.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Should minimal contentEditable default text input

2014-06-06 Thread Robin Berjon

On 30/05/2014 08:25 , Anne van Kesteren wrote:

On Fri, May 30, 2014 at 12:50 AM, Julie Parent jpar...@gmail.com wrote:

Or, rather than tying this concept to contentEditable, with all the
assumptions and complications that brings up, why not expose this building
block as a completely separate attribute?


Just a DOM API perhaps as you're not going to get far without
scripting anyway?


I prefer it to be possible to say [contentEditable] { outline: dotted 
1px; }. There's :read-write but it also matches input/textarea.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Should minimal contentEditable default text input

2014-06-06 Thread Robin Berjon

On 30/05/2014 00:50 , Julie Parent wrote:

Without default text input, the current proposal for
contentEditable=minimal is essentially just enabling cursors (drawing
them, dispatching events, performing default actions).  Rather than
calling the mode minimal, which is ill-defined, why not explicitly
call it what it is: cursor-only?


Sure, but (assuming we go this route) I'm not too worried about the name 
at the moment. We can define what it does, then pick a good name for it. 
That's the easiest bit to change.



 Or, have contentEditable take a list
of features to enable: contentEditable="enable-cursors
enable-CommandEvents".


I'd rather we agreed on the pieces we're going to have before we see if 
this can make sense.



Or, rather than tying this concept to contentEditable, with all the
assumptions and complications that brings up, why not expose this
building block as a completely separate attribute?


We can, but this isn't necessarily simpler. Say we add an editable 
attribute: we then have to define what happens when you have <div 
contentEditable editable> (and various other niceties). It's not the end 
of the world, but it's a bit of extra complexity. Reusing the attribute 
name and giving it a value that triggers new behaviour doesn't bring in 
the complications, but it does give us a relatively clean syntax entry 
point.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Composition, IME, etc. (was: contentEditable=minimal)

2014-06-06 Thread Robin Berjon

On 05/06/2014 09:09 , Ryosuke Niwa wrote:

On May 23, 2014, at 1:37 PM, Robin Berjon ro...@w3.org wrote:

Semantically, autocorrect and compositing really are the same
thing.


They are not.  Word substitutions and input method compositions are
semantically different operations.


Ok, I'll accept that depending on the level of abstraction at which 
you're looking at the problem they may or may not be the same thing.


The core of the problem is this: there is a wide array of situations in 
which some form of indirect text input (deliberately going for a new 
term with no baggage) takes place. This includes (but is not limited to):


  • dead key composition (Alt-N, N → ñ)
  • assumed international composition (',e → é; if you just want an 
apostrophe you have to compose ',space)

  • inline composition for pretty much everything
  • popup composition
  • autocorrect
  • speed-typing input (T9, swiping inputs)

In order to handle them you have two basic options:

  a) Let the browser handle them for you (possibly calling up some 
platform functionality). This works as closely to user expectations as a 
Web app can hope to get, but how do you render it? If it touches your DOM 
then you lose the indirection you need for sensible editing; if it 
doesn't, I don't know how you show it.


  b) Provide the app with enough information to do the right thing. 
This gives you the indirection, but doing the right thing can be 
pretty hard.


I am still leaning towards (b) being the approach to follow, but I'll 
admit that that's mostly because I can't see how to make (a) actually 
work. If (b) is the way, then we need to make sure that it's not so hard 
that everyone gets it wrong as soon as the input is anything other than 
basic English.
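To make option (b) concrete, here is a toy model of the simplest kind of indirect text input listed above, dead-key composition, written as pure state-machine logic. The table and all names are illustrative only, not a proposed API:

```javascript
// Toy dead-key composer. A dead key (e.g. "˜") produces no text by
// itself; the next keystroke either composes ("˜" + "n" -> "ñ"),
// commits the dead key literally ("˜" + space -> "˜"), or falls back
// to emitting both characters. Table and names are hypothetical.
const DEAD_KEYS = {
  "˜": { n: "ñ", a: "ã", o: "õ" },
  "´": { e: "é", a: "á" },
};

function createComposer() {
  let pending = null; // dead key awaiting its base character
  return {
    // Feed one keystroke; returns committed text ("" while composing).
    input(key) {
      if (pending !== null) {
        const composed = (DEAD_KEYS[pending] || {})[key];
        const deadChar = pending;
        pending = null;
        if (composed) return composed;    // e.g. "˜" + "n" -> "ñ"
        if (key === " ") return deadChar; // "˜" + space -> literal "˜"
        return deadChar + key;            // no match: emit both
      }
      if (DEAD_KEYS[key]) { pending = key; return ""; } // start composing
      return key;                                       // plain character
    },
    composing() { return pending !== null; },
  };
}
```

Even this trivial case shows why the app needs an ongoing "pending" state to render, which is exactly what composition events have to surface.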



Note that if there is a degree of refinement such that we may want
to make it possible for authors to style compositing-for-characters
and compositing-for-autocorrect, then that ought to go into the
styling system.


In older versions of Windows, for example, the browser itself can't
figure out what kind of style is used by IME.  Korean and Japanese
IME on Windows, for example, use bolded lines and dotted lines for
opposite purposes.  And we get bug reports saying that WebKit's
rendering for Korean IME is incorrect because we decided to follow
Japanese IME's convention.


Right. In this case we need to distinguish between the browser not 
knowing and the Web app not knowing.


If the browser doesn't know because the platform can't tell the 
difference between Korean and Japanese (a problem with which Unicode 
doesn't help) then there really isn't much that we can do to help the 
Web app.


However if the browser knows, it can provide the app with information. I 
don't have enough expertise to know how much information it needs to 
convey — if it's mostly style that can be done (it might be unwieldy to 
handle but we can look at it).



We /could/ consider adding a field to compositing events that would
capture some form of ontology of input systems. But I think that's
sort of far-fetched and we can get by with the above. (And yes, I'm
using ontology on purpose. It wouldn't look good :)


In my opinion, it's a requirement that input methods work and look
native on editors that use this new API.  IME is not a nice-to-have
feature.  It's a feature required for billions of people to type any
text.


That is *exactly* my point. At this point I believe that if we just 
added something like a compositionType = deadkey | kr | jp | t9 | 
autocorrect | ... field and leave it at that we're not helping anyone. 
The script will need to know not just how to render all of these but how 
they are supposed to look on each platform. That's why I am arguing for 
primitives that enable the script to do the right thing *without* having 
to know everything about all the possible IMEs.


Having said that, I was initially hoping that a mixture of composition 
events plus IME API would cover a lot of ground already. Thinking about 
it some more, it's not enough.


Can you help me come up with a list of aspects that need to be captured 
in order to enable the app to render the right UI? Or do you have 
another proposal?


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] CommandQuery Object and Event

2014-06-06 Thread Robin Berjon

On 06/06/2014 13:29 , Piotr Koszuliński wrote:

1. That we need any native UI related to cE at all.
We don't. We can display our own toolbars, with our own buttons, with
our own icons and implementing our own logic. So the easiest solution to
the problem with irrelevant native UI is to not display it at all.

2. That we need any native architecture for commands.
We don't. What is a command? It's a name, a function, and a state+value
refreshed on selection change. A command repository can be implemented
in JavaScript in a few lines of code. CKEditor has one (and I guess that
all advanced editors have), because it is a necessary component over which
we must have full control: what it does, when it does it, how a command is
executed, what arguments it accepts, which commands are available for a
specific editor instance, etc.


FWIW I completely agree with Piotr. We need to be thinking about 
primitives that are as low-level as possible. We don't need to have any 
built-in support for things like bolding. If it somehow turns out that 
farther down the line there is a common set of commands that might 
somehow benefit from getting the native treatment we should cross that 
bridge then, but the v1 of this project should IMHO really, really not 
do more than what's needed for a script to cleanly implement an 
arbitrary text editor.
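To illustrate Piotr's point that no native architecture is needed, a command repository really can be a few lines of script. This is a sketch of the kind of thing an editor library might keep internally; all names and shapes are hypothetical:

```javascript
// Minimal command repository: each command has a name, an execute
// function, and a state refreshed on selection change (e.g. to light
// up toolbar buttons). Purely illustrative, not a proposed API.
function createCommandRepository() {
  const commands = new Map();
  return {
    add(name, { exec, queryState }) {
      commands.set(name, { exec, queryState: queryState || (() => false) });
    },
    exec(name, ...args) {
      const cmd = commands.get(name);
      if (!cmd) throw new Error("Unknown command: " + name);
      return cmd.exec(...args);
    },
    // Called on selection change; returns {name: state} for toolbar refresh.
    refresh(selection) {
      const states = {};
      for (const [name, cmd] of commands) {
        states[name] = cmd.queryState(selection);
      }
      return states;
    },
    names() { return [...commands.keys()]; },
  };
}
```

The editor decides what a command is, when it runs, and which commands a given instance exposes, which is exactly the control Piotr describes.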


--
Robin Berjon - http://berjon.com/ - @robinberjon



Minutes Editing discussion 2014-06-06

2014-06-06 Thread Robin Berjon
: we could just have a commandevent with insert-text as
   its type
   ... we could just use that on the event side as well

Splitting up work

   BenjamP: we definitely have some research, look at frameworks

   Robin: if we can get people involved all the better
   ... we can have a mailing list

   adrianba: that was definitely helpful with the media TF where
   we were trying to appeal to a specific audience

   adrianba +1

   PiotrekKoszulinski +1

   scribe ACTION: Robin to create an Editing TF [recorded in
   [13]http://www.w3.org/2014/06/06-webapps-minutes.html#action01]

   trackbot Created ACTION-731 - Create an editing tf [on Robin
   Berjon - due 2014-06-13].

   BenjamP: we need a normative spec
   ... as well as updates to the explainer
   ... without cE, we need Command Events

   Robin: we need to define the binding with HTML too

   BenjamP: we need an editor, and we need to figure out when we
   make them official

   Robin: is editing already in the WebApps charter

   scribe ACTION: Robin to figure out how we handle the
   chartering business [recorded in
   [14]http://www.w3.org/2014/06/06-webapps-minutes.html#action02]

   trackbot Created ACTION-732 - Figure out how we handle the
   chartering business [on Robin Berjon - due 2014-06-13].

   BenjamP: update the explainer with this information, then write
   some specs
   ... and file bugs, improve work
   ... the current discussion is hard to track

   Robin: we can reuse the Bz for the Editing API or the GH issues

   BenjamP: GitHub it is!

   Robin: BenjamP you're willing to edit?

   BenjamP: yes

   Robin: I'm happy to edit too
   ... we also need the Selection API

   BenjamP: yes, rniwa said he didn't have much time not long ago

   jparent_: my impression was that after WWDC he'd have more time

   scribe ACTION: Robin to ask rniwa how he wants to handle
   Selection [recorded in
   [15]http://www.w3.org/2014/06/06-webapps-minutes.html#action03]

   trackbot Created ACTION-733 - Ask rniwa how he wants to
   handle selection [on Robin Berjon - due 2014-06-13].

   Robin: I would encourage people to start using the tracker

   BenjamP: other call?

   Robin: we could say that Fri 8am PST is always the time, but we
   call it on an ad hoc basis

   RESOLUTION: Fri 8am PST is always the time, but we call it on
   an ad hoc basis

Summary of Action Items

   [NEW] ACTION: Robin to ask rniwa how he wants to handle
   Selection [recorded in
   [16]http://www.w3.org/2014/06/06-webapps-minutes.html#action03]
   [NEW] ACTION: Robin to create an Editing TF [recorded in
   [17]http://www.w3.org/2014/06/06-webapps-minutes.html#action01]
   [NEW] ACTION: Robin to figure out how we handle the chartering
   business [recorded in
   [18]http://www.w3.org/2014/06/06-webapps-minutes.html#action02]

   [End of minutes]
 __


Minutes formatted by David Booth's [19]scribe.perl version
1.138 ([20]CVS log)
$Date: 2014-06-06 15:56:06 $
 __

 [19] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
 [20] http://dev.w3.org/cvsweb/2002/scribe/


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: HTML imports: new XSS hole?

2014-06-03 Thread Robin Berjon

On 02/06/2014 15:08 , Boris Zbarsky wrote:

On 6/2/14, 9:02 AM, James M Snell wrote:

I suppose that if you
needed the ability to sandbox them further, just wrap them inside a
sandboxed <iframe>.


The worry here is sites that currently have HTML filters for
user-provided content that don't know about <link> being able to run
scripts.  Clearly once a site knows about this they can adopt various
mitigation strategies.  The question is whether we're creating XSS
vulnerabilities in sites that are currently not vulnerable by adding
this functionality.

P.S. A correctly written whitelist filter will filter these things out.
  Are we confident this is standard practice now?


I haven't bumped into a blacklist filter in a *long* while. I suspect 
that any that might exist will be hand-rolled and not part of any 
platform. The odds are pretty strong that they're already unsafe if not 
wide open.


So I would say there's a risk, but not a huge one. That said, I still 
prefer Simon's approach.


PS: I've been wondering if adding an HTML sanitiser to the platform 
might make sense.
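For illustration only, the reason whitelist filters age better than blacklists can be sketched in a few lines: a <link> import is dropped automatically because it was never allowed, with no rule ever naming it. This toy regex version is emphatically not a real sanitizer (production filtering needs a proper HTML parser, and kept tags here retain their attributes unvetted):

```javascript
// Toy whitelist (allowlist) tag filter. Any tag not in the allowlist
// is stripped, so new script-capable tags like <link rel="import">
// are rejected by default rather than slipping through a blacklist.
// NOT production-safe: regexes cannot parse HTML reliably, and kept
// tags retain their attributes, which a real sanitizer must also vet.
const ALLOWED_TAGS = new Set(["b", "i", "em", "strong", "p", "a"]);

function filterTags(html) {
  return html.replace(/<\/?([a-zA-Z][a-zA-Z0-9-]*)[^>]*>/g, (tag, name) =>
    ALLOWED_TAGS.has(name.toLowerCase()) ? tag : ""
  );
}
```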


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-27 Thread Robin Berjon

On 27/05/2014 01:47 , Ben Peters wrote:

-Original Message- From: Robin Berjon
On 26/05/2014 05:43 , Norbert Lindenberg wrote:

Were any speakers of bidirectional languages in the room when
this was discussed?


I don't know what languages the others speak. That said, my
recollection was that this was presented along the lines of we've
had regular requests to support selecting text in geometric rather
than logical orders.


I have also heard these requests from the bi-directional experts here
at Microsoft. A single, unbroken selection is what we're told users
want, and multi-selection makes this possible.


Thinking about this a little bit more: I don't imagine that the 
Selection API should prescribe the UI that browsers choose to support in 
order to select bidi text, on the contrary they should be allowed to 
innovate, experiment, follow various platform conventions, etc. But if 
we don't support multi-range selection, then only one model is possible 
which precludes unbroken selections.


I think that this strongly pushes in the direction of supporting 
multiple ranges.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-27 Thread Robin Berjon

Hi Ben,

On 27/05/2014 02:07 , Ben Peters wrote:

From: Robin Berjon [mailto:ro...@w3.org] Even without accounting
for touch screens, you really want the platform to be the thing
that knows what Ctrl-Shift-Left means so you don't have to support
it yourself (and get it wrong often).


Agree. One way to do this would be BeforeSelectionChange having a
commandType indicating select forward and select by word.


I think we agree at the high level but might disagree over smaller
details. You seem to want something that would roughly resemble the
following:

BeforeSelectionChange
{
  direction: "forward"
, step: "word"
}

whereas I would see something capturing information more along those lines:

BeforeSelectionChange
{
  oldRange:  [startNode, startOffset, endNode, endOffset]
, newRange:  [startNode, startOffset, endNode, endOffset]
}

I think that the latter is better because it gives the library the
computed range that matches the operation, which as far as I can imagine
is what you actually want to check (e.g. check that the newRange does
not contain something unselectable, isn't outside a given boundary, etc.).

The former requires getting a lot of details right in the spec, and
those would become hard to handle at the script level. On some platforms
a triple click (or some key binding) can select the whole line. This not
only means that you need direction: "both" but also that the script
needs a notion of "line" that it has no access to (unless the Selection
API grants it). What makes up a word as a step also varies a lot (e.g.
I tend to get confused by what Office apps think a word is as it doesn't
match the platform's idea) and there can be interesting interactions
with language (e.g. is passive-aggressive one word or two? What about
co-operation?).

But maybe you have a use case for providing the information in that way
that I am not thinking of?
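To make the contrast concrete: with the computed-range shape, the script's check reduces to a simple containment test and never needs to know what a "word" or "line" is. A sketch with ranges flattened to plain character offsets so the logic is runnable (real ranges would be node/offset pairs; all names hypothetical):

```javascript
// Sketch of validating a computed newRange, as in the
// BeforeSelectionChange shape discussed above. The handler receives
// the browser-computed range and can veto it; it does not need to
// reimplement word/line/triple-click semantics.
function makeSelectionGuard(boundary) {
  // boundary: {start, end} region the selection must stay inside
  return function onBeforeSelectionChange(evt) {
    const r = evt.newRange;
    if (r.start < boundary.start || r.end > boundary.end) {
      evt.canceled = true; // stand-in for evt.preventDefault()
    }
  };
}
```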


Not all of those are separate, though. Voice input is just an input
(or beforeinput) that's more than one character long. There's
nothing wrong with that. So is pasting (though you need cleaning
up). Composition you need to handle, but I would really, really
hope that the platform gives you a delete event with a range that
matches what it is expected to delete rather than have you support
all the modifiers (which you'll get wrong for the user as they are
platform specific). As seen in the code gist I posted, given such a
delete event the scripting is pretty simple.


I agree, except that I don't know why we want paste to fire two
'intention' events (paste and input). Seems like we should make it
clear that the intention is insert text (type, voice, whatever),
remove text (delete, including what text to remove), or paste (so you
can clean it up).


I don't think we want to fire both paste and input, but if my reading is 
correct that is the case today (or expected to be — this isn't exactly 
an area of high interop).


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-27 Thread Robin Berjon

On 25/05/2014 20:40 , Piotr Koszuliński wrote:

Making some things unselectable might also be useful. IE has
unselectable, there's also -moz-user-select and friends. But this is
small fries for later I'd reckon.

There are also nested non-editable islands. We built very important
feature based on them - http://ckeditor.com/demo#widgets. Currently we
block their selection by preventing mousedown and we handle left/right
arrows. But cancelling selectionchange would allow us to control more
cases in a cleaner way.


I'd be curious to know what your take is on the best way to expose this. 
IE has an unselectable attribute, whereas Gecko and WebKit have a CSS 
property. In this thread we've been talking about using cancellable 
events for this (or if not cancellable, ones in which the selection can 
be modified on the fly).


On instinct I would tend to think that this not a great usage of CSS, 
it's much more tied to behaviour at a lower level. But it sort of is one 
of those borderline things (as many of the properties that initially 
came from the CSS UI module).


The scriptable option that we're considering is good in that it enables 
arbitrary cases, but it could be interesting to support a number of 
cases out of the box with a simpler (for developers) approach.


Let's imagine the following DOM:

<div contenteditable="minimal">
  <p>blah blah blah</p>
  <div class="widget" unselectable>...</div>
  <p>blah blah blah</p>
</div>

If the cursor is at the beginning of the first <p>, you hold Shift, and 
click at the end of the second <p>, we could imagine that you'd get a 
Selection with two Ranges (one for each <p>) and not containing the 
unselectable widget. I *think* that's the most desirable default 
behaviour, and it's also one that can be pretty painful to control 
through script. In that sense an unselectable attribute would make 
sense. (I reckon that setting the widget to be cE=false would have the 
same effect, but it is nevertheless an orthogonal property.)


WDYT?
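The two-range behaviour described above can be modelled with flat segments instead of a DOM tree, purely for illustration: a selection drawn across an unselectable island splits into one range per selectable run.

```javascript
// Sketch: given content segments in document order, compute the
// ranges a selection spanning all of them would produce, skipping
// unselectable islands. Segments are plain objects, not real nodes.
function selectionRanges(segments) {
  // segments: [{ id, selectable }]
  const ranges = [];
  let current = null;
  for (const seg of segments) {
    if (seg.selectable) {
      if (!current) { current = []; ranges.push(current); }
      current.push(seg.id);
    } else {
      current = null; // island breaks the range
    }
  }
  return ranges;
}
```

This is the behaviour that is painful to reconstruct in script today, and why an unselectable attribute (or cE=false) handled natively is attractive.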

--
Robin Berjon - http://berjon.com/ - @robinberjon



Composition events (was: contentEditable=minimal)

2014-05-27 Thread Robin Berjon

On 27/05/2014 01:52 , Ben Peters wrote:

Composition Events for IMEs, CommandEvents with type insertText for
all text input (including after Composition Events for IMEs)


I think we should be careful not to mix up composition events and IMEs. 
They may happen together, but IMEs have their own specific issues (e.g. 
popping up a window) that inline composition does not necessarily have. 
Also, IMEs can happen without composition: you could arguably pop up a 
handwriting IME that would just insert text on commit without any 
composition taking place.


To stick to what I think is the simplest case, diacritic composition, 
here is what I believe the current D3E specification says (not that it's 
really clear on the matter, but I'm assuming best case scenario). For ñ 
you basically get:


  compositionstart ˜
  compositionend ñ

From what you're saying above you'd like to replace that with:

  compositionstart ˜
  input ñ

I think we can make that work; it drops one event and moves the code 
around. If you look at the Twitter Box code:



https://gist.github.com/darobin/8a128f05106d0e02717b#file-twitter-html-L102

It basically would need to move what's in the compositionend handler 
inside the beforeinput handler, with a check to see if compoRange exists 
(or the event has isComposing=true).


(I'm assuming that compositionupdate stays as is since we need to update 
the rendering with it.)


Is that the sort of flow you had in mind?

PS: note I just noticed that the code in the Gist was not the latest I 
had and had a lot of TODO bits — I've updated it to the latest.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-27 Thread Robin Berjon

On 27/05/2014 01:52 , Ben Peters wrote:

From: Robin Berjon [mailto:ro...@w3.org] On 23/05/2014 01:23 , Ben
Peters wrote:

As I said I am unsure that the way in which composition events
are described in DOM 3 Events is perfect, but that's only
because I haven't used them in anger and they aren't supported
much.


My thought is that we can use CommandEvent with
type=insertText. This would be the corollary to
execComamnd(insertText), and the data would be the ñ that is
about to be inserted.


But if you only get one event you can't render the composition as
it is carrying out.


I believe Composition Events are very important for IME input, but we
should fire CommandEvent with Insert text for all text input,
including IME. Are you saying we should use Composition Events even
for non-IME input?


I am not using an IME, and yet I could not type in French on my keyboard 
without composition.


Obviously, if I switch to Kotoeri input, I'll get composition *and* an 
IME popup. But for regular French input (in a US keyboard) I need:


  é - Alt-E, E
  è - Alt-`, E
  à - Alt-`, A
  ô - Alt-I, O
  ü - Alt-U, U
  ñ - Alt-˜, N (for the occasional Spanish)
  (and a bunch more)

Some older apps (you pretty much can't find them anymore) used to not 
display the composition as it was ongoing and only show the text after 
composition had terminated. That was survivable but annoying, and it 
only worked because composition in Latin-script languages is pretty 
trivial (except perhaps for all you Livonian speakers out there!), but I 
don't think it would be viable for more complex compositions. And even 
in simple cases it would confuse users to be typing characters with no 
rendering feedback.


Without composition events you can't render the ongoing composition. See 
what's going on at:



https://gist.github.com/darobin/8a128f05106d0e02717b#file-twitter-html-L81

That is basically inserting text in a range that's decorated to be 
underlined to show composition in progress. Composition updates 
*replace* the text in the range. And at the end the range is removed and 
text is inserted.


The above is for Mac, but I have distant memories of using something 
similar on Windows called the "US International Keyboard" where you 
could have apostrophes compose as accents, etc. I don't recall how it 
was rendered though.


--
Robin Berjon - http://berjon.com/ - @robinberjon



contentEditable and forms (was: contentEditable=minimal)

2014-05-27 Thread Robin Berjon

On 27/05/2014 09:19 , Piotr Koszuliński wrote:

Yes, it should be possible to disable whichever feature you don't need.
In some cases you don't need lists (because e.g. you're editing text
that will become the content of a paragraph). And in some cases you don't
want bold/italic because your use case requires only structured HTML. So
being able to handle such commands is one thing. But first of all
there should be no assumption that a user needs these buttons, because a
browser just doesn't know about that. If I think that users need a toolbar,
I can render a custom one.


Much agreed. The browser should not show any markup/styling affordance 
for cE=minimal.



There's one more assumption that makes editing on mobile devices
(especially low-res devices) very hard: that if a user focuses an
editable, then he/she wants to type, so the native keyboard should pop up.
Very often it's true, but in some cases the user may want to select some
text and, using a toolbar, apply styles or lists, etc. And when the keyboard
is visible there's very little space to do that. If there were an API to
control whether the keyboard is visible, we could achieve much better UX.


There are quite a few things from forms that I think could usefully 
become available in an editing context. We could benefit from having the 
inputmode attribute be allowed on any editable piece of text. For the 
specific use case you cite, an additional keyword of none might make 
sense too.


It possibly wouldn't hurt to have the placeholder attribute be available 
on all editable content, too. I'm less sure about the validation 
attributes (except perhaps required) but why not.


Obviously validation attributes only make sense if the editable content 
can contribute to forms. But it would make a lot of sense that it could. 
Today you have to resort to ugly hacks in which you somehow copy over 
the edited content into a textarea. That's pretty daft: in most use 
cases you're going to be submitting the content.
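The hack in question looks roughly like the following sketch. The element names (`editor`, `hiddenField`) are invented for illustration; the point is simply that the edited HTML has to be copied into a real form control before submission:

```javascript
// Sketch of the workaround described above: just before the form
// is submitted, mirror the contentEditable region's HTML into a
// hidden form control so that it reaches the server.
function mirrorOnSubmit(form, editor, hiddenField) {
  form.addEventListener("submit", () => {
    hiddenField.value = editor.innerHTML;
  });
}
```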


There are several ways in which we could handle this. One is to have any 
element with cE=minimal contribute to the form data set (when inside a 
form, or possibly when using the form attribute if someone remembers 
what the use case for that thing was). That's interesting, but I get a 
sense that it conflates two features. Another approach is to add a 
submittable attribute that can make the innerHTML of any element 
contribute to the form data set.


Thoughts?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-26 Thread Robin Berjon

On 26/05/2014 05:43 , Norbert Lindenberg wrote:

On May 23, 2014, at 5:19 , Robin Berjon ro...@w3.org wrote:

Which brings me to think: when we discussed this at the Summit,
there was some agreement (between all four of us :) that it was a
good idea to support multi-range selections. These are useful not
just for tables, but also for bidi. The reason for the latter is
that when selecting a line with multiple embedded directions (using
a mouse), you want to have the visual selection be an unbroken line
(as opposed to the crazy jumping around you get if you follow
logical order).


Were any speakers of bidirectional languages in the room when this
was discussed?


I don't know what languages the others speak. That said, my recollection 
was that this was presented along the lines of we've had regular 
requests to support selecting text in geometric rather than logical orders.


If that turns out not to be the case and we can stick to single-range 
selections, it would certainly make the Selection API simpler.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Should minimal contentEditable default text input

2014-05-26 Thread Robin Berjon

On 26/05/2014 10:25 , Anne van Kesteren wrote:

On Mon, May 26, 2014 at 4:17 AM, Yoshifumi Inoue yo...@chromium.org wrote:

Range.style is a cool idea! I assume Range.detach() removes styles added 
by Range.style.


detach() is a no-op. http://dom.spec.whatwg.org/#dom-range-detach


You're jumping in without context. In talking about Range.style you need 
a way of clearing that style once attached; Range.detach() was suggested 
as a possible candidate there.



To implement text composition with this, I would like to have wave
underline, dotted underline, thick underline etc.


Range.prototype.style seems complex in the context of overlapping
ranges and such. Suddenly you're no longer applying CSS to a tree.


So, since Gecko supports that, do you know how it's done?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-23 Thread Robin Berjon

On 23/05/2014 01:23 , Ben Peters wrote:

As I said I am unsure that the way in which composition events are
described in DOM 3 Events is perfect, but that's only because I
haven't used them in anger and they aren't supported much.


My thought is that we can use CommandEvent with type=insertText.
This would be the corollary to execCommand(insertText), and the
data would be the ñ that is about to be inserted.


But if you only get one event you can't render the composition as it is 
carrying out.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Should minimal contentEditable default text input (was: contentEditable=minimal)

2014-05-23 Thread Robin Berjon
Starting a new thread for this specific topic as I think it's one of the 
important early points of contention.


On 22/05/2014 12:59 , Piotr Koszuliński wrote:

3. Typing characters. It works in textarea and I think it should work
out of the box in cE=minimal. Otherwise, cE=minimal will be useless for
simple cases (mentioned Twitter), because you'll always need a pretty
complex library to handle text input. Additionally, I don't remember any
problem with typing characters, so this seems to work well already in
cE=true. There's also the IME which scares me, but I don't have any
experience with it.



I hear your point about essentially making simple things simple, but I 
really want to resist supporting as much built-in behaviour as possible. 
Of course, it's a trade-off, but I think we should strive for the 
smallest possible amount of behaviour. Note that 1) the complexity of 
simple things by and large depends on the quality of the primitives we 
provide and 2) on the interoperability of what is supported. And the 
simpler the functionality, the more easily interoperable.


Inserting text as the default behaviour for text input events has 
implications:


Things get very weird if you support it when you have a caret (i.e. a 
collapsed selection) but not when you have a selection. And a selection 
can have arbitrary endpoints around and into an element. This means that 
typing with an active selection can do more than add some text to a 
node: it can delete or modify elements. Sure enough this can be 
described interoperably, but it does bring us back to issues we dislike.


It also means that the browser needs to handle composition and its 
rendering, which while it is ongoing may produce relatively weird states 
in the DOM.


I agree that the Twitter box is a good very basic example. It basically 
needs:


  1) Words that start with @ or # to be a specific colour.
  2) Links to be a different colour, and to have their characters 
counted as the shortened link rather than the full thing.

  3) Newlines must be taken into account.
  4) Characters beyond 140 are highlighted in red.

I'm ignoring complications with files and the such. In fact, for the 
purpose of our use case it is only useful IMHO to look at how best to 
handle (3) and (4).


I tried to bang together some code that would do the Twitter box, adding 
a few features along the way and documenting assumptions and issues. It 
looks like that (untested, off the top of my head):


https://gist.github.com/darobin/8a128f05106d0e02717b#file-twitter-html

It looks a bit scary, but if you remove the part that handles excess 
text and the wordy comments, you just get:



https://gist.github.com/darobin/8a128f05106d0e02717b#file-like-textarea-html

Granted, that's still a fair bit of boilerplate. But I think we have to 
take into account the following:


  • This is meant to be low-level. I'm happy to make things easier but 
only so long as we don't introduce magic.


  • We can introduce some convenience methods for the non-obvious 
parts of the boilerplate. Just having Selection.replace(node|text...) or 
something like new Range(sNode, sOffset, eNode, eOffset) would make 
things a lot nicer.
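The boilerplate being complained about is roughly this: building a Range today takes a `createRange()` call plus two boundary-setting calls. A minimal helper sketch (the document is passed in explicitly only to keep the sketch self-contained):

```javascript
// Sketch of the convenience suggested above: collapse the current
// three-call Range construction into one expression.
function makeRange(doc, sNode, sOffset, eNode, eOffset) {
  const range = doc.createRange();
  range.setStart(sNode, sOffset);
  range.setEnd(eNode, eOffset);
  return range;
}
```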


It's likely I've forgotten stuff though (notably paste filtering, which 
I'm unsure how to best handle here — see comments). Please review the 
code so that we have an idea for a baseline of what we'd like to get at 
the end.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-23 Thread Robin Berjon

On 23/05/2014 12:28 , Jonas Sicking wrote:

And on mobile autocorrect of misspelled words is common, though that
can probably be handled by moving the selection to the misspelled word
and then writing the fixed word.


Autocorrect should be handled like composition. Composition is pretty 
much what happens whenever you type some stuff and some other stuff 
comes out. (Technical definition.)



Though one interesting edge case there is what happens if the page
adjusts the selection as it's being moved to the word autocorrect
wants to fix?


I'm sorry, I don't understand the case you're thinking of?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-23 Thread Robin Berjon
 you'll get 
wrong for the user as they are platform specific). As seen in the code 
gist I posted, given such a delete event the scripting is pretty simple.



Some of these things pages will have to deal with no matter what. As
has been pointed out, if the user selects across multiple elements and
presses delete or 'a', then almost certainly the page will have to run
application logic. Likewise if the cursor is placed right after an
element but before a bunch of text, and the user then presses
backspace.

However it seems like if authors generally are ok with the plaintext
editing that input type=text and textarea has, and then only have
them worry about things like inserting elements to do styling or
inserting document structure (lists, headers, paragraphs, etc), then
that means less work for the author, and greater likelihood that text
editing works as the user expects.


I'm sorry, but I'm not sure that the above makes sense :)

If people want plain text and *nothing* else, I have a great solution 
for them: textarea.


If you have a situation that involves markup however, you need to handle 
it. And I really, really don't think that we will be doing anyone a 
service if we end up with a solution in which the browser will handle 
text for you UNLESS you have an element-spanning selection OR MAYBE 
backspace right after an element AND PERHAPS delete right before one, 
etc. in which case you have to run some application logic.


Maybe I'm missing something and there's an easy way to disambiguate 
here, but it seems like the sort of path down which madness lies. Do you 
have some code to illustrate how it would work?



I suspect that the right thing to do here is some experimentation. It
would be very interesting to do a prototype implementation of
contenteditable=minimal which never did any DOM mutations, not even
for IME or text editing. Then see how much code needs to handle all of
the plaintext editing features above.


I think we can get a feel for that without having to polyfill cEmin 
first. That would be neat of course, but given the poor support of even 
very basic things like selections it's not a small project.



Another thing that we should look at is the ability to style ranges
rather than just elements. In Gecko we have an internal feature that
allows us to style DOMRanges. This allows us to render a red dotted
line under misspelled words and black line under composition
characters. And do that without worrying about managing a lot of extra
elements in the DOM.


Yeah, I had the same idea, used it in my example. I think it makes a lot 
of sense.



Right now pages are forced to sprinkle elements all over the DOM in
order to do the same thing, which then makes editing that DOM more
complex. It would be awesome to find ways to enable styling ranges
which would allow them to keep a simpler DOM.


It would actually be pretty awesome.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [clipboard events] click-to-copy support could be hasFeature discoverable?

2014-05-23 Thread Robin Berjon

On 23/05/2014 14:33 , James Greene wrote:

I'm all in favor of a new API as well.


Me too, as discussed in 
http://lists.w3.org/Archives/Public/public-webapps/2014JanMar/0550.html.


I wouldn't put this on window though; why not put it on Selection?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-22 Thread Robin Berjon

On 22/05/2014 00:43 , Julie Parent wrote:

I question whether contentEditable=minimal should actually handle text
input.  If the idea is to provide the base platform on which a developer
can build the editing product of their dreams, isn't text insertion just
another behavior they could potentially need to disable?


Sorry if I was unclear. When I said handling text input, I did not mean 
text insertion. My point is that there's a spectrum of behaviour and we 
want to provide *some* handling for developers (otherwise it's really 
just too hard).


Let us take the relatively simple issue with typing ñ on a keyboard 
setup that does not natively support the character. On my keyboard, that 
is done by first typing Alt-N, then N.


At the more complete end of the spectrum, what we have today, without 
the developer doing anything, when I type Alt-N the DOM is modified to 
include a U+02DC SMALL TILDE (note: *not* U+0303 COMBINING TILDE) and 
that character is underlined in the rendering to let me know that it is 
awaiting a character to combine with. Interestingly, that information is 
not reflected in the DOM — I don't even know how you can handle it. In 
fact, editors that try to take over too much from the platform (in this 
case, Substance for instance) completely fail to allow this sort of text 
entry.


At completely the other end of the spectrum (more or where developers 
find themselves today when they override as much as they can, out in the 
cold), all you get are two entirely independent keyboard events: one N 
with altKey set to true, and another N with altKey set to false.


Unless you know all platform conventions (plus the user's keyboard 
layout) or you manage to enforce your own, which isn't friendly to 
users, you can't do anything useful with that.


What I meant by having the browser handle text input, is that it needs 
to know the platform conventions so as to convey user intent to to the 
application correctly. When I hit Alt-N then N, the UA should relay 
something that looks like the following events (I'm deliberately not 
including keyup/down/etc. for simplicity):


compositionstart \u0303 (combining tilde)
compositionupdate ñ
compositionend ñ

(We might be able to do without the compositionupdate in this case, I'm 
including it because for more elaborate compositions it's needed.)


I believe that this provides the appropriate level of abstraction to 
address the use case. Without the composition events (e.g. if you only 
send a text input event) the developer can't show the composition in 
progress to the user (which for complex compositions like Kotoeri is a 
non-starter); and with these events developers don't need to know about 
the platform's conventions for composition. Orthogonality is where it 
should be.
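Listening for that event sequence might look like the sketch below. It assumes only an element-like `host` with addEventListener; the `render` callback stands in for however the application chooses to display the in-progress (underlined) composition:

```javascript
// Sketch: tracking composition events on an editable host.
// The application itself renders the pending composition text
// (e.g. via a decorated range); the UA only reports intent.
function trackComposition(host, render) {
  let pending = null; // text of the in-progress composition, or null
  host.addEventListener("compositionstart", (e) => {
    pending = e.data;
    render(pending, /* inProgress */ true);
  });
  host.addEventListener("compositionupdate", (e) => {
    pending = e.data;
    render(pending, true);
  });
  host.addEventListener("compositionend", (e) => {
    render(e.data, false); // commit the composed text
    pending = null;
  });
  return () => pending;
}
```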


As I said I am unsure that the way in which composition events are 
described in DOM 3 Events is perfect, but that's only because I haven't 
used them in anger and they aren't supported much.



Stepping back, there are distinct concepts that all combine to form the
current editing environment:

 1. Selections: Enable selections, perform cursor movement, scoping the
boundaries on which the selection can operate.
 2. Input: Perform dom modifications, dispatch events, not limited to
keyboard input, also includes IME, paste, drop, etc.
 3. Spell check: Enable spell check, modify the dom (or dispatch an
event) when the user selects a replacement
 4. Formatting magic: bold when the user hits control + b, change
directionality on Ctrl+LeftShift , etc.

It sounds like contentEditable=minimal as proposed would only enable #1
and #2, and perhaps allow for #3? To break editing down into true
building blocks, I think we need to provide developers a way to
explicitly enable each of these systems separably, and not require the
element to be contentEditable.


My understanding (but this is all up for discussion, hence this thread) 
is that cE=minimal would only enable 1 (which is essentially enabling a 
cursor, as selections are already there anyway). All the other parts are 
handled by other pieces of functionality that may work elsewhere too 
(and in fact do, e.g. in input elements).


The way I thought of cE=minimal when I pitched it at the Extensible Web 
Summit was basically regular HTML with a caret affordance. The exact 
behaviour of the caret is platform dependent (and may not be visual), it 
is only the way in which it is reported (in terms of the Selection API) 
to the application that matters.


It makes for a pretty short spec — the previous paragraph is more than 
enough ;)



--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-22 Thread Robin Berjon

On 22/05/2014 10:52 , Jonas Sicking wrote:

This sounds like a super promising approach.


\o/


If I understand the proposal correctly, when the user does something
that causes input, like pressing the enter key, we would fire a key
event, but we wouldn't actually modify the DOM. Modifying the DOM
would be the responsibility of the web page.


That is the point, yes. Using the DOM as both the model and view does not 
make sense for all editing, and this makes it possible to separate the 
two without hacks that override the browser.
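That division of labour can be sketched as follows: the UA reports the user's intent through a cancelable event, the page cancels any default mutation and updates its own model instead. The `beforeinput` name and `inputType` values follow what later shipped as Input Events, and `model` and its methods are invented stand-ins:

```javascript
// Sketch: the UA fires intent events; the page owns all mutation.
function wireEditing(host, model) {
  host.addEventListener("beforeinput", (e) => {
    e.preventDefault(); // the browser performs no DOM mutation
    if (e.inputType === "insertText") {
      model.insert(e.data);
    } else if (e.inputType === "deleteContentBackward") {
      model.deleteBack();
    }
    // the page then re-renders the DOM from its own model
  });
}
```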


Your example of the enter key is, of course, one of the annoying ones. 
Sometimes you want a new line, sometimes you want the next element, 
sometimes you just want to navigate to the cell below.


Enter may be a case in which a higher-level event is required so that 
you can respect the platform's convention for Enter vs Ctrl-Enter for 
instance.



Likewise, if the user pressed whatever key is platform convention for
paste, we would fire an event which contains the clipboard data, but
not mutate the DOM. Copying data from the event (i.e. from the
clipboard) to the page would be the responsibility of the page.

Is that correct? If so I like it a lot!


Entirely correct. Again, \o/.
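A minimal sketch of that paste contract: the event carries the clipboard data, the page cancels the default action and performs the insertion itself. `insertText` stands in for the page's own model/DOM update:

```javascript
// Sketch: intercepting paste so the page, not the browser,
// decides what enters the DOM.
function onPaste(event, insertText) {
  event.preventDefault(); // the UA must not mutate the DOM itself
  insertText(event.clipboardData.getData("text/plain"));
}
```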


I'd like to expand, and clarify, the list of services that you propose
that the UA provides:

* Caret and selection drawing.


Yes. And reporting that information accurately to the application (which 
can be pretty tricky for multi-range selections in tables or at bidi 
boundaries).


cE=minimal enables caret drawing, the rest is done through the Selection 
API.



* Drawing IME UI in response to user typing.


Where applicable, yes. I would expect the IME API to play well here.


* Events for clipboard and drag'n'drop (though the UA would not mutate
the DOM in response to those events).


Yes. ClipOps and DnD APIs.


* Cursor navigation, including reacting to touch events, mouse clicks
and keyboard events. Cursor navigation would likely also fire
cancelable events.


Yes. Cursor navigation can be represented through selections (that may 
be collapsed). In general it is important that selection changes can be 
cancelled so that developers can carry out selection validation before 
accepting it.


Making some things unselectable might also be useful. IE has 
unselectable, there's also -moz-user-select and friends. But this is 
small fries for later I'd reckon.



* Turning keyboard events into events representing text input (but not
mutate the DOM in response to those events).


Yes, possibly in a rather advanced manner.


* The Selection API spec for selection manipulation.


Right.


Can we simply use the same events as we fire in input type=text and
textarea, but don't actually mutate any DOM? Or is it awkward to
fire beforeinput when there is no default action of mutating the DOM
and firing input?


Isn't that just a question of whether to reuse the same event name or 
pick a new one?



And is it too much complexity to ask pages to deal with composition
handling themselves?


I think it's too much to ask for them to deal with composition, but they 
should deal with composition events. See my earlier posts for details.


Dealing with composition events is certainly a bit of effort (not much 
though — the hardest part is knowing they exist) but we want to go 
low-level here. I think it's an acceptable level of complexity for 
library authors.



Another approach would be to allow plain text input events to actually
mutate the DOM as a default action. But allow that action to be
cancelled. Note that we would never do anything more complex than
mutate an existing text node, or insert a text node where the cursor
is located. I.e. no elements would ever get added, removed, split,
have attributes changed or otherwise be mutated.


I don't think you can do that without ending up in weird places. "No 
elements would ever get mutated" - what happens if I have a selection 
that contains an element (or parts of it)? Very quickly you'd end up 
either having to make the browser manipulate the DOM yourself (bad, 
really really bad), or in a situation in which some text events have a 
default action and some don't depending on the current state of the 
selection (possibly even worse).


It's very tempting to try to do more for developers here, but I really 
think we should resist this impulse lest we end up with a new mess 
that just works differently.



But if we can make the code that a page needs to write in order to
handle the text mutation relatively simple, even when handling
composition, then I think we should leave it up to the page.


I think that we can make it reasonably easy to handle composition by 
using composition events.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: contentEditable=minimal

2014-05-12 Thread Robin Berjon

On 12/05/2014 00:46 , Johannes Wilm wrote:

Also this looks good. There seems to be consensus that contenteditable
is just not going to get fixed, and so removing the faulty behavior
entirely and replacing it with this would be almost as good.


It depends on what you mean by fixed. It is conceivable that given 
enough time and energy the current contentEditable behaviour could be 
made interoperable, but the problem is that even if that happened it 
seems clear from developers' feedback that it wouldn't do what they 
want. A lot of the time you would still want to disable a lot of what it 
does by default and handle it yourself. This is therefore just intended 
as a way of providing developers with primitives for editing.



Intercepting key strokes is already now possible and probably the best
one can do. The one thing where this gets complicated is when typing
characters using more than one key stroke. such as ~ + n to make ñ. I am
not sure if you include that under the Some keyboard input handling.


Yes, text input is a hard problem and you can't get away without solving 
it. We are talking about providing primitives here, so things can be 
expected to be a little bit hairy though.


DOM 3 Events has something called composition events for the example you 
bring up (which can get a whole lot more complicated, notably with 
things like Kotoeri and such). On the face of it it would seem to be the 
needed part but I've never used them (or seen them used) in the real 
world. (The quality of browser support is also unclear at this point.) 
Some cases also require the IME API.


Developers relying on the bare bones cE would probably have to handle 
the rendering of ongoing composition themselves (which isn't the end of 
the world, but you need to think about it or you are guaranteed to mess 
things up). This is probably acceptable at this level, libraries can 
make it easier.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [Editing] Splitting Selection API Into a Separate Specification

2014-03-17 Thread Robin Berjon

On 15/03/2014 18:44 , Johannes Wilm wrote:

yes btw -- where should one go to lobby in favor of the editing spec? I
have been communicating with several other browser-based editor
projects, and there seems to be a general interest of more communication
with the browser creators and spec writers. Currently the situation is
that it's so broken in all the browsers, that one needs to use a 100%
javascript approach, painting the caret manually and creating a separate
system for selections, to circumvent the main problems of
contenteditable (for example:
https://bugzilla.mozilla.org/show_bug.cgi?id=873883 ). Codemirror is a
good example of that.


My understanding from talking to various people is that at least part of 
the problem comes from the type of code that is currently deployed in 
the wild. An awful lot of it works around browser inconsistencies not 
through feature testing but through user agent switching. This means 
that when a given browser fixes a bug in order to become more in line 
with others (and presumably the spec), it actually breaks deployed code 
(some of which is deployed an awful lot).


I've been talking with some editor developers and have heard some 
interesting ideas, notably from the Substance.io people.


One suggestion has been to make at least the selection API 
interoperable, which seems achievable. So I'm very glad to see Ryosuke 
propose it here, I was about to suggest the same.


Another that I've been mulling over is to have something like 
contenteditable=minimal (bikeshed syntax at will). This would give you a 
caret with attendant keyboard motion and selection, but no ability to 
actually edit the content. Editing would happen by having a script 
listen to key events and act directly on the content itself. The hope is 
that not only is this a saner architecture for an editor, but it can 
also bypass most (possibly all, if the selection API is improved 
somewhat) browser bugs to do with editing.


I reckon a spec for that could be put together relatively easily. I'm 
still digging through Web editors' code to get a feel for how much it 
would actually help.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Publishing a new WD of Clipboard API and events spec

2014-02-28 Thread Robin Berjon

On 27/02/2014 09:57 , Hallvord R. M. Steen wrote:

I think so, yes - though I may not have Anolis set up on any computer
I'm carrying right now, so only if somebody else can run Anolis for
me.. (I'm back home in one month or so, so I could presumably get the
draft pubready all by myself then. I guess I could also add the
cross-references manually for such a small spec..).


Are you sure it's an Anolis spec? :)


I'll look at this a bit, but I don't think the differences between a
v.1 and v.2 would make much sense from an editing point of view. I'd
be more inclined to call the spec feature complete at some point
even though it may have to wait a few years for implementations to
catch up before being officially blessed.. -Hallvord


I've been wondering if there isn't something we could do here to speed 
things up a bit for the common case.


The general-purpose API definitely remains useful, but by far the 
majority use case is to just copy something, usually just text. There 
are still lots of sites out there that use Flash for the sole purpose of 
putting some plain text in the clipboard.


I was therefore wondering if we couldn't just add a copy() method to the 
Selection object (or maybe Range), define it as doing whatever the 
browser does when the copy operation is invoked with that given 
selection, and ship.
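Entirely speculative, since copy() is only being proposed in this message: page code wanting such a method would presumably feature-test it and fall back to whatever copy path it already has (e.g. the Flash shim mentioned above). A sketch:

```javascript
// Speculative sketch of the proposed Selection.copy(): prefer the
// native method if it exists, otherwise hand the selection's text
// to the page's existing fallback copy mechanism.
function copySelection(sel, fallback) {
  if (typeof sel.copy === "function") {
    sel.copy();
    return "native";
  }
  fallback(String(sel)); // Selection stringifies to its text
  return "fallback";
}
```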


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: File API | lastModified and Date attribute change

2013-12-03 Thread Robin Berjon

On 02/12/2013 23:26 , Arun Ranganathan wrote:

Mozilla is willing to remove lastModifiedDate completely, and migrate
developers to file.lastModified, which is an attribute that returns
an integer (long long) representing milliseconds since the epoch.
The Date API provides syntactic sugar for working with these
integers, so I don't think the developer ergonomics resulting from
the move from a Date object to an integer are too bad.


Well, the developer ergonomics (and many other aspects) of Date in 
general are, well, let's just say pretty bad, but that's not the File 
API's issue to solve so +1.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: LINK only in HEAD?

2013-12-02 Thread Robin Berjon

On 28/11/2013 23:07 , Ian Hickson wrote:

If there are use cases where best practice would involve a link rel in
the body, we can always change the rules here.


I wonder if late loading of secondary style resources (e.g. styles that 
won't get used in the initial rendering of the page) would qualify here. 
I've seen this done in script a while after load to make the initial 
display faster, but I'm not sure how common that is.


I also wonder if it could qualify as a better way of loading styles that 
pull in fonts. I'm not aware that anyone's doing it this way (though 
script loading is relatively common); but it's likely worth checking the 
result.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Making selectors first-class citizens

2013-09-11 Thread Robin Berjon

On 11/09/2013 15:56 , Anne van Kesteren wrote:

On Wed, Sep 11, 2013 at 2:52 PM, Brian Kardell bkard...@gmail.com wrote:

I like the idea, but matches has been in release builds for a long time,
right?  Hitch uses it.


<!DOCTYPE html><script>w("matches" in document.body)</script>
http://software.hixie.ch/utilities/js/live-dom-viewer/

false in both Firefox and Chrome.


See http://caniuse.com/#search=matches. You do get mozMatchesSelector 
(and variants) in there.
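The feature test implied here is just to probe for whichever (possibly prefixed) variant the engine exposes. A sketch, written over a plain prototype object so it can be exercised anywhere:

```javascript
// Find the first available matches() variant on a prototype,
// covering the vendor prefixes caniuse lists.
function getMatchesName(proto) {
  return ["matches", "matchesSelector", "mozMatchesSelector",
          "webkitMatchesSelector", "msMatchesSelector",
          "oMatchesSelector"].find((name) => typeof proto[name] === "function");
}
// In a page this would be called as getMatchesName(Element.prototype).
```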



--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Making selectors first-class citizens

2013-09-11 Thread Robin Berjon

On 11/09/2013 17:22 , Boris Zbarsky wrote:

On 9/11/13 9:52 AM, Rick Waldron wrote:

A prime use case: a cache of selector objects that are useful when
matching event.target for event handler delegation patterns.


Note that UAs already do some internal caching of parsed selector
objects used with querySelector.  Of course an explicit cache in the
script would likely be a tiny bit faster.


On IRC Domenic pointed out that the primary apparent usage for this 
mirrors jQuery's .is(). Barring the caching case, it seems unlikely to 
be appealing to do (new Selectors("div")).matches(el) instead of 
el.matches("div").


One thing that /could perhaps/ be interesting with this though would be 
as an extensibility point in which developers could bind parameters and 
functions extending selectors. A selector object would be a logical 
place to hang this off of. But that's a whole other kettle of fish.
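The delegation/caching use case can be sketched with plain selector strings standing in for first-class selector objects; the class and method names here are invented:

```javascript
// Sketch of the caching use case: keep a fixed set of selectors
// (e.g. event-delegation patterns) and test an event target
// against all of them, jQuery .is()-style.
class SelectorCache {
  constructor(selectors) {
    this.selectors = selectors;
  }
  matching(el) {
    return this.selectors.filter((s) => el.matches(s));
  }
}
```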


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: CfC: LCWD of HTML Templates; deadline June 18

2013-06-24 Thread Robin Berjon

Hi Rafael,

sorry for the delay in responding, I've been interrupted by a baby 
delivery :)


On 14/06/2013 18:45 , Rafael Weinstein wrote:

I know that HTML Templates will still cause similar confusion, but
at least template has an actual English definition which is fitting
for the current feature, whereas templating is more of a common
description for a development pattern.


I think that makes sense; I've made the change.


Feel free to tell me I'm nuts =-).


I reckon you are, but for different reasons altogether ;)

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: CfC: LCWD of HTML Templates; deadline June 18

2013-06-24 Thread Robin Berjon

Hi,

On 19/06/2013 04:05 , Rafael Weinstein wrote:

Note that this doesn't cover monkey-patches other specs:

https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html#node-clone-additions


I believe that's covered. If you look at the last paragraph in:

http://www.w3.org/html/wg/drafts/html/master/templating.html#the-template-element

This plugs into step 5 in:

http://www.w3.org/TR/domcore/#concept-node-clone

which is precisely the extension point that's required. I'm happy for 
suggestions as to how to make this clearer.



https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html#innerhtml-on-templates


Yes, that's why I copied Travis. Travis?

One option is that we could have a similar extensibility point in 
innerHTML, rather than change it directly.



https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html#parsing-xhtml-documents


I've added the relevant text here:

http://www.w3.org/html/wg/drafts/html/master/the-xhtml-syntax.html#parsing-xhtml-documents

(just below the note on document.write()). Is that okay?


https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html#serializing-xhtml-documents


Likewise, I've added the text at the bottom of:

http://www.w3.org/html/wg/drafts/html/master/the-xhtml-syntax.html#serializing-xhtml-fragments


Here are the issues I see:

Section name: Again, I suggest HTML Templates rather than HTML
Templating to minimize confusion.


Yup, done. (Though it's just Templates since pretty much everything in 
there is HTML.)



4.4 Templating

-Typo, 4th paragraph: and its contents be any content = and its
contents CAN be any content


Fixed.


4.4.1 Defs:

-Typo, The template contents are be a DocumentFragment whose = The
template contents must be a DocumentFragment whose


Fixed.


4.4.2 The template element:

-I'm not sure the Contexts defined as metadata and flow content is
sufficient. For example, the children of table are not flow
content, but template is allowed within those contexts.


Indeed, I'm unsure why I changed that. Fixed.


-The NOTE here is trying to prevent DOM hierarchy cycles. The WHATWG
DOM has addressed this here:
http://dom.spec.whatwg.org/#mutation-algorithms by checking the
host-inclusive ancestor. I don't see equivalent language in the W3C
DOM spec. It may still be worth an editorial note, but I think it's
better to point to the pre-insert language which prevents the cycle.


Right, but I've been operating under the assumption that the WHATWG DOM 
and the W3C DOM would be the same, if not now at least soon. That would 
address this concern, right? (In which case we can drop this note.) I'd 
really rather we didn't make our specs defensive against such 
disparities but instead made sure our dependencies are aligned.


Currently the W3C HTML spec refers to the WHATWG DOM anyway, so I think 
we're covered :)


Or am I missing something?


8.2.5.4 Template Parenting

I think parenting suggests that the template will get a new parent
(e.g. with fosterparenting). How about template content kidnapping
(only half-joking -- we do have call the foster agency). Another
idea Template Content Parenting or Template Content Redirection


Template content kidnapping was very tempting, but there may be such a 
thing as enough of a good joke and I reckon the thread of children jokes 
in the parsing algorithm might fall in that category :)


I went with template content parenting.


8.2.5.3 Foster Parenting

I think the foster parenting description is now complex enough that it
should be factored into an algorithm which selects the foster parent.
As it is right now, it's not clear whether the steps apply in order or
not (if they apply in order, I think they might be wrong).


I agree, but I reckon that's a separate issue. Do you mind filing a bug?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Bringing other Web Components specs into HTML

2013-06-17 Thread Robin Berjon

On 14/06/2013 19:26 , Ian Hickson wrote:

On Fri, 14 Jun 2013, Dirk Schulze wrote:

On Jun 14, 2013, at 6:41 AM, Robin Berjon ro...@w3.org wrote:

now that template is in HTML, I was wondering if some of the other
specs needed the same treatment.


Some of the specs can be relevant for other specifications as well.
Unless you don't want to integrate the whole web stack (SVG, MathML,
...) into the HTML spec, some things should be separated from HTML.


I think the main deciding factor should be who is going to maintain the
text once in the future. With template, presumably that's now us (HTML
spec editors). For most Web component stuff, I assume it's still Dimitri
and company. Thus they should probably stay in separate specs.


That certainly works for me, I'll look at which hooks are needed. It's 
certain that the remaining Web Component specs don't have anywhere near the 
level of monkey patching that template has.



If it wasn't for that, I would indeed be arguing for merging the entire
Web stack into a single document (called The Web). That's certainly how
it's implemented, and it would fix a lot of problems with have with things
falling between the cracks. (See, e.g., how much of an improvement we made
to that kind of thing when we merged DOM HTML and HTML.)


Yes, that would be a good idea. In fact I'm convinced the result would 
be more modular than separate documents since it's much easier to 
refactor inside of a given project. I reckon it's doable, too.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: CfC: LCWD of HTML Templates; deadline June 18

2013-06-14 Thread Robin Berjon

On 11/06/2013 17:59 , Anne van Kesteren wrote:

On Tue, Jun 11, 2013 at 8:25 PM, Arthur Barstow art.bars...@nokia.com wrote:

This is a Call for Consensus to publish a Last Call Working Draft of the
HTML Templates spec using the following document as the basis (it does not
yet use the LC template):

https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html


There's an open bug on integrating this into HTML proper which will
clearly happen. Why do we need to continue with this draft?


I agree with Anne. And since this group considered the feature stable 
enough to go to LC I went ahead and imported template into HTML.


You can see the changes here:


https://github.com/w3c/html/commit/2502feb541063a3834f1ef07e2a23d0824d96914

https://github.com/w3c/html/commit/daaf6bc1e76365b6678a14b47954bcf9c5db54c6


The result is live at:

http://www.w3.org/html/wg/drafts/html/master/templating.html

(plus a bunch of other places in the spec, notably the parsing chapter).

I made a number of editorial changes in order to align it with the spec, 
so it could benefit from review.


Anne: Are you going to take care of this bit?


https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html#node-clone-additions

Travis: Are you going to take care of this bit?


https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html#innerhtml-on-templates

Rafael, Dimitri, Tony: I would appreciate if you could compare your 
document with the integration with a fine tooth comb (using the diffs to 
find where various parts ended up in the spec if needed) and check that 
I didn't break anything.


Templates are a wicked cool brick, thanks!

--
Robin Berjon - http://berjon.com/ - @robinberjon




Bringing other Web Components specs into HTML

2013-06-14 Thread Robin Berjon

Hi,

now that template is in HTML, I was wondering if some of the other 
specs needed the same treatment.


Shadow DOM: I reckon definitely not the case, it doesn't really do much 
monkey patching.


Custom Elements: Does some monkey patching. What do you reckon is the 
best option here? Leave the monkey patching alone (a bit painful)? Just 
import the monkey patching into HTML, remove it from that spec, and have 
HTML refer to CE? Bring everything in?


Imports: This one monkey patches a lot. I reckon it's best to import 
(ha!) it.


WDYT?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Bringing other Web Components specs into HTML

2013-06-14 Thread Robin Berjon

On 14/06/2013 16:05 , Dirk Schulze wrote:

Some of the specs can be relevant for other specifications as well.
Unless you don't want to integrate the whole web stack (SVG, MathML,
...) into the HTML spec, some things should be separated from HTML.


Which is why I included details focusing on the level of monkey patching 
rather than reason on first principles.


When there is monkey patching, especially of the kind "in section 
45.2.4.134, right before substep 14 of step 72 of the algorithm, inject 
the following content..." then the spec becomes quite brittle, and it 
can be rather confusing to implement.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] Editing spec is no longer online

2013-06-06 Thread Robin Berjon

On 06/06/2013 15:08 , Johannes Wilm wrote:

This used to work some days ago:

https://dvcs.w3.org/hg/editing/raw-file/tip/editing.htm


You're missing an l at the end of your link...

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: A very preliminary draft of a URL spec

2013-05-13 Thread Robin Berjon

On 13/05/2013 05:34 , Charles McCathie Nevile wrote:

So far I have done nothing at all about an API, and am waiting for some
formal confirmation from people who implement stuff that they would like
to standardise an API for dealing with URLs. It seems to be a common
task, judging from the number of people who seem to have some scrap of
code lying around for it, so I expect to hear people say Yes, great
idea - although I have been surprised before.


An API to manipulate URLs is a common need, and one that's often done in a 
buggy manner by libraries. So it's certainly something that I'd like to 
see happen.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: jar protocol

2013-05-10 Thread Robin Berjon

On 10/05/2013 03:23 , Jonas Sicking wrote:

On Thu, May 9, 2013 at 9:36 AM, Marcos Caceres w...@marcosc.com wrote:

On Wednesday, May 8, 2013 at 2:05 PM, Robin Berjon wrote:

How do you figure out media types? Is it just sniffing, or do you have
some sort of file extensions mapping as well?


Sniffing would probably be sufficient. The types on the web are pretty stable.


I'd probably hard-code at least a default set of extensions as well.
Not sure what gecko does right now.


It's been quite a while since I last hacked on Gecko stuff, so if you 
have a pointer about where to look it's likely to save me some time (I'd 
like to figure out how it works now).


I get a sense that there's interest for this feature, I'll scare up a draft.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: jar protocol

2013-05-10 Thread Robin Berjon

Hi Brian,

On 10/05/2013 15:32 , Brian Kardell wrote:

Would it be possible (not suggesting this would be the  common story) to
reference a zipped asset directly via the full url, sans a link tag?


Can you hash out a little bit more how this would work? I'm assuming you 
mean something like:


  <img src='/bundle.zip/img/dahut.jpg'>

Without any prior set up on the client to indicate that /bundle.zip is a 
bundle. This causes the browser to issue GET /bundle.zip/img/dahut.jpg


At that point, the server can:

  a) return a 404;
  b) extract the image and return that;
  c) return bundle.zip with some header information telling the browser 
that it's not an image but that the /bundle.zip part of the URL 
matched something else and it should look inside it for the rest of the 
path.


Neither (a) nor (b) are very useful to us. (c) could be made to work, 
but it's not exactly elegant. The server would also have to know if the 
UA supports (c), and fall back to (b) if not, which means that some 
signalling needs to be made in the request. That's also not entirely 
nice (and it would have to happen on every request since the browser 
can't guess).


It gets particularly nasty when you have this:

  <img src='/bundle.zip/img/dahut.jpg'>
  <img src='/bundle.zip/img/unicorn.jpg'>
  <img src='/bundle.zip/img/chupacabra.jpg'>
  <img src='/bundle.zip/img/robin-at-the-beach.jpg'>

The chances are good that the browser would issue several of those 
requests before the first one returned with the information telling it 
to look in the bundle. That means it would return the bundle several 
times. Definitely a loss.


Or did I misunderstand what you had in mind?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: jar protocol

2013-05-10 Thread Robin Berjon

On 10/05/2013 17:13 , Brian Kardell wrote:

Still, same kinda idea, could you add an attribute that allowed for it
to specify that it is available in a bundle?  I'm not suggesting that
this is fully thought out, or even necessarily useful, just fleshing out
the original question in a potentially more understandable/acceptable way...

   <img src='/products/images/clock.jpg'
bundle='//products/images/bundle.zip'>


That's not very DRY!


That should be pretty much infinitely back-compatible, and require no
special mitigation at the server (including configuration wise which
many won't have access to) - just that they share the root concept and
don't clash, which I think is implied by the server solution too, right?


Well it does require some server mitigation since you need to have the 
content there twice. It's easy to automate, but no easier than what I 
had in mind.



Psuedo-ish code, bikeshed details, this is just to convey idea:

<link rel=bundle name=products href=//products/images/bundle.zip>
   <img src='/img/dahut.jpg' bundle=link:products>


That just sounds more complicated!


I don't know if this is wise or useful, but one problem that I run into
frequently is that I see pages that mash together content where the
author doesn't get to control the head... This can make integration a
little harder than I think it should be.


Well, if you can't at all control the head, is there any chance that you 
can really control bundling in any useful fashion anyway?



I'm not sure it matters, I  suppose it depends on:

a) where the link tag will be allowed to live


You can use link anywhere. It might not be valid, but who cares about 
validity :) It works.



b) the effects created by including the same link href multiple times in
the same doc


No effect whatsoever beyond wasted resources.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: jar protocol

2013-05-09 Thread Robin Berjon

On 07/05/2013 22:35 , Bjoern Hoehrmann wrote:

There have been many proposals over the years that would allow for some-
thing like this, http://www.w3.org/TR/DataCache/ for instance, allows to
intercept certain requests to aid in supporting offline applications,
and `registerProtocolHandler` combined with `web+`-schemes go into a si-
milar direction. Those seem more worthwhile to explore to me than your
one-trick-strawman.


I am well aware of this, well acquainted with these proposals, and I 
certainly hope that NavigationController (which is pretty much the 
Return of DataCache) will come to fruition.


That said I do believe that there is value in addressing common use 
cases with a common solution, so long as it is cheap enough and well 
layered. We don't know that yet, but I think it's worth investigating.



Well, `rel='bundle'` would have to be supported by everyone, because
past critical mass there would be too many nobody noticed the fallback
is not working until now cases, so that seems rather uninteresting in
the longer term.


It's valuable in order to get to the critical mass.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: jar protocol

2013-05-09 Thread Robin Berjon

On 07/05/2013 20:57 , Jonas Sicking wrote:

Will this let us support reading things from blob: URLs where the Blob
contains a zip file? I.e. what Gecko would support as
jar:blob:abc-123!/img/foo.jpg.


Yeah:

var blob = new Blob([zipContent], { type: "application/bundle" })
,   burl = URL.createObjectURL(blob);
$("<link rel='bundle'>").attr("href", burl).appendTo($("head"));
someImg.src = burl + "/img/foo.jpg";

It might be a little bit more convoluted than desired. If it's a common 
operation we could add a convenience method for it. That could become:


var burl = URL.registerBundle(zipInABlob);
someImg.src = burl + "/img/foo.jpg";

But I'm not sure that's worth it just yet.


Also note that while we're using jar as scheme name, it's simply
just zip support. None of the other pieces of the jar spec is used.


How do you figure out media types? Is it just sniffing, or do you have 
some sort of file extensions mapping as well?


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: jar protocol

2013-05-09 Thread Robin Berjon

On 08/05/2013 01:39 , Glenn Maynard wrote:

On Tue, May 7, 2013 at 9:29 AM, Robin Berjon ro...@w3.org
mailto:ro...@w3.org wrote:
Have you looked at just reusing JAR for this (given that you support
it in some form already)? I wonder how well it works. Off the top of
my head I see at least two issues:

JARs are just ZIPs with Java metadata.  We don't need metadata, so plain
ZIPs are enough.


I'm looking at JARs because Gecko supports them. We certainly don't want 
the Java metadata, but we might need some metadata (e.g. media type 
mappings).



This depends on a document, so it wouldn't work in workers unless we add
a second API to register them in script.


This isn't initially for Workers, but as indicated previously in this 
thread a method might be useful anyway.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: jar protocol

2013-05-09 Thread Robin Berjon

On 07/05/2013 22:31 , David Sheets wrote:

On Tue, May 7, 2013 at 3:29 PM, Robin Berjon ro...@w3.org wrote:

WDYT?


This is really cool!


Glad you like it :)


Most servers already contain support for this in the form of index files.

If you do

 <link rel=bundle href=bundle.wrap/ />

and set your server's file directory resolver to match index.zip, you
don't need any special server-side extraction or handling: just
extract the archive root as sibling to index.zip when you deploy!


Heh, hadn't thought of that — nice hack!


One quirk of this scheme (ha) is its notion of root path. With this
path pattern match, the subresources in the archive exist in the
domain's single top-level path structure. This means that for archives
to be fully self-contained they must only use relative references that
do not escape the archive root. Of course, this is also a feature when
the containment of the archive is not a concern.


Sorry for being thick but I'm having trouble parsing the above. Would 
you mind rephrasing?



How does directory resolution inside a bundle work? i.e. resolve
bundle.wrap/dir/ ? It seems like this (listing) is a key feature of
the API that was being discussed. I support a JSON object without a
well-known name, personally.


I hadn't thought about directory listing. If the use case is just 
bundling it's not needed (we just need well-defined behaviour when 
fetching such paths); but returning something is definitely possible. 
Keeping in mind that this approach does not intend to replace a 
potential Zip API (unless there's overwhelming demand for it), do you 
have use cases for returning a listing of some form?


We don't have to decide this right now, I can keep it as an open 
question for the time being (so no need to rush to UCs).



Can we use

 Link: <bundle.wrap/>; REL=bundle

for generic resources?


I'm not intimately familiar with RFC 5988, but it would seem logical if 
it Just Worked.



Does
 <a href=bundle.wrap/page.html>Go!</a>
make a server request or load from the bundle?


That's an open question, but my first instinct would be no. I'm happy 
to be convinced otherwise though.



Do bundle requests Accept archive media types?


Sorry, I'm not sure what you mean. Are you asking about bundles inside 
bundles?



Do generic requests (e.g. address bar) Accept archive media types?


You mean typing http://example.org/bundle.wrap/kittens.html? I would 
expect the browser not to do anything specific there. Which is to say, 
if the server returns the bundle (ignoring the path info) then the 
browser would likely prompt to download; if the server returns the HTML 
then it's just HTML.


Or do you mean something different with capitalised Accept? The HTTP 
header? I would rather leave it out of this if we can ;)



What if I do
 <link rel=bundle href="" />
?


Presuming that doesn't resolve to a bundle (which it shouldn't) then 
it's a failure and no bundle gets added to the list of bundles.



Could bundles be entirely prefixed based?


Sorry, I'm not sure what you mean here.


What does

 <link rel=bundle href=bundle.wrap# />

with

 <img src=bundle.wrap#images/dahut.png /> <!-- or is it
bundle.wrap#/images/dahut.png ? -->


I would expect the URL to drop fragments, I don't think they make sense 
in this context.



do? Or

 <link rel=bundle href=bundle.wrap? />

with

 <img src=bundle.wrap?images/dahut.png /> <!-- or is it
bundle.wrap?/images/dahut.png ? -->


The ? isn't really special, so with bundle.wrap?/images/dahut.png it 
should just work.


--
Robin Berjon - http://berjon.com/ - @robinberjon



jar protocol (was: ZIP archive API?)

2013-05-07 Thread Robin Berjon

On 06/05/2013 20:42 , Jonas Sicking wrote:

The only things that implementations can do that JS can't is:
* Implement new protocols. I definitely agree that we should specify a
jar: or archive: protocol, but that's orthogonal to whether we need an
API.


Have you looked at just reusing JAR for this (given that you support it 
in some form already)? I wonder how well it works. Off the top of my 
head I see at least two issues:


• Its manifest format has lots of useless stuff, and is missing some 
things we would likely want (like MIME type mapping).


• It requires its own URI scheme, which means that there is essentially 
no transition strategy for content: you can only start using it when 
everyone is (or you have to do UA detection).


I wonder if we couldn't have a mechanism that would not require a 
separate URI scheme. Just throwing this against the wall, might be daft:


We add a new link relationship: bundle (archive is taken, bikeshed 
later). The href points to the archive, and there can be as many as 
needed. The resolved absolute URL for this is added to a list of bundles 
(there is no requirement on when this gets fetched, UAs can do so 
immediately or on first use depending on what they wish to optimise for).


After that, whenever there is a fetch for a resource the URL of which is 
a prefix match for this bundle the content is obtained from the bundle.


This isn't very different from JAR but it does have the property of more 
easily enabling a transition. To give an example, say that the page at 
http://berjon.com/ contains:


<link rel=bundle href=bundle.wrap>

and

<img src=bundle.wrap/img/dahut.png alt="a dahut">

A UA supporting this would grab the bundle, then extract the image from 
it. A UA not supporting this would do nothing with the link, but would 
issue a request for /bundle.wrap/img/dahut.png. It is then fairly easy 
on the server side to be able to detect that it's a wrapped resource and 
serve it from inside the bundle (or whatever local convention it wants 
to adopt that allows it to cater to both — in any case it's trivial).
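
To make the lookup concrete, here is a rough sketch — plain JS, with 
entirely hypothetical names, since none of this is specified — of the 
prefix match a supporting UA would perform against its list of 
registered bundles:

```javascript
// Hypothetical sketch: given the absolute URLs of registered bundles
// (gathered from <link rel="bundle"> elements) and a request URL, decide
// whether the resource should be served from inside a bundle.
function resolveAgainstBundles(requestUrl, bundleUrls) {
  for (const bundleUrl of bundleUrls) {
    if (requestUrl.startsWith(bundleUrl + "/")) {
      // Prefix match: the rest of the URL is the path inside the archive.
      return { bundle: bundleUrl, path: requestUrl.slice(bundleUrl.length + 1) };
    }
  }
  return null; // no match: fall back to an ordinary network fetch
}
```

A UA without bundle support never runs anything like this and simply 
fetches the full URL, which is exactly what makes the server-side 
fallback possible.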


This means no URL scheme to be supported by everyone, no nested URL 
scheme the way JAR does it (which is quite distasteful), no messing with 
escaping ! in paths, etc.


WDYT?

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: ZIP archive API?

2013-05-06 Thread Robin Berjon

On 03/05/2013 21:05 , Florian Bösch wrote:

It can be implemented by a JS library, but the three reasons to let the
browser provide it are Convenience, speed and integration.


Also, one of the reasons we compress things is because they're big.* 
Unpacking in JS is likely to mean unpacking to memory (unless the blobs 
are smarter than that), whereas the browser has access to strategies to 
mitigate this, e.g. using temporary files.


Another question to take into account here is whether this should only 
be about zip. One of the limitations of zip archives is that they aren't 
streamable. Without boiling the ocean, adding support for a streamable 
format (which I don't think needs to be more complex than tar) would be a 
big plus.
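
As an aside (illustrative only, not from the thread): tar streams well 
because each entry is a fixed 512-byte header followed directly by its 
data — the name sits at offset 0 and the octal size at offset 124 — so a 
consumer can process entries as the bytes arrive, whereas zip's central 
directory lives at the end of the file.

```javascript
// Minimal ustar header reader, for illustration: the name is a NUL-padded
// 100-byte field at offset 0, the size a 12-byte octal field at offset 124.
function parseTarHeader(block) {
  const name = block.toString("ascii", 0, 100).replace(/\0.*$/, "");
  const size = parseInt(block.toString("ascii", 124, 136).trim(), 8);
  return { name, size };
}
```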




* Captain Obvious to the rescue!

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: ZIP archive API?

2013-05-03 Thread Robin Berjon

On 03/05/2013 13:40 , Arthur Barstow wrote:

Other than Mozilla, is there any other implementor interest?

I'm wondering if steering this work to a CG would be `best` for now,
especially if no one steps up to be Editor.


I think that this is useful and I would rather it were inside a group 
than pushed off to a CG. It's not experimental in nature.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [admin] Testing and GitHub login names

2013-04-23 Thread Robin Berjon

On 22/04/2013 12:44 , Arthur Barstow wrote:

The only thing that we ask is that pull requests not be merged by
whoever made the request.


Is this to prevent the `fox guarding the chicken coop`, so to speak?


The way you put it ascribes malice, whereas we operate on the assumption 
that people are honest and trustworthy. This is different from the 
previous rules whereby you couldn't review tests from someone working in 
the same company as yourself. I'm pretty sure that people here are 
indeed honest, and at any rate if they aren't the cost in lost 
credibility along with the quasi-certainty of being caught when another 
vendor notices a problem with the tests ought to make them behave as if 
they were :)


What we're trying to prevent is more the fact that everyone terribly 
sucks at noticing their own typos.



If a test facilitator submits tests (i.e. makes a PR) and everyone that
reviews them says they are OK, it seems like the facilitator should be
able to do the merge.


Of course. If you submit a PR with tests and someone who doesn't happen 
to have push powers has okayed them, then you should just merge them 
with a comment to the effect that such and such has found them to be 
good. The point is to get some eyeballs that aren't the author's to look 
at the tests before they go in; whatever process makes that happen is 
good so long as it does not involve bureaucracy.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [admin] Testing and GitHub login names

2013-04-23 Thread Robin Berjon

On 22/04/2013 13:12 , James Graham wrote:

On Mon, 22 Apr 2013, Arthur Barstow wrote:

The only thing that we ask is that pull requests not be merged by
whoever made the request.


Is this to prevent the `fox guarding the chicken coop`, so to speak?

If a test facilitator submits tests (i.e. makes a PR) and everyone
that reviews them says they are OK, it seems like the facilitator
should be able to do the merge.


Yes, my view is that Robin is trying to enforce the wrong condition
here.


No, I'm just operating under different assumptions. As I said before, if 
someone wants to review without having push/merge powers, it's perfectly 
okay. I don't even think we need a convention for it (at this point). I 
do however consider that this is an open project, so that whoever 
reviews tests can be granted push/merge power.


Why? Because the alternative is this: you get an "accepted" comment from 
someone on a PR. Either you trust that person, in which case she could 
have merge powers; or you don't, in which case you have to review the 
review to check that it's okay. Either way, we're better off making that 
decision at the capability assignment level since it only happens once 
per person.



The problem isn't with people merging their own changes; it's with
unreviewed changes being merged.


Yup.


(as an aside, I note that critic does a much better job here. It allows
reviewers to mark when they have completed reviewing each file in each
commit. It also records exactly how each issue raised was resolved,
either by the commit that fixed it or by the person that decided to mark
the issue as resolved)


You may wish to introduce Critic a bit more than that; I'm pretty sure 
that many of the bystanders in this conversation aren't conversant with it.



Indeed, there are currently 41 open pull requests and that number is not
decreasing. Getting more help with the reviewing is essential. But
that's a Hard Problem because reviewing is both difficult and boring.


I would qualify that statement. If you're already pretty good with web 
standards and you wish to improve your understanding to top levels (and 
gain respect from your peers), this is actually a really good thing to 
work on. Or if you're implementing, it's likely a little bit less work 
to review than to write from scratch (and it can make you aware of 
corner cases or problems you hadn't thought of). Put differently, I 
think it can be a lot less boring if you're getting something out of it.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [admin] Testing and GitHub login names

2013-04-22 Thread Robin Berjon

On 19/04/2013 06:15 , Arthur Barstow wrote:

Test Facilitators, Editors, All,

If you intend to continue to participate in WebApps' testing effort or
you intend to begin to participate, please send your GitHub login name
to Robin (ro...@w3.org) so he can make sure you have appropriate access
to WebApps' test directories.


I would like to point out an important detail here: unless you want to 
review tests or to participate in the general shepherding of the test 
suite, you don't need to send me your GitHub login.


More specifically, if you only plan to contribute tests, you don't need 
to send me anything: you already can.


The way things work for contributors (irrespective of whether they have 
push access or not) is this: all contributions are made through pull 
requests. That's how we organise code review. The only thing that we ask 
is that pull requests not be merged by whoever made the request. So 
anyone with a GitHub account is already 100% set up to contribute.


If you *do* wish to help with the reviewing and organisation effort, 
you're more than welcome to and I'll be happy to add you. I just wanted 
to make sure that people realise there's zero overhead for regular 
contributions.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Move to GitHub: step 1 completed

2013-04-04 Thread Robin Berjon

Dear all,

as you know we're moving all of our tests to the common GitHub 
repository for all web platform tests:


https://github.com/w3c/web-platform-tests/

The first step in that move is, well, actually getting the tests there 
(along with full history). That step has now been completed.


This means that, effective immediately, please stop committing tests to 
hg. In practice we can still bring them in, but I would really, really 
rather not have to do that.


A fair amount of cleanup now needs to happen. In the first pass we'll be 
removing depth by moving all the accepted tests to the root of their 
respective directories. After that, we'll need to process the 
submissions backlog. The way that happens is that each submission is 
moved to the root in a branch, which is then turned into a pull request. 
That PR then gets reviewed for integration.


Please note that I've removed the widgets tests. It's a great test 
suite, but widgets don't seem to be part of the web platform today, and 
so they shouldn't be in there.


I'm going to sunset the hg repository by making it read only as soon as 
I can remember how that works.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Fixing appcache: a proposal to get us started

2013-04-04 Thread Robin Berjon

On 04/04/2013 15:41 , Simon Pieters wrote:

On Wed, 03 Apr 2013 14:50:53 +0200, Robin Berjon ro...@w3.org wrote:

On 29/03/2013 21:08 , Jonas Sicking wrote:

* Cache both files (poor bandwidth)
* We could enable some way of flagging which context different URLs
are expected to be used in. That way the UA can send the normal
content negotiation headers for images vs media files. I'm not sure
that this is worth it though given how few websites use content
negotiation headers.
* Use script to detect which formats are supported by the UA and then
use cacheURL to add the appropriate URL to the cache.
* Use the NavigationController feature.
* Use UA-string detection. You can either send different manifests
that point to different URLs for the media, or use a single manifest
but do the UA detection and serve different media files from the same
media URL. This is a pretty crappy solution though.


Another option: in your list of URLs to cache, instead of just strings
you can also have objects of the form { "video/webm": "kittens.webm",
"video/evil": "dead-kittens.mv4" } that would operate in a manner
modelled on source, caching only what's needed.


Is this intended only for video resources, or arbitrary resources?
Non-media elements (and hence, non-media resources) don't have the
source mechanism, so maybe the syntax should make it clear that it's
media-specific.


I thought about that, but I don't think that's needed, really. Assume 
someone did something like:


{
  "shiny/new-type": "whatever.new"
, "crummy/old-type": "whatever.old"
}

Then the UA will pick one to cache. Worst case scenario, this is useless 
but harmless. But it's conceivable that script could then look at what's 
in the cache to know which to use.


Just to be clear: I don't think that using this for non-media cases will 
be incredibly useful. But I don't see any reason to actively prevent it.
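Assuming entries of that shape, the "pick one to cache" behaviour could be sketched as follows. Note that pickCacheURL and canUseType are invented names for illustration; the proposal does not define an API for this.

```javascript
// Sketch: choose which resource to cache from a manifest entry that is
// either a plain URL string or a { "mime/type": "url" } object.
// canUseType is a hypothetical predicate for "the UA supports this type".
function pickCacheURL(entry, canUseType) {
  // Plain string entries are cached as-is.
  if (typeof entry === "string") return entry;
  // For type-keyed objects, take the first type the UA supports.
  for (var type in entry) {
    if (canUseType(type)) return entry[type];
  }
  // Worst case: nothing usable, the entry is skipped (useless but harmless).
  return null;
}

var entry = { "video/webm": "kittens.webm", "video/evil": "dead-kittens.mv4" };
var url = pickCacheURL(entry, function (t) { return t === "video/webm"; });
// url is "kittens.webm"
```

Script could then inspect the cache (or the return value here) to know which variant the UA actually stored.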


--
Robin Berjon - http://berjon.com/ - @robinberjon



Reviewing test submissions

2013-04-04 Thread Robin Berjon

Hi all,

after having moved the test suite to the GitHub repository, we've been 
busy cleaning things up. One item of particular importance is that 
everything that was in submission directories is now in pull requests 
(the very last few of those are right now being finalised).


The changes being in pull requests makes it very easy for people to 
review the tests. You can just poke at the code, and if it looks good 
press a big green button. It's fun, it makes your hair sheen and your 
flesh bright, and you get to show off at cocktail parties about how you 
just made a major contribution to building a better web for all of 
humankind and its descendants to come.


All of that with just the press of a big and friendly green button! (And 
a little bit of code reading.)


So if you're ready for the undying adoration of wild throngs of web 
developers, if you've got the unassumingly humble tone of your "I was 
just doing my job" line, and if you feel a hankering to press glossy 
green buttons with a motion of such lissom yet muscular grace that 
bystanders feel like it's happening in slow motion, then waste no time 
and head straight for:


https://github.com/w3c/web-platform-tests/pulls

Let's party!


A big thanks to Ms2ger and Odin for helping a lot with cleaning up the 
moved repo!


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Reviewing test submissions

2013-04-04 Thread Robin Berjon

Hi Julian,

On 04/04/2013 16:44 , Julian Aubourg wrote:

I suppose that there is a hook in place on the github repo so that
manifests are auto-magically re-generated whenever a new test is added
to a main directory? Or is this still manual?


No there isn't, but there is a vast ongoing project to provide tooling 
and infrastructure around the testing effort and listing tests is part 
of it. I doubt that this will take the form of a manifest though, the 
extracted data can go straight into a DB and be reused from there.



Thanks for the hard work btw, seeing tests in github is very
satisfactory and appreciated,


Glad you like it :)

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Reviewing test submissions

2013-04-04 Thread Robin Berjon

On 04/04/2013 17:59 , Arthur Barstow wrote:

On 4/4/13 10:23 AM, ext Robin Berjon wrote:

after having moved the test suite to the GitHub repository, we've been
busy cleaning things up.


Can the mirroring to http://w3c-test.org/web-platform-tests/master/ be
more frequent than every 10 minutes?


Yes, we're working with the System Team to make that happen as soon as 
something is pushed into the repository. I don't have an ETA, but I 
think it'll happen pretty soon.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: fixing appcache...

2013-04-03 Thread Robin Berjon

Hi Chaals,

On 24/03/2013 01:33 , Charles McCathie Nevile wrote:

2. Bundles.
Sometimes we need to load several resources (js/css/json/...) before we
can actually show something to user. Like a dialog, or another complex
control. Or if it's a single page application before change page.
Again, it's often faster to make one request than several, but it would
be even faster if we could then cache them separately:
HttpCache.store(url1, content1);
HttpCache.store(url2, content2);
...
So that later we can use the files as usual (script, link...).


Most of what you list can be handled by NavCon, but I was wondering 
about this specific case.


Do you believe that this would be helped by having some form of simple 
packaging system that's addressable à la JAR? Basically you'd have one 
Zip archive containing your dependencies, and load them with <script 
src='/wrapped-content.zip!/foo.js'> and friends.


There are a few slightly tricky bits to handle, but nothing 
insurmountable. This sort of stuff has been a small blip on the radar 
for essentially ever but if there's enough implementer interest it could 
be brought alive.
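To hedge heavily: nothing specifies such an addressing scheme for the web platform, but if the "!/" separator were borrowed from JAR URLs, splitting such a reference might look like this (splitArchiveURL is an invented name):

```javascript
// Toy parser for JAR-style archive-internal references like
// "/wrapped-content.zip!/foo.js". The "!/" separator is an assumption
// borrowed from JAR URLs; no web spec defines this.
function splitArchiveURL(url) {
  var i = url.indexOf("!/");
  if (i === -1) return null; // not an archive-internal reference
  return {
    archive: url.slice(0, i),     // the Zip to fetch
    member: url.slice(i + 2)      // the entry inside it
  };
}

var ref = splitArchiveURL("/wrapped-content.zip!/foo.js");
// ref.archive is "/wrapped-content.zip", ref.member is "foo.js"
```

The tricky bits alluded to above (caching the archive itself, relative URL resolution inside it, MIME types of members) live outside this split, of course.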


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Fixing appcache: a proposal to get us started

2013-04-03 Thread Robin Berjon

On 29/03/2013 21:08 , Jonas Sicking wrote:

* Cache both files (poor bandwidth)
* We could enable some way of flagging which context different URLs
are expected to be used in. That way the UA can send the normal
content negotiation headers for images vs media files. I'm not sure
that this is worth it though given how few websites use content
negotiation headers.
* Use script to detect which formats are supported by the UA and then
use cacheURL to add the appropriate URL to the cache.
* Use the NavigationController feature.
* Use UA-string detection. You can either send different manifests
that point to different URLs for the media, or use a single manifest
but do the UA detection and serve different media files from the same
media URL. This is a pretty crappy solution though.


Another option: in your list of URLs to cache, instead of just strings 
you can also have objects of the form { "video/webm": "kittens.webm", 
"video/evil": "dead-kittens.mv4" } that would operate in a manner 
modelled on <source>, caching only what's needed.


It's a bit verbose, but it's a lot less verbose than loading the content 
twice.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: CfC: move WebApps' test suites to Github; deadline March 22

2013-03-18 Thread Robin Berjon

On 18/03/2013 15:54 , Dimitri Glazkov wrote:

I am a big fan.


Yeah, I kinda like the idea as well.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [editing] Is this the right list to discuss editing?

2013-02-19 Thread Robin Berjon

On 19/02/2013 05:56 , Travis Leithead wrote:

Alex, work on Editing APIs was ongoing in the Community Group
(http://www.w3.org/community/editing/) though their draft is just under
a year old.


My recall is a bit rusty on that one, but I think that the situation was 
that:


• WebApps is not chartered to publish this, so a CG was created.

• But having the discussion on the CG list seemed like a bad idea since 
everyone is here, so the mailing list for discussion was decided to be 
public-webapps.


I actually pinged Aryeh about this a week or two ago, but I haven't 
heard back. I'd be happy to take over as editor for this spec, it's a 
feature I've wanted to have work right forever.


In order to make that happen (assuming that Aryeh agrees, or doesn't 
speak up), I propose the following:


• Since I'm financed to work on HTML, transition this to an HTML 
extension spec (this probably only requires a few changes to the header).


• The discussion can stay here (wherever people prefer that I'm already 
subscribed to — I really don't care).


• The spec gets published through the HTML WG, since I believe it's 
actually viably in scope there already.


All of the above assumes you're all happy with it, and the HTML people 
too. I reckon it could work though.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Fwd: [Bug 20945] New: Specs in dvcs have mixed-content stylesheets

2013-02-11 Thread Robin Berjon

Hi WebApps,

this was directed to completely the wrong Bugzilla, but I believe that 
it is nevertheless true of several of your specs that are on dvcs.w3.


I would recommend someone went through them all to figure out which ones 
are broken by this.


 Original Message 
From: bugzi...@jessica.w3.org
To: ro...@w3.org
Subject: [Bug 20945] New: Specs in dvcs have mixed-content stylesheets
Date: Mon, 11 Feb 2013 07:01:07 +

https://www.w3.org/Bugs/Public/show_bug.cgi?id=20945

Bug ID: 20945
   Summary: Specs in dvcs have mixed-content stylesheets
Classification: Unclassified
   Product: HTML WG
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P2
 Component: Editor tools
  Assignee: ro...@w3.org
  Reporter: jruder...@gmail.com
QA Contact: public-html-bugzi...@w3.org
CC: eocon...@apple.com, erika.do...@microsoft.com,
silviapfeiff...@gmail.com, tra...@microsoft.com

e.g. https://dvcs.w3.org/hg/undomanager/raw-file/default/undomanager.html

The document is served over https but uses a stylesheet served
over http.  This does not work in Chrome and will soon stop working in
Firefox (https://bugzilla.mozilla.org/show_bug.cgi?id=834836).

--
You are receiving this mail because:
You are the assignee for the bug.

--
Robin Berjon - http://berjon.com/ - @robinberjon





Re: Allow ... centralized dialog up front

2013-02-06 Thread Robin Berjon

On 06/02/2013 08:36 , Keean Schupke wrote:

I don't think you can say either an up front dialog or popups do not
work. There are clear examples of both working, Android and iPhone
respectively. Each has a different set of trade-offs and is better in
some circumstances, worse in others.


If by "working" you mean that it is technically feasible and will 
provide developers with access to features then sure.


If however you mean that it succeeds in protecting users against 
agreeing to escalate privileges to malicious applications then, no, it 
really, really does not work at all.


Security through user prompting is sweeping the problem under the rug. 
Usually this is the point at which someone will say "but we have to 
*educate* the users!". No. We don't. Users don't want to be educated, 
and they shouldn't have to be. We're producing technology for *user* 
agents. It is *our* responsibility to ensure that users remain safe, 
even in as much as possible against their own mistakes.


And I'm sorry to go all Godwin on you, but the prompting approach is the 
Java applet security model all over again. Let's just not go back there, 
shall we?


It's not as if this debate hasn't been had time and over again. See (old 
and unfinished):



http://darobin.github.com/api-design-privacy/api-design-privacy.html#privacy-enhancing-api-patterns

That includes a short discussion of why the Geolocation model is wrong. 
All of this has been extensively discussed in the DAP WG, as well as 
IIRC around the Web Notifications work. There have been a few attempts 
to work out the details (tl;dr they don't fly):


http://w3c-test.org/dap/proposals/request-feature/
http://dev.w3.org/2009/dap/docs/feat-perms/feat-perms.html

That's one of the reasons we have a SysApps WG today. As it happens, 
they're working on a security model, too.


This is not to say that declaring required privileges cannot be useful. 
There certainly are cases in which it can integrate into a larger 
system. But that larger system isn't upfront prompting.


--
Robin Berjon - http://berjon.com/ - @robinberjon




Re: Allow ... centralized dialog up front

2013-02-05 Thread Robin Berjon

On 04/02/2013 20:06 , Ian Hickson wrote:

Geolocation can use a similar asynchronous UI:

++
| (+) example.org wants to know your location. [ San Jose (IP)|V]  X|
+---| Mountain View  |---+
 | 1600 Plymouth  |
 | Use GPS|
 ++


Except that this is probably the wrong design for Geolocation as it 
encourages requesting the location outside of the user's logical action 
flow — an input-type approach would have made more sense in this case. 
This sort of UI can sometimes make sense, but people shouldn't just copy 
off Geolocation and expect to get it right.


Of course, an upfront dialog would only make that worse.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [admin] If you use `respec`, your ED may be broken

2013-02-04 Thread Robin Berjon

On 31/01/2013 01:27 , Glenn Adams wrote:

btw, it seems that Robin hasn't updated the generated copyright to
include Beihang


Which will happen before the end of the grace period.

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Proposal: moving tests to GitHub

2013-02-04 Thread Robin Berjon

On 31/01/2013 18:13 , Arthur Barstow wrote:

As I said during one of the testing breakouts in Lyon, ultimately I
suspect the saying beggars can't be choosy will trump. However,
AFAIK, currently, only one of WebApps' thirty active specs actually
has an outside contribution. As such, and without any information
about a relatively high probability we will get contributions from
others, this move still seems like a lot of make work.


Aside from the external contributions that others have already pointed 
out, I think it's worth putting this statement in some literary perspective.



"But Mr Dent, the plans have been available in the local planning office 
for the last nine months."


"Oh yes, well as soon as I heard I went straight round to see them, 
yesterday afternoon. You hadn't exactly gone out of your way to call 
attention to them, had you? I mean, like actually telling anybody or 
anything."


"But the plans were on display ..."

"On display? I eventually had to go down to the cellar to find them."

"That's the display department."

"With a flashlight."

"Ah, well the lights had probably gone."

"So had the stairs."

"But look, you found the notice didn't you?"

"Yes," said Arthur, "yes I did. It was on display in the bottom of a 
locked filing cabinet stuck in a disused lavatory with a sign on the 
door saying 'Beware of the Leopard'."


-- Hitchhiker's Guide to the Galaxy, Douglas Adams


Seriously: we've had our test suites locked up on an unknown server, in 
an obsolete version control system that's protected by credentials that 
are hard to get. Shockingly enough, we have seen *some* external 
contribution.


Additionally, we also have to take the productivity of existing 
contributors into account. Even if no external contributor shows up, 
switching to git will already save work from existing contributors fast 
enough that any work involved in transitioning will be made up for 
within weeks.



Before a CfC is started, I would like to hear from Kris and/or PLH re
 how the move went for the HTMLWG. For instance, were there any
 major gotchas, were there any negative side-effects, etc. Kris,
PLH - would you please provide a short summary of the move?


It went like a breeze. The major difficulty was the submissions backlog, 
but then it's better for it to be a problem than to just linger on as 
had been the case to date.


One thing to watch out for is that it seems to have been relatively 
common for tests in the submissions directory to be *copied* to 
approved without being removed from their original directory. I 
detected quite a lot of that going on.


So overall, zero negative effects, a far better workflow, and new 
contributors we'd never heard of before.



Re section numbers - that seems like make work, especially for
short-ish specs (e.g. Progress Events). I think using section numbers
should be optional (and that metadata be included in the tests
themselves). Are you actually proposing to add section numbers for
every test suite that you copy?


Section numbers don't fly, but using section IDs to produce a tree of 
tests works really well (and you can automate the creation of the 
initial tree trivially).


It's far better than metadata because metadata is copied, goes awry, 
etc. whereas a file's location tends to just be correct. It's metadata 
but with usability taken into account.
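As a rough illustration of that automation, one could derive the initial directory tree from the id attributes on a spec's headings. The markup and regex below are simplified assumptions, not a real spec-scraping tool:

```javascript
// Toy sketch: scrape heading ids from a spec's HTML and turn each one
// into a test-suite directory path under the spec's name. A real tool
// would use a proper parser rather than a regex.
function idsToTestDirs(specName, html) {
  var dirs = [];
  var re = /<h[1-6][^>]*\bid="([^"]+)"/g;
  var m;
  while ((m = re.exec(html)) !== null) {
    dirs.push(specName + "/" + m[1]);
  }
  return dirs;
}

var html = '<h2 id="introduction">…</h2><h3 id="the-open-method">…</h3>';
idsToTestDirs("xhr", html);
// → ["xhr/introduction", "xhr/the-open-method"]
```

Because the path comes from the document itself, it stays correct in exactly the way hand-copied metadata tends not to.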



What is the expectation for what I will characterize as legacy
specs like Marcos' widget test suites? Marcos?


I would say: whoever wants to include their stuff can include it, so 
long as it's legit content related to a spec.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Updated idlharness.js

2013-01-24 Thread Robin Berjon

On 23/01/2013 19:11 , Glenn Adams wrote:

were you able to incorporate the improvements I suggested at [1]?

[1] https://github.com/darobin/webidl.js/pull/16


Well, it's an entirely different code base, so certainly not as is. But 
at least some of what you describe in there should be supported. (In 
webidl2.js that is, not idlharness.js).


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Proposal: moving tests to GitHub

2013-01-23 Thread Robin Berjon

On 23/01/2013 00:48 , Julian Aubourg wrote:

The one-repo idea, while much simpler from a maintenance point of view,
could easily be a burden on users that subscribe to it. Even more so for
people who can merge PRs (and thus will receive an email for a PR
initiatedfor any spec).


It *could*. But we don't know that yet. Splitting is easy enough. So I 
reckon we can start with the simple, one-repo approach and if as it 
ramps up we find that produces too much volume in email (or any such 
thing that can be hard to manage) then we can cross the splitting 
bridge. One good thing is that the experiment might give us valuable 
information about what splitting lines make sense to our community. For 
instance, to take a random example, it might be that it makes sense to 
put all APIs together in one repo and all markup in another (I doubt 
that's the case, but it's just an example of a split that doesn't map to 
ours that could possibly emerge).


To put this more shortly: I'd rather only deal with the problems of 
actually getting a community now (for which a single point of rallying 
is helpful). I'll be overjoyed with having to deal with the problems 
that come with having built a successful community later.


And Tobie wrote:

It's also worth thinking about which solution will have more chances of
fostering a community of external contributors and reviewers. Strong but
very specialized contributors might not get noticed. Being the biggest
contributor to the XHR test suite might carry a lot more value than being
the 50th biggest contributor to W3C tests in general.


This cuts both ways. Being the top contributor for a dozen smaller, less 
noticed APIs or features (e.g. Vibration, ruby markup) doesn't rate as 
high as being, say, 8th overall.


I certainly don't disagree that having a way of publicly recognising 
contributors (beyond peer recognition from those who track the PRs) 
would likely prove valuable. But again, I think that's something that we 
can shape as we go along. The requisite data is available through the 
API. You can extract overall contribution and you can filter it by root 
directories that it touched. I reckon we can get the same data 
irrespective of which approach we pick.


But, again, I'd rather we focused on getting it off the ground well and 
proper. When the gates get flooded we can reassess. At this point I 
should probably stop belabouring my point because I'm this close to 
using the word agile.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Proposal: moving tests to GitHub

2013-01-23 Thread Robin Berjon

On 23/01/2013 13:01 , Arthur Barstow wrote:

Before we start a CfC to change WebApps' agreed testing process
[Testing], please make a clear proposal regarding the submission
process, approval process, roles, etc. as is defined in [Testing] and
its references. (My preference is for you to document the new process,
expectations, etc. in WebApps' Public wiki, rooted at
http://www.w3.org/wiki/Webapps/).


I'll leave that to Odin since he's been driving this, but I'll be happy 
to comment and help.



Also, what is the expectation regarding [Framework]? Does your proposal
include still using it? If yes, will there be automagic mirroring of
github changes to w3c-test.org?


We shouldn't mix using w3c-test.org and using the current framework. The 
former we definitely keep on using — we're right now working on the sync 
from our TS to that server.


The framework I think was a proof of concept but has endemic problems 
(one of which being that it is painful and costly to maintain). I know 
we can do much better. I'll bang my head with that of a number of other 
people and I'm sure we'll come up with something. This, however, is 
orthogonal to the GitHub reorg. You can keep using the existing 
framework after the GitHub move, and it would still have the same 
problems if we don't move.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Updated idlharness.js

2013-01-23 Thread Robin Berjon

Hi all,

as you know, one of the tools that we have for testing is idlharness. 
What it does is basically that it processes some WebIDL, is given some 
objects that correspond to it, and it tests them for a bunch of pesky 
aspects that one should not have to test by hand.
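The category of check it automates can be illustrated with a toy: verify that an object actually exposes each operation an IDL fragment declares. The one-line "parser" below is a stand-in for illustration only, not webidl2.js or idlharness itself:

```javascript
// Toy illustration of the kind of "pesky" check idlharness automates:
// does an object expose every operation its IDL fragment declares?
function declaredOperations(idl) {
  // Matches lines like "void frob();" or "DOMString item(unsigned long i);"
  var re = /^\s*\w[\w ]*?\s+(\w+)\s*\(/gm;
  var ops = [], m;
  while ((m = re.exec(idl)) !== null) ops.push(m[1]);
  return ops;
}

function missingOperations(idl, obj) {
  return declaredOperations(idl).filter(function (name) {
    return typeof obj[name] !== "function";
  });
}

var idl = "interface Thing {\n  void frob();\n  void twiddle();\n};";
missingOperations(idl, { frob: function () {} });
// → ["twiddle"]
```

The real harness goes much further (attributes, inheritance, prototype placement, extended attributes), which is exactly the tedium one should not test by hand.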


One of the issues with idlharness is that it has long been based on 
webidl.js which was a quick and dirty WebIDL parser that I'd written 
because I needed it for a project that petered out. This meant that it 
increasingly didn't support newer constructs in WebIDL that are now in 
common use.


In order to remedy this, I have now made an updated version of 
idlharness that uses webidl2.js, a much better parser that is believed 
to be rather complete and correct (at least, it tests well against the 
WebIDL tests that we have). The newer webidl2.js does bring as much 
backwards compatibility with webidl.js as possible, but in a number of 
cases that simply wasn't possible (because WebIDL has changed too much 
to fit well into the previous model, and also because mistakes were made 
with it).


You can find the updated version of idlharness in this branch:

https://github.com/w3c/testharness.js/tree/webidl2

The reason I'm prodding you is that idlharness, ironically enough, does 
not have a test suite. Because of that, I can't be entirely comfortable 
that the updated version works well and doesn't break existing usage. 
I've tested it with some existing content (e.g. 
http://berjon.com/tmp/geotest/) but that's no guarantee.


So if you've been using idlharness, I'd like to hear about it. If you 
could give the new version a ride to see if you get the same results 
it'd be lovely. Once I hear back from enough people that it works (or if 
no one says anything) I'll merge the changes to the master branch.


Thanks!

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Proposal: moving tests to GitHub

2013-01-22 Thread Robin Berjon

On 22/01/2013 13:27 , Odin Hørthe Omdal wrote:

I'm not really sure if that is needed. If we can trust someone in one
repository, why not in all?


I'd add to that: the odds are that if someone is screwing things up, 
it's better to have more eyes on what they're doing.



But what wins me over, is really the overhead question. Do anyone really
want to manage lots of repositories?  And for what reason?  Also, we
want more reviewers.  If I'm already added for CORS, I could help out
for say XMLHttpRequest if there's a submission/pull request languishing
there.


I think Odin makes convincing arguments. For me it's really the outreach 
argument. Just one repo, carrying its one setup and one set of docs, can 
easily be pitched as the One True Place to Save The Web. It's a lot 
easier to explain at a conference or such: just go there, and patch stuff.



Anyway, for my part, the how-to-split repository issue is not that
important compared to having the tests at GitHub in the first place :-)


Agreed. But how about we start with just one repo and then split them 
into several if it's a problem?


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Proposal: moving tests to GitHub

2013-01-22 Thread Robin Berjon

On 22/01/2013 14:48 , Tobie Langel wrote:

Yes, I guess what I want to avoid at all costs is the split per WG which
has boundaries that partially happen at IP level, rather than strictly at
the technology level.


My understanding is that we don't have to care about spec-IP issues in 
test suites because when you contribute to a test suite you're not 
contributing to the spec's essential claims.


You *do* need to make the proper commitments for the test suite, but 
those are much lighter and can easily be extended to all.



Whether we end up as:

 w3c-tests/
 deviceorientation/
 html5/
 pointerevents/
 progressevent/
 xmlhttprequest/

or:

 deviceorientation-tests/
 html5-tests/
 pointerevents-tests/
 progressevent-tests/
 xmlhttprequest-tests/

Doesn't really matter (though I do find the former more readable). What
bothers me however is how had to parse per-WG-organization is for
newcomers.


That's why we're proposing to ditch per-WG anything here. The way 
html-testsuite is set up, we already have subdirectories for html, 
canvas2d, and microdata. Those are all from the HTML WG, but they're 
just listed as the individual specs. We can keep on adding more specs in 
there (the Web Crypto people are looking to do that).


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Proposal: moving tests to GitHub

2013-01-22 Thread Robin Berjon

On 22/01/2013 17:14 , Tobie Langel wrote:

On 1/22/13 4:45 PM, Robin Berjon ro...@w3.org wrote:

You *do* need to make the proper commitments for the test suite, but
those are much lighter and can easily be extended to all.


Moving to GitHub should be an excellent occasion to revisit how the CLA
works and provide better integration, e.g.: by using something like
CLAHub[1].


FYI we're looking at CLAHub as a possible solution for this (either 
directly or with a few modifications to tie it into our systems). No 
promises but it's on the table.



That's why we're proposing to ditch per-WG anything here. The way
html-testsuite is set up, we already have subdirectories for html,
canvas2d, and microdata. Those are all from the HTML WG, but they're
just listed as the individual specs. We can keep on adding more specs in
there (the Web Crypto people are looking to do that).


That sounds good to me. It's the per WG siloing I'm opposed to, not the
one repository to rule them all idea.


Good! Well, it looks like everyone agrees... If we're forging ahead, I 
have admin rights to the repo so you know who to prod.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: Review of the template spec

2012-12-13 Thread Robin Berjon

On 11/12/2012 14:00 , Henri Sivonen wrote:

Interaction with the DOM to XDM mapping isn’t covered per discussion
at TPAC. (Expected template contents not to appear in the XDM when
invoking the XPath DOM API (for consistency with querySelectorAll) but
expected them to appear in the XDM when an XSLT transformation is
being processed (to avoid precluding use cases).)


I don't recall (and can't seem to find) the reasoning behind this 
distinction. It seems rather costly to require two different code paths 
for XPath handling, especially when you consider how much this actually 
gets used.


--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: [XHR] Setting the User-Agent header

2012-09-06 Thread Robin Berjon

On 05/09/2012 06:03 , Mark Nottingham wrote:

That's unfortunate, because part of the intent of the UA header is to identify 
the software making the request, for debugging / tracing purposes.

Given that lots of libraries generate XHR requests, it would be natural for 
them to identify themselves in UA, by appending a token to the browser's UA 
(the header is a list of product tokens).  As it is, they have to use a 
separate header.


Do you have a use case that does not involve the vanity of the library's 
authors? :)


--
Robin Berjon - http://berjon.com/ - @robinberjon




Re: IndexedDB and RegEx search

2012-08-09 Thread Robin Berjon
On Aug 9, 2012, at 01:39 , Jonas Sicking wrote:
 On Wed, Aug 8, 2012 at 1:33 AM, Yuval Sadan sadan.yu...@gmail.com wrote:
 Perhaps it shouldn't be a full-text *index* but simply a search feature.
 Though I'm unfamiliar with specific implementations, I gather that filtering
 records in native code would save (possibly lots of) redundant JS object
 construction (time and memory = money :)), and doing so with a pre-compiled
 regex might improve over certain JS implementation or non-optimizable
 practices, e.g.
 function search(field, s) {
   someCallToIndexedDb(function filter(record) {
     var re = new RegExp(s);
     return !re.test(record[field]);
   });
 }
 
 Plus it saves some code jumbling for a rather common practice.
 
 The main thing you'd save is having to round-trip between threads for
 each record. I think a more general feature that would be more
 interesting would be to be able to iterate an index or objectStore
 using a cursor, but at the time of constructing the cursor be able to
 provide a javascript function which can be used to filter the data.
 Unfortunately javascript doesn't have a good way of executing a
 function in such a way that it doesn't pull in a lot of context, but
 it's possible to hack this, for example by passing a string which
 contains the javascript code.

Actually, PhantomJS does perform some weird function decontextualisation in 
order to execute part of your code in a different context (that of the page you 
just loaded). But it's weird, surprising to developers, and ugly so I agree 
it's not a model to emulate.

We do, however, have Workers. It seems to me that there could be a way to make 
that work.
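The "pass the code as a string" hack Jonas mentions can be approximated with the Function constructor, which builds functions that close over nothing but globals. That roughly matches the "no context" requirement for shipping a filter to another thread. The compileFilter name and the shape of the filter source are invented for illustration:

```javascript
// Sketch of the string-of-code hack: compile a filter expression into a
// function that captures no enclosing scope (only globals), so it could
// in principle be handed off to run away from the main thread.
function compileFilter(source) {
  // Function-constructor functions do not see surrounding local variables.
  return new Function("record", "return (" + source + ");");
}

var filter = compileFilter("record.age >= 18 && /smith/i.test(record.name)");
filter({ name: "Jo Smith", age: 30 }); // → true
filter({ name: "Jo Smith", age: 12 }); // → false
```

It is ugly and surprising for the same reasons as the PhantomJS trick, which is why a Worker-based design would likely be cleaner.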

 This is somewhat similar to [1] and something we decided was
 out-of-scope for v1. But for v2 I definitely think we should look at
 mechanisms for using JS code to filter/sort/index data in such a way
 that the JS code is run on the IO thread.
 
 [1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=1

There's a lot of excellent prior art in CouchDB for what you're describing in 
that bug (or at least parts thereof). I think it's well worth looking at.

-- 
Robin Berjon - http://berjon.com/ - @robinberjon




Re: [IndexedDB] Problems unprefixing IndexedDB

2012-08-09 Thread Robin Berjon
On Aug 9, 2012, at 02:28 , Boris Zbarsky wrote:
 On 8/8/12 8:23 PM, Adam Barth wrote:
 If we're telling people to use that pattern, we might as well just not
 prefix the API in the first place because that pattern just tells the
 web developers to unilaterally unprefix the API themselves.
 
 Yep.  The only benefit of the prefixing at that point is to maybe mark the 
 API as experimental, if any web developers pay attention.  Which I doubt.

Trying to evangelise that something is experimental is unlikely to succeed. But 
when trying out a new API people do look at the console a lot (you tend to have 
to :). It might be useful to emit a warning upon the first usage of an 
experimental interface, of the kind "You are using WormholeTeleportation which 
is an experimental API and may change radically at any time. You have been 
warned."
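A rough sketch of that warn-once behaviour, assuming a hypothetical warnExperimental hook (a real UA would wire this into its bindings layer rather than script):

```javascript
// Minimal sketch: warn on the console the first time an experimental
// interface is used, and stay silent afterwards. The function name and
// return value are invented for illustration.
function warnExperimental(name) {
  warnExperimental.seen = warnExperimental.seen || {};
  if (warnExperimental.seen[name]) return false; // already warned once
  warnExperimental.seen[name] = true;
  console.warn("You are using " + name + " which is an experimental API " +
               "and may change radically at any time. You have been warned.");
  return true;
}

warnExperimental("WormholeTeleportation"); // warns, returns true
warnExperimental("WormholeTeleportation"); // silent, returns false
```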

-- 
Robin Berjon - http://berjon.com/ - @robinberjon




Re: Lazy Blob

2012-08-07 Thread Robin Berjon
On Aug 7, 2012, at 17:06 , Glenn Maynard wrote:
 A different option, equivalent to users, is to make URLObject a base class of 
 Blob.  URLObject would replace Blob in methods like FileReader.readAsDataURL, 
 createObjectURL and all other places where methods can work without knowing 
 the size in advance.  It would have no methods or attributes (at least at the 
 start).  In other words,
 
 - URLObject represents a resource that can be fetched, FileReader'd, 
 createObjectURL'd, and cloned, but without any knowledge of the contents (no 
 size attribute, no type attribute) and no slice() as URLObjects may not be 
 seekable.
 - Blob extends URLObject, adding size, type, slice(), and the notion of 
 representing an immutable piece of data (URLObject might return different 
 data on different reads; Blob can not).

+1 from me on this one.

I get a sense that this could possibly be a consensus position (or at least I'm 
going to claim that it is so as to get disagreement to manifest). Assuming it 
is, the next steps are:

• Having agreed on a solution, do we agree on the problem? (i.e. would this get 
implemented?)
• If so, we can bake this as a standalone delta spec but it would make more 
sense to me to make the changes directly to the relevant specs, namely FileAPI 
and XHR. I've copied Anne, Arun, and Jonas — any thought? In either case, I'm 
happy to provide the content.
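For illustration, the URLObject/Blob split proposed above could be sketched with script-defined stand-ins (the real Blob is a platform object and its constructor differs; the names here come from Glenn's proposal, the shapes are assumptions):

```javascript
// URLObject: a fetchable resource with no knowledge of its contents.
// It has no methods or attributes at the start; it only marks the
// capability of being fetched, FileReader'd, or createObjectURL'd.
class URLObject {}

// Blob extends URLObject, adding everything that requires knowing
// the (immutable) contents: size, type, and slice().
class Blob extends URLObject {
  constructor(parts, type = "") {
    super();
    this._parts = parts;
    this.type = type;
  }
  get size() {
    return this._parts.reduce((n, p) => n + p.length, 0);
  }
  slice(start, end) {
    // Immutable data makes slicing well-defined; a bare URLObject
    // may not be seekable, so it gets no slice().
    const joined = this._parts.join("");
    return new Blob([joined.slice(start, end)], this.type);
  }
}
```

APIs like FileReader.readAsDataURL and URL.createObjectURL would then accept the URLObject base type, while size-dependent consumers keep requiring a full Blob.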

-- 
Robin Berjon - http://berjon.com/ - @robinberjon




Re: Lazy Blob

2012-08-06 Thread Robin Berjon
On Aug 2, 2012, at 14:51 , Tobie Langel wrote:
 On 8/2/12 2:29 PM, Robin Berjon ro...@berjon.com wrote:
 On Aug 2, 2012, at 10:45 , Tobie Langel wrote:
 On 8/1/12 10:04 PM, Glenn Maynard gl...@zewt.org wrote:
 Can we please stop saying "lazy blob"?  It's a confused and confusing
 phrase.  Blobs are lazy by design.
 
 Yes. "Remote blob" is more accurate and should help think about this
 problem in a more meaningful way.
 
 Actually, you need both to be accurate. With the current stack you can
 have lazy blobs, and you can have remote blobs, but you can't have both
 at the same time. If we're going to be strict about naming this, we're
 talking about remote lazy blobs.
 
 What's a remote blob in the current stack?

Setting responseType to "blob" on an XHR request.

-- 
Robin Berjon - http://berjon.com/ - @robinberjon




Re: Lazy Blob

2012-08-06 Thread Robin Berjon
On Aug 2, 2012, at 17:45 , Glenn Adams wrote:
 Are you saying I am objecting for the fun of it? Where did I say I don't like 
 the idea? You'd best reread my messages.

"For the fun of it" is an expression. You don't like the idea that the 
solutions proposed in this thread are restricted to what is supported by XHR, 
but have shown no indication of having encountered problems with this 
restriction (if restriction it is, which I don't believe). Put differently, 
based on your messages, which I have read, you appear to be arguing from 
technical purity rather than from technical need.

 If you want a real world use case it is this: my architectural constraints as 
 an author for some particular usage requires that I use WS rather than XHR. I 
 wish to have support for the construct being discussed with WS. How is that 
 not a real world requirement?

Maybe there's a real world requirement underlying this that you're not stating 
— but it's not stated and so I can't just guess it. If you go back to my 
initial message, you will see that the issue I opened is based on a genuine 
problem that Jungkee bumped into while developing a Web application, for which 
we could find no proper workaround. Shortly thereafter, I also found the same 
problem, and solved it by simply dropping the feature (in this case, no 
pictures in an address book) — which is obviously far from ideal. When 
discussing it with others, several folks mentioned bumping into that limitation 
as well. I don't think we're just a bunch of crazy people and that we've hit 
this issue completely from left-field — indeed with usage of postMessage() 
increasing (and made all the more powerful with Intents) it seems highly likely 
to be a wall that other Web hackers will hit.

In contrast, what you cite above as a use case seems rather abstract and — at 
least to me — contrived. I have some difficulty conceiving a situation, 
certainly not one common enough, in which one may be able to use WS but not 
XHR. Reading the above sounds to me like objecting to XHR not supporting 
telnet/RFC15 because if I stretch my imagination in just the right way I can 
conceive of a situation in which only telnet is available and HTTP isn't.

So if you do have a use case, by all means please share it. If not, I maintain 
that you simply have no grounds for objection.

-- 
Robin Berjon - http://berjon.com/ - @robinberjon




Re: Lazy Blob

2012-08-06 Thread Robin Berjon
Hi Glenn,

On Aug 3, 2012, at 01:23 , Glenn Maynard wrote:
 I'd suggest the following.
 
 - Introduce an interface URLObject (bikeshedding can come later), with no 
 methods.  This object is supported by structured clone.
 - Add XMLHttpRequest.getURLObject(optional data), which returns a new 
 URLObject.  This can only be called while XMLHttpRequest is in the OPENED 
 state.  The returned URLObject represents the resource that would have been 
 fetched if xhr.send() had been called.  No XHR callbacks are performed, and 
 no network activity takes place.  The data argument acts like send(data), 
 to specify the request entity body that will be sent.
 - Adjust URL.createObjectURL to take (Blob or URLObject), which returns a URL 
 that works the same way as Blob URLs.
 
 Example:
 
 var xhr = new XMLHttpRequest();
 xhr.open("GET", "resource.jpg");
 var urlObject = xhr.getURLObject();
 var newURL = URL.createObjectURL(urlObject);
 img.src = newURL;

I like this idea, but I'm not certain if it differs that much from one of the 
options I listed (albeit a fair bit less clearly, and in the middle of a 
shopping list, which could explain why you overlooked it). Quoting:

 === Another XHR Approach
 
 partial interface XMLHttpRequest {
Blob makeLazyBlob ();
 };
 
 Usage:
 var xhr = new XMLHttpRequest();
 xhr.open("GET", "/kitten.png", true);
 xhr.setRequestHeader("Authorization", "Basic DEADBEEF");
 var blob = xhr.makeLazyBlob();


Unless I missed something, the only differences are the name (which we can 
bikeshed on later indeed — I certainly am not married to lazy blobs though it 
would make for a finer insult than URLObject) and that you mint a new object 
while I reuse Blob. Having mulled this over the weekend, the tradeoffs 
(assuming that both work the same, i.e. like send() with no eventing, etc.) are:

Using Blob
• It doesn't introduce a new interface;
• Blobs can already be structured-cloned;
• Blobs already work with createObjectURL, which means that part needn't change 
either.
But
• The Blob's |size| attribute cannot be set (without resorting to HEADs, which 
blows for a bunch of reasons),
• It's a little weird that it seems to duplicate responseType = "blob"; the 
primary difference (that developers are likely to ever care about) is that the 
network request is deferred (or… lazy ;).

I wonder if we could circumvent the |size| issue by allowing Blobs to return 
null for that (making the type unsigned long long?). I understand how Jonas 
sees this as making it closer to Stream, but I think it's the primary way in 
which it is. It seems more logical to me to occasionally have to deal with 
files the size of which you don't know than to, e.g., assign a stream to an 
img element.
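The deferral being discussed can be illustrated with a small stand-in (entirely hypothetical; `fetcher` is a placeholder for the network request the real API would capture from the opened XHR, and a real implementation would be asynchronous):

```javascript
// A stand-in for the proposed lazy blob: it captures *how* to fetch the
// data, but performs no network activity until someone actually reads it.
class LazyBlob {
  constructor(fetcher) {
    this._fetcher = fetcher;   // invoked at most once, on first read
    this._data = null;
    this.size = null;          // unknown until fetched, per the proposal
  }
  read() {
    if (this._data === null) {
      this._data = this._fetcher();  // the deferred "send()"
      this.size = this._data.length;
    }
    return this._data;
  }
}
```

Creating the object costs nothing; only reading it triggers the fetch, which is the difference from responseType = "blob", where send() runs immediately.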

Frankly I can easily live with either and will be more than happy to bow to 
implementer preference between the two; what I mostly need is a way of 
exchanging pointers to remote resources in the way described in the original 
use cases.

For URLObject, you mention the case of passing it to XHR:

var urlO = getURLObject(); // this comes from a service or something
var newURL = URL.createObjectURL(urlO);
var xhr = new XMLHttpRequest();
xhr.open("GET", newURL, false);
xhr.responseType = "blob";
xhr.send();
var blob = xhr.response;

The ability to get a Blob (rather than just a blob URL) is vitally useful if 
you wish to store the information, e.g. in an IndexedDB. So long as I can get 
one, even if it's a bit more convoluted, I'm happy.

 Passing one of these URLs back to XHR would need extra consideration (eg. 
 should you be able to specify more headers?).

I would assume that the request to the blob URL would work just like any 
request to a blob URL (you can only use GET, setting headers does nothing 
useful, etc.). None of this would have any effect whatsoever on the XHR from 
which the URLObject was created (anything else would be a likely attack vector 
and does not seem useful).

 (Note that I've spent some time thinking about this because I think it's 
 technically interesting, but I haven't looked over the use cases closely 
 enough to say whether I think it'd be worthwhile or not.)

Well, it's an issue that a few of us have bumped into — so I think it's useful 
:)

-- 
Robin Berjon - http://berjon.com/ - @robinberjon



