Re: [whatwg] Supporting feature tests of untestable features

2015-04-09 Thread Simon Pieters

On Thu, 09 Apr 2015 09:50:34 +0200, Simon Pieters sim...@opera.com wrote:

I don't disagree here. I just don't come to the conclusion that we  
should have an API to test everything under the sun. I don't mind  
changing or adding things to help feature-test things that are not  
currently testable in compliant implementations.


I also don't mind changing or adding specific things when the current  
implemented landscape is not compliant, if it makes people move away from  
UA sniffing and so makes it easier for the non-conforming browsers to  
switch to the conforming behavior.


e.g. https://lists.w3.org/Archives/Public/www-style/2015Apr/0145.html

--
Simon Pieters
Opera Software


Re: [whatwg] Supporting feature tests of untestable features

2015-04-09 Thread Simon Pieters

On Wed, 08 Apr 2015 14:59:34 +0200, Kyle Simpson get...@gmail.com wrote:

A lot of the untestable bugs have been around for a really, really  
long time, and are probably never going away. In fact, as we all know,  
as soon as a bug is around long enough and in enough browsers and enough  
people are working around that bug, it becomes a permanent feature of  
the web.


Yes. And we both think it's a bad situation. Let's work to avoid new bugs  
when adding new features, by writing tests, and considering feature  
testability when designing new features (this is being done already). Old  
bugs would be good to fix, too, but it is not without risk since Web pages  
might use the differences for browser sniffing or otherwise rely on them.


So to shrug off the concerns driving this thread as "bugs can be fixed"  
is either disingenuous or (at best) ignorant of the way the web really  
works.


We have different perspectives for sure.

Sorry to be so blunt, but it's frustrating that our discussion would be  
derailed by rabbit trail stuff like that. The point is not whether this  
clipboard API has bugs or that canvas API doesn't or whatever.


Just because some examples discussed for illustration purposes are bug  
related doesn't mean that they're all bug related.


I didn't say that. link rel is an example already brought up, and I've  
proposed a fix. I've asked for other examples.


There **are** untestable features, and this is an unhealthy pattern for  
the growth/maturity of the web platform.


For example:

1. font-smoothing
2. canvas anti-aliasing behavior (some of it is FT'able, but not all of  
it)

3. clamping of timers
4. preflight/prefetching/prerendering
5. various behaviors with CSS transforms (like when browsers have to  
optimize a scaling/translating behavior and that causes visual artifacts  
-- not a bug because they refuse to change it for perf reasons)

6. CSS word hyphenation quirks
7. ...


Thanks. Can you elaborate on these, why you think it makes sense to  
feature-test them? Do you know of existing pages that would benefit from  
being able to feature-test these (that currently use UA sniffing or so  
instead)?


The point I'm making is there will always be features the browsers  
implement that won't have a nice clean API namespace or property to  
check for. And many or all of those will be things developers would like  
to determine if the feature is present or not to make different  
decisions about what and how to serve.


Philosophically, you may disagree that devs *should* want to test for  
such things, but that doesn't change the fact that they *do*. And right  
now, they do even worse stuff like parsing UA strings and looking  
features up in huge cached results tables.


Consider just how huge an impact stuff like caniuse data is having  
right now, given that its data is being baked into build-process tools  
like CSS preprocessors, JS transpilers, etc. Tens of millions of sites  
are implicitly relying not on real feature tests but on (imperfect)  
cached test data from manual tests, and then inference matching purely  
through UA parsing voodoo.


I don't disagree here. I just don't come to the conclusion that we should  
have an API to test everything under the sun. I don't mind changing or  
adding things to help feature-test things that are not currently testable  
in compliant implementations.


--
Simon Pieters
Opera Software


Re: [whatwg] Supporting feature tests of untestable features

2015-04-09 Thread Roger Hågensen

On 2015-04-09 11:43, Nils Dagsson Moskopp wrote:

Roger Hågensen rh_wha...@skuldwyrm.no writes:


Myself I have to confess that I tend to use caniuse a lot. I use
it to check how far back a feature goes in regards to browser versions
and try to decide where to cut the line. In other words I'll end up
looking at a feature and thinking "OK! IE10 supports this feature,
IE9 does not, so minimum IE target is then IE10".


Have you tried progressive enhancement instead of graceful degradation?
I usually build a simple HTML version of a page first, check that it
works using curl or elinks, then enhance it via polyfills and other


I have, but found that relying on polyfills is no different from relying 
on a workaround or a 3rd-party framework.
It easily adds to code bloat, and I'm now moving more and more towards 
static pages where the JS and CSS specific to a page are embedded in the 
page, as that supports fast static delivery.


I don't mind waiting a few months or a year for a feature to be added 
and available among all major modern browsers. Once a feature is 
available like that then I make use of it (if I have use for it) to 
support the effort of it being added.
This does mean I end up going back to old code and stripping 
out/replacing my own code that does similar things.

Or to enhance/speed up or simplify the way my old code works.
I don't mind this, it's progress and it gradually improves my code as 
the browsers evolve.
If ten years from now my old pages no longer look/work as intended then 
that is on me or whoever maintains them.
If they are important enough then the pages will migrate to newer 
standards/features over time naturally.



I do miss HTML versioning to some extent though; being able to target a 
5.0 or 5.1 minimum (or lowest common denominator) would be nice.

Though this could possibly be misused.

I have seen another approach to versioning that may be worth 
contemplating...


Using Javascript, one calls a function asking the browser whether a given 
version is supported. Example:

if (version_supported('javascript',1,2))

or
if (version_supported('css',3,0))

or
if (version_supported('html',5,0))


The browser could then simply return true or false.
True would mean that all features in the specced version are supported.
False would mean that they are not.

I'm using something similar for a desktop programming API I'm working 
on: the application simply asks if the library supports this version 
(number hardcoded at the time the program was compiled) and the library 
answers true or false.
This keeps programmers from accidentally parsing a version number 
wrong; the library is coded to do it correctly instead.


There would still be a small problem where a browser may support for 
example all of CSS 2 but only parts of CSS3.

But in that case it would return true for CSS 2.0 but false for CSS 3.0, 
so some feature detection would still be needed to detect that.
But being able to ask the browser a general question if it will be able 
to handle a page that targets CSS 3, HTML5, JS 1.2 would simplify things 
a lot.
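As a rough illustration, the proposed call could be approximated today by a page-side shim; `version_supported` is not an existing API, and the probe table below (mapping each technology/version pair to concrete checks) is purely illustrative:

```javascript
// Hypothetical shim for the proposed version_supported(tech, major, minor).
// A native implementation would answer authoritatively; here each version
// is approximated by a set of concrete probes that must all pass.
// The probe choices are illustrative only, not tied to any real spec level.
var probes = {
  javascript: {
    '1.2': [function () { return typeof Array.prototype.push === 'function'; }],
    '1.5': [function () { return typeof 'x'.replace === 'function'; }]
  }
};

function version_supported(tech, major, minor) {
  var list = (probes[tech] || {})[major + '.' + (minor || 0)];
  if (!list) return false; // unknown technology/version: answer conservatively
  return list.every(function (probe) {
    try { return !!probe(); } catch (e) { return false; }
  });
}
```

With this sketch, `version_supported('javascript', 1, 2)` passes in any modern engine, while `version_supported('css', 3, 0)` returns false simply because no probes are registered for it, which shows why a browser-native answer would be preferable to a hand-maintained table.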


If actual version numbers are an issue, why not use years, like so:

if (version_supported('javascript',2004))

or
if (version_supported('css',2009))

or
if (version_supported('html',2012))

Essentially asking, "do you support the living spec from 2012 or later?"

I think that using years may possibly be better than versions (and 
you'll never run out of year numbers nor need to start from 1.0 again).

And if finer granularity is needed then months could be added like so:
if (version_supported('html',2012,11))


The javascript can then simply inform the user that they need a more up 
to date browser to fully use the page/app.



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/



Re: [whatwg] Supporting feature tests of untestable features

2015-04-09 Thread Nils Dagsson Moskopp
Roger Hågensen rh_wha...@skuldwyrm.no writes:

 Myself I have to confess that I tend to use caniuse a lot. I use 
 it to check how far back a feature goes in regards to browser versions 
 and try to decide where to cut the line. In other words I'll end up 
 looking at a feature and thinking "OK! IE10 supports this feature, 
 IE9 does not, so minimum IE target is then IE10".

 Then I use that feature and test it in latest (general release) version 
 of Firefox, Chrome, IE, Opera, if it relatively looks the same and there 
 are no glitches and the code works and so on then I'm satisfied, if I 
 happen to have a VM available with an older browser then I might try 
 that too just to confirm what CanIUse states in its tables.

 This still means that I either need to provide a fallback (which means I 
 need to test for feature existence) or I need to fail gracefully (which 
 might require some user feedback/information so they'll know why 
 something does not work).
 I do not use browser-specific code as I always try to go for feature 
 parity.

Have you tried progressive enhancement instead of graceful degradation?
I usually build a simple HTML version of a page first, check that it
works using curl or elinks, then enhance it via polyfills and other
scripts that do their own user-interface enhancements. Of course I
consult caniuse in the process to see if features are supported.

For me, following a progressive enhancement workflow reduces the need
for testing, as the simple (“fallback”) version does usually work if a
script does not execute, and I can always test it by turning off scripts.

Greetings,
-- 
Nils Dagsson Moskopp // erlehmann
http://dieweltistgarnichtso.net


Re: [whatwg] Supporting feature tests of untestable features

2015-04-08 Thread Ashley Gullen
CSS.supports() should in theory cover all CSS features. Javascript as ever
can test which features are available. Neither guarantees correct
functioning, only presence - besides, all software has bugs, so at what
point do you draw the line between broken and working?

Things like canvas anti-aliasing are AFAIK implementation details, and
optional in the spec. I don't think it makes sense to make it easy to
detect things which are not mandated by the spec, or are implementation
details, because then you could end up with a lot of web content depending
on some particular implementation detail (which happens anyway, but this
would make it worse). It also seems it would be very difficult to spec the
feature-detection if the feature isn't specified or is an implementation
detail, because in order to specify the feature detection, the feature
would have to be specified! Especially with something like antialiasing
which has a long list of different types of algorithms that can be used,
are you going to spec all of them? Which count and which don't? What if
there is variation between implementations of the same algorithm, e.g. to
use a different rounding, so that in practice it looks identical but the
pixel data is different? This even applies to bugs/quirks: if you want to
add a feature to indicate the presence of a bug or quirk, that would have
to be comprehensively specified... and what if the quirk varies depending
on the environment, e.g. across OS versions?

Bugs and quirks tend to correlate with version ranges of particular
browsers, e.g. one issue may affect Firefox versions 24-28. The user agent
string is a huge mess, but it is possible to sensibly and
forwards-compatibly parse it for this information. It's hard to see any
better way to work around these types of issues.
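The narrow, forwards-compatible kind of UA check described here might look like the sketch below; the function name is made up for illustration, and the 24-28 range is just the example given above:

```javascript
// Target one known-buggy version range of one browser, rather than trying
// to infer feature support from the UA string in general. Firefox 24-28 is
// the example range from the text; the helper name is hypothetical.
function affectedByExampleBug(ua) {
  var m = /\bFirefox\/(\d+)/.exec(ua); // Gecko advertises "Firefox/NN"
  if (!m) return false;                // not Firefox: assume unaffected
  var major = parseInt(m[1], 10);
  return major >= 24 && major <= 28;   // inside the known-buggy range
}
```

Note that any unrecognized or future UA string falls through to `false`, which is what makes this style forwards-compatible: the workaround switches itself off once the affected versions age out.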

Ashley





On 8 April 2015 at 13:59, Kyle Simpson get...@gmail.com wrote:

 A lot of the untestable bugs have been around for a really, really long
 time, and are probably never going away. In fact, as we all know, as soon
 as a bug is around long enough and in enough browsers and enough people are
 working around that bug, it becomes a permanent feature of the web.

 So to shrug off the concerns driving this thread as "bugs can be fixed" is
 either disingenuous or (at best) ignorant of the way the web really works.
 Sorry to be so blunt, but it's frustrating that our discussion would be
 derailed by rabbit trail stuff like that. The point is not whether this
 clipboard API has bugs or that canvas API doesn't or whatever.

 Just because some examples discussed for illustration purposes are bug
 related doesn't mean that they're all bug related. There **are** untestable
 features, and this is an unhealthy pattern for the growth/maturity of the
 web platform.

 For example:

 1. font-smoothing
 2. canvas anti-aliasing behavior (some of it is FT'able, but not all of it)
 3. clamping of timers
 4. preflight/prefetching/prerendering
 5. various behaviors with CSS transforms (like when browsers have to
 optimize a scaling/translating behavior and that causes visual artifacts --
 not a bug because they refuse to change it for perf reasons)
 6. CSS word hyphenation quirks
 7. ...

 The point I'm making is there will always be features the browsers
 implement that won't have a nice clean API namespace or property to check
 for. And many or all of those will be things developers would like to
 determine if the feature is present or not to make different decisions
 about what and how to serve.

 Philosophically, you may disagree that devs *should* want to test for such
 things, but that doesn't change the fact that they *do*. And right now,
 they do even worse stuff like parsing UA strings and looking features up in
 huge cached results tables.

 Consider just how huge an impact stuff like caniuse data is having right
 now, given that its data is being baked into build-process tools like CSS
 preprocessors, JS transpilers, etc. Tens of millions of sites are
 implicitly relying not on real feature tests but on (imperfect) cached test
 data from manual tests, and then inference matching purely through UA
 parsing voodoo.


Re: [whatwg] Supporting feature tests of untestable features

2015-04-08 Thread Kyle Simpson
A lot of the untestable bugs have been around for a really, really long time, 
and are probably never going away. In fact, as we all know, as soon as a bug is 
around long enough and in enough browsers and enough people are working around 
that bug, it becomes a permanent feature of the web.

So to shrug off the concerns driving this thread as "bugs can be fixed" is 
either disingenuous or (at best) ignorant of the way the web really works. 
Sorry to be so blunt, but it's frustrating that our discussion would be 
derailed by rabbit trail stuff like that. The point is not whether this 
clipboard API has bugs or that canvas API doesn't or whatever.

Just because some examples discussed for illustration purposes are bug related 
doesn't mean that they're all bug related. There **are** untestable features, 
and this is an unhealthy pattern for the growth/maturity of the web platform.

For example:

1. font-smoothing
2. canvas anti-aliasing behavior (some of it is FT'able, but not all of it)
3. clamping of timers
4. preflight/prefetching/prerendering
5. various behaviors with CSS transforms (like when browsers have to optimize a 
scaling/translating behavior and that causes visual artifacts -- not a bug 
because they refuse to change it for perf reasons)
6. CSS word hyphenation quirks
7. ...

The point I'm making is there will always be features the browsers implement 
that won't have a nice clean API namespace or property to check for. And many 
or all of those will be things developers would like to determine if the 
feature is present or not to make different decisions about what and how to 
serve.

Philosophically, you may disagree that devs *should* want to test for such 
things, but that doesn't change the fact that they *do*. And right now, they do 
even worse stuff like parsing UA strings and looking features up in huge cached 
results tables.

Consider just how huge an impact stuff like caniuse data is having right now, 
given that its data is being baked into build-process tools like CSS 
preprocessors, JS transpilers, etc. Tens of millions of sites are implicitly 
relying not on real feature tests but on (imperfect) cached test data from 
manual tests, and then inference matching purely through UA parsing voodoo.

Re: [whatwg] Supporting feature tests of untestable features

2015-04-08 Thread Roger Hågensen

On 2015-04-08 14:59, Kyle Simpson wrote:

Consider just how huge an impact stuff like caniuse data is having right now, 
given that its data is being baked into build-process tools like CSS preprocessors, JS 
transpilers, etc. Tens of millions of sites are implicitly relying not on real feature 
tests but on (imperfect) cached test data from manual tests, and then inference matching 
purely through UA parsing voodoo.



Myself I have to confess that I tend to use caniuse a lot. I use 
it to check how far back a feature goes in regards to browser versions 
and try to decide where to cut the line. In other words I'll end up 
looking at a feature and thinking "OK! IE10 supports this feature, 
IE9 does not, so minimum IE target is then IE10".


Then I use that feature and test it in latest (general release) version 
of Firefox, Chrome, IE, Opera, if it relatively looks the same and there 
are no glitches and the code works and so on then I'm satisfied, if I 
happen to have a VM available with an older browser then I might try 
that too just to confirm what CanIUse states in its tables.


This still means that I either need to provide a fallback (which means I 
need to test for feature existence) or I need to fail gracefully (which 
might require some user feedback/information so they'll know why 
something does not work).
I do not use browser-specific code as I always try to go for feature 
parity.


Now being able to poke the browser in a standard/official way to ask if 
certain features exists/are available would make this much easier.


As to the issue of certain versions of a browser having bugs related to 
a feature: that has absolutely nothing to do with whether that feature 
is supported or not.
Tying feature tests to only bug-free features is silly; no idea who here 
first suggested that (certainly hope it wasn't me), but it's stupid.


Is the feature implemented/available? Yes or no. Whether there are bugs 
or not is irrelevant. A programmer should assume that APIs are bug-free 
regardless.


Just ask Raymond Chen or people on the Windows compatibility team what 
happens when programmers try to detect bugs or rely on bugs: fixing said 
bugs in the OS then suddenly breaks those programs, and extra code is 
needed in the OS to handle those buggy programs.


Relying on user agent strings or similar is just a nest of snakes you do 
not want to rummage around in. HTML5 pages/apps should be browser neutral.



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/




Re: [whatwg] Supporting feature tests of untestable features

2015-04-08 Thread Karl Dubost
Kyle,

Let's see.

Le 8 avr. 2015 à 21:59, Kyle Simpson get...@gmail.com a écrit :
 as soon as a bug is around long enough and in enough browsers and enough 
 people are working around that bug, it becomes a permanent feature of the 
 web.

Yes this is a feature. It's part of the platform, even if not perfect, but you 
already know that.

 The point is not whether this clipboard API has bugs or that canvas API 
 doesn't or whatever.

The initial point you made was "please add an API to say it's buggy", or 
so I understood. Let's find your own words:

Can we add something like a feature test API 
(whatever it's called) where certain hard 
cases can be exposed as tests in some way?

I still think it's a mistake, because of the Web Compat horrors I see 
caused by UA sniffing and other things. But maybe I entirely misunderstood 
what you were saying, because the point you seem to be making is slightly 
different:

 The point I'm making is there will always be features the browsers implement 
 that won't have a nice clean API namespace or property to check for.


If an implemented feature is not __currently__ testable, maybe we should just 
make it testable to a level of granularity which is useful for Web devs, based 
on what we see in the wild.

Example testing if a range of unicode characters is displayable:

https://github.com/webcompat/webcompat.com/issues/604#issuecomment-90284059
Basically right now the current technique is to use canvas for rendering the 
character. 
see https://github.com/Modernizr/Modernizr/blob/master/feature-detects/emoji.js
and http://stimulus.hk/demos/testFont.html
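The canvas technique those links rely on boils down to drawing the character and inspecting the pixels. A simplified sketch follows; the helper name is hypothetical, the context is passed in as a parameter, and real detectors (such as the Modernizr emoji check) compare the result against the missing-glyph box rather than just checking for any ink:

```javascript
// Simplified sketch of canvas-based glyph detection: draw the character
// into a small region and report whether any pixel got inked. Real-world
// detectors compare against the rendering of a known-missing glyph, since
// unsupported characters often render as a visible "tofu" box.
function charLeavesInk(ctx, ch) {
  ctx.clearRect(0, 0, 20, 20);
  ctx.fillText(ch, 0, 15);
  var data = ctx.getImageData(0, 0, 20, 20).data; // RGBA byte stream
  for (var i = 3; i < data.length; i += 4) {      // walk the alpha channel
    if (data[i] !== 0) return true;               // something was drawn
  }
  return false;
}
```

In a browser this would be called as `charLeavesInk(canvas.getContext('2d'), '\uD83D\uDCA9')`; taking the context as an argument keeps the helper free of hidden document dependencies.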

Here I would love to have a:

String.fromCharCode(0x1F4A9).rendered()
→ true or false

(or whatever makes sense)

The risk I see with the initial proposal is that we add another level of 
complexity instead of just making it testable. 

I may have missed what you wanted. 


-- 
Karl Dubost 
http://www.la-grange.net/karl/



Re: [whatwg] Supporting feature tests of untestable features

2015-04-02 Thread Simon Pieters

On Wed, 01 Apr 2015 06:57:43 +0200, Kyle Simpson get...@gmail.com wrote:

There are features being added to the DOM/web platform, or at least  
under consideration, that do not have reasonable feature tests  
obvious/practical in their design. I consider this a problem, because  
all features which authors (especially those of libraries, like me) rely  
on should be able to be tested if present, and fallback if not present.


Paul Irish did a round-up awhile back of so called undetectables here:  
https://github.com/Modernizr/Modernizr/wiki/Undetectables


I don't want to get off topic in the weeds and/or invite bikeshedding  
about individual hard to test features.


I think we should not design a new API to test for features that should  
already be testable but aren't because of browser bugs. Many in that list  
are due to browser bugs. All points under HTML5 are browser bugs AFAICT.  
Audio/video lists some inconsistencies (bugs) where it makes more sense to  
fix the inconsistency than to spend the time implementing an API that  
allows you to test for the inconsistency.



So I just want to keep this discussion to a narrow request:

Can we add something like a feature test API (whatever it's called)  
where certain hard cases can be exposed as tests in some way?


Apart from link rel=preload, which features in particular?

The main motivation for starting this thread is the new `link  
rel=preload` feature as described here: https://github.com/w3c/preload


Specifically, in this issue thread:  
https://github.com/w3c/preload/issues/7 I bring up the need for that  
feature to be testable, and observe that as currently designed, no such  
test is feasible. I believe that must be addressed, and it was suggested  
that perhaps a more general solution could be devised if we bring this  
to a wider discussion audience.


I'm not convinced that a general API solves more problems than it causes.  
The feature test API is bound to have bugs, too.


A good way to avoid bugs is with test suites. We have web-platform-tests  
for cross-browser tests.


For link rel, we could solve the feature-testing problem by normalizing  
the case for supported keywords but not unsupported keywords, so you can  
check with .rel or .relList:


function preloadSupported() {
  var link = document.createElement('link');
  link.rel = 'PRELOAD';
  return link.rel == 'preload';
}
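A related probe being discussed around the same time is `supports()` on `link.relList` (DOMTokenList); how widely it was implemented varied, so a combined check might look like the sketch below. The `doc` parameter is only there to make the helper easy to exercise outside a browser, and the fallback branch relies on the case-normalization behavior proposed above being adopted:

```javascript
// Combined sketch: prefer relList.supports('preload') where the method
// exists, otherwise fall back to the proposed case-normalization trick
// (supported rel keywords get lowercased on reflection, unsupported ones
// keep their case). Availability of relList.supports is an assumption.
function preloadSupported(doc) {
  var link = doc.createElement('link');
  if (link.relList && typeof link.relList.supports === 'function') {
    return link.relList.supports('preload');
  }
  link.rel = 'PRELOAD';            // unsupported keyword: case preserved
  return link.rel === 'preload';   // supported keyword: case normalized
}
```

Called as `preloadSupported(document)` in a page; either branch yields a plain boolean the author can branch on.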

--
Simon Pieters
Opera Software


Re: [whatwg] Supporting feature tests of untestable features

2015-04-02 Thread Karl Dubost

Le 2 avr. 2015 à 17:36, Simon Pieters sim...@opera.com a écrit :
 On Wed, 01 Apr 2015 06:57:43 +0200, Kyle Simpson get...@gmail.com wrote:
 There are features being added to the DOM/web platform, or at least under 
 consideration, that do not have reasonable feature tests obvious/practical 
 in their design.
 
 I think we should not design a new API to test for features that should 
 already be testable but aren't because of browser bugs.

Agreed with what Simon is saying.

All these are really bugs in detectability and implementations of the features. 
https://github.com/Modernizr/Modernizr/wiki/Undetectables

What we could do is try to increase the granularity of the API/tests if the API 
is too shallow for detecting what is useful.

Adding another meta layer of testing makes the Web Compatibility story even 
harder. As usual, we look at the good side of the story as in it will help us 
make better site, but what I'm seeing often is more on the side of 

"What is the hook that will help us to say 'Not Supported',
because our project manager told us to support no browser below
version blah."

(Based on a sad true story. Your perpetual blockbuster.)

-- 
Karl Dubost 
http://www.la-grange.net/karl/



Re: [whatwg] Supporting feature tests of untestable features

2015-04-02 Thread James Graham
On 02/04/15 09:36, Simon Pieters wrote:

 I think we should not design a new API to test for features that should
 already be testable but aren't because of browser bugs. Many in that
 list are due to browser bugs. All points under HTML5 are browser bugs
 AFAICT. Audio/video lists some inconsistencies (bugs) where it makes
 more sense to fix the inconsistency than to spend the time implementing
 an API that allows you to test for the inconsistency.

[...]

 A good way to avoid bugs is with test suites. We have web-platform-tests
 for cross-browser tests.

Yes, this.

The right way to avoid having to detect bugs is for those bugs to not
exist. Reducing implementation differences is a critical part of making
the web platform an attractive target to develop for, just like adding
features and improving performance.

Web developers who care about the future of the platform can make a huge
difference by engaging with this process. In particular libraries like
Modernizr should be encouraged to adopt a process in which they submit
web-platform tests for each interoperability issue they find, and report
the bugs to browser vendors with a link to the test. This means both
that existing buggy implementations are likely to be fixed — because a
bug report with a test and an impact on a real product are the best,
highest priority, kind — and are likely to be avoided in future
implementations of the feature.

I really think it's important to change the culture here so that people
understand that they have the ability to directly effect change on the
web-platform, and not just through standards bodies, rather than
regarding it as something out of their control that must be endured.


Re: [whatwg] Supporting feature tests of untestable features

2015-04-01 Thread Roger Hågensen

On 2015-04-01 06:57, Kyle Simpson wrote:

There are features being added to the DOM/web platform, or at least under 
consideration, that do not have reasonable feature tests obvious/practical in 
their design. I consider this a problem, because all features which authors 
(especially those of libraries, like me) rely on should be able to be tested if 
present, and fallback if not present.

Paul Irish did a round-up awhile back of so called undetectables here: 
https://github.com/Modernizr/Modernizr/wiki/Undetectables

I don't want to get off topic in the weeds and/or invite bikeshedding about individual 
hard to test features. So I just want to keep this discussion to a narrow 
request:

Can we add something like a feature test API (whatever it's called) where certain 
hard cases can be exposed as tests in some way?

The main motivation for starting this thread is the new `link rel=preload` 
feature as described here: https://github.com/w3c/preload

Specifically, in this issue thread: https://github.com/w3c/preload/issues/7 I 
bring up the need for that feature to be testable, and observe that as 
currently designed, no such test is feasible. I believe that must be addressed, 
and it was suggested that perhaps a more general solution could be devised if 
we bring this to a wider discussion audience.



A featurecheck API? That sort of makes sense.

I see two ways to do this.

One would be to call a function like (the fictional) featureversion() 
and get back a version indicating that the browser supports the 
ECMA-something standard as a bare minimum. But version checking is 
something I try to avoid even when doing programming on Windows (and 
Microsoft advises against doing it).


So a better way might be:
featexist('function','eval')
featexist('document','link','rel','preload')
featexist('api','websocket')

Yeah the preload example does not look that pretty but hopefully you 
know what I'm getting at here. Maybe featexist('html','link','preload') 
instead?


On Windows programs I try to always dynamically load a library and then 
I get a function pointer to a named function, if it fails then I know 
the function does not exist in that dll and I can either fail gracefully 
or provide alternative code to emulate the missing function.
It's thanks to this that a streaming audio player I made actually 
works on anything from Windows 2000 up to Windows 8.1 and dynamically 
makes use of new features in more recent Windows versions, thanks to 
being able to check whether functions actually exist.


I use the same philosophy when doing Javascript and HTML5 coding.

With the featexist() above true is returned if present and false if not 
present.
Now what to do if eval() does not exist as a pre-defined function but a 
user-defined eval() function exists instead, for some reason?
My suggestion is that featexist() should return false in that case as it 
is not a function provided by the browser/client.


Now obviously an `if (typeof featexist == 'function')` check would have 
to be done before calling featexist(), and there is no way to get around that.


Another suggestion is that if a feature is disabled (by the user, the 
admin or the browser/client for some reason) then featexist() should 
behave as if that feature does not exist/is not supported.
In other words featexist() could be a simple way to ask the browser 
"is this available? can I use this right now?"
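A page-side sketch of what featexist() might do, adapting the string-category form from the examples above into a property-path walk from a root object; the name and intent follow the proposal, while everything else here is illustrative:

```javascript
// Hypothetical featexist(): walk a path of property names from a root
// object and report whether the leaf exists. A native version could also
// return false for features the user or admin has disabled, as suggested
// above; this sketch only reports presence.
function featexist(root /*, ...path */) {
  var obj = root;
  for (var i = 1; i < arguments.length; i++) {
    if (obj == null || !(arguments[i] in Object(obj))) return false;
    obj = obj[arguments[i]];
  }
  return true;
}
```

In a browser this would be called as `featexist(window, 'WebSocket')` or `featexist(document, 'createElement')`; like the proposal, it answers "is this available?" with a plain true or false rather than a version number.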



--
Roger Hågensen, Freelancer, http://skuldwyrm.no/



Re: [whatwg] Supporting feature tests of untestable features

2015-04-01 Thread James M. Greene
We had it, but browser vendors abandoned its proper behavior [for some
historical reason unbeknownst to me]:

DOMImplementation.hasFeature (document.hasFeature):
http://www.w3.org/TR/DOM-Level-3-Core/core.html#ID-5CED94D7

and

Node.isSupported:
http://www.w3.org/TR/DOM-Level-3-Core/core.html#Level-2-Core-Node-supports

We are running into the exact same issues with the HTML Clipboard API being
unreliably detectable. Even more troubling, this is especially true because
it is already partially supported (paste events) in some browsers (e.g.
Chrome), not at all supported in others, and fully supported in none.

Sincerely,
   James M. Greene
On Apr 1, 2015 3:36 AM, Roger Hågensen rh_wha...@skuldwyrm.no wrote:

 On 2015-04-01 06:57, Kyle Simpson wrote:

 There are features being added to the DOM/web platform, or at least under
 consideration, that do not have reasonable feature tests obvious/practical
 in their design. I consider this a problem, because all features which
 authors (especially those of libraries, like me) rely on should be able to
 be tested if present, and fallback if not present.

 Paul Irish did a round-up awhile back of so called undetectables here:
 https://github.com/Modernizr/Modernizr/wiki/Undetectables

 I don't want to get off topic in the weeds and/or invite bikeshedding
 about individual hard to test features. So I just want to keep this
 discussion to a narrow request:

 Can we add something like a feature test API (whatever it's called)
 where certain hard cases can be exposed as tests in some way?

 The main motivation for starting this thread is the new `link
 rel=preload` feature as described here: https://github.com/w3c/preload

 Specifically, in this issue thread: https://github.com/w3c/
 preload/issues/7 I bring up the need for that feature to be testable,
 and observe that as currently designed, no such test is feasible. I believe
 that must be addressed, and it was suggested that perhaps a more general
 solution could be devised if we bring this to a wider discussion audience.


 A feature-check API? That sort of makes sense.

 I see two ways to do this.

 One would be to call a function like (the fictional) featureversion()
 and get back a version indicating that the browser supports some ECMA
 standard as a bare minimum. But version checking is something I
 try to avoid even when doing programming on Windows (and Microsoft advises
 against doing it).

 So a better way might be:
 featexist('function','eval')
 featexist('document','link','rel','preload')
 featexist('api','websocket')

 Yeah the preload example does not look that pretty but hopefully you know
 what I'm getting at here. Maybe featexist('html','link','preload')
 instead?

 In Windows programs I always try to dynamically load a library and then
 get a function pointer to a named function; if that fails, I know the
 function does not exist in that DLL and I can either fail gracefully or
 provide alternative code to emulate the missing function.
 It's thanks to this that a streaming audio player I made actually works
 on anything from Windows 2000 up to Windows 8.1 and dynamically makes use
 of new features in more recent Windows versions, thanks to being able to
 check whether functions actually exist.

 I use the same philosophy when doing Javascript and HTML5 coding.

 With the featexist() above, true is returned if the feature is present and
 false if not.
 Now what to do if eval() does not exist as a pre-defined function but a
 user-defined eval() function exists instead for some reason?
 My suggestion is that featexist() should return false in that case as it
 is not a function provided by the browser/client.

 Now obviously an `if (typeof featexist == 'function')` check would have to
 be done before calling featexist(), and there is no way to get around that.

 Another suggestion is that if a feature is disabled (by the user, the
 admin or the browser/client for some reason) then featexist() should behave
 as if that feature does not exist / is not supported.
 In other words featexist() could be a simple way to ask the browser
 "is this available? can I use this right now?"


 --
 Roger Hågensen, Freelancer, http://skuldwyrm.no/




Re: [whatwg] Supporting feature tests of untestable features

2015-04-01 Thread Boris Zbarsky

On 4/1/15 8:27 AM, James M. Greene wrote:

We had it but browser vendors abandoned its proper behavior [for some
historical reason unbeknownst to me]


The support signal (the hasFeature() implementation) was not in any way 
coupled with the actual implementation.


So you would have cases in which hasFeature() claimed false even though 
the browser supported the feature, cases in which hasFeature() claimed 
true even though the browser didn't support the feature, and cases in 
which the browser had somewhat rudimentary support for the feature but 
hasFeature() claimed true because of various market pressures.  This was 
especially driven by the coarse nature of the features involved -- you 
could at best ask questions like "is this spec supported?", not "is this 
particular piece of functionality supported?".  That works OK for small 
targeted specs, but the W3C wasn't so much in the business of doing those.


The upshot was that in any sort of interesting case hasFeature was 
useless at best and misleading at worst.



We are running into the exact same issues with the HTML Clipboard API being
unreliably detectable. Even more troubling, this is especially true because
it is already partially supported (paste events) in some browsers (e.g.
Chrome), not at all supported in others, and fully supported in none.


So let's consider this case.  How would a hasFeature deal with this 
situation?  At what point would you expect it to start returning true 
for the clipboard API?


-Boris

P.S.  Looking over the clipboard API, it seems like it really has the 
following bits: 1) The various before* events, which would be detectable 
if the spec added the corresponding onbefore* attributes to someplace, 
and 2) The copy/paste/etc events, which could likewise be detectable 
with on* attributes.  Am I missing something else that is not detectable 
for fundamental reasons?
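
The detection pattern Boris describes -- probing for reflected on* event-handler attributes -- could be sketched like this. The helper takes the host object as a parameter (in a browser you would pass e.g. document.body) rather than touching the DOM directly, so it is just an illustration of the idea:

```javascript
// Sketch of event-support detection via reflected on* attributes: if the
// spec added onbeforecopy, oncopy, etc. to an interface, their mere
// presence on a host object would signal support. The host is a parameter
// so the helper is not tied to a particular environment.
function hasEventHandlerAttribute(host, eventName) {
  return ('on' + eventName) in host;
}

// In a browser, support for the beforecopy event could then be probed as:
//   hasEventHandlerAttribute(document.body, 'beforecopy');
```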


Re: [whatwg] Supporting feature tests of untestable features

2015-04-01 Thread James M. Greene
 P.S.  Looking over the clipboard API, it seems like it really has the
following bits:
 1) The various before* events, which would be detectable if the spec
added the
 corresponding onbefore* attributes to someplace, and 2) The
copy/paste/etc events,
 which could likewise be detectable with on* attributes.  Am I missing
something else
 that is not detectable for fundamental reasons?

Not to side-track this generalized conversation too much but it may
actually be useful to detail the state of Clipboard API detection here
based on current practices and APIs (which is completely hopeless):

*Event-based detection:*
The onbeforecopy, onbeforecut, onbeforepaste, oncopy, oncut, and
onpaste events have all existed in IE, Safari, and Chrome (and now modern
Opera, I'd imagine) for quite some time, related to standard user actions
for copy/cut/paste (and more importantly for ContentEditable purposes,
IIRC).  So this approach would return false positives for at least every
modern major browser other than Firefox (if not Firefox, too).

*Method-based detection:*
Firefox, for example, implements
`clipboardData.setData()`/`clipboardData.getData()` and also implements the
`ClipboardEvent` constructor, but does not actually modify the user's
clipboard on `clipboardData.setData()`. Additionally, to even get access to
`clipboardData`, you must be able to synthetically execute one of these
commands (e.g. copy). So this approach isn't great, and will return false
positives for Firefox anyway.
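
A sketch of that method-based probe, for illustration only: it proves nothing beyond the interface existing, which is exactly the Firefox false positive described above. The global object is a parameter so the check is not tied to a browser:

```javascript
// "Method-based" probe: check that the ClipboardEvent interface exists.
// This only shows the API surface is present, not that setData() really
// writes the OS clipboard, so it yields false positives (e.g. Firefox).
function clipboardInterfacePresent(globalObj) {
  return typeof globalObj.ClipboardEvent === 'function';
}

// In a browser: clipboardInterfacePresent(window)
```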

*Command-based detection:*
Since the actual actions like copy, etc. will be triggered via ye olde
query commands (i.e. `document.execCommand`), another proper approach for
detection would be using the `document.queryCommandSupported` and
`document.queryCommandEnabled` methods to check for support.  However,
those methods work inconsistently cross-browser, some throw Errors instead
of returning `false`, etc.  Beside all that, the copy, cut, and paste
query commands have all previously existed for ContentEditable purposes.
So this approach is flawed, unreliable, and will return false positives.
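
The inconsistencies described above mean any real-world command-based probe has to be written defensively. A sketch (the `doc` parameter stands in for `document` so the snippet is self-contained; the wrapper name is mine):

```javascript
// Defensive wrapper for the "command-based" probe: some engines throw
// from queryCommandSupported() instead of returning false, so the check
// has to swallow errors. Pass `document` as `doc` in a browser.
function commandSupported(doc, command) {
  if (typeof doc.queryCommandSupported !== 'function') return false;
  try {
    return doc.queryCommandSupported(command) === true;
  } catch (e) {
    // A throwing implementation is treated as "unsupported".
    return false;
  }
}
```

Even with the error handling, this still yields false positives, since the copy/cut/paste query commands existed for contentEditable purposes long before any clipboard support did.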

*Implementation-based detection:*
As previously discussed in this thread,
`document.implementation.hasFeature` and/or `Node.isSupported` could have
been another idea for feature detection but has already been bastardized to
the point of no return.  So this approach, if implemented, would almost
always return false positives (e.g. Firefox, if not all other browsers,
returns `true` for everything).



As far as checking on full implementations, it would be nice if specs
followed some reliable versioning pattern like SemVer that we could use to
verify the correct level of support.  Even at that point, though: should
such an approach only check against W3C specs or also WHATWG specs?

Additionally, as you mentioned, it would be much better if we could create
some API which would offer the ability to check for partial implementations
(features/sub-features) vs. full spec implementations as well.  For
example, Chrome has implemented the paste feature of the Clipboard API
nearly completely but has NOT implemented the copy or cut features.

Sincerely,
James Greene


On Wed, Apr 1, 2015 at 8:07 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 [quoted text snipped]

Re: [whatwg] Supporting feature tests of untestable features

2015-04-01 Thread James M. Greene
P.S. If you want to get involved, here is a link to the archive of the most
recent email thread about feature detection for the Clipboard API:

https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0592.html


Sincerely,
James Greene


On Wed, Apr 1, 2015 at 9:04 AM, James M. Greene james.m.gre...@gmail.com
wrote:

 [quoted text snipped]

[whatwg] Supporting feature tests of untestable features

2015-03-31 Thread Kyle Simpson
There are features being added to the DOM/web platform, or at least under 
consideration, that do not have reasonable feature tests obvious/practical in 
their design. I consider this a problem, because all features which authors 
(especially those of libraries, like me) rely on should be able to be tested if 
present, and fall back if not present.

Paul Irish did a round-up awhile back of so called undetectables here: 
https://github.com/Modernizr/Modernizr/wiki/Undetectables

I don't want to get off topic in the weeds and/or invite bikeshedding about 
individual hard to test features. So I just want to keep this discussion to a 
narrow request:

Can we add something like a feature test API (whatever it's called) where 
certain hard cases can be exposed as tests in some way?

The main motivation for starting this thread is the new `link rel=preload` 
feature as described here: https://github.com/w3c/preload

Specifically, in this issue thread: https://github.com/w3c/preload/issues/7 I 
bring up the need for that feature to be testable, and observe that as 
currently designed, no such test is feasible. I believe that must be addressed, 
and it was suggested that perhaps a more general solution could be devised if 
we bring this to a wider discussion audience.