Re: Steps to creating a browser standard for the moz-icon:// scheme

2010-02-04 Thread Adam Barth
On Thu, Feb 4, 2010 at 1:34 AM, timeless timel...@gmail.com wrote:
 2010/2/3 Adam Barth w...@adambarth.com:
 You've been getting a lot of feedback from Mozilla.  Jonas Sicking,
 Robert O'Callahan, and Boris Zbarsky are all leading members of the
 Mozilla community.

 I guess that makes me a trailing member :)

I was going to mention you, but then your email spoke about Nokia so I
was worried that I was confused.

Adam



Re: [WebTiming] HTMLElement timing

2010-02-04 Thread lenny.rachitsky
I’d like to jump in here and address this point:

“While I agree that timing information is important, I don't think it's
going to be so commonly used that we need to add convenience features
for it. Adding a few event listeners at the top of the document does
not seem like a big burden.”

I work for a company that sells a web performance monitoring service to
Fortune 1000 companies. To give a quick bit of background to the monitoring
space, there are two basic ways to provide website owners with reliable
performance metrics for their web site/applications. The first is to do
active/synthetic monitoring, where you test the site using an automated
browser from various locations around the world, simulating a real user. The
second approach is called passive or real user monitoring, which captures
actual visits to your site and records the performance of those users. This
second approach is accomplished with either a network tap appliance sitting
in the customer's datacenter that captures all of the traffic that comes to
the site, or using the “event listener” javascript trick which times the
client side page performance and sends it back to a central server.
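
To make that concrete, here is a minimal sketch of the "event listener"
trick (the beacon URL and payload here are hypothetical, not any
particular product's):

  // As early as possible, ideally inline in the document head:
  var t0 = new Date().getTime();

  window.addEventListener('load', function () {
    var loadTime = new Date().getTime() - t0;
    // Report via an image beacon; no XHR or same-origin setup needed.
    new Image().src = 'http://stats.example.com/beacon?load=' + loadTime;
  }, false);

Note that t0 can only be taken once the page has started executing
script, which is exactly why DNS, connect and redirect time are
invisible to this approach.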

Each of these approaches has pros and cons. The synthetic approach doesn’t
tell you what actual users are seeing, but it is consistent and easy to
set up and manage. The appliance approach is expensive and misses out on
components that don’t get served out of the one datacenter, but it sees real
users’ performance. The client-side javascript timing approach gives you very
limited visibility, but it is easy to set up and universally available. The
limited nature of this latter javascript approach is the crux of why
this “Web Timing” draft is so valuable. Website owners today have no way to
accurately track the true performance of actual visitors to their website.
With the proposed interface additions, companies would finally be able to
not only see how long the page truly takes to load (including the
pre-javascript execution time), but they’d also now be able to know how much
DNS and connect time affect actual visitors’ performance, how much of an
impact each image/object makes (an increasing source of performance
issues), and ideally how much JS parsing and SSL handshakes add to the load
time. This would give website owners tremendously valuable data that is
currently impossible to reliably track.
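
With an interface like the proposed one, a monitoring script could
instead do something like the following sketch. The entry point and
attribute names below are assumptions about the general shape of the
draft rather than its exact API (see
http://dev.w3.org/2006/webapi/WebTiming/ for the real names), and
report() is a hypothetical helper that beacons the data home.

  window.addEventListener('load', function () {
    var t = window.pageTiming;  // assumed entry point, per the draft's idea
    if (!t) return;             // older browsers: fall back to the listener trick
    report({
      dns:     t.domainLookupEnd - t.domainLookupStart,
      connect: t.connectEnd      - t.connectStart,
      total:   t.loadEventStart  - t.navigationStart
    });
  }, false);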


Lenny Rachitsky
Webmetrics


James Robinson wrote:
 
 On Tue, Feb 2, 2010 at 10:36 AM, Zhiheng Wang zhihe...@google.com wrote:
 
 Hi, Olli,

  On Fri, Jan 29, 2010 at 6:15 AM, Olli Pettay
  olli.pet...@helsinki.fi wrote:

  On 1/27/10 9:39 AM, Zhiheng Wang wrote:

 Folks,

  Thanks to much feedback from various developers, the WebTiming spec
  has undergone a major revision. Timing info has now been extended to
  page elements, and a couple more interesting timing data points have
  been added. The draft is up on
  http://dev.w3.org/2006/webapi/WebTiming/

  Feedback and comments are highly appreciated.

 cheers,
 Zhiheng



 Like Jonas mentioned, this kind of information could be exposed
 using progress events.

  What is missing in the draft, and actually in the emails I've seen
  about this, is the actual use case for the web.
  Debugging web apps can happen outside the web, with tools like
  Firebug, which investigate what the browser does at different times.
  Why would a web app itself need all this information? To optimize
  something, like using a different server if some server is slow?
  But for that, (extended) progress events would be good.
  And if the browser exposes all the information that the draft
  suggests, it would make sense to dispatch some event when some new
  information is available.


 Good point, and I do need to spend more time on the intro and use cases
  throughout the spec. In short, the target of this spec is web site
  owners who want to benchmark their user experience in the field.
  Debugging tools are indeed very powerful in development, but things
  could become quite different once the page is put to the wild, e.g.,
  there is no telling about DNS or TCP connection time in the dev space;
  UGC only adds more complications to the overall latency of the page;
  and what is the right TTL for my DNS record if I want to maintain a
  certain cache hit rate? etc.


 There are also undefined things like paint event, which is
 referred to in lastPaintEvent and paintEventCount.
 And again, use case for paintEventCount etc.


Something like Mozilla's MozAfterPaint?  I do need to work on more use
 cases.

 
 In practice I think this will be useless.  In a page that has any sort of
 animation, blinking cursor, mouse movement plus hover effects, etc., the
 'last paint time' will always be immediately before the query.  I would
 recommend dropping it.
 
 - James
 
 


 The name of the attribute is very strange:
 readonly attribute DOMTiming document;


agreed... how about something like root_times?




 What is the reason for timing array in window object? 



Re: MPEG-U

2010-02-04 Thread Robin Berjon
Hi,

On Feb 3, 2010, at 09:47 , Cyril Concolato wrote:
 I've been informed by the ISO secretariat that the liaison from MPEG was sent 
 to the W3C and that the right persons this time have received it. Is that 
 correct? Can you tell me what the next step is? Has the group discussed it? 
 What is the opinion of the group? If not, when will it be discussed?

The group received this email on 26/01:

  http://www.w3.org/mid/4b5ff205.1050...@w3.org (member-only)

That email contains no information itself but speaks of an attached PDF. But 
there is no attached PDF (maybe it got lost in the transfer?). I don't know if 
anyone replied to enquire about it.

It would be a lot simpler if the secretariat would just send an email to this 
list!

-- 
Robin Berjon - http://berjon.com/






Re: [WARP] Use cases for local network access

2010-02-04 Thread Arve Bersvendsen
On Tue, 02 Feb 2010 19:09:26 +0100, Stephen Jolly  
stephen.jo...@rd.bbc.co.uk wrote:



All,

As actioned in the 21st Jan teleconference, here are the use cases that  
have motivated my specific proposal for supporting local network access  
in the WARP spec (see  
http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0173.html  
for details).


1. A developer wishes to write widgets that can connect to the web API  
exposed by a network-connected television or personal video recorder  
(aka digital video recorder) on their home network.  This API allows  
(for example) the channel being viewed to be changed or the list of  
scheduled recordings to be modified, via a user interface presented by  
the widget.


During this teleconference, I was asked to elaborate my position on this  
topic.  The advantage of creating a definition of a local network is the  
following variant of the use case:


A developer wants to write a widget that posts whatever channel the user  
switches to on a network-connected TV to http://μblogging.example.com/.   
The problem with WARP as-is is that, since the network address of said TV is  
indeterminate, the developer's only option is to allow the widget to  
connect to any URL it wishes (specifying '*' in origin), or to add a large  
(read: huge) set of origins in order to be able to do this.


My proposal would be to add a second attribute to the specification, a  
boolean, such as origin-local, which would replace any IP address the user  
agent considers to be local, link-local or even local-machine addresses.   
Alternatively, should fine-grained distinction between the three, these  
could alternatively be keywords in the existing origin attribute.
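
Expressed against WARP's existing access element, the variant above might  
look something like the following (the origin-local attribute is of course  
only my proposal, not part of the current draft):

  <widget xmlns="http://www.w3.org/ns/widgets">
    <!-- Works today: the blogging service has a fixed, known origin. -->
    <access origin="http://μblogging.example.com/"/>
    <!-- Proposed: whatever addresses the UA considers local, instead of
         enumerating every possible private or link-local address. -->
    <access origin-local="true"/>
  </widget>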




--
Arve Bersvendsen

Opera Software ASA, http://www.opera.com/



[widgets] Draft Minutes for 4-Feb-2010 voice conference

2010-02-04 Thread Arthur Barstow
The draft minutes from the 4 February Widgets voice conference are  
available at the following and copied below:


 http://www.w3.org/2010/02/04-wam-minutes.html

WG Members - if you have any comments, corrections, etc., please send  
them to the public-webapps mail list before 11 February (the next  
Widgets voice conference); otherwise these minutes will be considered  
Approved.


-Regards, Art Barstow

   [1]W3C

  [1] http://www.w3.org/

   - DRAFT -

   Widgets Voice Conference

04 Feb 2010

   [2]Agenda

  [2] http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0411.html


   See also: [3]IRC log

  [3] http://www.w3.org/2010/02/04-wam-irc

Attendees

   Present
  Art, Arve, Marcos, StephenJ, StevenP, Robin, Marcin

   Regrets
  Josh

   Chair
  Art

   Scribe
  Art

Contents

 * [4]Topics
 1. [5]Review and tweak agenda
 2. [6]Announcements
 3. [7]PC spec: Any critical comments against PC CR#2?
 4. [8]PC spec: Interop plans (and exiting CR)
 5. [9]TWI spec: test case comments
 6. [10]TWI spec: Interop plans?
 7. [11]WARP spec: test suite plans
 8. [12]WARP spec: use cases for local network access
 9. [13]URI Scheme spec: Status of LC comment tracking
10. [14]AOB
 * [15]Summary of Action Items
 _

   scribe ScribeNick: ArtB

   scribe Scribe: Art

   Date: 4-Feb-2010

Review and tweak agenda

   AB: agenda submitted on Feb 3 (
   [16]http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0411.html
   ). We will drop 4.a. because Marcos already closed action 476. Any
   change requests?

  [16] http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0411.html


Announcements

   AB: any short announcements?

PC spec: Any critical comments against PC CR#2?

   AB: the comment period for PC CR#2 ended 24-Jan-2010. About 15
   comments were submitted against the spec and its test suite; see the
   list in (
   [17]http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0410.html
   ). Marcos said (
   [18]http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0413.html
   ) the emails resulted in clarifications to the spec and fixes in the
   test suite.
   ... any comments about Marcos' analysis or any concerns about the
   comments that were submitted?

  [17] http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0410.html
  [18] http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0413.html


   darobin +1

   AB: I also did not recognize any substantial comments

PC spec: Interop plans (and exiting CR)

   AB: the PC CR Implementation Report (
   [19]http://dev.w3.org/2006/waf/widgets/imp-report/ ) shows 3
   implementations pass 100% of the tests in the test suite. I think
   that means we can now exit CR and advance to PR.
   ... any comments?
   ... any disagreements with my interpretation?

 [19] http://dev.w3.org/2006/waf/widgets/imp-report/

   MC: I added one test to the test suite
   ... thus everyone is down to 99%
   ... planning to add one more test
   ... then I think it will be complete

   SP: what are the exit criteria?

   MC: 2 impls that pass 100% of the tests

   Arve: having 2 interop impls doesn't mean there are no problems
   ... if those impls are widely used
   ... Perhaps the exit criteria should have been tighter

   AB: we are free to create any criteria we want
   ... I would caution though on being overly constraining
   ... I am also sympathetic to the concerns Marcos raised

   Steven-cwi and demonstrated at least two interoperable
   implementations (interoperable meaning at least two implementations
   that pass each test in the test suite).

   MC: we all agree we don't want to rush it

   SP: agree and that's not what I was saying; just wanted to clarify

   Steven-cwi Traditionally, exiting CR was with two impls of each
   feature, rather than two implementations of EVERY feature

   MC: think we need more in the wild usage

   Steven-cwi but we are being stricter, which is fine

   Steven-cwi but the wording can actually be interpreted as the
   looser version

   RB: I think we're OK to ship
   ... think we've already done pretty good
   ... if we run into serious probs we can publish a 2nd edition
   ... we have done a bunch of authoring and not found major issues

   MC: if people feel confident, I won't block moving forward

   AB: coming back to these two new test cases

   Marcos
   [20]http://dev.w3.org/2006/waf/widgets/test-suite/test-cases/ta-rZdcMBExBX/002/

  [20] http://dev.w3.org/2006/waf/widgets/test-suite/test-cases/ta-rZdcMBExBX/002/


   AB: at a minimum, presume we would need at least 2/3 impls to run
   these 2 new tests
   ... one of the new tests is checked in already?

   MC: yes
   ... and the 2nd will be checked in 

Re: [WARP] Use cases for local network access

2010-02-04 Thread Stephen Jolly
On 4 Feb 2010, at 15:15, Arve Bersvendsen wrote:
On Tue, 02 Feb 2010 19:09:26 +0100, Stephen Jolly stephen.jo...@rd.bbc.co.uk 
wrote:
 As actioned in the 21st Jan teleconference, here are the use cases that have 
 motivated my specific proposal for supporting local network access in the 
 WARP spec (see 
 http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0173.html for 
 details).
 
 1. A developer wishes to write widgets that can connect to the web API 
 exposed by a network-connected television or personal video recorder (aka 
 digital video recorder) on their home network.  This API allows (for 
 example) the channel being viewed to be changed or the list of scheduled 
 recordings to be modified, via a user interface presented by the widget.
 
 During this teleconference, I was asked to elaborate my position on this 
 topic.  The advantage of creating a definition of a local network is the 
 following variant of the use case:
 
 A developer wants to write a widget that posts whatever channel the user 
 switches to on a network-connected TV to http://μblogging.example.com/.  The 
 problem with WARP as-is is that, since the network address of said TV is 
 indeterminate, the developer's only option is to allow the widget to connect 
 to any URL it wishes (specifying '*' in origin), or to add a large (read: 
 huge) set of origins in order to be able to do this.

Surely the same problem exists if you remove the dependency on the blogging 
service - i.e., if the widget merely wants to connect to the television?

 My proposal would be to add a second attribute to the specification, a 
 boolean, such as origin-local, which would stand in for any IP address the 
 user agent considers to be local, link-local or even a local-machine address.  
 Alternatively, should a fine-grained distinction between the three be needed, 
 these could be keywords in the existing origin attribute.

I would be OK with a solution along those lines, although leaving the 
definition of local up to the user agent still concerns me due to the 
potential impact on developers and users when the same widget behaves 
differently on different WUAs.

S




RE: [WARP] Use cases for local network access

2010-02-04 Thread Marcin Hanclik
Hi Arve,

Alternatively, should a fine-grained distinction between the three be
needed, these could be keywords in the existing origin attribute.
+1
It matches the proposal at http://dev.w3.org/2006/waf/widgets-access-upnp/, 
although the name of that document is misleading.

Thanks,
Marcin

Marcin Hanclik
ACCESS Systems Germany GmbH
Tel: +49-208-8290-6452  |  Fax: +49-208-8290-6465
Mobile: +49-163-8290-646
E-Mail: marcin.hanc...@access-company.com

-Original Message-
From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
Behalf Of Arve Bersvendsen
Sent: Thursday, February 04, 2010 4:16 PM
To: Stephen Jolly; public-webapps@w3.org
Subject: Re: [WARP] Use cases for local network access

On Tue, 02 Feb 2010 19:09:26 +0100, Stephen Jolly
stephen.jo...@rd.bbc.co.uk wrote:

 All,

 As actioned in the 21st Jan teleconference, here are the use cases that
 have motivated my specific proposal for supporting local network access
 in the WARP spec (see
 http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0173.html
 for details).

 1. A developer wishes to write widgets that can connect to the web API
 exposed by a network-connected television or personal video recorder
 (aka digital video recorder) on their home network.  This API allows
 (for example) the channel being viewed to be changed or the list of
 scheduled recordings to be modified, via a user interface presented by
 the widget.

During this teleconference, I was asked to elaborate my position on this
topic.  The advantage of creating a definition of a local network is the
following variant of the use case:

A developer wants to write a widget that posts whatever channel the user
switches to on a network-connected TV to http://μblogging.example.com/.
The problem with WARP as-is is that, since the network address of said TV is
indeterminate, the developer's only option is to allow the widget to
connect to any URL it wishes (specifying '*' in origin), or to add a large
(read: huge) set of origins in order to be able to do this.

My proposal would be to add a second attribute to the specification, a
boolean, such as origin-local, which would stand in for any IP address the
user agent considers to be local, link-local or even a local-machine address.
Alternatively, should a fine-grained distinction between the three be needed,
these could be keywords in the existing origin attribute.



--
Arve Bersvendsen

Opera Software ASA, http://www.opera.com/






Re: [XHR] XMLHttpRequest specification lacks security considerations

2010-02-04 Thread Thomas Roessler
On 31 Jan 2010, at 14:23, Anne van Kesteren wrote:

 On Tue, 19 Jan 2010 08:01:12 +0100, Thomas Roessler t...@w3.org wrote:
 With apologies for the belated Last Call comment -- the XMLHttpRequest 
 specification
  http://www.w3.org/TR/XMLHttpRequest/
 
 ... doesn't have meaningful security considerations.
 
 I actually removed that section altogether in the editors draft.

Strikes me as a step in the wrong direction.

 - Somewhat detailed considerations around CONNECT, TRACE, and TRACK (flagged 
 in the text of the specification, but not called out in the security 
 section; 4.6.1).
 
 What is the reason for duplicating this information?

It will be useful for implementers and reviewers of this specification to find 
a brief summary of the relevant issues within the spec itself.  That doesn't 
imply that you simply need to duplicate information.

 - Considerations around DNS rebinding.
 
 Why would these be specific to XMLHttpRequest?

These indeed apply to just about any specification that uses a same-origin 
policy. But that's not a justification for ignoring them here.  DNS rebinding 
has been both obvious and overlooked for some 10-15 years, so reminding 
reviewers and implementers of both the security risk and the countermeasures 
would seem appropriate.

 - Some explanation for the security reasons that are mentioned in section 
 4.6.2 (setRequestHeader).
 
 Maybe removing security reasons would be better?

No.  It's worth explaining why (a) we have a specific blacklist, and (b) what 
the impact of not having that blacklist is -- this is effectively profiling of 
the protocol elements that are accessible to applications; if I've seen a 
design decision that deserves a rationale in the spec, then this qualifies.

 - The rationale for the handling of HTTP redirects in section 4.6.4.
 
 I agree that this should be clarified, though I do not see why it should be 
 mentioned in a separate section as well.

It sounds useful to tell a single, consistent story about the security model 
around redirects, DNS rebinding, and same-origin policies, instead of 
scattering rationales through the spec.  Therefore, I'm in favor of covering 
these topics in a single security considerations section.

 - The fact that this specification normatively defines the same-origin 
 policy as it applies to network access within browsers (section 4.6.1; 
 though that mostly refers to HTML5 these days)

 It does not define the policy. It just uses it.

It does not define what same-origin means.  It *is* the place that explains 
what policy applies to XMLHttpRequest, and the redirect section is one example 
where the policy needs to be refined for the specific case.

So, without going into semantics of what "define the policy" means, I suggest 
calling out that this spec sets the security policy for XHR, what that policy 
is, how the different pieces that are relevant to it tie together, and what the 
risks are.

 
 Related to this, what is the rationale for making the following (explicitly 
 security-relevant) conformance clauses SHOULD, not MUST?
 
 ** 4.6.1
 
 If method is a case-sensitive match for CONNECT, TRACE, or TRACK the user 
 agent should raise a SECURITY_ERR exception and terminate these steps.
 
 If the origin of url is not same origin with the XMLHttpRequest origin the 
 user agent should raise a SECURITY_ERR exception and terminate these steps.
 
 ** 4.6.2
 
 For security reasons, these steps should be terminated if header is an ASCII 
 case-insensitive match for one of the following headers:
 ...
 
 Early on we agreed that all security-relevant conformance clauses should be 
 SHOULD and not MUST so that implementors could ignore them in specific 
 contexts. E.g. extensions. I would personally be fine with making these MUST.

I'd be significantly more comfortable with a MUST, and wonder whether the 
extension considerations have changed over time.  *If* we stick to SHOULD, some 
analysis of the combined effects of different choices would be in order.
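
For concreteness, the clauses in question surface to script roughly as
follows. This is only a sketch, and it assumes a user agent that honours
the SHOULDs; under SHOULD rather than MUST, another conforming
implementation may legitimately do neither, which is exactly the
interoperability worry:

  var client = new XMLHttpRequest();
  try {
    client.open("TRACE", "/anything");   // 4.6.1: should raise SECURITY_ERR
  } catch (e) {
    // expected in user agents that follow the SHOULD
  }
  client.open("GET", "/data", true);
  client.setRequestHeader("Host", "attacker.example");  // 4.6.2: should be ignored
  client.send();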

(Considering the discussion around cross-origin XHR over the past two or three 
years, I suspect that we've had a (partial) change of attitude around playing 
with different security models for the API.  Hence, I'd like us to reconsider 
that particular decision.)




Re: [WebTiming] HTMLElement timing

2010-02-04 Thread Jonas Sicking
On Mon, Feb 1, 2010 at 5:00 PM, Lenny Rachitsky
lenny.rachit...@webmetrics.com wrote:
 I’d like to jump in here and address this point:

 “While I agree that timing information is important, I don't think it's
 going to be so commonly used that we need to add convenience features
 for it. Adding a few event listeners at the top of the document does
 not seem like a big burden.”

 I work for a company that sells a web performance monitoring service to
 Fortune 1000 companies. To give a quick bit of background to the monitoring
 space, there are two basic ways to provide website owners with reliable
 performance metrics for their web site/applications. The first is to do
 active/synthetic monitoring, where you test the site using an automated
 browser from various locations around the world, simulating a real user. The
 second approach is called passive or real user monitoring, which captures
 actual visits to your site and records the performance of those users. This
 second approach is accomplished with either a network tap appliance sitting
 in the customer's datacenter that captures all of the traffic that comes to
 the site, or using the “event listener” javascript trick which times the
 client side page performance and sends it back to a central server.

 Each of these approaches has pros and cons. The synthetic approach doesn’t
 tell you what actual users are seeing, but it is consistent and easy to
 set up and manage. The appliance approach is expensive and misses out on
 components that don’t get served out of the one datacenter, but it sees real
 users’ performance. The client-side javascript timing approach gives you very
 limited visibility, but it is easy to set up and universally available. The
 limited nature of this latter javascript approach is the crux of why
 this “Web Timing” draft is so valuable. Website owners today have no way to
 accurately track the true performance of actual visitors to their website.
 With the proposed interface additions, companies would finally be able to
 not only see how long the page truly takes to load (including the
 pre-javascript execution time), but they’d also now be able to know how much
 DNS and connect time affect actual visitors’ performance, how much of an
 impact each image/object makes (an increasing source of performance
 issues), and ideally how much JS parsing and SSL handshakes add to the load
 time. This would give website owners tremendously valuable data that is
 currently impossible to reliably track.

Hi Lenny,

I agree that exposing performance metrics to the web page is a good
idea. I just disagree with the list of elements for which metrics are
being collected. Every element that we put on the list incurs a
significant cost to browser implementors, time that could be spent on
other, potentially more important, features. Just because something
could be useful doesn't mean that it's worth its cost.

Additionally, the more metrics that are collected, the more browser
performance is spent measuring those metrics. So there is a cost to
everyone else, both authors and users, too.

/ Jonas



[widgets-twi] window object

2010-02-04 Thread Cyril Concolato

Hi all,

After PC, I'm looking now at the Widget Interface spec, in particular to check 
the test suite and produce the implementation report. I have a problem with the 
spec. In GPAC, we implement only SVG, not HTML5 with its Window object. So I'm 
wondering: how should the widget object be implemented in a UA that does not 
support the window object?

Best regards,

Cyril
--
Cyril Concolato
Maître de Conférences/Associate Professor
Groupe Multimedia/Multimedia Group
Telecom ParisTech
46 rue Barrault
75 013 Paris, France
http://concolato.blog.telecom-paristech.fr/



Re: [WebTiming] HTMLElement timing

2010-02-04 Thread Lenny Rachitsky
Understood. I used to run the engineering department here at Webmetrics, so I
understand the cost/benefit decisions that need to be made with any new
functionality. However, coming from the web performance industry, anything
that could help website owners understand and track their performance better
is exciting to me, especially with the potential that this proposed
functionality provides. All of the existing techniques only scratch at the
ideal that this interface allows: the ability to finally track the full
and accurate performance of the end user. It would also help the various
in-browser performance tools report consistent results, which is something
we've heard customers complain about (especially if this is implemented
across browsers).

Clearly slowing down the user experience is bad. I have nearly zero
knowledge of browser internals, but one thought: allow the website owner to
activate these metrics using a flag, leaving it to them to decide if it's
worth the added processing time to capture this data.

P.S. I apologize for the multiple submissions...I kept banging my head
against the wall trying to post to the list and for some reason they all
queued up and spammed the list. Technology fail.


On 2/4/10 9:17 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 1, 2010 at 5:00 PM, Lenny Rachitsky
 lenny.rachit...@webmetrics.com wrote:
  I’d like to jump in here and address this point:
 
  “While I agree that timing information is important, I don't think it's
  going to be so commonly used that we need to add convenience features
  for it. Adding a few event listeners at the top of the document does
  not seem like a big burden.”
 
  I work for a company that sells a web performance monitoring service to
  Fortune 1000 companies. To give a quick bit of background to the monitoring
  space, there are two basic ways to provide website owners with reliable
  performance metrics for their web site/applications. The first is to do
  active/synthetic monitoring, where you test the site using an automated
  browser from various locations around the world, simulating a real user.
  The second approach is called passive or real user monitoring, which
  captures actual visits to your site and records the performance of those
  users. This second approach is accomplished with either a network tap
  appliance sitting in the customer’s datacenter that captures all of the
  traffic that comes to the site, or using the “event listener” javascript
  trick which times the client side page performance and sends it back to a
  central server.
 
  Each of these approaches has pros and cons. The synthetic approach doesn’t
  tell you what actual users are seeing, but it is consistent and easy to
  set up and manage. The appliance approach is expensive and misses out on
  components that don’t get served out of the one datacenter, but it sees
  real users’ performance. The client-side javascript timing approach gives
  you very limited visibility, but it is easy to set up and universally
  available. The limited nature of this latter javascript approach is the
  crux of why this “Web Timing” draft is so valuable. Website owners today
  have no way to accurately track the true performance of actual visitors to
  their website. With the proposed interface additions, companies would
  finally be able to not only see how long the page truly takes to load
  (including the pre-javascript execution time), but they’d also now be able
  to know how much DNS and connect time affect actual visitors’ performance,
  how much of an impact each image/object makes (an increasing source of
  performance issues), and ideally how much JS parsing and SSL handshakes add
  to the load time. This would give website owners tremendously valuable
  data that is currently impossible to reliably track.
 
 Hi Lenny,
 
 I agree that exposing performance metrics to the web page is a good
 idea. I just disagree with the list of elements for which metrics is
 being collected. Every element that we put on the list incurs a
 significant cost to browser implementors, time that could be spent on
 other, potentially more important, features. Just because something
 could be useful doesn't mean that it's worth its cost.
 
  Additionally, the more metrics that are collected, the more browser
  performance is spent measuring those metrics. So there is a cost to
  everyone else, both authors and users, too.
 
 / Jonas
 


--
Lenny Rachitsky 
Neustar, Inc. / Software Architect/RD
9444 Waples St., San Diego CA 92121
Office: +1.877.524.8299x434  / lenny.rachit...@webmetrics.com /
www.neustar.biz
 
 



[widgets] TWI: comments

2010-02-04 Thread Cyril Concolato

Hi all,

While trying to implement the widget interface spec [1], I found two typos:
- a user agent can to support = a user agent can support
- missing closing parenthesis in conjunction to the preferences attribute).

I have also some remarks/questions:

* A user agent whose start file implements [HTML5]'s Window  interface MUST 
...
The start file does not implement anything; the user agent implements. I 
suggest changing it to something like:
User agents implementing [HTML5]'s Window interface MUST ...

* Section 5 is called Widget Interface but it starts by saying The widget object provides 
 I think it should say Objects implementing the widget interface provide ...

* Step 1 in the initialization of the preferences attribute algorithm, which says "Establish 
the instance of a widget for this widget and create a storage area that is unique for the 
origin.", should probably say "unique for the origin and for that instance".

* The spec says:
When an object implementing the Widget interface is instantiated, if a user agent 
has not previously associated a storage area with the instance of a widget, then the user 
agent must initialize the preferences attribute.
What happens if the UA has already associated a storage area? It should 
probably say that no initialization of the preferences attribute is made, but 
that the associated storage area can be accessed through the Storage interface, no?

* In case two instances of the same widget package are loaded, modified (e.g. 
weather in Paris and in New York) and then closed, how does the UA retrieve the 
associated storage area when one is reloaded? I don't think it can be 
specified, but I think you should probably add a note saying that it is 
implementation specific, for example by asking the user which previous instance 
it wants to start first.

* What happens to the storage event fired by the setItem or removeItem 
methods when the UA does not implement the window object?

* What is the return value for the openURL method when there is a scheme 
handler associated with the IRI? When there is none, the text says the method 
returns void. I think it also returns void, so I wonder what the point of the 
paragraph is.

* The IDL spec indicates that the preferences attribute implements the Storage 
interface, but I can't find a 'real' sentence saying it. I find:
Note: A user agent can  support the Storage interface on DOM attributes other than 
the preferences attribute (e.g., a user agent can to support the [WebStorage]  
specification's localStorage attribute of the window object in conjunction to the 
preferences attribute) but this is a note, hence not normative.

"Return the Storage object associated with that widget instance's preferences 
attribute.", but that's not really explicit.

"Implement the Storage interface on the storage area, and make the preferences attribute a 
pointer to that storage area.", but this isn't as clear as "The UA MUST support the 
Storage interface on the preferences attribute" or similar...

I suggest that you add an additional sentence. Also, the given example is not 
really clear because it does not show the relationship between a config.xml 
document with preference elements and the associated script and storage.

Finally, can you clarify whether the usage of getItem / setItem, such as in 
widget.preferences.getItem('foo'); and widget.preferences.setItem('foo', 
'dahut');, is allowed, or whether only the brackets notation 
(widget.preferences['foo']) is allowed? Maybe by adding an example?
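
For instance, an example along these lines would settle it. (A sketch
only; it assumes the intent is that preferences genuinely implements
[WebStorage]'s Storage interface, in which case both spellings should
behave identically.)

  widget.preferences.setItem('foo', 'dahut');   // explicit Storage method
  widget.preferences['foo'] = 'dahut';          // named setter, same effect
  var a = widget.preferences.getItem('foo');    // explicit Storage method
  var b = widget.preferences['foo'];            // named getter, same value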

Regards,
Cyril

[1] http://www.w3.org/TR/2009/CR-widgets-apis-20091222/
--
Cyril Concolato
Maître de Conférences/Associate Professor
Groupe Multimedia/Multimedia Group
Telecom ParisTech
46 rue Barrault
75 013 Paris, France
http://concolato.blog.telecom-paristech.fr/widgets/



Re: [XHR2] AnonXMLHttpRequest()

2010-02-04 Thread Tyler Close
On Wed, Feb 3, 2010 at 2:34 PM, Maciej Stachowiak m...@apple.com wrote:
 I don't think I've ever seen a Web server send Vary: Cookie. I don't know 
 offhand if they consistently send enough cache control headers to prevent 
 caching across users.

I've been doing a little poking around. Wikipedia sends Vary:
Cookie. Wikipedia additionally uses Cache-Control: private, as do
some other sites I checked. Other sites seem to be relying on
revalidation of cached entries by making them already expired.
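
For reference, the relevant response headers look something like this
(the values are illustrative of what Wikipedia and similar sites send,
not an exact capture):

  HTTP/1.1 200 OK
  Content-Type: text/html; charset=UTF-8
  Cache-Control: private, must-revalidate, max-age=0
  Vary: Accept-Encoding, Cookie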

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [XHR2] AnonXMLHttpRequest()

2010-02-04 Thread Thomas Broyer
On Thu, Feb 4, 2010 at 11:05 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, Feb 3, 2010 at 2:34 PM, Maciej Stachowiak m...@apple.com wrote:
 I don't think I've ever seen a Web server send Vary: Cookie. I don't know 
 offhand if they consistently send enough cache control headers to prevent 
 caching across users.

 I've been doing a little poking around. Wikipedia sends Vary:
 Cookie. Wikipedia additionally uses Cache-Control: private, as do
 some other sites I checked. Other sites seem to be relying on
 revalidation of cached entries by making them already expired.

FWIW, Django also sends Vary: Cookie when using sessions (which
includes form authentication AFAICT):
http://code.djangoproject.com/browser/django/trunk/django/contrib/sessions/middleware.py

-- 
Thomas Broyer
/tɔ.ma.bʁwa.je/



Re: [XHR2] AnonXMLHttpRequest()

2010-02-04 Thread Kenton Varda
On Thu, Feb 4, 2010 at 2:05 PM, Tyler Close tyler.cl...@gmail.com wrote:

 On Wed, Feb 3, 2010 at 2:34 PM, Maciej Stachowiak m...@apple.com wrote:
  I don't think I've ever seen a Web server send Vary: Cookie. I don't
 know offhand if they consistently send enough cache control headers to
 prevent caching across users.

 I've been doing a little poking around. Wikipedia sends Vary:
 Cookie. Wikipedia additionally uses Cache-Control: private, as do
 some other sites I checked. Other sites seem to be relying on
 revalidation of cached entries by making them already expired.


Unfortunately, lots of sites don't get this right.  Look back to 2005-ish
when Google released the Google Web Accelerator -- basically a glorified
HTTP proxy.  It assumed that servers correctly implemented the standards,
and got seriously burned for serving private pages meant for one user to
other users.  Naturally, webmasters all blamed Google, and the product was
withdrawn.  (Note that I was not an employee at the time, much less on the
team, so my version of the story should not be taken as authoritative.)

On the other hand, refusing to cache anything for which the request
contained a cookie seems like a pretty unfortunate limitation.