Re: [whatwg] Browser Bundled Javascript Repository

2009-06-15 Thread Chris Holland
As an alternative, common libraries could get shipped as browser  
plugins, allowing developers to leverage local URIs such as  
chrome:// in XUL/mozilla/firefox apps. This would only effectively  
work if:


- all vendors define the same local URI prefix. I do like chrome://.  
Mozilla dudes were always light-years ahead in all forms of  
cross-platform app development with XUL.
- all vendors extend their existing plugin architecture to accommodate  
this URI and referencing from network-delivered pages.
- some form of discovery exists, with the ability to provide a network  
transport alternative: use the chrome URI if it exists, the http URI if not.


Library vendors would then ship their releases as browser plugins,  
using existing discovery mechanisms, as well as software update  
mechanisms.
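A rough sketch of that "use the chrome URI if it exists, the http URI if not" discovery rule, in script. The registry list the browser would expose is purely an assumption, as are the library paths:

```javascript
// Hypothetical sketch of local-vs-network library resolution.
// availableLocalUris stands in for whatever registry the browser would
// expose to pages; no such API exists today.
function resolveLibraryUri(localUri, networkUri, availableLocalUris) {
  // Prefer the locally bundled copy when the browser reports it.
  if (availableLocalUris.indexOf(localUri) !== -1) {
    return localUri;
  }
  // Otherwise fall back to ordinary network delivery.
  return networkUri;
}

// Example: the browser ships jQuery locally but not IBDOM.
var local = ["chrome://libs/jquery-1.3.2.js"];
resolveLibraryUri("chrome://libs/jquery-1.3.2.js",
                  "http://example.com/jquery-1.3.2.js", local);
// → "chrome://libs/jquery-1.3.2.js"
resolveLibraryUri("chrome://libs/ibdom-0.2.js",
                  "http://ibdom.sf.net/ibdom-0.2.js", local);
// → "http://ibdom.sf.net/ibdom-0.2.js"
```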


-chris


On Jun 15, 2009, at 11:55, Oliver Hunt oli...@apple.com wrote:


Pros:
- Pre-Compiled: By bundling known JS libraries with the browser,  
the browser could store a more efficient representation of the  
file, for instance pre-compiled into bytecode or something else  
browser-specific.
I think something needs to be clarified with regard to compile times  
and the like.  In the WebKit project we do a large amount of performance  
analysis, and except in the most trivial of cases compile time just  
doesn't show up as being remotely significant in any profiles.   
Additionally, given the way JS works, certain forms of static analysis  
result in behaviour that cannot reasonably be cached.  Finally, the  
optimised object lookup and function call behaviour employed by  
JavaScriptCore, V8 and (I *think*) TraceMonkey is not amenable to  
caching, even within a single browser session, so for modern engines  
I do not believe caching bytecode or native code is really reasonable --  
I suspect the logic required to make this safe would not be  
significantly cheaper than just compiling anyway.


- Fewer HTTP Requests / Cache Checks: If a library is in the  
repository, no request is needed and cache checks don't need to be  
performed.  Also, for the 100 sites you visit that all send you the  
equivalent jquery.js, you would now send 0 requests.  I think this  
would be enticing to mobile browsers, which would benefit from this  
space vs. time tradeoff.
HTTP can specify how long you should wait before validating the  
cached copy of a resource, so I'm not sure this is a real win -- but  
I'm not a networking person, so am not entirely sure of this :D
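For reference, the HTTP mechanism alluded to here is the Cache-Control response header; a sketch of what a server might send for a library file (the max-age value and content below are illustrative):

```javascript
// Sketch: a server can declare a long freshness lifetime for a library
// file, so conforming caches skip revalidation entirely during that
// window. The one-year value is an illustrative choice, not a rule.
function longLivedCacheHeaders() {
  var oneYear = 365 * 24 * 60 * 60; // seconds
  return {
    "Content-Type": "application/x-javascript",
    // "public" permits shared caches; max-age is the freshness lifetime.
    "Cache-Control": "public, max-age=" + oneYear
  };
}

longLivedCacheHeaders()["Cache-Control"];
// → "public, max-age=31536000"
```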


- Standardized Identifiers For Libraries: Providing a common  
identifier for libraries would be open for discussion.  The best  
idea I've had would be to provide the SHA-1 hash of the desired  
release of a JavaScript library.  This would ensure a common  
identifier for the same source file across browsers that support  
the feature. This would be useful for developers as well.  A debug  
tool can indicate to a developer that the script they are using is  
available in the browser repository under a certain identifier.

This isn't a pro -- it's additional work for the standards body


Cons:

- May Not Grow Fast Enough: If JS libraries change too quickly the  
repository won't get used enough.
- May Not Scale: Are there too many JS libraries, versions, etc.,  
making this unrealistic?  Would storage become too large?

- Adds significant spec complexity.
- Adds developer complexity: imagine a developer modifies their  
server's copy of a given script but forgets to update the references  
to the script; now they get inconsistent behaviour between browsers  
that support this feature and browsers that don't.


--Oliver



Re: [whatwg] Browser Bundled Javascript Repository

2009-06-15 Thread Chris Holland
If you build a Firefox plugin, you can put some code on your page that
allows users to click to install if the user doesn't already have
the plugin installed. If you ship an updated version of your plug-in,
users get notified and prompted to install the new one.

Similar mechanisms exist on other browsers.

But you're right, this is all a lot of end-user intervention: it would
be a slightly, err, very painful process of installing a browser
plugin, which is currently very much a user opt-in process, and not
something very practical.

However, the underlying plugin infrastructure could be extended for a
more transparent process built specifically to handle browser
javascript library extensions. I'm just trying to find ways to
leverage a lot of what's already there.

On the web developer's end, one might consider:

Instead of adding an attribute to a script tag, the good old <link />
element could be both backward and forward compatible:

<link rel="local:extension" type="application/x-javascript" href="ext:ibdom.0.2.js" />
or
<link rel="local:extension" type="text/javascript" href="ext:ibdom.0.2.js" />

Instead of a chrome:// prefix, some new protocol to specifically
designate a local extension would likely be more appropriate. I'm
throwing ext: out there for now.

Interesting thing is, the same scheme could be leveraged for local CSS
extensions:

<link rel="local:extension" type="text/css" href="ext:ibdom.0.2.css" />

To handle users who don't have the ibdom javascript extension
installed, developers could add something like this to their document
(assuming a decent library which declares a top-level
object/namespace):

<script type="text/javascript">
if (!window.IBDOM) {
  var newScript = document.createElement("script");
  newScript.setAttribute("type", "text/javascript");
  newScript.setAttribute("src", "/path/to/ibdom.0.2.js");
  document.getElementsByTagName("head")[0].appendChild(newScript);
}
</script>

-chris

P.S.: http://ibdom.sf.net/


On Mon, Jun 15, 2009 at 12:53 PM, Joseph Pecoraro joepec...@gmail.com wrote:
 Library vendors would then ship their releases as browser plugins, using
 existing discovery mechanisms, as well as software update mechanisms.

 -chris

 This sounds to me as though the user would have to download a browser
 plugin.  I would hope this would be as transparent as possible to the user.
 Maybe I'm misunderstanding the discovery mechanisms you're talking about.
  Could you expand on this?

 I do think that a URI prefix is a neat idea.  This could eliminate the need
 for a new attribute.  Is it as backwards-compatible?

 - Joe




-- 
Chris Holland
http://webchattr.com/ - chat rooms done right.


Re: [whatwg] A standard for adaptive HTTP streaming for media resources

2010-05-24 Thread Chris Holland




* authoring of content in a specific way
* description of the alternative files on the server and their
features for the UA to download and use for switching
* a means to easily switch mid-way between these alternative files


I don't have something decent to offer for the first and last bullets,  
but I'd like to throw in something for the middle bullet:


The HTTP protocol is vastly underutilized today when it comes to URIs  
and the various Accept* headers.


Today developers might embed an image in a document as "chris.png". Web  
daemons know to find that resource and serve it; in this sense,  
"chris.png" is a resource locator.


Technically one might reference the image as a resource identifier  
named "chris". The user's browser may send "image/gif" as the only  
value of an Accept header, signaling the following to the server: "I'm  
supposed to download an image of chris here, but I only support gif,  
so don't bother sending me a .png." In a perhaps more useful scenario  
the user agent may tell the server "don't bother sending me an image,  
I'm a screen reader, do you have anything my user could listen to?".  
In this sense, the document's author doesn't have to code against or  
account for every possible context out there; the author merely puts  
in a reference to a higher-level representation that should remain  
forward-compatible with evolving servers and user-agents.


By passing a list of accepted mimetypes, the accept http header  
provides this ability to serve context-aware resources, which starts  
to feel like a contender for catering to your middle bullet.
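Server-side, the selection logic sketched above might look like this. The variant table and filenames are invented for illustration, and real negotiation would also honour q-values, which this deliberately skips:

```javascript
// Sketch of Accept-header-driven variant selection for the resource
// identifier "chris". The table of variants is a made-up example.
var variants = {
  "image/png":  "chris.png",
  "image/gif":  "chris.gif",
  "audio/mpeg": "chris-description.mp3" // e.g. for a screen reader
};

function selectVariant(acceptHeader) {
  // Very simplified: take the first listed type we have a file for.
  // A real implementation would also weigh q-values.
  var types = acceptHeader.split(",");
  for (var i = 0; i < types.length; i++) {
    var type = types[i].split(";")[0].trim();
    if (variants[type]) return variants[type];
  }
  return null; // nothing acceptable: the server would answer 406
}

selectVariant("image/gif");             // → "chris.gif"
selectVariant("audio/mpeg, image/png"); // → "chris-description.mp3"
```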


To that end, new MIME types could be defined to encapsulate media  
type/bit rate combinations.


Or the Accept header might remain confined to media types, and  
acceptable bit rate information might get encapsulated into a new  
header, such as X-Accept-Bitrate.


If you combined the above approach with existing standards for HTTP  
byte range requests, there may be a mechanism there to cater to your  
3rd bullet as well: when network conditions deteriorate, the client  
could interrupt the current stream and issue a new request to the  
server where it left off. Although this likely wouldn't work, because a  
byte range request would mean nothing across files of two different  
sizes. For recorded media, time codes would be needed to define the range.
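To make the time-code point concrete, here is a sketch of mapping a playback time to a per-variant byte offset before issuing a Range request. The index values are invented (they assume constant bit rates for simplicity):

```javascript
// Sketch: the same time code maps to a different byte offset in each
// bitrate variant, which is why a raw byte range can't be reused when
// switching streams. Offsets below are hypothetical.
var variantIndex = {
  // seconds → byte offset, per bitrate variant
  "500k": { 0: 0, 10: 625000,  20: 1250000 },
  "1m":   { 0: 0, 10: 1250000, 20: 2500000 }
};

function rangeHeaderFor(variant, startSeconds) {
  var offset = variantIndex[variant][startSeconds];
  if (offset === undefined) return null; // no index point at that time
  return "Range: bytes=" + offset + "-";
}

// Switching from the 1m stream to the 500k stream at t=10s requests a
// different byte offset in the new file:
rangeHeaderFor("1m", 10);   // → "Range: bytes=1250000-"
rangeHeaderFor("500k", 10); // → "Range: bytes=625000-"
```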


-chris


On May 24, 2010, at 19:33, Silvia Pfeiffer silviapfeiff...@gmail.com  
wrote:



Hi all,

I would like to raise an issue that has come up multiple times before,
but hasn't ever really been addressed properly.

We've in the past talked about how there is a need to adapt the
bitrate version of an audio or video resource that is being delivered
to a user agent based on the available bandwidth on the network, the
available CPU cycles, and possibly other conditions.

It has been discussed to do this using @media queries and providing
links to alternative versions of a media resource through the
source element inside it. But this is a very inflexible solution,
since the side conditions for choosing a bitrate version may change
over time, and what is good at the beginning of video playback may not
be good 2 minutes later (in particular if you're on a mobile device
driving through town).

Further, we have discussed the need for supporting a live streaming
approach such as RTP/RTSP - but RTP/RTSP has its own non-Web issues
that will make it difficult to make it part of a Web application
framework - in particular it requires a custom server and won't just
work with an HTTP server.

In recent times, vendors have indeed started moving away from custom
protocols and custom servers and have moved towards more intelligence
in the UA and special approaches to streaming over HTTP.

Microsoft developed Smooth Streaming [1], Apple developed HTTP Live
Streaming [2] and Adobe recently launched HTTP Dynamic Streaming
[3]. (Also see a comparison at [4].) As these vendors are working on
it for MPEG files, so are some people for Ogg. I'm not aware of anyone
looking at it for WebM yet.

Standards bodies haven't held back either. The 3GPP organisation have
defined 3GPP adaptive HTTP Streaming (AHS) in their March 2010 release
9 of  3GPP [5]. Now, MPEG has started consolidating approaches for
adaptive bitrate streaming over HTTP for MPEG file formats [6].

Adaptive bitrate streaming over HTTP is the correct approach towards
solving the double issues of adapting to dynamic bandwidth
availability, and of providing a live streaming approach that is
reliable.

Right now, no standard exists that has been proven to work in a
format-independent way. This is particularly an issue for HTML5, where
we want at least support for MPEG4, Ogg Theora/Vorbis, and WebM.

I know that it is not difficult to solve this issue in a
format-independent way, which is why solutions are springing up
everywhere. They are, however, not compatible and create a messy
environment where people have to install solutions for multiple
different 

Re: [whatwg] window.opener security issues (Was: WhatWG is broken)

2016-11-30 Thread Chris Holland
> <a href="https://bar.net/b.html" target="_blank">link to different
> domain</a>
>
> Test window.opener
>
> <script>
> if (window.opener && !window.opener.closed)
>   opener.location = 'http://www.example.org/'
> </script>
>
> The page on foo.com will have changed to http://www.example.org/
> because this page had script access to that window. Obvious very serious
> phishing concern, and probably other concerns.
>


-- 

Chris Holland
http://www.linkedin.com/in/chrisholland
310-500-7598


Re: [whatwg] ContextAgnosticXmlHttpRequest: an informal RFC

2005-03-09 Thread Chris Holland
Jim,

did you get a chance to go over this:
http://chrisholland.blogspot.com/2005/03/contextagnosticxmlhttprequest-informal.html
?

I've gone over a few use cases and security concerns in there, but
it's true I haven't developed privacy concerns surrounding the
Refer(r)er header. Here's an attempt at addressing this:

In hindsight, there could be a concern whereby, this time, the
document originating the ContextAgnosticXmlHttpRequest lives on an
intranet and decides to display a blog RSS feed that lives on the open
internet. If the Referer header is being sent along, the entity who
offers the RSS feed will see the exact URI of the requesting document
that lives behind the intranet. Then again, any document that lives on
an intranet and links to an outside source or embeds an outside image
is also vulnerable to a similar issue.




On Wed, 9 Mar 2005 16:55:54 +, Jim Ley [EMAIL PROTECTED] wrote:
 On Wed, 9 Mar 2005 08:42:25 -0800, Chris Holland [EMAIL PROTECTED] wrote:
  On Wed, 9 Mar 2005 12:14:52 +, Jim Ley [EMAIL PROTECTED] wrote:
  Are you sure you're not advocating this to get around privacy based
  proxies of the type that normally disable such referrer based content
  so as to reliably block
  privacy invasions?
 
  well, if a proxy starts filtering out http headers sent by the client,
  there isn't much we can do about that now is there. heh.
 
 Who said anything about proxy?  You were requiring that a conformant
 gibberishName UA send the correct referrer header, that's something
 that many people, and many browsers currently do not want to do for
 valid privacy concerns.  Just saying there's nothing we can do about
 those when you've not really provided a use case for the information
 in the first place isn't a good way to go I think.
 
  thanks for the feedback! :)
 
 The biggest problem is you've not provided use-cases, you've not
 provided any security analysis of your proposal, as it stands it's
 extremely inadequate.  Come up with some use-cases, and a real
 analysis of what extra features need to be added to make it secure,
 what impact it has on privacy etc.
 
 Cheers,
 
 Jim.
 


-- 
Chris Holland
http://chrisholland.blogspot.com/