Re: [whatwg] Built-in image sprite support in HTML5
> HTTP-level solutions are vulnerable to broken proxies and caches, of
> which there are many. This is why HTTP pipelining doesn't really work.

Yeah, I know, but does that mean HTML should work around a lack of features in HTTP? I mean, you could say HTML5 is vulnerable to broken browsers :-)
Re: [whatwg] Built-in image sprite support in HTML5
> Finally, there have been proposals for removing the need to sprite
> altogether, by allowing authors to send a bunch of resources packed
> into a single compressed archive, and just addressing individual files
> inside of it.

Yeah, I'd think this isn't really a problem that should be solved as part of HTML5, but rather through improvements at the protocol level. Spriting is, after all, just a hack around the strict one-file-one-request nature of HTTP and not something that's really fundamental. For instance, SPDY should in theory solve this problem. So would HTTP pipelining, if it were used more often.
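For context, spriting packs many small images into one file and then shows only a region of it via CSS, so all the icons cost a single HTTP request. A minimal sketch (file names and offsets are illustrative):

```css
/* One HTTP request fetches icons.png, which contains every icon. */
.icon {
  background-image: url(icons.png);  /* illustrative file name */
  width: 16px;
  height: 16px;
}
/* Each icon selects its 16x16 region by offsetting the background. */
.icon-save { background-position: 0 0; }
.icon-open { background-position: -16px 0; }
```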
Re: [whatwg] prompts, alerts and showModalDialog during beforeunload/unload events
Browsers could solve the editor use case by treating "close tab" as "hide tab" for a minute or two before actually shutting down the page. Then the problem becomes: how do you make it obvious to users that they can get their work back by pressing a magic button somewhere?

The modal quit loop is frequently used to try to make people download malware. It'd be nice to eliminate it in a backwards-compatible manner.
Re: [whatwg] Web-sockets + Web-workers to produce a P2P website or application
WebSockets doesn't let you open arbitrary ports and listen on them, so I don't think it can be used for what you want.

P2P in general is a lot more complicated than it sounds. It sort of works for things like large movies and programs because they aren't latency sensitive and chunk ordering doesn't matter (you can wait till the end and then reassemble). But it has problems:

- A naive P2P implementation won't provide good throughput or latency, because you might end up downloading files from a mobile phone on the other side of the world rather than a high-performance CDN node inside your local ISP. That sucks for users, and it also sucks for your ISP, who will probably find their transit links suddenly saturated and their nice cheap peering links with content providers sitting idle.

- That means unless you want your system throttled (or, in companies/universities, possibly banned), you need to respect network topology and have an intimate understanding of how it works. The YouTube/Akamai serving systems have this intelligence, for example, but whatever implementation you come up with won't.

- P2P is far more complicated than an HTTP download. I never use BitTorrent because it basically never worked well for me compared to a regular file download. I think that's why you don't see it used much these days outside the pirate scene and distributing Linux ISOs.

Your friend's problem has other possible solutions:

1) Harvesting low-hanging fruit:
   1a) Making sure every static asset is indefinitely cacheable (use those ISP proxy caches!)
   1b) Ensuring content is being compressed as effectively as possible
   1c) Considering serving off a CDN like Akamai or Limelight. There is apparently a price war going on right now.

And of course the ultimate long-term solution:

2) Scaling revenues in line with traffic
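Point 1a in practice: assets whose URL embeds a content hash can be cached essentially forever, because the URL changes whenever the content does, while other resources should revalidate. A sketch of the header-choosing logic (the hashed-filename naming convention here is an assumption, not a standard):

```javascript
// Decide a Cache-Control header per asset path. Assumes the build step
// renames static assets to include a content hash (e.g. app.d41d8cd9.js);
// those URLs never change meaning, so they can be cached indefinitely.
function cacheControlFor(path) {
  const hashed = /\.[0-9a-f]{8,}\.(js|css|png|jpg|gif)$/.test(path);
  return hashed
    ? "public, max-age=31536000" // ~1 year: a new build produces a new URL
    : "no-cache";                // always revalidate with the server
}
```

A hashed asset then survives in ISP and browser caches for as long as anyone references it, while HTML pages stay fresh.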
Re: [whatwg] Web API for speech recognition and synthesis
Is speech support a feature of the web page, or the web browser?

On Wed, Dec 2, 2009 at 12:32 PM, Bjorn Bringert wrote:
> We've been watching our colleagues build native apps that use speech
> recognition and speech synthesis, and would like to have JavaScript
> APIs that let us do the same in web apps. We are thinking about
> creating a lightweight and implementation-independent API that lets
> web apps use speech services. Is anyone else interested in that?
>
> Bjorn Bringert, David Singleton, Gummi Hafsteinsson
>
> --
> Bjorn Bringert
> Google UK Limited, Registered Office: Belgrave House, 76 Buckingham
> Palace Road, London, SW1W 9TQ
> Registered in England Number: 3977902
Re: [whatwg] Canvas pixel manipulation and performance
> That's one way to get a healthy performance boost (typically),
> but where does the web developer stand in this work? Are
> you suggesting native code should replace JavaScript?

For code where performance is critical (like complex animation code), yes. Don't get me wrong, I'm all for better JavaScript performance, but we have to be realistic. Compared to native code, JavaScript's performance will always be lacking - many clever tricks have been deployed to speed up JavaScript, but even the fastest JS engines don't come close to the output of average C++ compilers/JVMs. The nature of JS makes it likely that this situation will remain true for a long time, perhaps forever.

So there are two possibilities here. One is to introduce ever more complexity into the web APIs for diminishing returns, even though a primary goal of the web APIs is simplicity. The other is to just bind native code to those APIs, hopefully eliminating much of the marshalling overhead along the way. The latter approach has the advantage of not requiring novice-level developers to understand things like endianness or bit masking to draw some pixels (substitute as appropriate for any given API), whilst allowing developers that need it to get the fastest execution securely possible.
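The endianness point is concrete: manipulating canvas ImageData through a 32-bit view means byte order matters, which is exactly the kind of detail novices shouldn't need. A minimal sketch in plain JavaScript (the buffer here stands in for ImageData.data; runnable outside a browser):

```javascript
// Detect platform byte order: write a 32-bit word, inspect its first byte.
const isLittleEndian = (() => {
  const probe = new ArrayBuffer(4);
  new Uint32Array(probe)[0] = 0x000000ff;
  return new Uint8Array(probe)[0] === 0xff;
})();

// Pack RGBA channels into one 32-bit word in the platform's native layout,
// so a Uint32Array view over ImageData.data sees the right channels.
function packRGBA(r, g, b, a) {
  return isLittleEndian
    ? (a << 24) | (b << 16) | (g << 8) | r
    : (r << 24) | (g << 16) | (b << 8) | a;
}

// Fill a 2x2 "image" with opaque red, one 32-bit write per pixel.
// ImageData.data is a Uint8ClampedArray, mimicked here.
const pixels = new Uint8ClampedArray(2 * 2 * 4);
const view = new Uint32Array(pixels.buffer);
view.fill(packRGBA(255, 0, 0, 255) >>> 0);
```

One 32-bit write per pixel is the usual micro-optimization over four byte writes - and the endianness branch above is the complexity tax it imposes.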
Re: [whatwg] Canvas pixel manipulation and performance
I have to wonder if it's worth trying to micro-optimize web APIs like this. Your suggestions will squeeze out only a small amount of additional performance - then the goalposts will move a bit higher and we'll be back at square one.

I know NativeClient isn't a proposed spec or a standardised piece of web infrastructure, but I think what you really need is the ability to scribble on a canvas from native code rather than JavaScript. Work done on that has the advantage of generalizing to all web APIs and use cases rather than just direct graphics access.
Re: [whatwg] Resolving the persistent vs cache dilemma with file|save
> What security issues? There aren't any issues with the file selector box,
> right? How is this different?

That's true. I guess I was thinking that these buttons have to be treated specially by the browser (to prevent scripts clicking them) and such. But it's not such a big deal.

>> 2) Explicit quota request (I think MB/GB isn't a meaningful unit of
>> measure for most people)
>
> You'd get this with the input button as well. I'm confused.

That's my point - it was proposed to ask that, and I suspect most people wouldn't understand the question. If you just reuse the OS common file dialogs, it's not an issue (I never saw an app tell me how big a file would be before I saved it, and I'm not convinced web apps would need to either if there was an explicit save action involved).

> This is a somewhat compelling argument. We'd have to completely change how
> LocalStorage works for file:/// urls though, right?

I was thinking that the event handler would just provide a Storage object, so the problem of origins isn't present.

>> 4) No selection of paths or files was mentioned, so, how will users
>> know what to back up/put on USB keys etc?
>
> The discussion in the thread generally found this to be a good
> thing...though there wasn't 100% consensus.

Ahh, I didn't find the thread to contain much consensus at all :) I will have to re-read it, as I don't remember what the arguments for making stored data harder to find were.

>> So I think this proposal will work better, at least a little bit. It
>> avoids introducing new UI elements at least.
>
> I'm not sure if that's a good thing or not. It also still has most if not
> all of the downsides to the input box.

Could you hint me where in the discussion they were spelled out? I saw Linus' proposal, and Adrian Sutton raised the backup/file management issue as well. But I didn't see much followup discussion of that.
What use case isn't possible under an explicit user-triggered model in which a regular file/save mechanism is used (whether it's a button or an existing menu option doesn't matter)? I see that in low disk space conditions you might lose things like settings, but the existing OS behavior is even worse - if you run out of disk space, most desktop programs will panic and start corrupting data. 99% of desktop software just isn't tested under failing write conditions anymore.

This is another reason why any system that requires imposing quotas seems likely to fail - most apps won't be tested against that condition and will just crash or corrupt data when they run out of quota, even if the system actually has enough space to succeed. Removing the oldest, least accessed files automatically seems preferable to that: at least the data loss is controlled and affects what you are NOT using rather than what you are. Of course, no data loss at all is most preferable, but that's what servers are for :)

> Maybe you should reply to some of the original points (preferably in the
> original thread)?

I wasn't subscribed at that time, so I can't :-( Roll on Google Wave.
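The "remove the oldest, least accessed data automatically" policy above is essentially a least-recently-used cache. A minimal sketch in plain JavaScript - the class, its quota accounting, and the byte-length measure are hypothetical stand-ins, not any proposed browser API:

```javascript
// Browser-managed local storage sketch: writes never fail with "quota
// exceeded"; instead the least recently used entries are evicted.
class EvictingStore {
  constructor(maxBytes) {
    this.maxBytes = maxBytes;
    this.used = 0;
    this.entries = new Map(); // Map insertion order doubles as recency order
  }
  set(key, value) {
    if (this.entries.has(key)) this.delete(key);
    this.entries.set(key, value);
    this.used += value.length;
    // Evict the least recently used entries instead of failing the write.
    while (this.used > this.maxBytes) {
      const oldest = this.entries.keys().next().value;
      this.delete(oldest);
    }
  }
  get(key) {
    if (!this.entries.has(key)) return undefined;
    const value = this.entries.get(key);
    this.delete(key); // re-insert so this key becomes most recently used
    this.entries.set(key, value);
    this.used += value.length;
    return value;
  }
  delete(key) {
    const value = this.entries.get(key);
    if (value !== undefined) {
      this.used -= value.length;
      this.entries.delete(key);
    }
  }
}
```

The point of the sketch is the failure mode: the app keeps working and loses only cold data, rather than crashing on a write that unexpectedly fails.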
Re: [whatwg] Resolving the persistent vs cache dilemma with file|save
The closest suggestion I saw was Linus', which isn't quite the same:

1) Triggered from the web page rather than browser chrome, with associated security issues
2) Explicit quota request (I think MB/GB isn't a meaningful unit of measure for most people)
3) Doesn't solve the fact that file|save doesn't work for web apps
4) No selection of paths or files was mentioned, so how will users know what to back up/put on USB keys etc?
5) Freeing up space is done in the same way it is today - by deleting files using a file manager

So I think this proposal will work better, at least a little bit. It avoids introducing new UI elements, at least.

> I'm not saying there's no merit to this, but since the thread was not that
> long ago, I don't think the burden should be on everyone on this list to
> re-explain their points.

Perhaps I should have been clearer. I read the whole "apparent contradiction in spec" thread. If there was similar discussion in other threads, well, it's impossible to read the whole whatwg archives, even for a couple of months. That's a lot of mail! I hope I read the parts relevant to this discussion. Anyway, to prove I did actually read it, I believe Jens Alfke said:

> Replace "user agent" -> "operating system" and "local state" -> "user
> files", and you have an argument that, when the hard disk in my
> MacBook gets too full, the OS should be free to start randomly
> deleting my local files to make room. This would be a really bad idea.

Isn't that exactly the idea behind Chrome OS? If files are backed up to the cloud, then the entire local storage can be seen as one big cache, and the OS would indeed start deleting local files when storage got full, redownloading them as required. Arguably that just moves the "storage full" problem to the server, but then again, nothing except the potential for accidents stops a company offering "infinite" quota and charging only for what is used, in which case you might never run out of space.
This "cloud vs desktop" platform question seems to be a key area of disagreement. I'm firmly in the cloud camp - wanting to write HTML5 apps independent of a server isn't something I understand. JavaScript+HTML aren't going to beat .NET/Mono/Java at their own game, so why even try? The thing that makes web apps interesting is the very fact that they _are_ deeply integrated with the internet, are simple, don't require explicit management, etc. If HTML5 ends up changing these attributes just to be yet another API over the same old paradigm of downloadable, installable apps that generate DOC files or whatever, it won't have achieved very much.
[whatwg] Resolving the persistent vs cache dilemma with file|save
Hiya,

I read the threads on whether local storage should be managed by the browser or the user with interest. I'm not sure if there was agreement on this or not (I didn't read the whole thing), but I had an idea for one solution. Namely: local storage is indeed managed by the browser automatically and can be purged at any time; long-term client persistence of web app data is instead done by letting web apps handle the file->save menu item that exists in nearly every browser except smartphones (see below).

The advantages of providing onfilesave="" and onfileload="" event handlers are:

1) It does not add complexity, because the user isn't expected to do any more than they are today (no new UI to manage local storage, etc.)
2) It is 100% compatible with users' existing mental model for how they export data from an application that they wish to save, back up to disk, etc.
3) It fixes the existing behavior of file|save for web apps, which is pretty useless and not what the user expects, unlike for regular web pages where it does work
4) It solves the problem of client-side backups

How could it work? In response to an onfilesave event, the app could return a Storage object that contains the data that makes sense to persist (i.e., not big data files). There could be a few reserved keys for things like the icon, default file name, and URL to open in conjunction with that "web bundle". If the user double clicks or uses file|open on such a file, the browser would load the URL named in the bundle file and fire the onfileload handler with that storage object.

This seems like a decent compromise between the two positions, in that it'd let you make traditional desktop-style apps written in HTML as Apple wants, but for pure "cloud" apps that happen to just need more local storage, the user isn't asked to do more than they are today - which I personally consider vital for the evolution of the web.

What about clients where there is no user-accessible file system, like smartphones or perhaps ChromeOS?
Then onfilesave/onfileload can be integrated with whatever other UI is wanted - for instance, perhaps starring/bookmarking could trigger a save to local storage, or a basic "documents associated with this web site" list could be used.

What do you think?

thanks -mike
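As a sketch of what the bundle returned by an onfilesave handler might contain - every key name, the prefix convention, and the URL here are illustrative assumptions, not part of any spec:

```javascript
// Build the "web bundle" payload: app data plus a few reserved keys
// (double-underscore prefix is a hypothetical convention) for the icon,
// default file name, and the URL the browser reopens on file|open.
function buildSaveBundle(appData) {
  return {
    "__url": "https://example.com/editor",        // page to load on file|open
    "__defaultFileName": "My Document.webbundle", // suggested save name
    "__icon": "data:image/png;base64,",           // placeholder icon data
    ...appData,
  };
}

// On load, the browser would strip the reserved keys and hand only the
// app's own data back to the page's onfileload handler.
function extractAppData(bundle) {
  const data = {};
  for (const [key, value] of Object.entries(bundle)) {
    if (!key.startsWith("__")) data[key] = value;
  }
  return data;
}
```

The round trip is the whole contract: whatever the page returns from onfilesave comes back unchanged in onfileload, and the reserved keys are the browser's business only.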