On 1 Mar 2007, at 04:39, David Karger wrote:

> This is why web 2.0 contains the seeds of its own destruction.
> Everyone writes web-2.0 apps by scraping web-1.5 pages (the ones
> that use <div>s to define structure instead of just using <p>s to
> define formatting), but they produce pages that can't be scraped.
> If ever a substantial fraction of the web becomes 2.0, it will
> collapse from the lack of 1.5 pages to scrape. The solution, of
> course, is the exhibit model, where the data is naked and can be
> scraped no matter how much you dress up the presentations.

Sorry, but I don't buy any of this. There are no prominent Web 2.0
applications that work by “scraping web-1.5 pages”, and most prominent
Web 2.0 sites can be scraped just fine. A substantial fraction of the
Web already *is* 2.0, without showing any signs of collapsing from a
lack of pages to scrape. Data re-use on the Web 2.0 isn't about
scraping. It's about REST and SOAP APIs [1].
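To make the contrast concrete, here is a minimal Python sketch. The HTML page, the JSON payload, and the field names in it are invented for illustration; the point is only that scraping depends on presentational details (tags, class names) while a REST-style API hands back the same data already structured.

```python
import json
from html.parser import HTMLParser

# Hypothetical "web 1.5" page: data buried in presentational markup.
html_page = ('<html><body><p class="title">Exhibit</p>'
             '<p class="author">Karger</p></body></html>')

# The same data as a hypothetical REST endpoint might return it.
api_response = '{"title": "Exhibit", "author": "Karger"}'

class ScreenScraper(HTMLParser):
    """Fragile re-use: breaks if the class names or layout change."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        # Remember the class attribute of the tag we just entered.
        self._current = dict(attrs).get("class")

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data
            self._current = None

scraper = ScreenScraper()
scraper.feed(html_page)
scraped = scraper.fields

# Robust re-use: one call, no dependence on how the page is styled.
from_api = json.loads(api_response)

assert scraped == from_api  # both yield {'title': 'Exhibit', 'author': 'Karger'}
```

Either route recovers the same dictionary here, but only the API route survives a redesign of the page's markup.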

Best,
Richard

[1] http://en.wikipedia.org/wiki/Web_2.0
_______________________________________________
General mailing list
[email protected]
http://simile.mit.edu/mailman/listinfo/general
