On 12.07.2010 14:44, Mike Wilcox wrote:
On Jul 12, 2010, at 2:30 AM, Julian Reschke wrote:
Google:
<http://validator.w3.org/check?uri=http%3A%2F%2Fwww.google.com&charset=%28detect+automatically%29&doctype=Inline&group=0>
- 35 errors
That's a little different. Google deliberately uses nonstandard, invalid
HTML in ways that still render in a browser, in order to make it harder
for screen scrapers. They also "break it" in a different way every week.
How exactly is it different?
Do you think that what Google does somehow is "better"?
Just asking.
As far as I can tell, it just shows that content providers continue to
send whatever happens to work, and thus are not concerned at all about
validity (note: there's a permathread about this as well -- why disallow
things that are known to work reliably...).
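To illustrate the "known to work reliably" point, here is a small Python sketch (the markup snippet is my own invention, not taken from Google's pages) showing how a lenient parser, like a browser's, happily consumes markup a validator would flag -- unquoted attributes, unclosed elements and all:

```python
from html.parser import HTMLParser

# Hypothetical invalid markup: unquoted attribute value, no closing
# tags. A validator rejects this; browsers render it anyway.
invalid_html = '<p align=left>Unclosed paragraph<br><b>bold text'

class TagCollector(HTMLParser):
    """Record every start tag the lenient parser recognizes."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

parser = TagCollector()
parser.feed(invalid_html)
print(parser.tags)  # all three elements parsed despite the errors
```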
Best regards, Julian