Toby Inkster wrote:
> On Tue, 2009-09-15 at 17:09 +0100, Philip Taylor wrote:
>> Othar wrote:
>>> We surely have errors in our parsing (thanks for finding several:
>>> we'll look into these on Monday). But we will also deviate from the
>>> standard in some cases to be forgiving of webmaster errors. For
>>> example, we expect that some webmasters will forget the xmlns
>>> attribute entirely.
>>
>> "we will [...] deviate from the standard" makes me believe that the
>> above problems are an unavoidable consequence of Google's intentions,
>> rather than just unintentional transient fixable bugs, and therefore are
>> a serious concern (which is why I'm writing about it like this rather
>> than just listing bugs).
>
> I think it's reasonable to build in a degree of laxity into an RDFa
> parser. Postel's Law applies.

The RDFa specification (and also the HTML+RDFa draft, which is probably
relevant here) defines conformance requirements for processing any
document, valid or invalid. (Hence the discussions like
http://lists.w3.org/Archives/Public/public-rdf-in-xhtml-tf/2009Sep/0089.html
about precisely how processing of certain invalid inputs should be
specified.)
Given those requirements, implementers have no room to apply Postel's
Law without violating the specification.
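
To make the missing-xmlns case concrete (this is a hypothetical fragment
of mine, with an example.org subject and the Dublin Core prefix chosen
purely for illustration):

  <div about="http://example.org/doc">
    <span property="dc:title">Some Document</span>
  </div>

As I read the processing rules, a conforming processor has no mapping
for the "dc" prefix here, so it must not generate a triple from that
@property. A parser that guesses the author meant the usual Dublin Core
namespace and emits a triple anyway is producing output the
specification says should not be produced, however helpful the guess
might be to the webmaster.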
(The writers of the specification could apply Postel's Law in
determining what requirements to specify, but that's a separate issue.)
Are you suggesting that Google should intentionally violate the RDFa
specification? Or are you suggesting the RDFa specification should be
relaxed to allow implementers freedom in handling invalid documents? I
think it must be one or the other, as long as Google is claiming to
implement RDFa.
--
Philip Taylor
pj...@cam.ac.uk