On May 15, 2009, at 3:50 PM, Tab Atkins Jr. wrote:
> On Fri, May 15, 2009 at 1:32 PM, Manu Sporny <mspo...@digitalbazaar.com> wrote:
>> Tab Atkins Jr. wrote:
>>> Reversed domains aren't *meant* to link to anything. They shouldn't
>>> be parsed at all. They're a uniquifier so that multiple vocabularies
>>> can use the same terms without clashing or ambiguity. The Microdata
>>> proposal also allows normal URLs, but they are similarly nothing
>>> more than a uniquifier.
>>> CURIEs, at least theoretically, *rely* on the prefix lookup. After
>>> all, how else can you tell that a given relation is really the same
>>> as, say, foaf:name? If the domain isn't available, the data will be
>>> parsed incorrectly. That's why link rot is an issue.
>> Where in the CURIE spec does it state or imply that if a domain
>> isn't available, the resulting parsed data will be invalid?
> Assume a page that uses both foaf and another vocab that subclasses
> many foaf properties. Given working lookups for both, the RDF parser
> can determine that two entries with different properties are really
> 'the same', and hopefully act on that knowledge.
>
> If the second vocab 404s, that information is lost. The parser will
> then treat any use of that second vocab completely separately from
> the foaf data, losing valuable semantic information.
>
> (Please correct any misunderstandings I may be operating under; I'm
> not sure how competent parsers currently are, and thus how much
> they'd actually use a working subclassed relation.)
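
(To make that scenario concrete, here is a minimal sketch using Python
and rdflib; the http://example.org/vocab# namespace and the ex:fullName
property are invented for illustration. With the vocabulary document in
hand, even a naive rdfs:subPropertyOf closure recovers the foaf:name
triple; if the vocabulary 404s, only the raw ex:fullName triple comes
out of extraction.)

    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import RDFS

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")
    alice = URIRef("http://example.org/people/alice")

    # Triples extracted from the page -- what any RDFa parser would
    # emit, whether or not the vocabulary IRIs resolve.
    page = Graph()
    page.parse(data="""
        @prefix ex: <http://example.org/vocab#> .
        <http://example.org/people/alice> ex:fullName "Alice Example" .
    """, format="turtle")

    # The vocabulary document -- only obtainable if the lookup succeeds.
    vocab = Graph()
    vocab.parse(data="""
        @prefix ex:   <http://example.org/vocab#> .
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        ex:fullName rdfs:subPropertyOf foaf:name .
    """, format="turtle")

    def subproperty_closure(data, schema):
        """Naive, one-level rdfs:subPropertyOf inference: merge the two
        graphs, then re-assert each data triple under its super-property."""
        merged = Graph()
        for triple in data:
            merged.add(triple)
        for triple in schema:
            merged.add(triple)
        for sub, _, sup in schema.triples((None, RDFS.subPropertyOf, None)):
            for s, _, o in data.triples((None, sub, None)):
                merged.add((s, sup, o))
        return merged

    # With a working lookup, the foaf:name relation can be recovered ...
    print((alice, FOAF.name, None) in subproperty_closure(page, vocab))  # True
    # ... but if the vocabulary 404s, only ex:fullName is seen and the
    # connection to foaf is lost.
    print((alice, FOAF.name, None) in page)  # False
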
RDFa parsers simply adhere to the parsing algorithm outlined in the
RDFa specification. Their job is to extract the metadata found in the
page, and that's pretty much it. You are combining features from the
broader RDF world with RDFa. The fact that we can lean on RDFS and OWL
to describe that metadata more accurately should be considered an
added bonus.

However, I would personally like to see this taken in baby steps. We
defined a general syntax for declaring metadata and describing
resources in XHTML (and soon, hopefully, HTML(x)); the current steps
are people adding metadata to their sites and people learning how to
make sense of that data. Google is defining a vocabulary they
understand; Yahoo is both creating new vocabularies and re-using
existing ones they understand. Anyone can correct me if I'm wrong, but
I thought the more advanced function, where machines apply inference
to understand newly encountered vocabularies, should be left as an
exercise for others outside the RDFa group/work (a rough sketch of
that extraction-versus-inference split follows below).
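
To illustrate that split, here is a toy sketch in Python (not the
normative RDFa processing rules): the extraction layer resolves a
CURIE purely from the prefix mappings declared in the page itself, so
whether the prefix IRI happens to be dereferenceable never changes
which triples come out. A reversed domain or raw URL in the Microdata
proposal is likewise used verbatim, as an opaque key.

    # Toy illustration only -- not the normative RDFa algorithm.
    # The prefix table comes from declarations in the page itself,
    # e.g. xmlns:foaf="http://xmlns.com/foaf/0.1/".
    prefixes = {"foaf": "http://xmlns.com/foaf/0.1/"}

    def expand_curie(curie, prefixes):
        """Expand prefix:reference to a full IRI by plain string lookup;
        no network access, no vocabulary fetch, no inference."""
        prefix, _, reference = curie.partition(":")
        return prefixes[prefix] + reference

    print(expand_curie("foaf:name", prefixes))
    # -> http://xmlns.com/foaf/0.1/name
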
-Elias