On Aug 23, 2008, at 02:43, Ben Adida wrote:

> Why would you reinvent URIs in a way that they can't be de-referenced?

To avoid having misleading affordances:
http://en.wikipedia.org/wiki/Affordance
An http URI looks like something you are supposed to dereference; an identifier that is not meant to be dereferenced should not look like one.

> We want one parser, with variability and innovation in the vocabulary definition only.

Having one parser seems appealing at first compared to using the native mechanisms of each format (HTML's <meta> and <link>, PDF's document information dictionary, PNG's tEXt chunk, etc.). But the vision that tools handle all this when you remix culture already requires the tools to read and write the file formats they remix. Once a tool has format-native key-value read/write capability anyway, building and mining RDF *graphs* on top of that is an additional burden.
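
To make that concrete, here's a rough, untested sketch of the format-native route for PNG using PIL/Pillow; the file names and the choice of the "Copyright" keyword are just placeholders:

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write a license URI into a plain tEXt chunk; no RDF involved.
img = Image.open("photo.png")
info = PngInfo()
info.add_text("Copyright", "http://creativecommons.org/licenses/by/3.0/")
img.save("photo-licensed.png", pnginfo=info)

# Read it back: PNG text chunks show up in the image's info dict.
print(Image.open("photo-licensed.png").info.get("Copyright"))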

>> What barrier is there to building reusable vocabularies?

> The follow-your-nose principle is missing, which is fairly essential for
> discovering the meaning of vocabularies (partially automatically, not by
> doing a Google search).

The partial automation with RDFa doesn't go very far. If a program automatically dereferences http://creativecommons.org/ns# and parses the result as RDFa, the program now has a human-readable string for each property--not exactly something that the program can act on further without human help.

Also, tools that automatically follow their nose with RDFa will, in aggregate, perform a DDoS attack on hosts whose names appear in *other* namespace URIs that were meant as mere identifiers.

>>> The failures of the past have had little to do with the syntax or
>>> expression mechanisms. They have to do with users simply not caring.
>>> They don't care because there are no useful tools for them to care
>>> about.

"The tools will save us" is about as big a warning sign as you can get.

> I didn't say that. I said that when you preclude good tools, then you're
> doomed. Tools are not sufficient, but they often play an important role.

For the RDF vision to work, the entire culture-remixing toolchain has to preserve the rights metadata.

>> Sure, like I said, we have lots of very versatile extension mechanisms
>> already.

> And as I keep pointing out, the syntax enabled by the existing extension
> mechanisms is not generic enough for the classes of data folks need to
> express. With RDFa, it would be generic enough.

>> For example, in PDF, do people *really* need all this cruft:
>
> People don't need it, machines do.

No, they don't. Again, consider the RDF-blobs-in-HTML-comments approach: the machines don't need the RDF cruft around the metadata, they just need the license URI. Tools that process those license statements at scale don't do any RDF processing at all.
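
To illustrate, something along these lines is all the "processing" such a tool needs; this is a rough, untested sketch using only the Python standard library, and the file name and regex are made up for illustration. It looks for rel="license" links and for license URIs inside comment blobs, with no RDF machinery at all:

import re
from html.parser import HTMLParser

# Matches Creative Commons license URIs wherever they appear.
CC_LICENSE = re.compile(r"https?://creativecommons\.org/licenses/[\w./-]+")

class LicenseFinder(HTMLParser):
    """Collects license URIs from rel="license" links and from the
    RDF-blob-in-an-HTML-comment convention, with no RDF parsing."""

    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if tag in ("a", "link") and "license" in rels and attrs.get("href"):
            self.licenses.append(attrs["href"])

    def handle_comment(self, data):
        self.licenses.extend(CC_LICENSE.findall(data))

finder = LicenseFinder()
with open("page.html", encoding="utf-8") as f:
    finder.feed(f.read())
print(finder.licenses)

No triples, no graph, no namespace resolution; just the URI.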

> But... I already told you that we're trying to do more than just the
> license statement.

Attribution URL and attribution name could be additional key-value entries in the document information dictionary. (And the attribution name could default to the standard "Author" entry.)
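
As a rough, untested sketch of what that could look like with a PDF library that exposes the info dictionary (pikepdf here; the "/AttributionURL" and "/AttributionName" keys and the file names are made up for illustration):

import pikepdf

with pikepdf.open("work.pdf") as pdf:
    info = pdf.docinfo
    info["/AttributionURL"] = "http://example.org/original-work"
    # Default the attribution name to the standard Author entry if present.
    if "/AttributionName" not in info and "/Author" in info:
        info["/AttributionName"] = info["/Author"]
    pdf.save("work-attributed.pdf")

That's the same flat key-value write PDF tools already do for Title and Author; no graph model is needed to express it.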

--
Henri Sivonen
[EMAIL PROTECTED]
http://hsivonen.iki.fi/

