On 9 Jul 2008, at 12:53, Yves Raimond wrote:
> > If the best data / tools you have suggest that two docs/datasets
> > are describing the selfsame entity, using owl:sameAs seems fine,
> > even if you have a secret hunch you're only perhaps 95% confident
> > of the data quality or tool reliability. If the best information
> > you have instead is telling you "these two documents seem to be
> > talking about more or less the same notion", then owl:sameAs
> > probably isn't for you: it doesn't communicate what you know.
> > Which of these situations you're in might be something of a
> > judgement call, but it should be a judgement call grounded in
> > clarity about what a use of owl:sameAs is claiming.
>
> Just jumping on that part. My particular use case is that I have an
> algorithm to automatically derive owl:sameAs links between two
> datasets [1]. This algorithm gives a really low rate of false
> positives in our evaluation. However, whenever the tool publishes an
> owl:sameAs statement, it has a "confidence" associated with it. Is
> there any "standard" way to publish this confidence, as well as the
> sameAs statement?

No. OWL 2 allows for axiom annotations, but these tend to look fairly
ugly in RDF (due to reification).
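
For concreteness, here is roughly the shape plain RDF reification
forces on you. Everything in the ex: namespace (including the
ex:confidence property) is made up for the sketch:

  @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix owl: <http://www.w3.org/2002/07/owl#> .
  @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
  @prefix ex:  <http://example.org/ns#> .

  # The sameAs link itself.
  ex:artistA owl:sameAs ex:artistB .

  # A reified copy of that triple, purely so the confidence value
  # has a node to hang off.
  ex:link1 a rdf:Statement ;
      rdf:subject   ex:artistA ;
      rdf:predicate owl:sameAs ;
      rdf:object    ex:artistB ;
      ex:confidence "0.95"^^xsd:double .

That's four extra triples per annotated link, and nothing in the
semantics connects the reified copy to the asserted triple, which is
most of the ugliness.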
If you want to support some inference with those confidences, you may
want to try Pronto:
http://pellet.owldl.com/pronto
Pavel and I are looking for test data.
We also used axiom annotations to associate probabilities with
assertions, see:
http://www.w3.org/2007/OWL/wiki/Annotation_System
esp.
http://www.w3.org/2007/OWL/wiki/Annotation_System#Probabilistic_extension
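
In OWL 2's RDF mapping an annotated axiom comes out as an owl:Axiom
node rather than an rdf:Statement, but the shape is much the same.
A sketch, again with a made-up ex:probability annotation property
(see the wiki page for the actual proposal):

  @prefix owl: <http://www.w3.org/2002/07/owl#> .
  @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
  @prefix ex:  <http://example.org/ns#> .

  # The plain triple is asserted alongside its annotated axiom.
  ex:artistA owl:sameAs ex:artistB .

  [] a owl:Axiom ;
     owl:annotatedSource   ex:artistA ;
     owl:annotatedProperty owl:sameAs ;
     owl:annotatedTarget   ex:artistB ;
     ex:probability "0.95"^^xsd:double .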
You might also look at my reification table:
http://www.w3.org/2007/OWL/wiki/Reification_Alternatives
Unfortunately, no one has added any examples or even really exhibited
interest :)
You can see the data URI trick:
http://www.w3.org/mid/[EMAIL PROTECTED]

> I can also think of further data that I may want to publish and
> whose accuracy I can quantify (e.g. RDF statements derived from
> audio or video content).

Pavel and I would be interested in that work.
Hope this helped.
Cheers,
Bijan.