Larry,
(This has been sitting in my in-box for way too long. Sorry!)
Broadly, I agree with what you say about trust being a prior requirement for
sound inference, and that for many purposes trust and ambiguity may be inseparable.
In many cases, I think that trust is implied by the context of use, and that
this corresponds to the "99% of the time" that I can ignore trust. In this, I
see no difference from any other data processing application. Having made a
decision to perform a computation and do something with the results, there's an
implication that the inputs are worth processing. GIGO applies.
For me, it's having a "way to represent and talk about contextualization" that
allows trust and ambiguity to be treated explicitly, either as part of a
computation, or as part of a separate process of deciding what inputs are
appropriate to the purpose of a computation.
In this, I'm not seeing any fundamental disagreement with what you say. What I
perceive is that having a way to contextualize RDF statements, and process the
RDF accordingly, provides a framework within which a theory of speech acts might
also be accommodated.
In this, I think I'm also in agreement with what David said in response to your
message.
#g
--
On 30/01/2013 16:01, Larry Masinter wrote:
For me, there are several intertwined issues here, in no particular order:
- context
- ambiguity
- vagueness
- sound inference
- modalities (? - I mean conflicting or differing interpretations in a common discourse)
What we *have* in the present model-theoretic approach is sound inference.
In particular, with RDF, there's the guarantee that the RDF merge of two (or
more) graphs is true under exactly the interpretations that make the original
graphs true. I think this is a key necessity (but not sufficiency) for
combining and remixing data on the web through automated processing, and of
itself represents an important step forwards from what we had before. I'm
reluctant to let that go.
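To make that merge property concrete, here's a minimal sketch using Python's
rdflib (the example.org names are invented for illustration). For graphs that
share no blank nodes, the RDF merge is just the set union of their triples,
and the merge asserts exactly what the sources assert:

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")  # made-up namespace

    g1 = Graph()
    g1.add((EX.alice, EX.knows, EX.bob))

    g2 = Graph()
    g2.add((EX.bob, EX.knows, EX.carol))

    # For graphs with no shared blank nodes, the RDF merge coincides with
    # the set union of their triples; rdflib's "+" builds that union.
    merged = g1 + g2

    # The merge asserts what the sources assert, and nothing more: no new
    # statements appear as a side effect of combining the graphs.
    assert len(merged) == len(g1) + len(g2)
    assert all(t in merged for t in g1) and all(t in merged for t in g2)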
I think you can only keep "sound inference" after you've done some kind of
trust transformation, where the semantics of responses to requests are
initially posited not to be available for combining and remixing before they
have been explicitly accepted as trustworthy.
I see no point in distinguishing between ambiguous assertions and untrustworthy
ones, and I like having a model where trusting is an explicit part of the
interface.
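One way to picture trust as an explicit part of the interface - a sketch
only, with merge_trusted and accept as hypothetical names rather than any
existing API - is to quarantine retrieved graphs until an explicit acceptance
step admits them to the pool used for inference:

    from rdflib import Graph

    def merge_trusted(candidates, accept):
        """Merge only graphs that pass an explicit trust decision.

        candidates is an iterable of (source, graph) pairs; accept is a
        caller-supplied predicate embodying the trust judgement. Inputs
        that are not explicitly accepted never reach the combined graph.
        """
        combined = Graph()
        for source, graph in candidates:
            if accept(source, graph):  # the explicit trust transformation
                combined += graph      # only now is the data combined
        return combined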
Along with this, I think vagueness is somewhat covered by a Quine-like appeal
to consideration of statements that people broadly accept as true, if one
doesn't get too hung up on exactly *what* is denoted by individual terms,
just accepting that they have denotations that satisfy certain properties.
I think that ambiguity of the kind that permits Herbrand-style models is
something that we should just ignore - it seems to me that trying to exclude
this kind of ambiguity in the formal structures leads to the kind of tar-pit
we've been wading in.
I *think*, BICBW, the last two points somewhat reflect what Tim was trying to
say in his original "without being ambushed by Ambiguity" - so to that extent
we may agree.
But what we don't have is a satisfactory, easy-to-follow story that covers
context and modality (if "modality" is the right word to use here) - one that
would (should) extend to topics like "slander".
Here, I fear we're being let down by the RDF working group. They have agreed
a structure, RDF Datasets, that is capable of encoding such ideas, but seem
unable to come to a consensus on how to provide semantic underpinning for
using this structure. IMO, *any* semantic underpinning would be better than
none - without it, we're back in the mess we had figuring out reification
last time round.
(What I was hoping for is *not* a definitive "this is what datasets mean",
but a framework within which one could construct semantics for datasets
without fear that the ground would later shift.) There have been several
proposals, and at least two that I'm aware of in the life of the current RDF
group - including Pat's RDF as context logic - any (or most) of which could
serve.
(Personally, I liked the proposal that was made, and apparently rejected, a
month or so ago
(http://www.w3.org/2011/rdf-wg/wiki/TF-Graphs/Minimal-dataset-semantics). I
have the impression, maybe wrong, that Pat's context logic approach was a bit
more constrained, but still flexible enough to support a useful range of
modalities.)
Given this much, we would have some basis for actually talking about (or
representing) some of the tricky issues that are so hard to discuss in the
current "one interpretation to rule them all" view of RDF (and URIs). We
could propose structures that capture belief, provenance (which I have come
to see can itself be highly contextual), disagreement, debate,
conditionality, and so much more. Maybe then we also have a framework for
encoding the theory of speech acts, etc?
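To illustrate the sort of thing a dataset structure can carry, here's a
hedged sketch in Python's rdflib (the vocabulary is invented, and the reading
of named graphs as unasserted, contextualized claims is just one possible
semantics, along the lines of the minimal-dataset proposal above):

    from rdflib import Dataset, Literal, Namespace

    EX = Namespace("http://example.org/")  # made-up vocabulary

    ds = Dataset()

    # Each party's claims live in their own named graph ...
    alice_g = ds.graph(EX.aliceSaid)
    alice_g.add((EX.moon, EX.madeOf, EX.rock))

    # ... including claims that contradict each other.
    bob_g = ds.graph(EX.bobSaid)
    bob_g.add((EX.moon, EX.madeOf, EX.cheese))

    # The default graph holds statements *about* the named graphs:
    # provenance, belief, dispute - context, in other words.
    ds.add((EX.aliceSaid, EX.assertedBy, EX.alice))
    ds.add((EX.bobSaid, EX.assertedBy, EX.bob))
    ds.add((EX.aliceSaid, EX.status, Literal("accepted")))
    ds.add((EX.bobSaid, EX.status, Literal("disputed")))

    # Under such a semantics the named graphs' contents need not be
    # asserted by the dataset itself, so the disagreement is represented
    # without being endorsed.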
If we have a way to represent and talk about contextualization, then I think the
whole issue of a URI having different interpretations in different contexts (or
applications) is something we can accommodate. That is, it allows us to set out
without a presumption of global meaning, yet still exploit the commonalities
we can observe. Within RDF as we currently have it, we're forced to go "out of
band", and that makes it hard to really understand each other's difficulties.
...
As for "attrition", I don't think we're dealing with a belligerent enemy here.
But I do feel like I'm on the rough edge of the grindstone. For the most
part, I can ignore this stuff in my daily work with RDF: 99% of the time it
seems it just doesn't matter. But I fear if we don't build on sound foundations
then sooner or later things will start to crumble. I care if that's the case,
but a lot less than I care about a lot of other things, so my forays into this
arena will be of limited energy. Maybe that's for the best.
#g
The problem with "for the most part, for 99% of the time, I can ignore
trust" is that you don't know which 1% of cases you can't. And if you
can't distinguish in advance between situations where you can trust
the results and situations where you can't, then you basically have to
distrust everything.