Hmm...

2013-06-14 Thread Danny Ayers
Linked Data. Oo, that's a tricky one. I suppose it might be, like data,
that's, erm, linked.

I don't mean to trivialise; such questions should always be asked. The Web
has turned out to be rather a complicated system. But it mostly works
through simplicity: it isn't that hard to conceive of a link between ideas,
but it's breathtakingly powerful when you follow the thought through.

I vote C, something I hadn't thought of this morning.

pip pip,
Danny.


-- 
http://dannyayers.com

http://webbeep.it  - text to tones and back again


Lucky SPARQL

2012-03-29 Thread Danny Ayers
A proposed mini-convention for giving SPARQL endpoints an "I'm Feeling
Lucky" option and hence supporting things like WebFinger.

Take a query like:

SELECT DISTINCT ?blog WHERE {
   ?person foaf:name "James Snell" .
   ?person foaf:weblog ?blog .
}
LIMIT 1

If I'm asking something like that, then what I'm probably trying to
achieve is to get to James' blog. But if I use that on an endpoint, what
I'll get back is a bunch of XML (or JSON), from which I'll have to
parse out the URI, then fire off another GET. So what about having the
endpoint server support an additional parameter, something like:

http://example.org/sparql?query=SELECT+DISTINCT+...&action=redirect

which would tell the server to pull out the URI in the results, and return:

HTTP/1.1 302 Found
Location: http://chmod777self.blogspot.com

- thus taking me straight to my actual target.
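A sketch of what the endpoint-side logic might look like, in Python for illustration (the function names and the action=redirect behaviour are part of the proposal, not any real endpoint's API; results are assumed to be in the standard SPARQL JSON results format):

```python
# Sketch of the proposed "action=redirect" behaviour, assuming the endpoint
# already produces standard application/sparql-results+json. The function
# names are illustrative, not any real endpoint's API.

def first_uri_binding(results):
    """Pull the first URI binding out of a SPARQL JSON results document."""
    for row in results["results"]["bindings"]:
        for binding in row.values():
            if binding["type"] == "uri":
                return binding["value"]
    return None

def lucky_redirect(results):
    """The hypothetical action=redirect mode: 302 to the first URI in the
    results, 404 if the query didn't produce one."""
    uri = first_uri_binding(results)
    if uri is None:
        return 404, {}
    return 302, {"Location": uri}
```

So for the query above, a results document binding ?blog would come back as a 302 with the blog URI in the Location header.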

I've had James Snell's proposal [1] for simplifying WebFinger [2]
simmering away in the back of my mind. I'm unconvinced by the
architectural style of what he suggests (Gopher?), but he does get
bonus points for creativity. (See also James' response on that [3]).
In the query above I've used foaf:name which is likely to give
ambiguous results. But if it was foaf:mbox_sha1sum instead, you've got
a mechanism for WebFinger with James' optimization. Ok, the request
URI is a bit cumbersome, but templating a short version for special
cases like WebFinger would be easy enough.
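For reference, foaf:mbox_sha1sum is defined as the SHA-1 hex digest of the full mailto: URI, so the unambiguous lookup key is easy to compute. A minimal sketch (the query-building helper is hypothetical, just echoing the example above):

```python
import hashlib

def mbox_sha1sum(email):
    """foaf:mbox_sha1sum: the SHA-1 hex digest of the full mailto: URI."""
    return hashlib.sha1(("mailto:" + email).encode("utf-8")).hexdigest()

# A WebFinger-ish query would then use the hash instead of foaf:name
# (the query text is a sketch along the lines of the example above):
def lucky_query(email):
    return (
        "SELECT DISTINCT ?blog WHERE { "
        '?person foaf:mbox_sha1sum "%s" . '
        "?person foaf:weblog ?blog . } LIMIT 1" % mbox_sha1sum(email)
    )
```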

(Blogged at [4] and fired off to all the social nets, but then it
occurred to me that most of the potential implementers will be around
here :)

Cheers,
Danny.

[1] http://chmod777self.blogspot.com/2012/03/thoughts-on-webfinger.html
[2] http://www.ietf.org/id/draft-jones-appsawg-webfinger-01.txt
[3] https://plus.google.com/112609322932428633493/posts/WvedRayrYEn
[4] http://dannyayers.com/2012/03/29/Lucky-SPARQL

-- 
http://dannyayers.com

http://webbeep.it  - text to tones and back again



Re: Lucky SPARQL

2012-03-29 Thread Danny Ayers
PS. A better name might be Optimistic SPARQL (and it should probably
return a 404 if the query doesn't return a suitable pattern).

On 29 March 2012 14:18, Danny Ayers danny.ay...@gmail.com wrote:
 A proposed mini-convention for giving SPARQL endpoints an "I'm Feeling
 Lucky" option and hence supporting things like WebFinger.

 [snip]


-- 
http://dannyayers.com

http://webbeep.it  - text to tones and back again



Re: NIR SIDETRACK Re: Change Proposal for HttpRange-14

2012-03-27 Thread Danny Ayers
This seems an appropriate place for me to drop in my 2 cents.

I like the 303 trick. People who care about this stuff can use it
(and appear to be doing so), but it doesn't really matter too much
that people who don't care don't use it. It seems analogous to the
question of HTML validity. Best practices suggest creating valid
markup, but if it isn't perfect, it's not a big deal, most UAs will be
able to make sense of it. There will be reduced fidelity of
communication, sure, but there will be imperfections in the system
whatever, so any trust/provenance chain will have to consider such
issues anyway.
So I don't really think Jeni's proposal is necessary, but don't feel
particularly strongly one way or the other.
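For concreteness, the 303 trick in question, sketched as a toy dispatcher (the paths and resource table are invented for illustration):

```python
# Toy dispatcher for the 303 trick: a URI naming a non-information resource
# answers 303 See Other, pointing at a document about the thing; document
# URIs answer 200 with content. The paths and table are invented.

RESOURCES = {
    "/id/basil": ("thing", "/doc/basil"),   # the dog himself
    "/doc/basil": ("document", None),       # a page describing the dog
}

def respond(path):
    kind, doc = RESOURCES.get(path, (None, None))
    if kind is None:
        return 404, {}
    if kind == "thing":
        return 303, {"Location": doc}       # you can't GET a dog
    return 200, {}
```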

Philosophically I reckon the flexibility of what a representation of a
resource can be means that the notion of an IR isn't really needed.
I've said this before in another thread somewhere, but if the network
supported the media type thing/dog then it would be possible to GET
http://example.org/Basil with full fidelity. Right now it doesn't, but
I'd argue that what you could get with media type image/png would
still be a valid, if seriously incomplete representation of my dog. In
other words, a description of a thing shares characteristics with the
thing itself, and that's near enough for HTTP representation purposes.

Cheers,
Danny.

-- 
http://dannyayers.com

http://webbeep.it  - text to tones and back again



Minimum useful linked data

2011-09-03 Thread Danny Ayers
I'm not sure the 80% demographic for linked data is getting enough
attention. So how about this -

We have :

a) timbl's definition of linked data [1]

and it seems reasonable to assume that :

b) most developers using APIs aren't that familiar with RDF
c) JSON is popular with these developers

But (playing Devil's Advocate) the work around JSON-LD doesn't
consider a), and the work around linked data APIs doesn't consider b).

JSON-LD is another RDF format, with framing algorithms intended to
transform that format into something JSON-developer-friendly. But I
can't see any reference to the linked data principle: "when someone
looks up a URI, provide useful information". Without that principle,
this isn't really linked data.

On the other hand the linked data API has this covered, e.g. in the
deployment example [2]:

 /doc/school/12345 should respond with a document that includes
information about /id/school/12345 (a concise bounded description of
that resource)

Except that to work with a CBD, reasonable knowledge is needed of RDF
*and* there isn't really a friendly mapping from arbitrary graphs to
JSON.

But surely most of the immediately useful information (and ways to
find further information) about the resource /doc/school/12345 will
be contained in the triples matching:

</doc/school/12345> ?pA ?o .
?s ?pB </doc/school/12345> .

where ?o and ?s *are not bnodes*

(some kind of arbitrary, 'disposable' local skolemisation might be
nice, so there was a dereferenceable URI to follow the path, but
keeping it simple - just drop bnodes)

If we've just looked up /doc/school/12345 we already know that
resource. Which leaves two arrays of pairs - a lot simpler than
arbitrary RDF graphs.

A small snag is that ?s, ?pA and ?pB will always be URIs, but ?o could be a
URI or a literal. Also the developer still has to deal with URIs as
names, which is not entirely intuitive.
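The filtering step can be sketched in a few lines of Python (triples as plain (s, p, o) string tuples, bnodes marked with a Turtle-style "_:" prefix; the URI-vs-literal snag is ignored for brevity):

```python
# Filtering sketch: keep only triples touching the looked-up resource, drop
# any involving a bnode, and split into forward (resource as subject) and
# backward (resource as object) pair lists.

def is_bnode(term):
    return term.startswith("_:")

def minimum_useful(triples, resource):
    forward = []   # (predicate, object) pairs
    backward = []  # (subject, predicate) pairs
    for s, p, o in triples:
        if is_bnode(s) or is_bnode(o):
            continue
        if s == resource:
            forward.append((p, o))
        elif o == resource:
            backward.append((s, p))
    return forward, backward
```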

There are probably dozens of ways this data could be represented in JSON,
but for the sake of example I'll pick one (might be errorful - I'm not
that familiar with js) :

var forward = { "pA1": { "o1": "literal" }, "pA2": { "o2": "uri" }, ... };
var backward = { "s1": "pB1", "s2": "pB2", ... };

Ok, URIs as names - borrowing an idea from JSON-LD, if we also have
something like:

 { "context":
   { "name": "http://xmlns.com/foaf/0.1/name",
     "homepage": "http://xmlns.com/foaf/0.1/homepage"
...

the data could be presented as e.g.
forward = { "name": ["Fred"], "homepage": ["http://example.org"], ...

allowing easy access via :
 forward.name[0]

and lookup of the URIs if needed :
  context.name

Ideally the resource description would contain both
easy-to-use/minimal and anyRDF/maximal data. But maybe a link to a
SPARQL endpoint in the easy-to-use/minimal version could provide the
latter.
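That context trick can be sketched as a small compaction function (context entries borrow the FOAF example above; everything else here is illustrative):

```python
# Compaction sketch: given a context mapping short names to predicate URIs,
# rewrite (predicate, object) pairs into short-name -> list-of-values, so a
# developer can write forward["name"][0]. Unknown predicates fall back to
# their full URI as the key.

def compact(pairs, context):
    inverse = {uri: name for name, uri in context.items()}
    out = {}
    for predicate, value in pairs:
        key = inverse.get(predicate, predicate)
        out.setdefault(key, []).append(value)
    return out
```

Looking up context["name"] still recovers the full URI when needed.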

Ok, I've glossed over the question of how the consumer app is to know
how to interpret the data - but that's a problem for any arbitrary
format, like application/xml.

[1] http://www.w3.org/DesignIssues/LinkedData.html
[2] http://code.google.com/p/linked-data-api/wiki/API_Deployment_Example

-- 
http://dannyayers.com



Re: ANN: Sparallax! - Browse sets of things together (now those on your SPARQL endpoint)

2011-08-19 Thread Danny Ayers
On 18 August 2011 02:37, Giovanni Tummarello
giovanni.tummare...@deri.org wrote:
 Hi Danny,

 i liked sparallax a lot, problem is its hard to maintain. David didnt
 upgrade parallax any longer and the intern who did the sparql to MQL
 conversion that allows sparallax to operate on sparql is now not
 working with us anymore. Hard to say how difficult it would be to
 progress on that project

That's a shame. Hope it finds a new champion (might well have a fiddle myself).

 On the other hand in terms of browser i am going to be sponsoring
 development of TFacets ,

Link?

 which is pretty good IMO, so we might be
 releasing a new version in a few months together with the original
 developer of course. What do you think of that?

Great stuff, keep up the good work!

Cheers,
Danny.

-- 
http://dannyayers.com



Re: ANN: Sparallax! - Browse sets of things together (now those on your SPARQL endpoint)

2011-08-17 Thread Danny Ayers
Nice work!

 Due to requirements of query functionalities and aggregates, Sparallax
 currently only works on Virtuoso SPARQL endpoints. (do other
 triplestores have aggregates? if so please let us know and we'll try
 to support other syntaxes as well)

Which aggregate functions are needed?
ARQ has some support (very possibly more than what's listed) here:
http://jena.sourceforge.net/ARQ/group-by.html

[Andy, are there any more query examples around? I can't seem to get
count(*) working here]

I see there are Graph and Dataset fields on the form - how are they
used? What happens with an endpoint that only has a default, unnamed
graph?

 Try sparallax on our test datasets or put just the URL of your SPARQL
 endpoint (but make sure you have materialized your RDFs triples, it's
 needed)

materialized?

(I assume that's not the httpRange-14 extension for dogs :)

What else can be done to the data to make it more Sparallax friendly?

I noticed clues in the form boxes for abstractprop and imageprop, also
the config file -

http://code.google.com/p/freebase-parallax/source/browse/app-parallax/sparallax/scripts/config.xml

Do I take it from the rdfs:Class entry there that it uses RDFS to
decide on collections?

Sorry for the raft of questions, but I recently put an LD browser on my
todo list, completely forgetting about Parallax and all the boxes it
already ticks, and now Sparallax... so I want to get an idea of how it
works to avoid reinventing the wheel (in a square shape).

Cheers,
Danny.





-- 
http://dannyayers.com



Defining LD (was Re: Branding?)

2011-07-28 Thread Danny Ayers
[cc'ing public-lod@w3.org, this all seems to be drifting a little
beyond JSON scope - see [1], [2], [3] ]

LD meaning Labeled and Directed for JSON-LD works for me too.

But I don't see a problem with defining linked data as being all-URIs
(fully grounded, no bnodes or literals) just for spec purposes, it
does at least emphasize the key feature (although I'm still a fan of
bnodes :)

Is a graph solely comprised of bnodes linked data? Presumably not.

Is the result of merging an all-URI graph with an all-bnode graph
linked data? In general parlance and practice yes, but it doesn't
actually contain any more information than the first subgraph.

So what happens with a graph which contains something like:

<#uriA> :p1 _:x .
_:x :p2 <#uriB> .

?

It's tricky, the individual triples don't entirely fit with the 4
principles, together they kind-of do. But I certainly don't think we
need to leap to skolemization to make sense of this.

If the graph's on the Web as it should be, then it'll be named with a
URI, so we could get a quasi-entailment along the lines of:

<#graph> :contains <#uriA> .
<#graph> :contains <#uriB> .

or if you prefer to stay within the graph, something like:

<#uriA> :p1 _:x .
_:x :p2 <#uriB> .
=
<#uriA> rdfs:seeAlso <#uriB> .
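A sketch of that quasi-entailment as code: walk paths whose intermediate nodes are all bnodes and emit rdfs:seeAlso between the URI nodes at either end (triples as plain string tuples, "_:" marking bnodes; this is just one way to cash out the idea):

```python
# For each URI node, breadth-first search through bnode-only paths and
# link it to every URI node reachable that way with rdfs:seeAlso.

def is_bnode(term):
    return term.startswith("_:")

def see_also_links(triples):
    succ = {}  # adjacency: node -> set of nodes it points at
    for s, _, o in triples:
        succ.setdefault(s, set()).add(o)
    links = set()
    for start in succ:
        if is_bnode(start):
            continue
        # explore only paths whose intermediate nodes are all bnodes
        frontier = [n for n in succ[start] if is_bnode(n)]
        seen = set(frontier)
        while frontier:
            node = frontier.pop()
            for nxt in succ.get(node, ()):
                if nxt in seen or nxt == start:
                    continue
                seen.add(nxt)
                if is_bnode(nxt):
                    frontier.append(nxt)
                else:
                    links.add((start, "rdfs:seeAlso", nxt))
    return links
```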

Dunno, this might all just be angels on a pinhead stuff...

Cheers,
Danny.

[1] http://json-ld.org/spec/latest/
[2] http://lists.w3.org/Archives/Public/public-linked-json
[3] https://plus.google.com/102122664946994504971/posts/15eHTC3FA4A


-- 
http://danny.ayers.name



Re: WebID and pets -- was: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-20 Thread Danny Ayers
On 20 June 2011 10:51, Kingsley Idehen kide...@openlinksw.com wrote:
 On 6/20/11 8:31 AM, Henry Story wrote:

 Perhaps it can become mythical. The URL should be by now:-)

 The URI :-)

The mythical URI, perfect.

-- 
http://danny.ayers.name



Re: WebID and pets -- was: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-20 Thread Danny Ayers
On 19 June 2011 20:42, Henry Story henry.st...@bblfish.net wrote:

 Ok. So you need to give each of your dogs and cats a webid enabled RDFID chip

To inject a little reality: Sashapooch has an embedded RFID (not
yet RDFID!) tag - not sure, but I think it became Italian law.
Basilhound, being a bit older, from before this stuff came in, has a
(sloppy) tattoo on his tummy, something like LOLU51. I assume the chip
in Sasha has a similar string in it.

But the idea is great - in the same way QR codes are most useful when
they include a URL, putting one in the RFID tag of animals makes a lot
of sense. Simple use case: when the critter wanders off, you can
easily contact the owner.

I know the RFID chips are now really cheap commodities, what I don't
know about is the scanners - are they yet affordable enough that you
could, say, include one in a mobile phone?

Cheers,
Danny.


-- 
http://danny.ayers.name



Re: Fwd: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-20 Thread Danny Ayers
Point taken, I forget where I am sometimes, will try harder. My apologies.

On 19 June 2011 21:06, Nathan nat...@webr3.org wrote:
 Danny Ayers wrote:

 I feel very guilty being in threads like this. Shit fuck smarter people
 than
 me.

 Just minor, and I can hardly talk as I swear most often in different
 settings, but I am a little surprised to see this language around here. I
 quite like having an arena where these words don't arise in the general
 conversation.

 Ack you know what I'm saying - nothing personal, but I'd personally
 appreciate not seeing them too frequently around here :)

 Best!




-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-19 Thread Danny Ayers
Point taken Pat, but I have been in the same ring as you for many
years; to progress the Web, can't we just take our hands off
the wheel, let it go where it wants? (Not that I have any influence,
and realistically neither do you, Pat.) I'm now just back from a
sabbatical, but right now would probably be a good time to take one.
If these big companies do engage on the microdata front, it's great.
I'm sure it's been said before, why don't we get pornographers working
hard on their metadata on visuals, because they work for Google/Bing
whatever. The motivation right now might not be towards Tim's day one
goals of sharing some stuff between departments at CERN, but that's
irrelevant in the longer term. Getting the Web as an
infrastructure for data seems like a significant step in human
evolution. And it's a no-brainer. But getting from where we are to
there is tricky. Honestly, I don't care. It'll happen, my remaining
lifespan or about 50 on top, there will be another, big, revolution.

Society is already so different, just with little mobile phones.

/gak I'm not going to speculate, we're heading for a major change.

Cheers,
Danny.

On 19 June 2011 06:05, Pat Hayes pha...@ihmc.us wrote:
 Really (sorry to keep raining on the parade, but) it is not as simple as 
 this. Look, it is indeed easy to not bother distinguishing male from female 
 dogs. One simply talks of dogs without mentioning gender, and there is a lot 
 that can be said about dogs without getting into that second topic. But 
 confusing web pages, or documents more generally, with the things the 
 documents are about, now that does matter a lot more, simply because it is 
 virtually impossible to say *anything* about documents-or-things without 
 immediately being clear which of them - documents or things - one is talking 
 about. And there is a good reason why this particular confusion is so 
 destructive. Unlike the dogs-vs-bitches case, the difference between the 
 document and its topic, the thing, is that one is ABOUT the other. This is 
 not simply a matter of ignoring some potentially relevant information (the 
 gender of the dog) because one is temporarily not concerned with it: it is 
 two different ways of using the very names that are the fabric of the 
 descriptive representations themselves. It confuses language with language 
 use, confuses language with meta-language. It is like saying giraffe has 
 seven letters rather than "giraffe" has seven letters. Maybe this does not 
 break Web architecture, but it certainly breaks **semantic** architecture. It 
 completely destroys any semantic coherence we might, in some perhaps 
 impossibly optimistic vision of the future, manage to create within the 
 semantic web. So yes indeed, the Web will go on happily confusing things with 
 documents, partly because the Web really has no actual contact with things at 
 all: it is entirely constructed from documents (in a wide sense). But the 
 SEMANTIC Web will wither and die, or perhaps be still-born, if it cannot find 
 some way to keep use and mention separate and coherent. So far, http-range-14 
 is the only viable suggestion I have seen for how to do this. If anyone has a 
 better one, let us discuss it. But just blandly assuming that it will all 
 come out in the wash is a bad idea. It won't.

 Pat

 On Jun 18, 2011, at 1:51 PM, Danny Ayers wrote:

 On 17 June 2011 02:46, David Booth da...@dbooth.org wrote:

 I agree with TimBL that it is *good* to distinguish between web pages
 and dogs -- and we should encourage folks to do so -- because doing so
 *does* help applications that need this distinction.  But the failure to
 make this distinction does *not* break the web architecture any more
 than a failure to distinguish between male dogs and female dogs.

 Thanks David, a nice summary of the most important point IMHO.

 Ok, I've been trying to rationalize the case where there is a failure
 to make the distinction, but that's very much secondary to the fact
 that nothing really gets broken.

 Cheers,
 Danny.

 http://danny.ayers.name



 
 IHMC                                     (850)434 8903 or (650)494 3973
 40 South Alcaniz St.           (850)202 4416   office
 Pensacola                            (850)202 4440   fax
 FL 32502                              (850)291 0667   mobile
 phayesAT-SIGNihmc.us       http://www.ihmc.us/users/phayes









-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-19 Thread Danny Ayers
On 19 June 2011 12:37, Henry Story henry.st...@bblfish.net wrote:

[snip pat]

 The way to do this is to build applications where this thing matters. So for 
 example in the social web we could build 
 a slightly more evolved "like" protocol/ontology, which would be 
 decentralised for one, but would also allow one to distinguish documents, 
 from other parts of documents and things. So one could then say that one 
 wishes to bring people's attention to a well-written article on a rape, 
 rather than having to "like" the rape. Or that one wishes to bring people's 
 attention to the content of an article without having to "like" the style the 
 article is written in.

I would have come down on you like a ton of bricks for that Henry, if
it wasn't for seeing to-and-fro on Facebook about some Nazi-inspired
club (Slimelight, for the record). On FB there is no way to express
your sentiments. Like/blow to smithereens.

 If such applications take hold, and there is a way the logic of using these 
 applications is made to work where these distinctions become useful and 
 visible to the end user, then there will be millions of vocal supporters of 
 this distinction - which we know exists, which programmers know exists, which 
 pretty much everyone knows exists, but which people new to the semweb web, 
 like the early questioners of the viability of the mouse and the endless 
 debates about that animal, will question because they can't feel in their 
 bones the reality of this thing.


 So far, http-range-14 is the only viable suggestion I have seen for how to 
 do this.

 Well hash uris are of course a lot easier to understand. http-range-14 is 
 clearly a solution which is good to know about but that will have an adoption 
 problem.

 I am of the view that this has been discussed to death, and that any mailing 
 list that discusses this is short of real things to do.

I confess to talking bollocks when I should be coding.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-19 Thread Danny Ayers
I thought forever that if we see iniquities we are duty-bound to stand
in the way.

But that don't seem to change anything.

Let the crap rain forth, if you really need to make sense of it the
blokes on this list will do it.

Activity is GOOD, no matter how idiotic.

Decisions made on very different premises than anyone around here would promote.

Sorry, I'm of the opinion that the Web approach is the winner. Alas it
also seems lowest common denominator.

Cheers,
Danny.

On 19 June 2011 19:36, Henry Story henry.st...@bblfish.net wrote:

 On 19 Jun 2011, at 18:58, Nathan wrote:

 Nathan wrote:
 Henry Story wrote:
 On 19 Jun 2011, at 18:27, Giovanni Tummarello wrote:

 but dont be surprised as  less and less people will be willing to listen 
 as more and more applications (Eg.. all the stuff based  on schema.org) 
 pop up never knowing there was this problem... (not in general. of course 
 there is in general, but for their specific use cases)

 The question is if schema.org makes the confusion, or if the schemas 
 published there use a DocumentObject ontology where the distinctions are 
 clear but the rule is that object relationships are in fact going via the 
 primary topic of the document. I have not looked at the schema, but it 
 seems that before arguing that they are inconsistent one should see if 
 there is not a consistent interpretation of what they are doing.
 Sorry, I'm missing something - from what I can see, each document has a 
 number of items, potentially in a hierarchy, and each item is either 
 anonymous, or has an @itemid.
 Where's the confusion between Document and Primary Subject?

 Or do you mean from the Schema.org side, where each Type and Property has a 
 dereferencable URI, which currently happens to also eb used for the document 
 describing the Type/Property?

 Well I can't really tell because I don't know what the semantics of those 
 annotations are, or how they function. Without those it is difficult to tell 
 if they have made a mistake. If there is no way of translating what they are 
 doing into a system that does not make the confusion, then one could explore 
 what the cost of that will be to them. If the confusion is strong then there 
 will be limitations in what they can express that way. It will then be a 
 matter of working out what those limitations are and then offering services 
 that allow one to go further than what they are proposing. At the very least 
 the good thing is that they are not bringing the confusion into the RDF 
 space, since they are using their own syntax and ontologies.

 There may also be an higher way to fix this so that they could return a 20x 
 (x-some new number) which points to the document URL (but returns the 
 representation immediately, a kind of efficient HTTP-range-14 version) So 
 there are a lot of options. Currently their objects are tied to an html 
 document. What are the json crowd going to think?

 In any case there is a problem of translation that has to be dealt with first.

 Henry

 Social Web Architect
 http://bblfish.net/





-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-19 Thread Danny Ayers
Only personal Henry, but have you tried the Myers-Briggs thing - I
think you used to be classic INTP/INTF - but once you got WebID in
your sails it's very different. These things don't really allow for
change.

Only slightly off-topic, very relevant here, need to pin down WebID in
a sense my dogs can understand.

The Myers-Briggs thing is intuitively rubbish. But with only one or
two posts in the ground, it does seem you can extrapolate.

On 19 June 2011 19:52, Henry Story henry.st...@bblfish.net wrote:

 On 19 Jun 2011, at 19:44, Danny Ayers wrote:


 I am of the view that this has been discussed to death, and that any 
 mailing list that discusses this is short of real things to do.

 I confess to talking bollocks when I should be coding.

 yeah, me too. Though now you folks managed to get me interested in this 
 problem! (sigh)

 Henry

 Social Web Architect
 http://bblfish.net/





-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-18 Thread Danny Ayers
On 16 June 2011 22:39, Pat Hayes pha...@ihmc.us wrote:

 Not only do I not follow your reasoning, I don't even know what it is you are 
 saying. The document is a valid *representation* of the car, yes of course.

That's all that's necessary to square this circle.

 But as valid as the car itself? So you think a car is a representation of 
 itself? Or are you drawing a contrast between the 'named car resource' and 
 the car itself? ???

All HTTP delivers is representations of named resources. (I very much
do think a car is a representation of itself in HTTP terms, in the
same way a document is, but it isn't necessary here).

 Maybe it would be best if we just dropped this now. I gather that you were 
 offering me a way to make semantic sense of something, but Im not getting any 
 sense at all out of this discussion, I am afraid.

I'll be delighted to drop it, I thought we were getting stuck in a tar
pit but your statement above is the er, oil, that gets us out.

Cheers,
Danny.


-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-18 Thread Danny Ayers
On 17 June 2011 02:46, David Booth da...@dbooth.org wrote:

 I agree with TimBL that it is *good* to distinguish between web pages
 and dogs -- and we should encourage folks to do so -- because doing so
 *does* help applications that need this distinction.  But the failure to
 make this distinction does *not* break the web architecture any more
 than a failure to distinguish between male dogs and female dogs.

Thanks David, a nice summary of the most important point IMHO.

Ok, I've been trying to rationalize the case where there is a failure
to make the distinction, but that's very much secondary to the fact
that nothing really gets broken.

Cheers,
Danny.

http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-15 Thread Danny Ayers
On 13 June 2011 07:52, Pat Hayes pha...@ihmc.us wrote:
 OK, I am now completely and utterly lost. I have no idea what you are saying 
 or how any of it is relevant to the http-range-14 issue. Want to try running 
 it past me again? Bear in mind that I do not accept your claim that a 
 description of something is in any useful sense isomorphic to the thing it 
 describes. As in, some RDF describing, say, the Eiffel tower is not in any 
 way isomorphic to the actual tower. (I also do not understand why you think 
 this claim matters, by the way.)

 Perhaps we are understanding the meaning of http-range-14 differently. My 
 understanding of it is as follows: if an HTTP GET applied to a bare URI 
 http:x returns a 200 response, then http:x is understood to refer to (to be a 
 name for, to denote) the resource that emitted the response. Hence, it 
 follows that if a URI is intended to refer to something else, it has to emit 
 a different response, and a 303 redirect is appropriate. It also follows that 
 in the 200 case, the thing denoted has to be the kind of thing that can 
 possibly emit an HTTP response, thereby excluding a whole lot of things, such 
 as dogs, from being the referent in such cases.

Even with information resources there's a lot of flexibility in what
HTTP can legitimately respond with, there needn't be bitwise identity
across representations of an identified resource. Given this, I'm
proposing a description can be considered a good-enough substitute for
an identified thing. Bearing in mind it's entirely up to the publisher
if they wish to conflate things, and up to the consumer to try and
make sense of it.

As a last attempt - this is a tar pit! - doing my best to take on
board your (and others') comments, I've wrapped up my claims in a blog
post: http://dannyayers.com/2011/06/15/httpRange-14-Reflux

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle

2011-06-15 Thread Danny Ayers
Awesome rant Richard!

I think this bit would work better live :

 I want to tell the publishers of these web pages that they could join the web 
 of data just by adding a few @rels to some <a>s, and a few @properties to 
 some <span>s, and a few @typeofs to some <div>s (or @itemtypes and 
 @itemprops). And I don't want to explain to them that they should also change 
 http:// to thing:// or tdb:// or add #this or #that or make their stuff 
 respond with 303 or to MGET requests because you can't squeeze a dog through 
 an HTTP connection.

for arguments with value, you may have hit the nail on the head here :

 Being useful trumps making semantic sense. The web succeeded *because* it 
 conflates name and address. The web of data will succeed *because* it 
 conflates a thing and a web page about the thing.

Now tell me, why is it so easy to wind you up on these issues?

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle

2011-06-15 Thread Danny Ayers
On 14 June 2011 10:49, Richard Cyganiak rich...@cyganiak.de wrote:
 On 13 Jun 2011, at 20:51, David Booth wrote:
  <http://richard.cyganiak.de/>
     a foaf:Document;
     dc:title "Richard Cyganiak's homepage";
     a foaf:Person;
     foaf:name "Richard Cyganiak";
     owl:sameAs <http://twitter.com/cygri>;
     .

 That should be fine for applications that do not need to distinguish
 between foaf:Documents and foaf:Persons . . . which is a large class of
 applications.  OTOH, there *are* applications that need to distinguish
 between foaf:Documents and foaf:Persons.  *Those* applications will need
 to apply disambiguation techniques, and some of their owners will
 (wrongly) blame you for the perceived extra work it causes them --
 extra only because they happen to be implementing a different class of
 application than your data best supports.

 Yes, good analysis.

Not sure I'm comfortable with the notion of data being published with
a predetermined class of consuming applications. The bottom lines are:
publish what you want, interpret how you see fit. Somewhere between
Postel and Aleister Crowley.

My comments on httpRange-14 could not be any less relevant to the
reality, I'd just rather things were kinda tidy rather than swept
under the carpet (at home I have dog fur on the tiles). Yes, I do
think if we can have some approximation of a consistent common model,
that is better for communication. But it's pretty much a certainty
that the best course of action is to live with whatever comes up and
make the best of it. Build on what we can. Cue cliche if history has
taught us anything...

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-15 Thread Danny Ayers
On 15 June 2011 18:30, Pat Hayes pha...@ihmc.us wrote:

 Boy, that is a humdinger of a non sequitur. Given that HTTP has flexibility, 
 it is OK to identify a description of a thing with the actual thing? To me 
 that sounds like saying, given that movies are projected, it is OK to say 
 that fish are bicycles.

Not that I think I made a non sequitur; it is totally ok to say that
fish are bicycles, if that's what you want to say.

[snip]

 OK, thanks. Here is your argument, as far as I can understand it.

 1. HTTP representations may be partial or incomplete. (Agreed.)
 2. HTTP reps can have many different media types, and this is OK. (Agreed, 
 though I cant see what relevance this has to anything.)
 3. A description is a kind of representation. (Agreed, and there was no need 
 to get into the 'isomorphism' trap. We in KRep have been calling descriptions 
 representations for decades now.)

 4. Therefore, a HTTP URI can simultaneously be understood as referring to a 
 document and a car.

 Whaaat? How in Gods name can you derive this conclusion from those premises?

my wording could be better, but I stand by it...  a document
describing the car, through HTTP, can be an equally valid
representation of the named car resource as the car itself (as long as
it's qualified by media type)

Cheers,
Danny.


-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-15 Thread Danny Ayers
On 16 June 2011 02:26, Pat Hayes pha...@ihmc.us wrote:

 If you agree with Danny that a description can be a substitute for the thing 
 it describes, then I am waiting to hear how one of you will re-write 
 classical model theory to accommodate this classical use/mention error. You 
 might want to start by reading Korzybski's 'General Semantics'.

IANAL, but I have heard of the use/mention thing, quite often. I don't
honestly know whether classical model theory needs a rewrite, but I'm
sure it doesn't on the basis of this thread. I also don't know enough
to know whether it's applicable - from your reaction, I suspect not.

As a publisher of information on the Web, I'm pretty much free to say
what I like (cf. Tim's Design Notes). Fish are bicycles. But that
isn't very useful.

But if I say Sasha is some kind of weird Collie-German Shepherd cross,
that has direct relevance to Sasha herself. More, the arcs in my
description between Sasha and her parents have direct correspondence
with the actual relationships between Sasha and her parents. There is
information common to the reality and the description (at least in
human terms).
The description may, when you stand back, be very different in its
nature to the reality, but if you wish to make use of the information,
such common aspects are valuable. We've already established that HTTP
doesn't deal with any kind of one true representation. Data about
Sasha's parentage isn't Sasha, but it's closer than a non-committal
303 or rdfs:seeAlso. There's nothing around HTTP that says it can't be
given the same name, and it's a darn sight more useful than a
wave-over-there redirect or a random fish/bike association. I can't
see anything it breaks either.

Cheers,
Danny.







-- 
http://danny.ayers.name



Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-12 Thread Danny Ayers
On 12 June 2011 01:51, Pat Hayes pha...@ihmc.us wrote:

 On Jun 11, 2011, at 12:20 PM, Richard Cyganiak wrote:

 ...
 It's just that the schema.org designers don't seem to care much about the 
 distinction between information resources and angels and pinheads. This is 
 the prevalent attitude outside of this mailing list and we should come to 
 terms with this.

 I think we should foster a greater level of respect for representation
 choices here. Your dismissal of the distinction between information
 resources and what they are about insults the efforts of many
 researchers and practitioners and their efforts in domains where such
 a distinction in quite important. Let's try not to alienate part of
 this community in order to interoperate with another.

 Look, Alan. I've wasted eight years arguing about that shit and defending 
 httpRange-14, and I'm sick and tired of it. Google, Yahoo, Bing, Facebook, 
 Freebase and the New York Times are violating httpRange-14. I consider that 
 battle lost. I recanted. I've come to embrace agnosticism and I am not 
 planning to waste any more time discussing these issues.


 Well, I am sympathetic to not defending HTTP-range-14 and nobody ever, ever 
 again even mentioning information resource, but I don't think we can just 
 make this go away by ignoring it. What do we say when a URI is used both to 
 retrieve, um sorry, identify, a Web page but is also used to refer to 
 something which is quite definitely not a web page? What do we say when the 
 range of a property is supposed to be, say, people, but its considered OK to 
 insert a string to stand in place of the person? In the first case we can 
 just say that identifying and reference are distinct, and that one expects 
 the web page to provide information about the referent, which is a nice 
 comfortable doctrine but has some holes in it. (Chiefly, how then do we 
 actually refer to a web page?) But the second is more serious, seems to me, 
 as it violates the basic semantic model underlying all of RDF through OWL and 
 beyond. Maybe we need to re-think this model, but if so then we really ought 
 to be doing that re-thinking in the RDF WG right now, surely? Just declaring 
 an impatient agnosticism and refusing to discuss these issues does not get 
 things actually fixed here.

For pragmatic reasons I'm inclined towards Richard's pov, but it would
be nice for the model to make sense.

Pat, how does this sound:

From HTTP we get the notions of resources and representations. The
resource is the conceptual entity, the representations are concrete
expressions of the resource. So take a photo of my dog -

<http://example.org/sasha-photo> foaf:depicts <http://example.org/Sasha> .

If we deref http://example.org/sasha-photo then we would expect to get
a bunch of bits that can be displayed as an image.

But that bunch of bits may be returned with HTTP header -

Content-Type: image/jpeg

or

Content-Type: image/gif

Which, for convenience, let's say correspond to files on the server
called sasha-photo.jpg and sasha-photo.gif.

Aside from containing a different bunch of bits because of the
encoding, sasha-photo.jpg could be a lossy-compressed version of
sasha-photo.gif, containing less pixel information yet sharing many
characteristics.

All ok so far..?

If so, from this we can determine that a representation of a resource
need not be complete in terms of the information it contains to
fulfill the RDF statement and the HTTP contract.

Now turning to http://example.org/Sasha, what happens if we deref that?

Sasha isn't an information resource, so following HTTP-range-14 we
would expect a redirect to (say) a text/html description of Sasha.

But what if we just got a 200 OK and some bits with Content-Type: text/html?

We are told by this that we have a representation of my dog, but from
the above, is there any reason to assume it's a complete
representation?

The information would presumably be a description, but is it such a
leap to say that because this shares many characteristics with my dog
(there will be some isomorphism between a thing and a description of a
thing, right?) that this is a legitimate, however partial,
representation?

In other words, what we are seeing of my dog with -

Content-Type: text/html.

is just a very lossy version of her representation as -

Content-Type: physical-matter/dog

Does that make (enough) sense?
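The picture above - one resource URI, several media types, none privileged as the complete representation - can be put in code. A minimal Python sketch (all names and URIs are illustrative; this is not a real HTTP server, just the conneg idea):

```python
# A toy model of dereferencing a resource URI: the server holds several
# representations of the same resource and picks one by the Accept header.
# None of them is "the" resource - each is a partial, lossy view.

REPRESENTATIONS = {
    "http://example.org/Sasha": {
        # an RDF description of the dog
        "text/turtle": "<http://example.org/Sasha> a <http://example.org/Dog> .",
        # an HTML page describing the same dog
        "text/html": "<html><body><p>Sasha is a Collie cross.</p></body></html>",
    }
}

def get(uri, accept):
    """Return (status, content_type, body) for a dereference of `uri`."""
    reps = REPRESENTATIONS.get(uri)
    if reps is None:
        return 404, None, None
    for media_type in accept.split(","):
        media_type = media_type.strip()
        if media_type in reps:
            # 200 OK: a (partial) representation of the resource
            return 200, media_type, reps[media_type]
    return 406, None, None  # no acceptable representation

status, ctype, body = get("http://example.org/Sasha", "text/html")
print(status, ctype)  # → 200 text/html
```

A hypothetical `physical-matter/dog` media type would just be one more entry in that inner dict - which is the whole point being argued.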

Cheers,
Danny.




-- 
http://danny.ayers.name



Re: Schema.org in RDF ...

2011-06-12 Thread Danny Ayers
On 12 June 2011 16:26, Richard Cyganiak rich...@cyganiak.de wrote:
 Hi Pat,

 On 12 Jun 2011, at 00:33, Pat Hayes wrote:
 Nothing is gained from the range assertions. They should be dropped.

 They capture a part of the schema.org documentation: the “expected type” of 
 each property. That part of the documentation would be lost. Conversely, 
 nothing is gained by dropping them.

 Let me respectfully disagree. Range assertions (in RDFS or OWL) do *not* 
 capture the notion of expected type. They state a strict actual type, and 
 cannot be consistently be over-ridden by some other information. Which has 
 the consequence that these are liable to be, quite often, plain flat wrong. 
 Which in turn has the consequence that there is something to be gained by 
 dropping them, to wit, internal consistency. They are not mere 
 documentation; they have strictly entailed consequences which many actual 
 reasoners can and will generate, and which to deny would be to violate the 
 RDFS specs. If you don't want these conclusions to be generated, don't make 
 the assertions that would sanction them.

 Data on the Web is messy. You cannot reason over it without filtering it 
 first. I think it is useful to document how data publishers are *expected* to 
 use these terms, even if we know that many will -- for good or bad reasons -- 
 use them in *unexpected* ways.

 For documentation, use the structures provided in RDFS for documentation, 
 such as rdfs:comment.

 rdfs:comment is for prose. We explicitly know the “expected types” of 
 properties, and I'd like to keep that information in a structured form rather 
 than burying it in prose. As far as I can see, rdfs:range is the closest 
 available term in W3C's data modeling toolkit, and it *is* correct as long as 
 data publishers use the terms with the “expected type.”

I don't think it is that close to expected type, or at least it's
kinda back to front. If we have -

:Colour a rdfs:Class .
:hasColour a rdf:Property .
:hasColour rdfs:range :Colour .

- and someone makes a statement

<#something> :hasColour <#wet> .

then we get

<#wet> a :Colour .

no?

so it's not an expectation thing, it's an inference that comes after
the fact...if you see what I mean.

As Pat suggested, I think this could easily lead to unintended conclusions.
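The entailment in question is the RDFS range rule (rdfs3). A toy Python sketch of it, using plain tuples rather than any real RDF library (terms abbreviated, purely illustrative):

```python
# Rule rdfs3: if (?p rdfs:range ?C) and (?s ?p ?o), then (?o rdf:type ?C).
# Triples are plain (subject, predicate, object) tuples.

RDFS_RANGE = "rdfs:range"
RDF_TYPE = "rdf:type"

def range_entailments(triples):
    """Return the set of rdf:type triples entailed by rdfs:range assertions."""
    ranges = {s: o for s, p, o in triples if p == RDFS_RANGE}
    inferred = set()
    for s, p, o in triples:
        if p in ranges:
            inferred.add((o, RDF_TYPE, ranges[p]))
    return inferred

data = {
    (":Colour", RDF_TYPE, "rdfs:Class"),
    (":hasColour", RDF_TYPE, "rdf:Property"),
    (":hasColour", RDFS_RANGE, ":Colour"),
    ("#something", ":hasColour", "#wet"),
}

# the single inferred triple: ('#wet', 'rdf:type', ':Colour')
print(range_entailments(data))
```

Note the rule never rejects anything: `#wet` is simply concluded to be a `:Colour`, which is the "after the fact" inference rather than a validated expectation.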

Cheers,
Danny.







-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-12 Thread Danny Ayers
 (there will be some isomorphism between a thing and a description of a
 thing, right?)

 Absolutely not. Descriptions are not in any way isomorphic to the things they 
 describe. (OK, some 'diagrammatic' representations can be claimed to be, eg 
 in cartography, but even those cases don't stand up to careful analysis. in 
 fact.)

Beh! Some isomorphism is all I ask for. Take your height and shoe size
- those numeric descriptions will correspond 1:1 with aspects of the
reality. Keep going to a waxwork model of you, the path you walked in
the park this afternoon - are you suggesting there's no isomorphism?

 ** To illustrate. Someone goes to a website about dogs, likes one of the 
 dogs, and buys it on-line. He goes to collect the dog, the shopkeeper gives 
 him a photograph of the dog. Um, Where is the dog? Right there, says the 
 seller, pointing to the photograph. That isn't good enough. The seller 
 mutters a bit, goes into the back room, comes back with a much larger, 
 crisper, glossier picture, says, is that enough of the dog for you? But the 
 customer still isn't satisfied. The seller finds a flash card with an 
 hour-long HD movie of the dog, and even offers, if the customer is willing to 
 wait a week or two, to have a short novel written by a well-known author 
 entirely about the dog. But the customer still isn't happy. The seller is at 
 his wits end, because he just doesn't know how to satisfy this customer. What 
 else can I do? He asks. I don't have any better representations of the dog 
 than these. So the customer says, look, I want the *actual dog*, not a 
 representation of a dog. Its not a matter of getting me more information 
 about the dog; I want the actual, smelly animal. And the seller says, what do 
 you mean,  an actual dog? We just deal in **representations** of dogs. 
 There's no such thing as an actual dog. Surely you knew that when you looked 
 at our website?

Lovely imagery, thanks Pat.

But replace "a novel written by a dog" for "dog" in the above. Why
should the concept of a document be fundamentally any different from
the concept of a dog, hence representations of a document and
representations of a dog? Ok, you can squeeze something over the wire
that represents "a novel written by a dog" but you (probably) can't
squeeze a dog over, but that's just a limitation of the protocol.
There's equally an *actual* document (as a bunch of bits) and an
*actual* dog (as a bunch of cells).

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-12 Thread Danny Ayers
On 13 June 2011 02:28, Pat Hayes pha...@ihmc.us wrote:

 Next point: there can indeed be correspondences between the syntactic 
 structure of a description and the aspects of reality it describes.

That is what I was calling isomorphism (which I still don't think was
inaccurate). But ok, say there are correspondences instead. I would
suggest that those correspondences are enough to allow the description
to take the place of a representation under HTTP definitions.

 But I don't think all this is really germane to the http-range-14 issue. The 
 point there is, does the URI refer to something like a representation 
 (information resource, website, document, RDF graph, whatever) or something 
 which definitely canNOT be sent over a wire?

I'm saying conceptually it doesn't matter if you can put it over the
wire or not.

 But replace "a novel written by a dog" for "dog" in the above. Why
 should the concept of a document be fundamentally any different from
 the concept of a dog, hence representations of a document and
 representations of a dog?

 I dont follow your point here. If you mean, a document is just as real as a 
 dog, I agree. So?  But if you mean, there is no basic difference between a 
 document and a dog, I disagree. And so does my cat.

Difference sure, but not necessarily relevant.

 Ok, you can squeeze something over the wire
 that represents  a novel written by a dog but you (probably) can't
 squeeze a dog over, but that's just a limitation of the protocol.

 So improved software engineering will enable us to teleport dogs over the 
 internet? Come on, you don't actually believe this.

It would save a lot of effort sometimes (walkies!) but all I'm
suggesting is that if, hypothetically, you could teleport matter over
the internet, all you'd be looking at as far as http-range-14 is
concerned is another media type. Working back from there, and given
correspondences as above, a descriptive document can be a valid
representation of the identified resource even if that resource
happens to be an actual thing, given that there isn't necessarily any
one true representation. We don't need the Information Resource
distinction here (useful elsewhere maybe).

Cheers,
Danny.

-- 
http://danny.ayers.name



Problems with SPARQL on WWW4 server

2011-03-20 Thread Danny Ayers
(Not sure which list is most appropriate - please advise)

I'm getting a lot of errors querying against:

http://www4.wiwiss.fu-berlin.de/factbook/sparql

while some queries work, e.g.

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?s WHERE {
?s rdfs:label ?label .
FILTER REGEX(?label, "libya", "i")
}

many others fail, e.g.

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT * WHERE {
<http://www4.wiwiss.fu-berlin.de/factbook/resource/Libya> ?p ?o
}

=

...
<h2>HTTP ERROR: 500</h2><pre>rethrew:
de.fuberlin.wiwiss.d2rq.D2RQException: Table 'factbook.neighbors'
doesn't exist: SELECT DISTINCT `T0_neighbors`.`name_encoded` FROM
`bordercountries` AS `T0_bordercountries`, `neighbors` AS
`T0_neighbors`, `countries` AS `T0_countries` WHERE
`T0_bordercountries`.`Landboundaries_bordercountries_title` =
`T0_neighbors`.`Name` AND `T0_bordercountries`.`Name` =
`T0_countries`.`Name` AND `T0_countries`.`name_encoded` = 'Libya'
(E0)</pre>
<p>RequestURI=/sparql</p><p><i><small><a
href="http://jetty.mortbay.org/">Powered by Jetty://</a></small></i></p>
...

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Metaweb joins Google

2010-07-24 Thread Danny Ayers
It's not hard to find reasons to be cynical about Google's move,
mostly around the potential for them to make Freebase their own in the
sense of hiding it within their infrastructure and only exposing
proprietary, user-oriented interfaces - the temptation for Google to
improve aspects of the system, moving them away from standards.

There's certainly some coincidence between Google's aim of being the
one true search engine, and Freebase as *the* knowledge base.

But Metaweb were relatively quick to expose standards-based
interfaces, and their adoption of a CC license for the data has to be
commended. Another thing they got right was in picking up on the ways
people were spontaneously (well, not W3C-led anyway) using data on the
Web - wikis, tagging, folksonomies etc. (You could maybe say Metaweb
had similar aims at the core, but when it came to end-users pretty
much the opposite end of the spectrum from Cyc).

I agree totally with what Aldo and others have said about this being
great for getting the notion of graph out there, the right companies
do now seem to be getting on the bandwagon.

So worst case scenario I'd say would be for Google to play with the
tech, make things more proprietary, not get interesting results and
for the whole thing to wither as a failed experiment.

Best case maybe we see Google rapidly become a huge blob near the
centre of the linked data cloud, and additionally (and probably more
significantly) demonstrate one way the Web of Data can be useful by
enhancing their search engine.

Personally I'm optimistic, and congratulations to both Metaweb and
Google. Should be interesting...

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Subjects as Literals

2010-07-06 Thread Danny Ayers
I've been studiously avoiding this rat king of a thread, but just on
this suggestion:

On 2 July 2010 11:16, Reto Bachmann-Gmuer reto.bachm...@trialox.org wrote:
...
 Serialization formats could support

 "Jo" :nameOf :Jo

 as a shortcut for

 [ owl:sameAs "Jo"; :nameOf :Jo ]

 and a store could (internally) store the latter as

 "Jo" :nameOf :Jo

 for compactness and efficiency.

what about keeping the internal storage idea, but instead of owl:sameAs, using:

:Jo rdfs:value "Jo"

together with

:Jo rdf:type rdfs:Literal

?

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Subjects as Literals

2010-07-06 Thread Danny Ayers
On 6 July 2010 13:34, Nathan nat...@webr3.org wrote:
 Danny Ayers wrote:

 :Jo rdfs:value "Jo"

 together with

 :Jo rdf:type rdfs:Literal

 ?

 1: is there an rdfs:value? (rdf:value)

My mistake, it is rdf:value

 2: I would *love* to see rdf:value with a usable tight definition that
 everybody can rely on

It's certainly usable, but the definition is about as open as it could be:

http://www.w3.org/TR/rdf-primer/#rdfvalue

http://www.w3.org/TR/rdf-schema/#ch_value

http://www.w3.org/TR/rdf-mt/#rdfValue

...in fact it rather resembles a bnode in the property position.
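The storage idea from the previous message, with the rdf:value correction applied, might be sketched like this in Python (the surrogate node naming and all term names are hypothetical, purely to illustrate the mapping):

```python
# Rewrite a triple whose subject is a literal into legal RDF triples:
# a fresh surrogate node carries the literal via rdf:value, marked as
# rdf:type rdfs:Literal, and takes the literal's place as subject.

import itertools

_counter = itertools.count()

def encode(subject_literal, predicate, obj):
    """Encode ("Jo", :nameOf, :Jo) as surrogate-node triples."""
    node = f"_:lit{next(_counter)}"  # fresh surrogate node
    return [
        (node, "rdf:value", subject_literal),
        (node, "rdf:type", "rdfs:Literal"),
        (node, predicate, obj),
    ]

triples = encode("Jo", ":nameOf", ":Jo")
for t in triples:
    print(t)
```

A store could keep the compact literal-subject form internally and expand to this shape only when serializing.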

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Using predicates which have no ontology?

2010-04-02 Thread Danny Ayers
On 3 April 2010 00:53, Nathan nat...@webr3.org wrote:
 Hi All,

 Any guidance on using predicates in linked data / rdf which do not come
 from rdfs/owl. Specifically I'm considering the range of:
  http://www.iana.org/assignments/relation/*

Can't find a URL that resolves there

 such as edit, self, related etc - with additional consideration to the
 thought that these will end up in rdf via RDFa/grddl etc v soon if not
 already.

 Any guidance?

By using something as a predicate you are making statements about it. But...

If you can find IANA terms like this, please use them - though beware
the page isn't the concept. You might have to map them over to your
own namespace, PURL URIs preferred.


-- 
http://danny.ayers.name



Re: KIT releases 14 billion triples to the Linked Open Data cloud

2010-04-01 Thread Danny Ayers
One fact there's no avoiding, the service works! Bravo Denny.

Compelling paper, although more scenarios would be good.

My cousin told me about a cow being stuck in the village post office
this morning, and in both cases things seemed interesting, and
potentially useful towards Web serendipity.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: SPARQL: sorting resources by label?

2010-03-14 Thread Danny Ayers
On 14 March 2010 20:04, Toby Inkster t...@g5n.co.uk wrote:

 I use something like:

        OPTIONAL {
          ?subject ?labelprop ?label .
          GRAPH <http://buzzword.org.uk/2009/label-properties> {
            ?labelprop a
 <http://buzzword.org.uk/2009/label-properties#LabelProperty> .
          }
          FILTER( isLiteral(?label) )
        }

 Having first loaded http://buzzword.org.uk/2009/label-properties into
 the store.

I like your idea of wrapping all the label properties up, but don't
you need to run subproperty inference before the query will work?
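Without inference support, the same effect can be approximated client-side by checking a fixed preference list of label properties. A rough Python sketch (the property list is illustrative, not Toby's actual vocabulary):

```python
# Pick a label for each subject by trying known label properties in
# preference order - a poor man's substitute for subproperty inference.

LABEL_PROPERTIES = ["rdfs:label", "foaf:name", "dc:title", "skos:prefLabel"]

def label_for(subject, triples):
    """Return the first available label for `subject`, by preference order."""
    by_prop = {p: o for s, p, o in triples if s == subject}
    for prop in LABEL_PROPERTIES:
        if prop in by_prop:
            return by_prop[prop]
    return None

triples = [
    (":a", "foaf:name", "Alice"),
    (":b", "dc:title", "Some Document"),
]

# sort resources by whatever label property supplied a value
print(sorted([":a", ":b"], key=lambda s: label_for(s, triples)))
```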



-- 
http://danny.ayers.name



Re: SPARQL: sorting resources by label?

2010-03-12 Thread Danny Ayers
On 13 March 2010 04:16, Axel Rauschmayer a...@rauschma.de wrote:

 Thanks for any comments or suggestions...

I'm a little perturbed that you have to use something so convoluted to
get labels.

Why not something just like (whatever graph) SELECT ?o WHERE { ?s
rdfs:label ?o }, or at worst an OPTIONAL on maybe dc:label or
whatever..?

- are the objects of any labels resources?

Can you please clarify what you are looking for, and explain further -
I honestly hope you are missing something there.

 If there is something wrong with the material, the problems should be
surfaced and fixed (and no doubt will be for the next rev, if need be)

Cheers,
Danny.

http://danny.ayers.name



Re: RDFa for Turtles 2: HTML embedding

2010-03-10 Thread Danny Ayers
On 10 March 2010 18:19, Paul Houle ontolo...@gmail.com wrote:

 <head xmlns="http://www.w3.org/1999/xhtml"
 xmlns:dcterms="http://purl.org/dc/terms/">
     <meta rel="dcterms:creator" content="Ataru Morobishi" />
 </head>

...

 This does bend the XHTML/RDFa standard and also HTML a little (those 
 namespace declarations aren't technically valid)

Sorry, but what's not valid about that?

Cheers,
Danny.

--
http://danny.ayers.name



Re: head/@profile needed in HTML 5? GRDDL in Linked Data community?

2010-03-01 Thread Danny Ayers
I worry that discarding profile URIs may cause problems further down the
line. The cost of keeping it in the spec is virtually zero - it can usually
just be ignored.

We know that things like URI-based extensibility work - dereference for more
info - and having a profile URI leaves the window open for e.g. a proprietary
client to behave in the way expected.

While GRDDL exists, it isn't an argument here - the best use case is with
more XML data oriented sources, and there is no conflict.

For all the wrong reasons, I reckon a doc level profile thing should stay.
If a browser developer (MS, let's say) wishes to make the browsing
experience richer, why not? The appropriate way there is to push an inline
message for further information. If we don't allow it in the head of a
document, where is it to go - x-bollocks headers?

The use of special short strings is very wrong; it closes the door to future
development on a global scale.

My two cents.

Cheers,
Danny.

On 25 February 2010 15:55, Daniel O'Connor daniel.ocon...@gmail.com wrote:

 As someone who's implemented a pretty poor grddl library, I'd have to say
 for real world use, profile isn't needed - it's assumed a lot of the time.

 Practically no authors respect it regarding microformats and friends. I
 found much utility coming from explicitly saying "search this document for
 vocabulary X".

 Most of my implementations were around microformat parsing.

 Right now it sits in the "I wish people used it properly" category for me.




-- 
http://danny.ayers.name


Re: Terminology when talking about Linked Data

2010-02-17 Thread Danny Ayers
PS.
http://www.w3.org/DesignIssues/LinkedData.html

On 17 February 2010 12:00, Danny Ayers danny.ay...@gmail.com wrote:
 For a definition of Linked Data I'd suggest anything that conforms to
 timbl's Linked Data expectations:

   1. Use URIs as names for things
   2. Use HTTP URIs so that people can look up those names.
   3. When someone looks up a URI, provide useful information, using
 the standards (RDF, SPARQL)
   4. Include links to other URIs. so that they can discover more things.

 While Tim only lists RDF & SPARQL as the standards, pragmatically I
 reckon there's a bit of leeway here, e.g. a HTML document or Atom feed
 is likely to contain links and data that can be interpreted as RDF -
 in fact *any* hyperlink could be seen as an RDF statement (maybe
 <docA> dc:relation <docB>), so depending on the context a looser
 definition of linked data as linky stuff doesn't seem unreasonable.

 Cheers,
 Danny.

 --
 http://danny.ayers.name




-- 
http://danny.ayers.name



Re: Terminology when talking about Linked Data

2010-02-17 Thread Danny Ayers
For a definition of Linked Data I'd suggest anything that conforms to
timbl's Linked Data expectations:

   1. Use URIs as names for things
   2. Use HTTP URIs so that people can look up those names.
   3. When someone looks up a URI, provide useful information, using
the standards (RDF, SPARQL)
   4. Include links to other URIs. so that they can discover more things.

While Tim only lists RDF & SPARQL as the standards, pragmatically I
reckon there's a bit of leeway here, e.g. a HTML document or Atom feed
is likely to contain links and data that can be interpreted as RDF -
in fact *any* hyperlink could be seen as an RDF statement (maybe
<docA> dc:relation <docB>), so depending on the context a looser
definition of linked data as linky stuff doesn't seem unreasonable.
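That looser reading can be made concrete. A crude Python sketch (regex-based and illustrative only - a real implementation would use a proper HTML parser) that reads each hyperlink in a document as a dc:relation triple:

```python
# Treat every href in an HTML document as a minimal RDF statement:
# <doc> dc:relation <target>.

import re

def links_as_triples(doc_uri, html):
    """Extract hrefs and emit (doc, dc:relation, target) triples."""
    hrefs = re.findall(r'href="([^"]+)"', html)
    return [(doc_uri, "dc:relation", href) for href in hrefs]

html = '<p>See <a href="http://example.org/docB">docB</a>.</p>'
print(links_as_triples("http://example.org/docA", html))
```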

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: The status of Semantic Web community- perspective from Scopus and Web Of Science (WOS)

2010-02-13 Thread Danny Ayers
In defence of Ying Ding, mapping out the academic citation material is
worthwhile, but I do tend to agree with Dan and Jeremy in that it's
only part of the picture (and almost certainly not the major part).

While I could have a good old rant about the role played by
enthusiastic amateurs (which hopefully is all somewhere archived in
mailing lists such as rdf-dev & xml-dev), something that could more
easily be overlooked is the influence of (developers of) related tech,
in particular things like the rise of REST as *the* practical paradigm
for Web services and the explosion of online social networks, all very
strongly informing the Semantic Web effort.

I would suggest that these outside influences had a lot to do with the
reinvention of the Semantic Web as Linked Data (though timbl is the
authority on that bit of history), rather than either as just a
metadata idea or another kind of expert system.

Cheers,
Danny.

-- 
http://danny.ayers.name



Academic publishing and the Web [was Re: The status of Semantic Web community- perspective from Scopus and Web Of Science (WOS)]

2010-02-13 Thread Danny Ayers
 Irrespective, don't you think HTML or even better an RDF (re. your data
 sources) would be sort of congruent with this entire effort? Dan and others
 could have just slotted URIs into the RDF etc.. and the resource could just
 grow and evenly rid itself of its current contextual short-comings etc..

Absolutely. (The kind of data-heavy material Ying Ding has produced
would be an ideal candidate for expression in a data-oriented form).

 Sorry (for grumpy sounding comment), but PDFs really get under my skin as
 sole mechanism for transmitting data when conversation is about the Semantic
 Web Project etc.. Sadly, this realm is rife with PDF as sole information
 delivery mechanism, even when the conversation is actually about the Web
 (a medium not constructed around Linked PDF documents).

Again, absolutely (and it annoys the tits off me too) - not only PDF
but also PS, and in the odd strange case MS doc format.

Alas it seems academia is largely slow on the uptake when it comes to
publication. I'm sure this is just as frustrating for the individual
that wishes to be published as the rest-of-the-world that wants their
information.

But then again, we still have printed matter...

Cheers,
Danny.


-- 
http://danny.ayers.name



Re: Question about paths as URIs in the BBC RDF

2010-01-29 Thread Danny Ayers
On 29 January 2010 00:31, Nathan nat...@webr3.org wrote:

 I have got one kind of big question; why not just be more verbose and
 include full URIs? if it ensures that the data is always perfect and
 full abstracted from the notion of HTTP (touchy subject?)[1] then why
 not do it?

Including an xml:base would have the same effect as using full URIs
for the same-base links (ensuring the data is always perfect could be
harder!)

I do think adding an xml:base or using absolute URIs would be a good
move, the point re. separation of concerns is a good one.

Which approach is easier to implement is another matter (ease of
implementation here is probably more significant than questions of
verbosity).

In practice I've been caught out numerous times with downloaded
base-free data, winding up with fairly useless file:/// URIs.
Annoying.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Creating JSON from RDF

2009-12-14 Thread Danny Ayers
2009/12/14 Richard Light rich...@light.demon.co.uk:
 In message c74badc3.20683%t.hamm...@nature.com, Hammond, Tony
 t.hamm...@nature.com writes

 Normal developers will always want simple.

 Surely what normal developers actually want are simple commands whereby data
 can be streamed in, and become available programmatically within their
 chosen development environment, without any further effort on their part?

To my mind that's very well put. But I would argue against that with a
cost/benefit case - ok, it's programming hell, but it doubles your
salary - would any of us complain?


 Personally I don't see how providing a format which is easier for humans to
 read helps to achieve this.  Do normal developers like writing text parsers
 so much?

I don't know about you, but anything that helps avoiding writing
parsers is honey to me.

 Give 'em RDF and tell them to develop better toolsets ...

Why not?

 Come to that, RDF-to-JSON conversion could be a downstream service that
 someone else offers.  You don't have to do it all.

I am a bit annoyed we haven't got much pluggability between systems
yet, but strongly believe this is the right track.
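Such a downstream RDF-to-JSON service could be as simple as grouping triples by subject, with multi-valued properties as arrays. A Python sketch of one plausible mapping (non-standard, purely illustrative):

```python
# Convert a flat list of triples into a JSON object keyed by subject.
# Every property maps to a list, since RDF properties can repeat.

import json
from collections import defaultdict

def triples_to_json(triples):
    """Group (s, p, o) triples into {subject: {property: [values]}} JSON."""
    out = defaultdict(lambda: defaultdict(list))
    for s, p, o in triples:
        out[s][p].append(o)
    return json.dumps(out, sort_keys=True)

triples = [
    (":danny", "foaf:name", "Danny Ayers"),
    (":danny", "foaf:topic_interest", "Linked Data"),
    (":danny", "foaf:topic_interest", "SPARQL"),
]
print(triples_to_json(triples))
```

The consumer never sees RDF at all - which is roughly the pluggability argument being made above.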

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Creating JSON from RDF

2009-12-12 Thread Danny Ayers
Jeni, marvellous that you're working on this, and marvellous that
you've thought this through an awful lot already.

I can't offer any real practical suggestions right away (a lot to
digest here!) but one question that I think may have some
significance: you want this to be friendly to normal developers - what
kind of things are they actually used to? Do you have any examples of
existing JSON serializations of relatively complex data structures -
something to give an idea of the target?

While it's a bit apples and oranges, there presumably are plenty of
systems now pushing out JSON rather than the XML they would a few
years ago - is there anything to be learnt from that scenario that can
be exploited for RDF?

Cheers,
Danny.


-- 
http://danny.ayers.name



Re: Ontology Wars? Concerned

2009-11-24 Thread Danny Ayers
Hi Nathan,

A good question, the way it gets answered as far as I can see depends
on what you're after.

Glad to see you're thinking linked data.

But people really do try to overthink it when it comes to ontologies,
in my opinion:  ideally the best ontologies/vocabs will win -

- rubbish.

The ontologies/vocabularies on machines will always be poor
reflections of the things they try to describe. There is a huge amount
of software around these days (and has been for many years) that tries
to describe things. The advantage that the Web languages have is that
it can work in a big distributed environment. Put a marker down (a
URI) for a concept or a dog and it's reusable.

When it comes to multiple ontologies - yes, it's a reality. In
practice maybe it means lots of different clauses in the query - but
that depends on how far you want to ask. What are you interested in?
Certainly good practice says as a publisher of information you should
use existing terms wherever appropriate rather than invent (c'mon,
should I call the creature next to me a wurble or a dog?). But
everyone can make up their own terms, and there's nothing wrong with
that.

To answer your subject line, the only way we can avoid ontology wars
is by making the field flat, globally. I think we have that now, at
least in principle.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Danny Ayers
Good man, I couldn't help thinking there was a paper in that...

2009/11/22 Herbert Van de Sompel hvds...@gmail.com:
 hi all,
 (thanks Chris, Richard, Danny)

 In light of the current discussion, I would like to provide some
 clarifications regarding Memento: Time Travel for the Web, ie the idea of
 introducing HTTP content negotiation in the datetime dimension:
 (*) Some extra pointers:
 - For those who prefer browsing slides over reading a paper, there is
 http://www.slideshare.net/hvdsomp/memento-time-travel-for-the-web
 - Around mid next week, a video recording of a presentation I gave on
 Memento should be available at http://www.oclc.org/research/dss/default.htm
 - The Memento site is at http://www.mementoweb.org. Of special interest may
 be the proposed HTTP interactions for (a) web servers with internal archival
 capabilities such as content management systems, version control systems,
 etc (http://www.mementoweb.org/guide/http/local/) and (b) web servers
 without internal archival capabilities
 (http://www.mementoweb.org/guide/http/remote/).
 (*) The overall motivation for the work is the integration of archived
 resources into regular web navigation by making them available via their
 original URIs. The archived resources we have focused on in our experiments
 so far are those kept by
 (a) Web Archives such as the Internet Archive, Webcite, archive-it.org and
 (b) Content Management Systems such as wikis, CVS, ...
 The reason I pinged Chris Bizer about our work is that we thought that our
 proposed approach could also be of interest in the LoD environment.
  Specifically, the ability to get to prior descriptions of LoD resources by
 doing datetime content negotiation on their URI seemed appealing; e.g. what
 was the dbpedia description for the City of Paris on March 20 2008? This
 ability would, for example, allow analysis of (the evolution of ) data over
 time. The requirement that is currently being discussed in this thread
 (which I interpret to be about approaches to selectively get updates for a
 certain LoD database) is not one I had considered using Memento for,
 thinking this was more in the realm of feed technologies such as Atom (as
 suggested by Ed Summers), or the pre-REST OAI-PMH
 (http://www.openarchives.org/OAI/openarchivesprotocol.html).
 (*) Regarding some issues that were brought up in the discussion so far:
 - We use an X header because that seems to be best practice when doing
 experimental work. We would very much like to eventually migrate to a real
 header, e.g. Accept-Datetime.
 - We are definitely considering and interested in some way to formalize our
 proposal in a specification document. We felt that the I-D/RFC path would
 have been the appropriate one, but are obviously open to other approaches.
 - As suggested by Richard, there is a bootstrapping problem, as there is
 with many new paradigms that are introduced. I trust LoD developers fully
 understand this problem. Actually, the problem is not only at the browser
 level but also at the server level. We are currently working on a FireFox
 plug-in that, when ready, will be available through the regular channels.
 And we have successfully (and experimentally) modified the Mozilla code
 itself to be able to demonstrate the approach. We are very interested in
 getting support in other browsers, natively or via plug-ins. We also have
 some tools available to help with initial deployment
 (http://www.mementoweb.org/tools/ ). One is a plug-in for the mediawiki
 platform; when installed the wiki natively supports datetime content
 negotiation and redirects a client to the history page that was active at
 the datetime requested in the X-Accept-Header. We just started a Google
 group for developers interested in making Memento happen for their web
 servers, content management system, etc.
 (http://groups.google.com/group/memento-dev/).
 (*) Note that the proposed solution also leverages the OAI-ORE specification
 (fully compliant with LoD best practice) as a mechanism to support discovery
 of archived resources.
 I hope this helps to get a better understanding of what Memento is about,
 and what its current status is. Let me end by stating that we would very
 much like to get these ideas broadly adopted. And we understand we will need
 a lot of help to make that happen.
 Cheers
 Herbert
 ==
 Herbert Van de Sompel
 Digital Library Research & Prototyping
 Los Alamos National Laboratory, Research Library
 http://public.lanl.gov/herbertv/
 tel. +1 505 667 1267








-- 
http://danny.ayers.name
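The core negotiation step Memento describes - pick the archived version that was active at the requested datetime - can be sketched minimally. The version data and function here are illustrative, not the actual protocol:

```python
from datetime import datetime

# Hypothetical archived versions of one resource, keyed by the time
# each version became active.
versions = {
    datetime(2008, 3, 1): "/archive/paris-2008-03-01",
    datetime(2009, 6, 15): "/archive/paris-2009-06-15",
    datetime(2009, 11, 1): "/archive/paris-2009-11-01",
}

def negotiate_datetime(accept_datetime):
    """Return the version active at accept_datetime, or None if the
    request predates everything archived."""
    eligible = [t for t in versions if t <= accept_datetime]
    if not eligible:
        return None
    return versions[max(eligible)]

# e.g. "what was the description of Paris on 20 March 2008?"
print(negotiate_datetime(datetime(2008, 3, 20)))
```

A real server would then answer with a redirect to that location, as in the mediawiki plugin described above.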



Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-22 Thread Danny Ayers
2009/11/22 Richard Cyganiak rich...@cyganiak.de:
 On 20 Nov 2009, at 19:07, Chris Bizer wrote:

[snips]

 From a web architecture POV it seems pretty solid to me. Doing stuff via
 headers is considered bad if you could just as well do it via links and
 additional URIs, but you can argue that the time dimension is such a
 universal thing that a header-based solution is warranted.

Sounds good to me too, but x-headers are a jump, I think perhaps it's
a question worthy of throwing at the W3C TAG - pretty sure they've
looked at similar stuff in the past, but things are changing fast...

From what I can gather, proper diffs over time are hard (long before
you get to them logics). But Web-like diffs don't have to be - can't
be any less reliable than my online credit card statement. It's a bit
worrying that there are so many different approaches available -
sounds like there could be a lot of coding time wasted.

But then again, might well be one for evolution - and in the virtual
world trying stuff out is usually worth it.

 The main drawback IMO is that existing clients, such as all web browsers,
 will be unable to access the archived versions, because they don't know
 about the header. If you are archiving web pages or RDF document, then you
 could add links that lead clients to the archived versions, but that won't
 work for images, PDFs and so forth.

Hmm. For one, browsers are in flux; for two, you probably wouldn't
expect that kind of agent to give you anything but the latest.
If I need last year's version, I follow my nose through URIs (as in
svn etc.) - that kind of thing has to be a fallback, imho.

 In summary, I think it's pretty cool.

Cool idea, for sure, and a strong one. OK, temporal stuff should be
available down at quite a low level, especially given that things like
XMPP will be bouncing around - but I reckon Richard's right in
suggesting the plain old URI approach will currently serve most
purposes.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: differentFrom?

2009-11-14 Thread Danny Ayers
2009/11/14 Simon Reinhardt simon.reinha...@koeln.de:

 I definitely think it's useful for Linked Data purposes, just like
 owl:sameAs, IFPs and everything that Allemand and Hendler describe as
 RDFS-Plus (although they don't include owl:differentFrom in that).

Yeah, certainly worth investigating - an owl:differentFrom is also a
link, after all. By definition it's likely to go to data that you don't
actually want, but probably the best way of dealing with that is to
figure out what's needed to get it...then do the opposite.



-- 
http://danny.ayers.name



Re: Linked (genealogy) data

2009-10-02 Thread Danny Ayers
Good stuff!

cc'ing Ian who has worked on this kind of stuff for a while.

My paternal grandfather made a huge tree, did a load of research (Boy
Scout until age of 65). Quite recently I tried to explain to my mother
how this was interesting...not quite sure what she had to hide :)

But this is seriously important work - think genetic illnesses.



2009/10/2 Simon Reinhardt simon.reinha...@koeln.de:
 John Goodwin wrote:

 I've been working on my family tree as linked data in my spare time. A few
 sample URIs for anyone interested:

 http://www.johngoodwin.me.uk/family/I0243
 http://www.johngoodwin.me.uk/family/I0265
 http://www.johngoodwin.me.uk/family/F003

 Birth/death events are linked to places in DBpedia, e.g.

 http://www.johngoodwin.me.uk/family/event1918

 Cool stuff!

 I've thought about this as well for some time and did an initial draft of an
 ontology for very detailed genealogy description:
 http://bloody-byte.net/rdf/genealogy/gen.ttl
 Like Toby I'm using GRAMPS and I have to admit I don't really know much
 about the inner workings of GEDCOM so I don't know what feature is part of
 GRAMPS and what GEDCOM can actually do but anyway, my ontology draft is
 mainly inspired by all the detail you can add to stuff in GRAMPS. :-)
 Comments welcome!

 Regards,
  Simon





-- 
http://danny.ayers.name



Re: how to consume linked data

2009-09-25 Thread Danny Ayers
Many thanks for responses, stuff to think about.

Yihong got to the root of my question: "...miss the main purpose why we
want to have data linked in the first place"

why are places like itsy, youtube and redtube (yup, pr0n still lives)
more compelling, given what we know?

people *are* getting the data out, but it seems to me there's a gap
between that and stuff that actually improves people's quality of
life.

sorry if I sound negative, I reckon the semweb is a done deal now, the
many-eyeballs arrived.

but - where should we take it?




-- 
http://danny.ayers.name



Re: how to consume linked data

2009-09-25 Thread Danny Ayers
2009/9/25 Kjetil Kjernsmo kje...@kjernsmo.net:
 On Friday 25. September 2009 10:15:34 you wrote:
 sorry if I sound negative, I reckon the semweb is a done deal now, the
 many-eyeballs arrived.

 Thanks for asking the right questions, Danny, I believe it is critical for
 the success that someone does!

Thanks, but I'm not even sure they are the right questions.

 but - where should we take it?

 What I'd like to do with it, is to solve problems for people when combining
 data sets that cannot be solved by conventional means, i.e. today the
 number of people who are interested in a particular combination of datasets
 goes down whereas the cost generally goes up, so it doesn't scale.

Yes, but please bear with me now - do we have to wait for another
generation arriving on the Web? There must be ways we can kick-start
this stuff.

 I think there is a critical piece of technology that is missing in our
 arsenal, namely a (free software) programming stack that makes a large
 group of developers, who are likely to have little prior understanding of
 semweb, go "yeah, I can do that".

Like bengee's ARC2 stack - PHP?

 I think the work done by the Drupal folks is a right step in this
 direction, for the kind of stuff that people use a CMS for. But I think
 that we also need a stack, probably built around the MVC pattern, that can
 be used for more generic purposes.

Absolutely. If we can re-use existing patterns we can get people involved.

 I haven't got anywhere with my ideas on this topic though...

Me neither :)

I should insert a Star Trek quote here, but can't think of one.

Cheers,
Danny.



-- 
http://danny.ayers.name



Re: how to consume linked data

2009-09-25 Thread Danny Ayers
2009/9/25 Juan Sequeda juanfeder...@gmail.com:
 Linked Data is out there. Now it's time to develop smart (personalized)
 software agents to consume the data and give it back to humans.

I don't disagree, but I do think the necessary agents aren't smart,
just stupid bots (aka Web services a la Fielding).

 try also using SQUIN (www.squin.org)

Thanks, not seen before.


-- 
http://danny.ayers.name



Re: how to consume linked data

2009-09-25 Thread Danny Ayers
Olaf, comments?

2009/9/25 Leo Sauermann leo.sauerm...@dfki.de:
 Uh, I thought the answer to danny's question is semwebclient by Olaf Hartig
 and others.

 http://www4.wiwiss.fu-berlin.de/bizer/ng4j/semwebclient/

 In general, I thought that Olaf Hartig would be the first contact for such
 things...

 best
 Leo


 It was Danny Ayers who said at the right time 24.09.2009 09:59 the following
 words:

 The human reading online texts has a fair idea of what is and what
 isn't relevant, but how does this work for the Web of data? Should we
 have tools to just suck in any nearby triples, drop them into a model,
 assume that there's enough space for the irrelevant stuff, filter
 later?

 How do we do (in software) things like directed search without the human
 agent?

 I'm sure we can get to the point of - analogy -  looking stuff up in
 Wikipedia & picking relevant links, but we don't seem to have the user
 stories for the bits linked data enables. Or am I just
 imagination-challenged?

 Cheers,
 Danny.




 --
 _
 Dr. Leo Sauermann       http://www.dfki.de/~sauermann
 Deutsches Forschungszentrum fuer Kuenstliche Intelligenz DFKI GmbH
 Trippstadter Strasse 122
 P.O. Box 2080           Fon:   +43 6991 gnowsis
 D-67663 Kaiserslautern  Fax:   +49 631 20575-102
 Germany                 Mail:  leo.sauerm...@dfki.de

 Geschaeftsfuehrung:
 Prof.Dr.Dr.h.c.mult. Wolfgang Wahlster (Vorsitzender)
 Dr. Walter Olthoff
 Vorsitzender des Aufsichtsrats:
 Prof. Dr. h.c. Hans A. Aukes
 Amtsgericht Kaiserslautern, HRB 2313
 _





-- 
http://danny.ayers.name



Re: ...and that URI

2009-09-16 Thread Danny Ayers
2009/9/16 Norman Gray nor...@astro.gla.ac.uk:

 Ooops, sorry.  I've sent the comments off to their proper place.

ditto



-- 
http://danny.ayers.name



Re: ...and that URI

2009-09-15 Thread Danny Ayers
Thanks Raphaël - great stuff! I very much like the 5 aspects (though
visually it's begging for a 6th - not that I can think of one :)

2009/9/15 Raphaël Troncy raphael.tro...@cwi.nl:

 http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/

 The W3C acknowledge the value of this stuff, no? So shouldn't it be
 available (via redirect or whatever) from somewhere more like :

 http://w3.org/lod

 I guess you are not aware of the re-design of the W3C web site.
 Check the beta version of the upcoming SW page:
 http://beta.w3.org/standards/semanticweb/ (LOD is upfront :-)
 Cheers.

  Raphaël

 --
 Raphaël Troncy
 EURECOM, Multimedia Communications Department
 2229, route des Crêtes, 06560 Sophia Antipolis, France.
 e-mail: raphael.tro...@eurecom.fr  raphael.tro...@gmail.com
 Tel: +33 (0)4 - 9300 8242
 Fax: +33 (0)4 - 9000 8200
 Web: http://www.cwi.nl/~troncy/




-- 
http://danny.ayers.name



dbpedia not very visible, nor fun

2009-09-14 Thread Danny Ayers
It seems I have a Wikipedia page in my name (ok, I only did fact-check
edits, ok!?). So tonight I went looking for the corresponding triples,
looking for my ultimate URI...

Google dbpedia = front page, with news

on the list on the left is "Online Access".

what do you get?

[[
The DBpedia data set can be accessed online via a SPARQL query
endpoint and as Linked Data.

Contents
1. Querying DBpedia
1.1. Public SPARQL Endpoint
1.2. Public Faceted Web Service Interface
1.3. Example queries displayed with the Berlin SNORQL query explorer
1.4. Examples rendering DBpedia Data with Google Map
1.5. Example displaying DBpedia Data with Exhibit
1.6. Example displaying DBpedia Data with gFacet
2. Linked Data
2.1. Background
2.2. The DBpedia Linked Data Interface
2.3. Sample Resources
2.4. Sample Views of 2 Sample DBpedia Resources
3. Semantic Web Crawling Sitemap
]]

Yeah. Unless you're a triplehead none of these will mean a thing. Even
then it's not obvious.

Could someone please stick something more rewarding near the top! I
don't know, maybe a Google-esque text entry form field for a regex on
the SPARQL. Anything but blurb.

Even being relatively familiar with the tech, I still haven't a clue
how to take my little query (do I have a URI here?) forward.

Presentation please.

Cheers,
Danny.

-- 
http://danny.ayers.name



...and that URI

2009-09-14 Thread Danny Ayers
http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/

The W3C acknowledge the value of this stuff, no? So shouldn't it be
available (via redirect or whatever) from somewhere more like :

http://w3.org/lod

??




-- 
http://danny.ayers.name



Anyone RDFized IP address geolocation data?

2009-08-01 Thread Danny Ayers
I'm after an RDF dump so I can get client country from IP address.

There's a source for the raw data at :

http://www.ipinfodb.com/ip_database.php

but if someone's already converted such stuff it'll save me time.

Also, what would be my best bet for mapping country codes (US etc)
to the country's entry in dbpedia (or nearby)? What I'm aiming for is
something like:

[ a IPMapping ;
  ipAddressRange "123.321.0.0" ;
  country <#country> ] .

<#country> a Country ;
  countryCode "XX" .

Suggestions on whether I should retain/modify the original indexing
and appropriate vocabs appreciated too :)
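For what it's worth, here's a minimal sketch of RDFizing one row of the raw data - the vocabulary terms and the DBpedia-by-page-name link are guesses, exactly the things I'm asking about:

```python
# One row as it appears in the raw CSV dumps (values made up; the
# DBpedia page name is a naive guess that a real mapping would verify).
row = {"ip_start": "123.45.0.0", "ip_end": "123.45.255.255",
       "code": "US", "name": "United_States"}

# Hypothetical vocabulary under an example namespace.
turtle = """@prefix : <http://example.org/ipgeo#> .

[ a :IPMapping ;
  :ipRangeStart "{ip_start}" ;
  :ipRangeEnd "{ip_end}" ;
  :country :{code} ] .

:{code} a :Country ;
  :countryCode "{code}" ;
  :dbpediaCandidate <http://dbpedia.org/resource/{name}> .
""".format(**row)

print(turtle)
```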

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: looking for an event ontology/vocabulary

2009-07-29 Thread Danny Ayers
2009/7/29 Pat Hayes pha...@ihmc.us:

 Indeed. However, it suffers from one glaring defect, which may simply be a
 problem of documentation: i does not explain its terms.

Documentation is a pretty common problem...

 In particular, it
 refers to a 'factor' of an event, without anywhere saying anything, either
 in the axioms or in the documentation, to explain what this strange term is
 supposed to mean. It is not normal English usage to refer to a 'factor' of
 an event, so ordinary English usage is no guide.

Googling "define: factor" gives me a bunch of definitions, the first
two of which are:

[[
# anything that contributes causally to a result; a number of factors
determined the outcome
# component: an abstract part of something; jealousy was a component
of his character; two constituents of a musical composition are
melody and harmony; the grammatical elements of a sentence; a key
factor in her success; humor: an effective ingredient of a speech
]]

Either of which could be applicable to an event: something can cause
an event; something can be a part of an event. Significantly different
IMHO.

So I'd suggest that the problem isn't lack of a human language
definition...it's having too many.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Excellent News for LOD: Yahoo Provides Tool for RDFa+GoodRelations for Site Owners

2009-07-16 Thread Danny Ayers
Chipping in a little late - yep, this really is excellent news.

E-commerce was a huge driver for the Web (/me sidesteps the bust),
there's every reason it could be a shot in the arm for the semweb too.

Also the lure of shopkeeper $$$s makes this kind of thing great
pedagogical material - note the old MS demo Northwind database (btw
RDFized by Kingsley & co.) and Java EE Pet Store [1].

Speaking of $$$s, I reckon there's a significant market opening for an
out-of-the-box semweb-enabled online store 'solution'.

 I will publish a more complete example at

 http://www.ebusiness-unibw.org/wiki/GoodRelations_Examples

Good-oh.

 I am currently in contact with Google and trying to communicate the
 advantages of using standard vocabularies for meta-data, and I think there
 are strong arguments.

Good man.

 Also note that it is fairly easy to transform any Google-specific mark-up
 into standardized GoodRelations mark-up, and that several related tools are
 in the making.

Good to hear.

Incidentally I've still not got around to figuring out the part-whole
product description as mentioned at [2] (s/Tinocaster/Vinocaster -
there was a preexisting Tinocaster :) so any suggestions there would
still be appreciated.

Cheers,
Danny.

[1] http://java.sun.com/developer/releases/petstore/
[2] http://lists.w3.org/Archives/Public/semantic-web/2007Apr/0024.html

-- 
http://danny.ayers.name



Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-03 Thread Danny Ayers
2009/7/2 Bill Roberts b...@swirrl.com:
 I thought I'd give the .htaccess approach a try, to see what's involved in
 actually setting it up.  I'm no expert on Apache, but I know the basics of
 how it works, I've got full access to a web server and I can read the online
 Apache documentation as well as the next person.

I've tried similar, even stuff using PURLs - incredibly difficult to
get right. (My downtime overrides all, so I'm not even sure if I got
it right in the end)

I really think we need a (copy  paste) cheat sheet.

Volunteers?

Cheers,
Danny.


-- 
http://danny.ayers.name



Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-02 Thread Danny Ayers
2009/7/2 Linde, A.E. ae...@leicester.ac.uk:

 Could someone summarise this thread in a single (unbiased?) post, please?

I'll try to answer the questions, even though I've only skimmed the thread...

 a) what is/are the blocks on LOD via RDF

The vast majority of publication tools and supporting services are
geared towards publishing HTML. While a key piece of Web architecture
is the ability to publish multiple representations of a given
resource (e.g. both HTML and RDF/XML format documents with a single
URI through content negotiation), the mechanisms needed to do this are
often unavailable from regular hosting services. Similarly the
redirect handling needed to provide a description of a resource that
cannot appear directly on the Web - things, people etc - is also not
possible.
Typically these would be done through using .htaccess files on Apache.
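The server-side decision itself is simple enough to sketch - a toy matcher that ignores q-values and wildcards (which a real implementation must handle), with made-up paths:

```python
def negotiate(accept_header, available):
    """Pick a representation for an Accept header.

    Deliberately minimal: first listed media type that the server can
    supply wins; q-values and wildcards are ignored.
    """
    for media_type in (t.split(";")[0].strip()
                       for t in accept_header.split(",")):
        if media_type in available:
            return available[media_type]
    return None

# Two representations of one resource, as in the conneg recipes.
representations = {
    "text/html": "/people/alice.html",
    "application/rdf+xml": "/people/alice.rdf",
}

print(negotiate("application/rdf+xml,text/html;q=0.9", representations))
```

The practical difficulty isn't the logic, it's that hosting services often give you no hook (like .htaccess) to install any such logic at all.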

 b) how does RDFa help and what are its own failings;

RDFa allows the RDF to be published in a HTML document, so content
negotiation isn't needed. You get two representations in one.

Again tool support is a problem, although with RDFa being a new spec
the situation is bound to improve.

GRDDL may also be a useful alternative if the source data is available
in an XML format.

 c) what are the recipes for making data discoverable, linkable and usable

there are recipes at:
http://www4.wiwiss.fu-berlin.de/bizer/pub/LinkedDataTutorial/
though perhaps a cheat sheet would be a good idea?

  if i) one has full access to a server;

this is pretty well documented, e.g. as above

 ii) one has only user directory access to a server;

while this may often be the same as i), generally I'd suggest it's a
case-by-case thing, depending on the web server configuration

 iii) one does not know or care what a server is.

Depending on the nature of the data, it may be possible to use one of
the semweb-enabled document-first publishing tools (a semantic wiki or
CMS). Alternately a relational DB to RDF mapping tool may help.

But best bet right now though would be to have a word with someone
offering linked data publishing services - Talis or OpenLink, may be
others.

I've no doubt missed a lot of points and alternate approaches, but
these were top of my own mental heap :)

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: RDFa vs RDF/XML and content negotiation

2009-06-24 Thread Danny Ayers
Thank you for the excellent questions, Bill.

Right now IMHO the best bet is probably just to pick whichever format
you are most comfortable with (yup it depends) and use that as the
single source, transforming perhaps with scripts to generate the
alternate representations for conneg.

As far as I'm aware we don't yet have an easy templating engine for
RDFa, so I suspect having that as the source is probably a good choice
for typical Web applications.

As mentioned already GRDDL is available for transforming on the fly,
though I'm not sure of the level of client engine support at present.
Ditto providing a SPARQL endpoint is another way of maximising the
surface area of the data.

But the key step has clearly been taken, that decision to publish data
directly without needing the human element to interpret it.

I claim *win* for the Semantic Web, even if it'll still be a few years
before we see applications exploiting it in a way that provides real
benefit for the end user.

my 2 cents.

Cheers,
Danny.



Redundancy (was Re: RDFa vs RDF/XML and content negotiation)

2009-06-24 Thread Danny Ayers
2009/6/24 Ivan Herman i...@w3.org:

 With the
 increasing popularity of RDFa our system guys have already complained
 about sudden server request surges on that service. Ie, although it is
 fine to use the service as it is in the .htaccess example (with full
 URI-s, though) if you (or anybody else) uses it with a large number of
 calls, it is better to install the service locally and run it from there
 (it is a bunch of python files, it should not be difficult to install it).

Ivan, do you know of any easy transparent way for an agent to choose
another equivalent service if there are load issues?



-- 
http://danny.ayers.name



Re: http://ld2sd.deri.org/lod-ng-tutorial/

2009-06-24 Thread Danny Ayers
While we could have countless arguments over the appropriateness of DL
(or OWL 2) in the Web environment, the bottom line is whether or not
owl:imports adds useful information - seems hard to see a problem with
that, whether agents can reason or not. The follow your nose thing.
What's the problem with more data?



-- 
http://danny.ayers.name



Re: Redundancy (was Re: RDFa vs RDF/XML and content negotiation)

2009-06-24 Thread Danny Ayers
2009/6/24 Ivan Herman i...@w3.org:

 Unfortunately, no:-(

concise, but to the point, thanks :)

-- 
http://danny.ayers.name



Re: RDFa vs RDF/XML and content negotiation

2009-06-24 Thread Danny Ayers
Ivan, two words : more python!

2009/6/24  bill.robe...@planet.nl:
 Ivan

 Thanks very much.  I'll take a look at your python scripts, which should be
 very useful.

 Cheers

 Bill
 
 Van: Ivan Herman [mailto:i...@w3.org]
 Verzonden: wo 24-6-2009 9:14
 Aan: Bill Roberts
 CC: public-lod@w3.org
 Onderwerp: Re: RDFa vs RDF/XML and content negotiation

 Bill,

 a while ago I wrote a blog on how I do it on the Semantic Web Activity
 home page:

 http://www.w3.org/QA/2008/05/using_rdfa_to_add_information.html

 the blog is from the early days of RDFa, some of the specific issues may
 be different today (see below), but the overall line, I believe, works
 well. It may be helpful...

 What is different or should be different:

 - The .htaccess example refers to the RDFa distiller at W3C (which,
 well, I wrote, so of course I had to eat my own dogfood:-). With the
 increasing popularity of RDFa our system guys have already complained
 about sudden server request surges on that service. Ie, although it is
 fine to use the service as it is in the .htaccess example (with full
 URI-s, though) if you (or anybody else) uses it with a large number of
 calls, it is better to install the service locally and run it from there
 (it is a bunch of python files, it should not be difficult to install it).

 (Of course, an alternative is to run the script only once, when updating
 the html file. But, if not done manually, this needs some server magic...)

 - I use http://www.w3.org/2001/sw/ as an example, though _that_ one has
 changed a little bit and is more complicated today (Essentially, the
 HTML file has become too large and I had to cut into several files, so I
 have to merge the RDF graphs. This is something different...)

 Cheers

 Ivan


 Bill Roberts wrote:
 Thanks everyone who replied.

 It seems that there's a lot of support for the RDFa route in that
 (perhaps not statistically significant) sample of opinion.  But to
 summarise my understanding of your various bits of advice:  since there
 aren't currently so many applications out there consuming RDF, a good
 RDF publisher should provide as many options as possible.

 Therefore rather than deciding for either RDFa or a content-negotiated
 approach, why not do both (and provide a dump file too)

 Cheers

 Bill




 --

 Ivan Herman, W3C Semantic Web Activity Lead
 Home: http://www.w3.org/People/Ivan/
 mobile: +31-641044153
 PGP Key: http://www.ivan-herman.net/pgpkey.html
 FOAF: http://www.ivan-herman.net/foaf.rdf




-- 
http://danny.ayers.name



Re: Contd LOD Data Sets, Licensing, and AWS

2009-06-24 Thread Danny Ayers
2009/6/25 Ian Davis li...@iandavis.com:

 I think the onus is on the consumer to ensure they abide by the supplier's
 wishes, not the other way round. It's really a matter of respect and
 politeness to give people the credit they ask for.

Certainly in principle, but the supplier should know what they are
doing. It would be their loss after all.



-- 
http://danny.ayers.name



Re: Common Tag - semantic tagging convention

2009-06-18 Thread Danny Ayers
2009/6/18 Bernard Vatant bernard.vat...@mondeca.com:

 - The date seems to me a very important piece of information, in particular
 if you look at tags from the vocabulary management, and/or search engines
 viewpoint. First, labels change more often than concepts, and second, a
 search engine would be happy to leverage date information to show trendy
 concepts and tags: which concepts were used as tags today or in the past
 week/month/year.
 - And to follow François, I'm very surprised not to find any taggedBy
 property in the vocabulary.

I'm won over by the arguments for offering support for the date, but...

 That said, why not use simply dc:creator and dc:date to this effect?

Right. dc:date would seem a good choice, though I reckon foaf:maker
might be a better option than dc:creator as the object is a resource
(a foaf:Agent) rather than a literal. While it's likely to mean an
extra node in many current scenarios, it offers significantly more
prospect for linking data (and less ambiguity).
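The difference shows up even with toy triples (URIs illustrative):

```python
# A Common Tag tagging, said two ways, as plain (s, p, o) tuples.
tagging = "http://example.org/tagging/42"
agent = "http://danny.ayers.name/#me"

# dc:creator with a literal object: a bare string, a dead end for linking.
with_literal = [(tagging, "dc:creator", "Danny Ayers")]

# foaf:maker with a resource object: one extra node, but now triples
# from elsewhere about the same agent merge on the shared URI.
with_resource = [
    (tagging, "foaf:maker", agent),
    (agent, "foaf:name", "Danny Ayers"),
]
elsewhere = [(agent, "foaf:weblog", "http://dannyayers.com")]

# Everything known about the agent can be collected by URI:
merged = with_resource + elsewhere
about_agent = [t for t in merged if t[0] == agent]
print(about_agent)
```

With the literal form there is nothing to merge on except the string itself - hence the ambiguity.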

btw, has anyone had chance to revisit the mappings?

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: Bestbuy.com goes Semantic Web the GoodRelations Way

2009-06-08 Thread Danny Ayers
I heard about this elsewhere (?) - I personally think it's hugely
significant. Commerce means a lot to a lot of people, this is
mainstreaming. Bravo!

2009/6/8 Martin Hepp (UniBW) h...@ebusiness-unibw.org:
 Hi all:

 Good news:

 Bestbuy.com goes Semantic Web the GoodRelations Way

 http://tinyurl.com/bestbuy-goodrelations

 Best
 Martin

 --
 --
 martin hepp
 e-business & web science research group
 universitaet der bundeswehr muenchen

 e-mail: mh...@computer.org
 phone:  +49-(0)89-6004-4217
 fax:    +49-(0)89-6004-4620
 www:    http://www.unibw.de/ebusiness/ (group)
        http://www.heppnetz.de/ (personal)
 skype:  mfhepp


 Check out the GoodRelations vocabulary for E-Commerce on the Web of Data!
 

 Webcast explaining the Web of Data for E-Commerce:
 -
 http://www.heppnetz.de/projects/goodrelations/webcast/

 Tool for registering your business:
 --
 http://www.ebusiness-unibw.org/tools/goodrelations-annotator/

 Overview article on Semantic Universe:
 -
 http://tinyurl.com/goodrelations-universe

 Project page and resources for developers:
 -
 http://purl.org/goodrelations/

 Upcoming events:
 ---
 Full-day tutorial at ESWC 2009: The Web of Data for E-Commerce in One Day: A
 Hands-on Introduction to the GoodRelations Ontology, RDFa, and Yahoo!
 SearchMonkey

 http://www.eswc2009.org/program-menu/tutorials/70

 Talk at the Semantic Technology Conference 2009: Semantic Web-based
 E-Commerce: The GoodRelations Ontology
 More information: http://www.semantic-conference.com/session/1881/

 Slides: http://tinyurl.com/semtech-hepp







-- 
http://danny.ayers.name



Re: Extracting RDF triples

2009-06-05 Thread Danny Ayers
Hi Luciano,

There are also a few online tools that will do the conversion, offhand
I can only remember http://triplr.org

e.g.
http://triplr.org/ntriples/www.w3.org/People/Berners-Lee/card.rdf
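For illustration, such a converter takes RDF/XML and emits the equivalent N-Triples. A minimal made-up example (not triplr's literal output):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person rdf:about="http://example.org/alice#me">
    <foaf:name>Alice</foaf:name>
  </foaf:Person>
</rdf:RDF>
```

would come out as:

```
<http://example.org/alice#me> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
<http://example.org/alice#me> <http://xmlns.com/foaf/0.1/name> "Alice" .
```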

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: ANN: sameas.org

2009-06-03 Thread Danny Ayers
2009/6/3 Hugh Glaser h...@ecs.soton.ac.uk:
 We are pleased to offer http://sameas.org/ as a service to provide you with
 help finding URIs.

Great stuff!

 We believe that the Semantic Web and Linked Data need to develop clear,
 focussed, services that only do one or two things, so that they can be
 composed and utilised by the more complex services, as well as facilitating
 re-use.

I agree wholeheartedly. There is  still a place for complex monolithic
services (obviously, all scales, quasi-fractal), but I reckon the
URI-addressable minimal-function combinable service is more of a Webby
way to go. Also seems to me the future of agents - comparatively dumb
little services that gain their power through interaction with other
services/agents.

On identi.ca, Davide Palmisano (@dpalmisano) commented that
http://sameas.org was very close to something he was delivering -
which points to other necessary parts of the puzzle: redundancy and
service descriptions (so if a particular service is down or under
stress an equivalent can be substituted). I know quite a few folks
have spent time on vocabularies for service descriptions but must
confess I've no idea what the state of the art is. Pointers for things
suitable for this scenario anyone?

Cheers,
Danny.



-- 
http://danny.ayers.name



Re: BobQL? Boxes of (related) boxes ...

2009-05-31 Thread Danny Ayers
2009/5/31 Dan Brickley dan...@danbri.org:

 Box 1: A journey exploring information about presidents, their kids and
 their education...
 box 1.1: All things that are US presidents
 box 1.2: All things that are children of things in bag_1.1
 box 1.3: All things that are educational institutions, attended by things in
 bag 1.2
 bag1.4: All things that are places that are locations of things in bag
 1.3...

 Box 2: A journey into info about hong kong skyscrapers, their designers, and
 the buildings those designers have made
 box 2.1: All things that are skyscrapers in Hong Kong
 box 2.2: All things that are the architects of things in bag 2.1
 box 2.3: All things that are buildings designed by things that are in bag
 2.2 ...etc

 Note: each sub-box is an RDFesque expression couched in terms of types and
 relations, with a reference to the set of things handed along from the
 previous box. In theory each of these could also be evaluated with different
 according to... criteria, which could map into SPARQL GRAPH provenance,
 different databases, or various other ways of indicating who-said-what.

Not thought this through, but first impression is that this sounds
like what you could get using CONSTRUCTs on the wider space to build a
local graph-box (fairly transparent, more filtering than
transforming), and CONSTRUCTing again on that until there's just a wee
bit left which could be rendered using SELECT etc.
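A sketch of what one such filtering CONSTRUCT step might look like, following the presidents example (the prefix and property names here are assumptions, not from any particular dataset):

```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>

# Box 1.1/1.2: copy just presidents and their children into a local
# graph, which a later CONSTRUCT (or the final SELECT) can narrow further.
CONSTRUCT {
  ?president a dbo:President ;
             dbo:child ?child .
}
WHERE {
  ?president a dbo:President ;
             dbo:child ?child .
}
```

Because the CONSTRUCT template mirrors the WHERE pattern, each step is a filter-copy rather than a transformation, which matches the box-of-boxes idea.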

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: owl:sameAs links from OpenCyc to WordNet

2009-02-23 Thread Danny Ayers
2009/2/23 Dan Brickley dan...@danbri.org:

 Having these associations is still great, so thanks for all the work putting
 this together. I'd suggest making up a custom relationship name for now to
 link from a class to a related Wordnet synset.

I thought maybe SKOS [1] might have a suitable term, but alas not. The
nearest I can find on a quick vocab skim (see [2]) are variations on
'label', but these are unsuitable not only because they're a bit weak,
but also because their range is a literal. So +1 to making up a custom
term.
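For instance (every name here is hypothetical, just to show the shape such a term might take):

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/ns#> .

ex:relatedSynset a rdf:Property ;
    rdfs:comment "Relates a class to an associated WordNet synset; deliberately weaker than owl:sameAs, with a resource (not a literal) as its range." .

# usage: an OpenCyc-style class linked to a synset resource
ex:Dog ex:relatedSynset <http://example.org/wordnet/synset-dog-noun-1> .
```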

Cheers,
Danny.

[1] http://www.w3.org/2004/02/skos/
[2] http://schemacache.test.talis.com/

-- 
http://danny.ayers.name



Re: studies about LD visibility

2009-01-27 Thread Danny Ayers
2009/1/27 Jun Zhao jun.z...@zoo.ox.ac.uk


  Our projects have been supporting the needs from users who have little or
 no techy background. They quite buy the idea of Semantic Web, for making it
 easier to mash up datasets, technically speaking. However, we are still
 looking for compelling cases to show that there are things that cannot be
 done without LD. And I would really want to know what the LD community feels
 about how far the LD technology is reaching outside the SW community.


very good questions, I hope we have answers soon...



 Looking forward to SQUIN:-)


me too :-)




 Jun




-- 
http://danny.ayers.name


linked data tutorial, lessons learnt

2009-01-23 Thread Danny Ayers

Somewhat late, but maybe useful for someone:

At the Italian semweb conference SWAP 2008 [1] I did a tutorial, about
15-20 attendees.

The first half, a presentation, was straightforward and I think
functional; I used a cut-down & tweaked version [2] of the slides
ChrisB & co. used in Karlsruhe.

The second half, hands-on, would have been a major embarrassment had it
not been for danbri. I wasn't sure of the demographic, so had decided
to play it by ear, something using FOAF. The night before it occurred
to me to set up ftp on my server so people could actually publish
stuff...after several hours I had no joy on the admin. Then I thought
it would be *much* neater to set things up for POSTing stuff to the
server via HTTP. Naturally I didn't get my Apache config sorted in
time.

My fallback was ssh. Which didn't work on the venue's network.

Dan bailed me out by suggesting the use of a Wiki for upload, and he
had an ARC install to play with, so the session basically turned into
a SPARQL tutorial. It was rather clumsy - e.g. I was using the
computer in the venue, Windows XP and IE are not friendly to even
things like text editing.

Lessons learnt: mostly preparation might have helped :-)

Cheers,
Danny.

[1] http://www.swapconf.it/2008
[2] http://www.swapconf.it/2008/tutorial_day.php

-- 
http://danny.ayers.name



Re: linked data tutorial, lessons learnt

2009-01-23 Thread Danny Ayers

PS. The mighty capable organizers, Aldo and Valentino, got an art
exhibition co-located with the conference. Getting a bit of culture in
there was brilliant :-)

2009/1/23 Danny Ayers danny.ay...@gmail.com:
 Somewhat late, but maybe useful for someone:

 At the Italian semweb conference SWAP 2008 [1] I did a tutorial, about
 15-20 attendees.

 The first half, a presentation was straightforward and I think
 functional, I used a cut-down  tweaked version [2] of the slides
 ChrisB  co. used in Karlsruhe.

 The second half hands-on would have been a major embarrassment had it
 not been for danbri. I wasn't sure of the demographic, so had decided
 to play it by ear, something using FOAF. The night before it occurred
 to me to set up ftp on my server so people could actually publish
 stuff...after several hours I had no joy on the admin. Then I thought
 it would be *much* neater to set things up for POSTing stuff to the
 server via HTTP. Naturally I didn't get my Apache config sorted in
 time.

 My fallback was ssh. Which didn't work on the venue's network.

 Dan bailed me out by suggesting the use of a Wiki for upload, and he
 had an ARC install to play with, so the session basically turned into
 a SPARQL tutorial. It was rather clumsy - e.g. I was using the
 computer in the venue, Windows XP and IE are not friendly to even
 things like text editing.

 Lessons learnt: mostly preparation might have helped :-)

 Cheers,
 Danny.

 [1] http://www.swapconf.it/2008
 [2] http://www.swapconf.it/2008/tutorial_day.php

 --
 http://danny.ayers.name




-- 
http://danny.ayers.name



Re: linked data tutorial, lessons learnt

2009-01-23 Thread Danny Ayers

eeek!
s/Valentino/Valentina

2009/1/23 Danny Ayers danny.ay...@gmail.com:
 PS. The mighty capable organizers, Aldo and Valentino, got an art
 exhibition co-located with the conference. Getting a bit of culture in
 there was brilliant :-)

 2009/1/23 Danny Ayers danny.ay...@gmail.com:
 Somewhat late, but maybe useful for someone:

 At the Italian semweb conference SWAP 2008 [1] I did a tutorial, about
 15-20 attendees.

 The first half, a presentation was straightforward and I think
 functional, I used a cut-down  tweaked version [2] of the slides
 ChrisB  co. used in Karlsruhe.

 The second half hands-on would have been a major embarrassment had it
 not been for danbri. I wasn't sure of the demographic, so had decided
 to play it by ear, something using FOAF. The night before it occurred
 to me to set up ftp on my server so people could actually publish
 stuff...after several hours I had no joy on the admin. Then I thought
 it would be *much* neater to set things up for POSTing stuff to the
 server via HTTP. Naturally I didn't get my Apache config sorted in
 time.

 My fallback was ssh. Which didn't work on the venue's network.

 Dan bailed me out by suggesting the use of a Wiki for upload, and he
 had an ARC install to play with, so the session basically turned into
 a SPARQL tutorial. It was rather clumsy - e.g. I was using the
 computer in the venue, Windows XP and IE are not friendly to even
 things like text editing.

 Lessons learnt: mostly preparation might have helped :-)

 Cheers,
 Danny.

 [1] http://www.swapconf.it/2008
 [2] http://www.swapconf.it/2008/tutorial_day.php

 --
 http://danny.ayers.name




 --
 http://danny.ayers.name




-- 
http://danny.ayers.name



Re: linked data tutorial, lessons learnt

2009-01-23 Thread Danny Ayers

2009/1/23 Juan Sequeda juanfeder...@gmail.com:
 Hi Danny

 Thanks for the info. Could you tell me who were the attendees? I mean, were
 they already knowledgeable about LD and Semantic Web?

I asked who had a FOAF profile and nobody raised their hand (although,
to my knowledge, at least one person had one).

Were they researchers
 or developers?

I honestly don't know. I would guess mostly developers/coders.

 What was the final outcome? People ready to jump in the LD
 world, or just wanting to get a brief introduction?

On that I really, really don't know - I suspect a brief introduction,
I'll cc danbri to see what he reckons.

What was extremely good was that this tutorial was arranged as intro
bits to the semweb conference, the organizers (Valentina with an 'a'!
and Aldo) recognised the relevance. Having experts in the field
acknowledge the significance of linked data is a good start, imho.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: studies about LD visibility

2009-01-22 Thread Danny Ayers

2009/1/21 Olaf Hartig har...@informatik.hu-berlin.de:

 Does someone know a study that investigates whether people from different
 communities know about linked data and are aware of the benefits?

I've not seen anything, but if the W3C has resources for another
Education and Outreach group in the near future, I'd suggest such a
study as a deliverable.

As far as I can tell there is awareness in some of the scientific
communities, though as a message for the Web at large linked data
seems still to be limited to a subset of the semweb community.
Having said that, places like Read/WriteWeb and programmableweb.com do
expose running applications to the Web 2.0 community. Also RDFa
deployment seems to be growing in general, and that's good data.

Cheers,
Danny.



-- 
http://danny.ayers.name



Re: The next Internet giant: linking open data, providing open access to repositories

2008-12-07 Thread Danny Ayers

2008/12/7 Sw-MetaPortal-ProjectParadigm [EMAIL PROTECTED]:
 The next Internet giant company will be linking open data and providing open
 access to repositories, in the process seamlessly combining both paid for
 subscriptions, Creative Commons or similar license based or open source
 software schemes.

 Revenues will be generated among other things from online advertising
 streams currently not utilized by Google or Yahoo!

..and the other things, not advertising, can you describe them?

 In the big scheme of things this company will redefine the concept of
 internet search to provide access to deep(er) web levels of data and
 information for which users will be willing to pay an annual flat fee
 subscription.

..and the other things, not search, can you describe them?

Sorry. Seriously I haven't a clue what revenue models we'll be seeing
in 10 or 20 years. I suspect I'd be surprised.

 Sound improbable? Non-profit organizations dedicated to providing global
 open access will soon start exploring just such business schemes to
 determine if it is feasible to fund and maintain the server farms, hard and
 software to do just that.

Cool.

But the Rainbow Warrior was the Greenpeace yacht right?
So how do I know you're not just trying to subvert things here? It happens.
Usually in boats.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: The next Internet giant: linking open data, providing open access to repositories

2008-12-07 Thread Danny Ayers

Abstract looks excellent, though personally I'd drop the hyphens ('-').
Now to read a paper!

2008/12/8 Marko A. Rodriguez [EMAIL PROTECTED]:
 Hi all,

 Here is a short column that I wrote that is in line with this thread of
 thought:

 http://arxiv.org/abs/0807.3908

 It addresses the importance of a distributed computing infrastructure for
 the Linked Data cloud, where the download and index philosophy of the
 World Wide Web won't so easily port over.

 Take care,
 Marko A. Rodriguez
 http://markorodriguez.com


 2008/12/7 Sw-MetaPortal-ProjectParadigm [EMAIL PROTECTED]:
 The next Internet giant company will be linking open data and providing
 open
 access to repositories, in the process seamlessly combining both paid
 for
 subscriptions, Creative Commons or similar license based or open source
 software schemes.

 Revenues will be generated among other things from online advertising
 streams currently not utilized by Google or Yahoo!

 ..and the other things, not advertising, can you describe them?

 In the big scheme of things this company will redefine the concept of
 internet search to provide access to deep(er) web levels of data and
 information for which users will be willing to pay an annual flat fee
 subscription.

 ..and the other things, not search, can you describe them?

 Sorry. Seriously I haven't a clue what revenue models we'll be seeing
 in 10 or 20 years. I suspect I'd be surprised.

 Sound improbable? Non-profit organizations dedicated to providing global
 open access will soon start exploring just such business schemes to
 determine if it is feasible to fund and maintain the server farms, hard
 and
 software to do just that.

 Cool.

 But the Rainbow Warrior was the Greenpeace yacht right?
 So how do I know you're not just trying to subvert things here? It
 happens.
 Usually in boats.

 Cheers,
 Danny.

 --
 http://danny.ayers.name







-- 
http://danny.ayers.name



Re: A VoCamp Galway 2008 success story

2008-11-29 Thread Danny Ayers
2008/11/29 François Scharffe [EMAIL PROTECTED]:
 Hi Michael,

 Michael Hausenblas wrote:

 Francois,

 Thanks for your feedback and the question. Though I'm not sure what you
 technically mean with 'my:links is a named graph' :)

ditto

 The system output the links in a named graph. See the following example in
 TRiG:

 my:links
 {
   <http://kmi.open.ac.uk/fusion/dblp#document1632795751_264>
   owl:same_as
   <http://kmi.open.ac.uk/fusion/dblp#document1ad8378bff1fe32cd13989741b50fe3eaef0db93>
   .
 }

 We can then describe it as a void:Linkset as I've described below. This
 allows us to attach other information such as the author of the linkset, the
 parameters of the algorithm used to generate it, etc.

 I think the answer is simple: indeed we decided to model datasets and
 linksets independently from each other. The following example from the (not
 yet publicly available) voiD guide may illustrate this:

I'm not at all sure that's a valid distinction. How do they differ?

 Let's assume the two well-known linked datasets DBpedia and DBLP:

 :DBpedia void:containsLinks :DBpedia2DBLP  .

 :DBpedia2DBLP rdf:type void:Linkset ;
  void:target :DBLP .

 So, it is a linking *from* DBpedia *to* DBLP; as RDF is a directed graph,
 this makes sense quite a lot (the subject 'sits' in DBpedia, the object in
 DBLP).

That smells very wrong - links work both ways (there's an implied
inverse); uniformity of linkage. Is making an artificial distinction
here for pragmatic reasons?

(c.f. http://dig.csail.mit.edu/breadcrumbs/node/72  )

 IIRC, we had your option in mind as well [1] but decided to go for the
 current modeling due to the above reasons. Actually, as I think, the two
 modelings are equivalent, just with reversed directions:

But I can't actually see anything in the vocab I'd want to change, so
feel free to ignore the above :-)

Cheers,
Danny.

-- 
http://danny.ayers.name


Re: Can we afford to offer SPARQL endpoints when we are successful? (Was linked data hosted somewhere)

2008-11-29 Thread Danny Ayers

%=profanity /%
or something - cool thread

while I disagree with many of Aldo's individual points, getting them
surfaced is really positive

in response to a line from the firestarter:

The only reason anyone can afford to offer a SPARQL endpoint is because it
doesn't get used too much?

while my love of SPARQL is enormous, I can't see the SPARQL endpoint
being a lasting scenario.
linking, and the fresh approach to caching it will demand, need
another rev before the web starts doing data efficiently

the answer to the quoted line is the question - how can you not
afford? Classic stuff re. amazon opening up their silo a little bit -
guess what, profit!

pip,
Danny.


2008/11/28 Juan Sequeda [EMAIL PROTECTED]:



 On Thu, Nov 27, 2008 at 2:33 PM, Peter Ansell [EMAIL PROTECTED]
 wrote:

 2008/11/27 Richard Cyganiak [EMAIL PROTECTED]

 Hugh,

 Here's what I think we will see in the area of RDF publishing in a few
 years:

 - those query capabilities are described in RDF and hence can be invoked
 by tools such as SQUIN/SemWebClient to answer certain queries efficiently

 I still don't understand what SQUIN etc have that goes above
 Jena/Sesame/SemWeb etc which can do this URI resolution with very little
 programming knowledge in custom applications.

 True, jena/sesame does everything that SQUIN intends to do. However, SQUIN
 is oriented to the web2.0 developers. How is a php/ror web developer going
 to interact with the web of data and make some kind of semantic-linked data
 mashup over a night? SQUIN will let them do this. No need of having jena,
 learning jena, etc. Make it simple! If it is not simple, then developers are
 not going to use it.







-- 
http://danny.ayers.name



Suggestions/existing material for Linked Data tutorial?

2008-11-14 Thread Danny Ayers

Hi LODites,

I'm going to be doing a tutorial at the SWAP conference in Rome on
15th Dec (main conf is 16th-17th, http://www.swapconf.it/2008/  ).
Provisional title is  Publishing Linked Data on the Semantic Web: how
and why?.

I have my own thoughts about what to do (naturally ;-) but would very
much appreciate any suggestions for how to go about it, especially the
hands-on part. Also if anyone's already got slide sets on the topic,
I'd be grateful for pointers.

Attendees will apparently be mostly young semweb
researchers/developers and Web 2.0 people, no idea of numbers yet. The
schedule I've proposed is:

9:30 - 11:00 : background and description of techniques (with a bit of
Q&A at the end)
11:00 - 11:30 : coffee
11:30 - 13:00 : hands-on (with a bit of Q&A, discussion & conclusions
at the end)

Cheers,
Danny.

-- 
http://dannyayers.com
~
http://blogs.talis.com/nodalities/this_weeks_semantic_web/



Re: Suggestions/existing material for Linked Data tutorial?

2008-11-14 Thread Danny Ayers
2008/11/14 Andreas Blumauer [EMAIL PROTECTED]:
 Dear Danny, dear LODites,

 since I'm presenting LOD/Semweb Ideas/Methods etc. since years rather for
 non-Semweb people, e.g. for traditional IT-specialists or Web 2.0 folks,
 I have developed some slides recently for this audience:

 http://www.slideshare.net/ABLVienna/linked-data-tutorial-presentation/

 If you like it - I can send you the original file as well,
 please feel free to use anything from it,

 hope your presentation is going well,

Thanks! - sounds ideal. Though I'm unlikely to 'borrow' any slides
directly, I would be grateful for a copy of the original file.

Cheers,
Danny.

-- 
http://dannyayers.com
~
http://blogs.talis.com/nodalities/this_weeks_semantic_web/


Re: Suggestions/existing material for Linked Data tutorial?

2008-11-14 Thread Danny Ayers

2008/11/14 Yves Raimond [EMAIL PROTECTED]:
 Hi Danny!

Hi Yves!

 We plan to publish, with Keith, a small how-to for doing a hands-on
 tutorial like the Web-of-data 101 session we did at the WOD-PD event.
 With Richard's permission, we'll also take a few things from his
 session, where he used some of the data we created during our session.

 The goal of this how-to is to make it easy for people to reproduce
 the tutorial in different places.

Wow, nicely meta.

Please keep me posted on progress - even any rough plans/drafts would be helpful.

I'm curious - which tools would you be using?

Cheers,
Danny.

-- 
http://dannyayers.com
~
http://blogs.talis.com/nodalities/this_weeks_semantic_web/