Yves Raimond wrote:
Hello!
I know this issue has been raised during the LOD BOF at WWW 2009, but
I don't know whether any possible solutions emerged from there.
The problem we are facing is that data on BBC Programmes changes
approximately 50,000 times a day (new or updated broadcasts, versions,
programmes, segments, etc.).
Hi Yves,
Nothing can beat having a semantic sitemap [1]. Basically, you say
that you change once a day and give a link to the dump. Done :-)
If you put one up, I am ready to show the information in Sindice,
updated every day, at no cost for you beyond a single dump download.
Also the sitemap…
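For readers unfamiliar with the suggestion: a minimal sketch of what
such a sitemap entry could look like, assuming the element names of the
Semantic Sitemaps extension draft (the sc: namespace and the dump URL
below are illustrative and should be checked against the spec):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:sc="http://sw.deri.org/2007/07/sitemapextension/scschema.xsd">
  <sc:dataset>
    <!-- hypothetical example entry, not the real BBC sitemap -->
    <sc:datasetLabel>BBC Programmes (example)</sc:datasetLabel>
    <sc:datasetURI>http://www.bbc.co.uk/programmes</sc:datasetURI>
    <!-- one full dump, refreshed daily -->
    <sc:dataDumpLocation>http://www.bbc.co.uk/programmes/dump.nt.gz</sc:dataDumpLocation>
    <changefreq>daily</changefreq>
  </sc:dataset>
</urlset>
```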
Hi Giovanni!
> Nothing can beat having a semantic sitemap [1]. Basically, you say
> that you change once a day and give a link to the dump. Done :-)
Well, the problem is that we don't have an RDF dump, and it is quite
costly to generate one, due to the architecture driving the site
(classic…
Hi Yves,
I think the two main options are either to publish a feed containing
pointers to changes, or to use a messaging system to push out
notifications. Despite the recent discussion around the benefits of,
say, Jabber or other mechanisms for pushing out notifications, I think
that a more RESTful…
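A minimal sketch of the first option, a feed of pointers to changes:
an Atom document whose entries simply link to the changed resources.
The function name and example URIs are made up for illustration; a
real feed would also carry entry ids, authors, and paging.

```python
# Sketch of a "pointers to changes" feed: a minimal Atom document
# listing the URIs of recently changed resources.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

ATOM = "http://www.w3.org/2005/Atom"

def change_feed(changed_uris, feed_title="Recent changes"):
    # serialize Atom elements in the default namespace
    ET.register_namespace("", ATOM)
    feed = ET.Element("{%s}feed" % ATOM)
    ET.SubElement(feed, "{%s}title" % ATOM).text = feed_title
    ET.SubElement(feed, "{%s}updated" % ATOM).text = (
        datetime.now(timezone.utc).isoformat())
    for uri in changed_uris:
        entry = ET.SubElement(feed, "{%s}entry" % ATOM)
        ET.SubElement(entry, "{%s}title" % ATOM).text = "Changed: " + uri
        # rel="alternate" points the crawler at the changed resource
        ET.SubElement(entry, "{%s}link" % ATOM,
                      rel="alternate", href=uri)
    return ET.tostring(feed, encoding="unicode")

xml_out = change_feed(["http://example.org/programmes/b006q2x0"])
```

A crawler would then poll this document and dereference only the
linked resources, rather than re-crawling the whole site.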
Hi Yves, all,
We envisioned publishing updates of LOD sources via a special LOD
resource space on the LOD endpoint. The basic idea is to publish
nested sets of updates as linked data for years, months, days, hours,
minutes, and seconds. This allows crawlers to update only resources
which were…
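The nesting described above can be sketched as a URI layout, one level
per time unit, so a crawler only descends into the branches newer than
its last sync. The base URI and path scheme here are assumptions for
illustration, not the actual endpoint's layout.

```python
# Hypothetical nested update resources: /updates/2009, /updates/2009/04,
# /updates/2009/04/28, ... down to the second.
from datetime import datetime

def update_resources(ts: datetime, base="http://example.org/updates"):
    parts = [f"{ts.year:04d}", f"{ts.month:02d}", f"{ts.day:02d}",
             f"{ts.hour:02d}", f"{ts.minute:02d}", f"{ts.second:02d}"]
    uris, path = [], base
    for p in parts:
        path = path + "/" + p
        uris.append(path)
    return uris
```

A crawler that last synced yesterday fetches only today's day-level
resource and descends from there, skipping unchanged branches.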
Forced to mention RDFSync then (ISWC 2007):
Giovanni Tummarello, Christian Morbidoni, Reto Bachmann-Gmür, Orri Erling,
"RDFSync: efficient remote synchronization of RDF models"
http://semanticweb.deit.univpm.it/papers/RDFSyncISWC2007.pdf
There was an implementation, but it was just a proof of…
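RDFSync itself decomposes the graph into Minimum Self-contained Graphs
and compares their hashes; the following is only a simplified
per-subject sketch of that hash-comparison idea, not the paper's
algorithm, with made-up example triples.

```python
# Toy hash-based synchronization: group triples by subject, hash each
# group, and refetch only the subjects whose hashes differ. RDFSync
# proper uses MSG decomposition instead of per-subject grouping.
import hashlib

def subject_hashes(triples):
    groups = {}
    for s, p, o in triples:
        groups.setdefault(s, []).append((p, o))
    # hash a canonical (sorted) serialization of each subject's triples
    return {
        s: hashlib.sha1(repr(sorted(pos)).encode("utf-8")).hexdigest()
        for s, pos in groups.items()
    }

def subjects_to_refetch(local, remote):
    lh, rh = subject_hashes(local), subject_hashes(remote)
    return {s for s in rh if lh.get(s) != rh[s]}
```

Only the hash lists cross the wire; the full data is fetched just for
the subjects that actually changed.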
Hello!
Alternatively, why not take an approach similar to the Wikipedia live
feeds, and push them out on public chat channels; perhaps SPARQL/Update
messages on a read-only Jabber/IRC etc. stream? Interested parties are
free to consume them, and use the queries to keep their local copy…
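What such pushed messages could look like: a sketch that turns a
change record into a SPARQL Update string a consumer can replay against
a local copy. The function name and URIs are illustrative, and the
term serialization is deliberately naive.

```python
# Sketch: format a change record as a SPARQL 1.1 Update message to be
# pushed over a read-only chat-style channel.
def to_sparql_update(op, s, p, o):
    """op is 'add' or 'del'; s and p are URIs; o is a URI or a plain literal."""
    # naive term serialization: URIs get angle brackets, anything else is
    # treated as a plain string literal (real code would escape and type it)
    obj = "<%s>" % o if o.startswith("http") else '"%s"' % o
    triple = "<%s> <%s> %s ." % (s, p, obj)
    keyword = "INSERT DATA" if op == "add" else "DELETE DATA"
    return "%s { %s }" % (keyword, triple)
```

Because the messages are plain SPARQL Update, a consumer can apply them
to any store with an update endpoint, no custom diff format needed.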
Hi,
My only concern about this is that you need to limit the number of
items in the feed. If you have a sudden burst of activity and the
crawler just pings the feed at regular intervals, it may miss some
updates. However, even with 1M updates in a day, a feed capped at
100 items would…
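The back-of-envelope arithmetic behind this concern: with a capped
feed, the poll interval must stay below the time it takes the cap to
roll over, about 8.6 seconds for 1M daily updates and a 100-item feed.

```python
# How long a capped feed "covers" at a given update rate: the maximum
# safe polling interval before entries can scroll out unseen.
def max_poll_interval_seconds(updates_per_day, feed_cap):
    updates_per_second = updates_per_day / 86400.0
    return feed_cap / updates_per_second

window = max_poll_interval_seconds(1_000_000, 100)  # about 8.64 seconds
```

At BBC Programmes' stated ~50,000 changes a day, the same 100-item cap
covers roughly 173 seconds, so bursts are the real risk, not the mean
rate.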
Possibly relevant:
http://www.ietf.org/rfc/rfc5005.txt
Feed paging and archiving for Atom feeds. Paging is a nice solution to
the small-window problem with syndication feeds. The concept might be
translatable to RSS 1.0.
Although I have to say that I find the idea of pushing RDF updates…
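A sketch of how RFC 5005 archiving closes the small-window problem:
walk rel="prev-archive" links backwards until an already-seen entry
turns up. The fetch callback here is a stand-in for the HTTP and
Atom-parsing step, and the page names are invented for the example.

```python
# RFC 5005 "archived feed" catch-up: starting from the current page,
# follow prev-archive links until the last-seen entry id appears,
# collecting everything missed in between (newest first).
# fetch(url) returns (entry_ids_newest_first, prev_archive_url_or_None).
def catch_up(fetch, start_url, last_seen_id):
    missed, url = [], start_url
    while url is not None:
        entry_ids, prev = fetch(url)
        for eid in entry_ids:
            if eid == last_seen_id:
                return missed
            missed.append(eid)
        url = prev
    return missed

# toy archive: two archived pages behind the current one
pages = {
    "current": (["e5", "e4"], "arch1"),
    "arch1": (["e3", "e2"], "arch2"),
    "arch2": (["e1"], None),
}
missed = catch_up(lambda u: pages[u], "current", "e2")
```

With this, a slow or offline crawler can always recover, so the feed
cap only bounds how far back a single request reaches, not what can be
recovered overall.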