On Sunday 18 October 2009 18:50:50 Hugh Glaser wrote:
> [...]
> In the current practice, the site minted a URI for the book, and then
> asserted owl:sameAs to other URIs - hopefully including mine, hidden in the
> RDF about the site's URI.
>
> In the current SWCL, for example, (I think) if I ask ab
All,
Sponger based description of the book: Weaving the Web: The Original
Design and Ultimate Destiny of the World Wide Web
The main sponger page (a description of the actual HTML doc) is at:
http://tr.im/Cemx .
The foaf:primaryTopic's value is the URI of a GoodRelations offering:
http:/
Martin Hepp (UniBW) wrote:
I don't think so, because this would require that Sindice crawled the
whole regular web and checked the Spongers for each URL (sic!).
Martin,
Or crawls the deeper Web of Linked Data via our proxy/wrapper URIs and
ends up achieving what? The shallow Web is still chall
Frederick Giasson wrote:
Hi all,
The Web of Linked
Data shouldn't be about mass crawling (search engine style)
etc...
It has to be. How would you answer a query like "all offers for a book
written by a German author" without crawling the relevant data sets?
First question would be: w
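A query like the one Martin mentions could, in principle, be answered without mass crawling by federating over a local cache of offer data and a remote endpoint. A hypothetical SPARQL 1.1 sketch (the DBpedia endpoint URL and the property choices are my assumptions, not taken from the thread):

```sparql
# Hypothetical sketch: "all offers for a book written by a German author".
# Assumes offers are cached locally as GoodRelations data, while author
# nationality is fetched on demand via SPARQL 1.1 federation.
PREFIX gr:  <http://purl.org/goodrelations/v1#>
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>

SELECT ?offer ?book ?author
WHERE {
  ?offer a gr:Offering ;
         gr:includes ?book .
  ?book  dbo:author ?author .
  SERVICE <http://dbpedia.org/sparql> {
    ?author dbo:nationality dbr:Germany .
  }
}
```

Whether this beats crawling depends, of course, on how selective the remote pattern is and how fresh the local offer cache needs to be.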
Olaf Hartig wrote:
Hey,
On Sunday 18 October 2009 09:37:14 Martin Hepp (UniBW) wrote:
[...]
> So it will boil down to technology that combines (1) crawling and
> caching rather stable data sets with (2) distributing queries and parts
> of queries among the right SPARQL endpoints (whatever actual D
Hugh Glaser wrote:
Hi Guys,
I am puzzled by the whole discussion, so will try to summarise to find out
if I have some misunderstanding.
It really is "just" about finding where the URIs are, and search engines are
the only game in town. We need to make it really easy for people to find the
Linked Data
Hello,
Great to read all your comments on- and off-list. Thanks.
We fixed the bug with ampersands in URIs and the problems that occurred with
browsers other than Firefox. We would also like to thank you for your ideas and
feature requests. We're discussing which of them we'll implement.
Greetings,
On 18/10/2009 17:12, "Olaf Hartig" wrote:
> On Sunday 18 October 2009 17:50:58 Hugh Glaser wrote:
>> The SWCL-style approach works pretty well as long as the RDF you want about
>> the URIs is the stuff you get by resolving.
>
> Right, that's the fundamental assumption. And that's what Linked D
On Sunday 18 October 2009 17:50:58 Hugh Glaser wrote:
> The SWCL-style approach works pretty well as long as the RDF you want about
> the URIs is the stuff you get by resolving.
Right, that's the fundamental assumption. And that's what Linked Data is
about. ;-)
> It can be much more problematic
The SWCL-style approach works pretty well as long as the RDF you want about
the URIs is the stuff you get by resolving.
It can be much more problematic if the URI is on some site such as (a
wrapped) Amazon, which states the price of a book identified by a
publisher's URI.
There are ways round th
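Resolving a URI in the Linked Data sense means an HTTP GET with content negotiation for an RDF representation. A minimal sketch in Python, standard library only (the URI is illustrative, and a real client would also follow the 303 redirect typically returned for non-information resources):

```python
# Minimal sketch of Linked Data dereferencing via content negotiation.
import urllib.request

def build_rdf_request(uri):
    """Build an HTTP request asking for an RDF representation of `uri`."""
    req = urllib.request.Request(uri)
    # Ask the server for RDF/XML or Turtle rather than the HTML page.
    req.add_header("Accept", "application/rdf+xml, text/turtle;q=0.9")
    return req

req = build_rdf_request("http://dbpedia.org/resource/Weaving_the_Web")
print(req.get_header("Accept"))
# Actually sending it would be: urllib.request.urlopen(req).read()
```

The point Hugh raises still stands: dereferencing only gets you the description the URI's owner chose to serve, not what a third party (e.g. a wrapped Amazon) asserts about that URI.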
Hey Giovanni,
On Sunday 18 October 2009 16:01:41 Giovanni Tummarello wrote:
> I'd say, if i understand well
>
> that that works only for queries where you need the extra dereferenced
> data just "additionally" e.g. to add a label to your result se
I'm not sure what you mean by "additionally" here
> A) The wrapper's Semantic Sitemap points you at the original Sitemap, and
> says how it is doing the wrapping. And because you know how the wrapper is
> behaving, you can process the standard Sitemap to get the information you
> want about what the wrapping site provides.
> Actually, the "slicing
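For context, a Semantic Sitemap extends an ordinary sitemap with an sc:dataset block describing the dataset, its SPARQL endpoint, and dumps. A minimal, hypothetical sketch of what a wrapper might publish (all URLs are placeholders, and the element names follow the sitemap extension schema as I understand it):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch of a Semantic Sitemap; all URLs are placeholders. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:sc="http://sw.deri.org/2007/07/sitemapextension/scschema.xsd">
  <sc:dataset>
    <sc:datasetLabel>Wrapped book offers</sc:datasetLabel>
    <sc:datasetURI>http://example.org/dataset/offers</sc:datasetURI>
    <sc:sparqlEndpointLocation>http://example.org/sparql</sc:sparqlEndpointLocation>
    <sc:dataDumpLocation>http://example.org/dump.rdf.gz</sc:dataDumpLocation>
    <sc:changefreq>weekly</sc:changefreq>
  </sc:dataset>
</urlset>
```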
Hi.
On 18/10/2009 14:56, "Giovanni Tummarello" wrote:
> Hi Hugh, thanks for your contribution
>
>
> .. it turns out this discussion is in fact very very important and
Agreed.
> such feedback is indeed very useful
>
> if i just get a sitemap from sponger (which is wrapping a sitemap from
> ano
I agree with this; a combination of the two, without getting into unrealistic
service descriptions, is exactly the question.
It's great to be talking about this.
I'd gladly have a chat about all this at ISWC with those who are there?
Cheers
Giovanni
On Sun, Oct 18, 2009 at 8:37 AM, Martin Hepp (U
I'd say, if I understand correctly,
that this works only for queries where you need the extra dereferenced
data just "additionally", e.g. to add a label to your result se
If you need the remote, on-the-fly reference data to e.g. sort by
price, you'd have to fetch everything from the remote site ..
Gio
On Sun
Hi Hugh, thanks for your contribution
.. it turns out this discussion is in fact very very important and
such feedback is indeed very useful
if I just get a sitemap from the sponger (which is wrapping a sitemap from
another site)
then all I can do is really just crawl that sitemap, which would
cal
Guys,
the Web of Data cannot rely on mass data crawling of the whole Web but
must combine cached data with federated on-demand queries. Structured
data requires much faster update cycles than typical text-based Web
indices. For example, Google and Yahoo can rely on the fact that
"http://www.cn
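The cached-plus-on-demand combination Martin describes could be sketched as a simple freshness-aware cache: answer from local data while it is fresh, and fall back to a live (federated) fetch when it expires. An illustrative Python sketch (the names and the TTL are assumptions):

```python
# Illustrative sketch: serve structured data from a local cache while it
# is fresh, and fall back to a live remote fetch when it expires.
import time

class FreshnessCache:
    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch          # callable that hits the remote endpoint
        self.store = {}             # uri -> (timestamp, data)

    def get(self, uri):
        entry = self.store.get(uri)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]         # still fresh: answer from the cache
        data = self.fetch(uri)      # stale or missing: query on demand
        self.store[uri] = (time.time(), data)
        return data

calls = []
cache = FreshnessCache(ttl_seconds=60,
                       fetch=lambda uri: calls.append(uri) or f"data for {uri}")
print(cache.get("http://example.org/offer/1"))  # remote fetch
print(cache.get("http://example.org/offer/1"))  # cache hit, no second fetch
print(len(calls))  # → 1
```

The hard part, which the thread keeps circling, is choosing the TTL per dataset: prices need much shorter freshness windows than, say, author metadata.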