Hi!

> feature that needs to be snappy. So the alternative approach I have been
> working on is to get a subset of a Wikidata dump and put it in an
> ElasticSearch instance.

The linked data fragments implementation would probably be useful for
that, and I think it would be a good idea to get one eventually for the
Wikidata Query Service, but it's not there yet. We also have an
ElasticSearch index for Wikidata (that's what drives search on the
site), so it would be possible to integrate it with the Query Service
too (there's some support for that in Blazegraph), but that isn't done
yet either. So for now there is no ready-made solution. You could still
try a prefix search or regex search on the query service, but depending
on the query it may be too slow right now.
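For example, a prefix search on English labels could look like the
sketch below (untested; it goes through the public SPARQL endpoint at
https://query.wikidata.org/sparql, and the prefix "Amster" is just a
placeholder). REGEX() works the same way but tends to be even slower,
since neither filter is backed by an index:

import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# Prefix search on English labels; broad patterns can easily hit the
# query timeout.
query = """
SELECT ?item ?itemLabel WHERE {
  ?item rdfs:label ?itemLabel .
  FILTER(LANG(?itemLabel) = "en")
  FILTER(STRSTARTS(STR(?itemLabel), "Amster"))
}
LIMIT 20
"""

resp = requests.get(ENDPOINT,
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "example-script/0.1"})
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])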

> Question: What is the best way to get all the entities matching a
> given claim? My answer so far was downloading a dump, then filtering
> the entities by claim, but are there better/less resource-intensive
> ways?

Probably not currently, short of using some outside tools. When we get
LDF support, that may be the way to go :)
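If you do go the dump route, the filtering itself is straightforward.
A rough sketch (untested; the property P31 and value Q5 are just
placeholders, and the file name assumes the latest-all.json.bz2 JSON
dump from dumps.wikimedia.org):

import bz2
import json

DUMP = "latest-all.json.bz2"
PROP, VALUE = "P31", "Q5"

def has_claim(entity, prop, value):
    # Entity JSON keeps statements under "claims", keyed by property ID.
    for statement in entity.get("claims", {}).get(prop, []):
        snak = statement.get("mainsnak", {})
        if snak.get("snaktype") != "value":
            continue
        val = snak.get("datavalue", {}).get("value")
        if isinstance(val, dict) and val.get("id") == value:
            return True
    return False

# The JSON dump is one entity per line, wrapped in [ ... ] with trailing
# commas, so strip those before parsing each line.
with bz2.open(DUMP, "rt", encoding="utf-8") as f:
    for line in f:
        line = line.strip().rstrip(",")
        if not line or line in ("[", "]"):
            continue
        entity = json.loads(line)
        if has_claim(entity, PROP, VALUE):
            print(entity["id"])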

-- 
Stas Malyshev
smalys...@wikimedia.org

