On 2/13/16 6:29 PM, Markus Kroetzsch wrote:
> On 13.02.2016 23:56, Kingsley Idehen wrote:
>> On 2/13/16 4:56 PM, Markus Kroetzsch wrote:
> ...
>>
>> For a page-size of 20 (covered by LIMIT) you can move through offsets of
>> 20 via:
>
> To clarify: I just added the LIMIT to prevent unwary readers
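The OFFSET/LIMIT pagination being suggested can be sketched in Python as follows. This is a minimal illustration of the pattern, not code from the thread; the base query and page size are assumptions.

```python
# Sketch: page through a SPARQL result set with LIMIT/OFFSET.
# The base query and page size below are illustrative assumptions.

def paged_queries(base_query, page_size=20, max_pages=3):
    """Yield copies of base_query with increasing OFFSETs, one per page."""
    for page in range(max_pages):
        offset = page * page_size
        yield f"{base_query}\nLIMIT {page_size}\nOFFSET {offset}"

base = "SELECT ?item ?gnd WHERE { ?item wdt:P227 ?gnd }"
queries = list(paged_queries(base))
```

Note that without an ORDER BY clause, most SPARQL engines do not guarantee that successive pages are disjoint or that the whole set is covered, so a stable sort key should be added for real pagination.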
On 2/13/16 6:26 PM, Markus Kroetzsch wrote:
> On 13.02.2016 23:50, Kingsley Idehen wrote:
> ...
>> Markus and others interested in this matter,
>>
>> What about using OFFSET and LIMIT to address this problem? That's what
>> we advise users of the DBpedia endpoint (and others we publish) to do.
>>
To: Discussion list for the Wikidata project.
Subject: Re: [Wikidata] SPARQL CONSTRUCT results truncated
Hi Joachim,
I think SERVICE queries should be working, but maybe Stas knows more about
this. Even if they are disabled, this should result in some error message
rather than in a NullPointerException. Looks like a bug.
Markus
Sent: Saturday, 13 February 2016 22:56
To: Discussion list for the Wikidata project.
Subject: Re: [Wikidata] SPARQL CONSTRUCT results truncated
And here is another comment on this interesting topic :-)
I just realised how close the service is to answering the query. It
turns out that you can in fact get the whole set of (currently >324000
result items) together with their GND identifiers as a download *within
the timeout* (I tried
On 2/11/16 9:25 AM, Markus Krötzsch wrote:
> On 11.02.2016 15:01, Gerard Meijssen wrote:
>> Hoi,
>> What I hear is that the intentions were wrong in that you did not
>> anticipate people to get actual meaningful requests out of it.
>>
>> When you state "we have two choices", you imply that it is
On 13.02.2016 23:50, Kingsley Idehen wrote:
...
Markus and others interested in this matter,
What about using OFFSET and LIMIT to address this problem? That's what
we advise users of the DBpedia endpoint (and others we publish) to do.
We have to educate people about query implications and
Hi!
> [1] Is the service protected against internet crawlers that find such
> links in the online logs of this email list? It would be a pity if we
> would have to answer this query tens of thousands of times for many
> years to come just to please some spiders who have no use for the result.
Hi!
> you may want to check out the Linked Data Fragment server in Blazegraph:
> https://github.com/blazegraph/BlazegraphBasedTPFServer
Thanks, I will check it out!
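For readers unfamiliar with the Linked Data Fragments approach mentioned above: a Triple Pattern Fragments server answers only single-triple-pattern requests, passed as query parameters, and the client composes joins itself. A sketch of building such a request URL; the server base URL here is hypothetical, only the parameter names follow the TPF convention.

```python
from urllib.parse import urlencode

def tpf_url(base, subject=None, predicate=None, obj=None, page=1):
    """Build a Triple Pattern Fragments request URL for one triple pattern.
    Unspecified positions act as wildcards; results come back paged."""
    params = {"page": page}
    if subject:
        params["subject"] = subject
    if predicate:
        params["predicate"] = predicate
    if obj:
        params["object"] = obj
    return base + "?" + urlencode(params)

# Example: ask a (hypothetical) fragments server for all GND-id triples.
url = tpf_url("http://example.org/fragments",
              predicate="http://www.wikidata.org/prop/direct/P227")
```

The point of the design is that each request is cheap and cacheable for the server, at the price of more round trips for the client.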
--
Stas Malyshev
smalys...@wikimedia.org
___
Wikidata mailing list
12.02.2016, 10:43, Markus Krötzsch wrote:
Restricting queries syntactically to be "simpler" is what we did in
Semantic MediaWiki (because MySQL did not support time/memory limits per
query). It is a workaround, but it will not prevent long-running queries
unless you make the syntactic
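The weakness of syntactic restriction can be illustrated with a toy complexity gate; this heuristic is entirely hypothetical and is only meant to show why such gates fail.

```python
def looks_simple(query, max_patterns=5):
    """Toy syntactic gate: reject queries with too many triple patterns.
    Counting '.'-separated statements inside the braces is a rough
    heuristic; it says nothing about actual evaluation cost."""
    body = query[query.find("{") + 1 : query.rfind("}")]
    patterns = [p for p in body.split(".") if p.strip()]
    return len(patterns) <= max_patterns
```

Even a single-pattern query like `?s ?p ?o` passes this gate yet scans the entire store, which is exactly why time and memory limits are needed regardless of syntax.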
On 12.02.2016 00:04, Stas Malyshev wrote:
Hi!
We basically have two choices: either we offer a limited interface that only
allows for a narrow range of queries to be run at all. Or we offer a very
general interface that can run arbitrary queries, but we impose limits on time
and memory
To: Discussion list for the Wikidata project.
Subject: Re: [Wikidata] SPARQL CONSTRUCT results truncated
On 12.02.2016 00:04, Stas Malyshev wrote:
> Hi!
>
>> We basically have two choices: either we offer a limited interface
>> that only allows for a narrow range of queries to be run
Hi!
> For me, it’s perfectly ok when a query runs for 20 minutes, when it
> spares me some hours of setting up a specific environment for one
> specific dataset (and doing it again when I need current data two months
> later). And it would be no issue if the query runs much longer, in
> situations
but I cannot squeeze that query
into a GET request ...
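On the GET-size problem: the SPARQL 1.1 Protocol also allows sending the query via POST, where request length is not a practical constraint. A sketch of preparing such a request with the standard library; the endpoint URL and query are illustrative.

```python
from urllib.parse import urlencode
from urllib.request import Request

def sparql_post_request(endpoint, query):
    """Prepare a SPARQL-over-POST request, form-encoded per the
    SPARQL 1.1 Protocol, so the query length is not limited by the URL."""
    body = urlencode({"query": query}).encode("utf-8")
    return Request(endpoint, data=body,
                   headers={"Accept": "text/turtle"}, method="POST")

req = sparql_post_request("https://query.wikidata.org/sparql",
                          "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o } LIMIT 10")
```

The request would still be subject to the server's timeout, but not to any URL-length limit.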
-Original Message-
From: Wikidata [mailto:wikidata-boun...@lists.wikimedia.org] On behalf of Stas
Malyshev
Sent: Thursday, 11 February 2016 01:35
To: Discussion list for the Wikidata project.
Subject: Re: [Wikidata] SPARQL CONSTRUCT results truncated
Sent: Thursday, 11 February 2016 15:05
To: Discussion list for the Wikidata project.
Subject: Re: [Wikidata] SPARQL CONSTRUCT results truncated
Hi Joachim,
Here is a short program that solves your problem:
https://github.com/Wikidata/Wikidata-Toolkit-Examples/blob/master/sr
Hoi,
This is the kind of (technical) feedback that makes sense as it is centred
on need. It acknowledges that more needs to be done as we are not ready for
what we expect of ourselves in the first place.
In this day and age of big data, we are a very public place where a lot of
initiatives
Am 11.02.2016 um 10:17 schrieb Gerard Meijssen:
> Your response is technical and seriously, query is a tool and it should
> function
> for people. When the tool is not good enough fix it.
What I hear: "A hammer is a tool, it should work for people. Tearing down a
building with it takes forever,
Hoi,
What I hear is that the intentions were wrong in that you did not
anticipate people to get actual meaningful requests out of it.
When you state "we have two choices", you imply that it is my choice as
well. It is not. The answer that I am looking for is yes, it does not
function as we would
Hi Joachim,
Here is a short program that solves your problem:
https://github.com/Wikidata/Wikidata-Toolkit-Examples/blob/master/src/examples/DataExtractionProcessor.java
It is in Java, so, you need that (and Maven) to run it, but that's the
only technical challenge ;-). You can run the
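For those who would rather avoid the Java setup, the same kind of extraction can be sketched over the Wikidata JSON dump with plain Python. This is a rough analogue of that example, not the toolkit itself; the sample entity line below is illustrative, and P227 is Wikidata's GND-identifier property.

```python
import json

def gnd_mappings(lines):
    """Yield (item id, GND id) pairs from Wikidata JSON-dump entity lines.
    Dump files hold one JSON entity per line, comma-separated inside [ ]."""
    for line in lines:
        line = line.strip().rstrip(",")
        if not line or line in "[]":
            continue
        entity = json.loads(line)
        for claim in entity.get("claims", {}).get("P227", []):
            value = claim["mainsnak"].get("datavalue")
            if value:
                yield entity["id"], value["value"]

# Illustrative single-entity sample, not real dump data:
sample = ['{"id": "Q1339", "claims": {"P227": [{"mainsnak": '
          '{"datavalue": {"value": "118505661"}}}]}}']
pairs = list(gnd_mappings(sample))
```

Streaming the (compressed) dump line by line like this keeps memory use flat regardless of dump size, which is the same design choice the Java toolkit makes.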
On 11.02.2016 15:01, Gerard Meijssen wrote:
Hoi,
What I hear is that the intentions were wrong in that you did not
anticipate people to get actual meaningful requests out of it.
When you state "we have two choices", you imply that it is my choice as
well. It is not. The answer that I am looking
To: Discussion list for the Wikidata project.
*Subject:* Re: [Wikidata] SPARQL CONSTRUCT results truncated
On Thu, Feb 11, 2016 at 5:53 PM Gerard Meijssen
<gerard.meijs...@gmail.com <mailto:gerard.meijs...@gmail.com>> wrote:
Hoi,
Markus when you read my reply on the original
Hi!
> 5.44s empty result
> 8.60s 2090 triples
> 5.44s empty result
> 22.70s 27352 triples
That looks weirdly random. I'll check out what is going on there.
--
Stas Malyshev
smalys...@wikimedia.org
On Thu, Feb 11, 2016 at 5:53 PM Gerard Meijssen
wrote:
> Hoi,
> Markus when you read my reply on the original question you will see that
> my approach is different. The first thing that I pointed out was that a
> technical assumption has little to do with what people
Hoi,
Markus when you read my reply on the original question you will see that my
approach is different. The first thing that I pointed out was that a
technical assumption has little to do with what people need. I indicated
that when this is the approach, the answer is fix it. The notion that a
Hi!
> I try to extract all mappings from wikidata to the GND authority file,
> along with the corresponding Wikipedia pages, expecting roughly 500,000 to
> 1m triples as a result.
As a starting note, I don't think extracting 1M triples may be the best
way to use query service. If you need to do
I try to extract all mappings from wikidata to the GND authority file, along
with the corresponding Wikipedia pages, expecting roughly 500,000 to 1m
triples as a result.
However, with various calls, I get far fewer triples (about 2,000 to 10,000).
The output seems to be truncated in the middle of a
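The query in question can be approximated as follows. This is a reconstruction pieced together from the description in the thread, not the original poster's exact query; P227 is the GND identifier, and the query service exposes sitelinks via schema:about and schema:isPartOf.

```python
# Hedged reconstruction of the kind of CONSTRUCT query described above:
# map items carrying a GND identifier (P227) to their Wikipedia pages.
GND_CONSTRUCT = """
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX schema: <http://schema.org/>
CONSTRUCT {
  ?item wdt:P227 ?gnd .
  ?page schema:about ?item .
}
WHERE {
  ?item wdt:P227 ?gnd .
  OPTIONAL { ?page schema:about ?item ;
                   schema:isPartOf <https://en.wikipedia.org/> . }
}
"""
```

With >324,000 matching items and one or more triples each, a result in the 500,000-to-1m range is plausible, which is why a single truncated response falls so far short.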