| Gstupp created this task. Gstupp added a project: Wikidata. Herald added a subscriber: Aklapper. |
When I make a GET request to the Wikidata SPARQL endpoint using the Python requests library, the result takes an extraordinarily long time to process, compared to running the same query through the browser. I figured out this was because requests cannot determine the encoding of the response from the headers, so it falls back to "guessing" the encoding via charset detection over the whole response body.
Please see my example below (example 2):
https://gist.github.com/stuppie/e523bf617416e1490c25464d5a485396
If I explicitly tell requests that the encoding is utf8 (link), the response gets parsed about 100x faster and with much less RAM (example 1). I'm not sure exactly what requests looks for in the headers, or how it should be formatted, but I wanted to point this out because there may be a simple fix: adding something to the response headers (or maybe I should be specifying something different in "Accept"?).
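For what it's worth, here is a small sketch of the mechanism I think is at play (the Content-Type values are illustrative of what WDQS returns; the endpoint URL in the comment is the public query service). requests derives the text encoding from the Content-Type header, and when that yields nothing, accessing r.text triggers full-body charset detection:

```python
from requests.utils import get_encoding_from_headers

# Without a charset parameter, requests cannot derive an encoding
# from a SPARQL JSON Content-Type (it is neither text/* nor
# application/json verbatim):
print(get_encoding_from_headers(
    {"content-type": "application/sparql-results+json"}))
# -> None: r.text will fall back to r.apparent_encoding, i.e.
# charset detection over the entire body -- slow and memory-hungry
# for large result sets.

# With an explicit charset, requests uses it directly:
print(get_encoding_from_headers(
    {"content-type": "application/sparql-results+json; charset=utf-8"}))
# -> 'utf-8'

# Client-side workaround (what "example 1" does):
#   r = requests.get("https://query.wikidata.org/sparql",
#                    params={"query": ..., "format": "json"})
#   r.encoding = "utf-8"   # set BEFORE touching r.text / r.json()
```

So a server-side fix might be as simple as adding "; charset=utf-8" to the Content-Type header of the SPARQL responses.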
Cc: Gstupp, Aklapper, QZanden, Izno, Wikidata-bugs, aude, Mbch331
_______________________________________________
Wikidata-bugs mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs
