Hello Mamdouh,

As far as I know that is not possible, but you can download the whole
dataset as a dump and process it yourself (e.g. query over the text data or
set up your own SPARQL endpoint):
https://www.wikidata.org/wiki/Wikidata:Database_download (the JSON or RDF
dumps are probably most helpful).
Depending on your use case, this might be the right direction to look into :)
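
In case it helps: the JSON dump is one big JSON array, but each entity sits
on its own line (terminated by a comma), so it can be streamed line by line
without loading the whole file into memory. A minimal Python sketch (the
dump file name below is just an example):

```python
import json

def iter_entities(lines):
    """Yield one entity dict per line of a Wikidata JSON dump.

    The dump is a single JSON array with one entity per line, so we can
    stream it: skip the array brackets, strip the trailing comma from
    each line, and parse each entity individually.
    """
    for line in lines:
        line = line.strip()
        if line in ("[", "]", ""):
            continue  # skip the enclosing array brackets and blank lines
        yield json.loads(line.rstrip(","))

# Streaming from the compressed dump (file name is an example):
# import gzip
# with gzip.open("wikidata-all.json.gz", "rt", encoding="utf-8") as f:
#     for entity in iter_entities(f):
#         label = entity.get("labels", {}).get("en", {}).get("value")
#         ...  # your processing here
```

This avoids the public endpoint's query timeouts entirely, at the cost of
working with a snapshot rather than live data.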

Feel free to reach out if there are more questions.
Best,
Lucie

On Wed, 10 Apr 2019 at 09:14, Ahmed Mamdouh <[email protected]>
wrote:

> Greetings All,
>
> Hope this e-mail finds you well. I am currently doing a master's project
> in NLP at JKU under the supervision of Prof. Bruno Buchberger, the famous
> Austrian mathematician.
>
> I am facing a problem where I can't get enough data for my project. Is
> there anything that can be done to extend the limit of queries, as they
> time out?
>
> Thanks in advance,
> Mamdouh
> _______________________________________________
> Wikidata mailing list
> [email protected]
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>


-- 
Lucie-Aimée Kaffee
