I'm looking to use the https://query.wikidata.org/ interface to export to
CSV all Wikidata items that have the property P2002.
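For reference, a minimal sketch of such a query on query.wikidata.org (assuming the standard wdt:/wikibase: prefixes the service predefines; the result page has a built-in CSV download button):

```sparql
# All items with a Twitter username (P2002), plus their English label.
SELECT ?item ?itemLabel ?twitter
WHERE
{
  ?item wdt:P2002 ?twitter .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```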
https://www.wikidata.org/w/index.php?title=Special:WhatLinksHere/Property:P2002&limit=500&from=21542767&back=20967485
I am looking for the Wikipedia Article_Name + the value associated. It's got the
> item ID, the label and the value of P2002.
>
> --
> Mbch331
>
> When you have eliminated the impossible, whatever remains, however
> improbable, must be the truth.
> (Sir Arthur Conan Doyle)
>
> On 9-11-2015 at 17:09, Hampton Snowball wrote:
Maybe I misunderstood. I think the item label is actually what's used in
the wikipedia article url, just convert spaces to underscores?
Thanks!
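For what it's worth, the label and the article title do not always match; a sketch that pulls the actual enwiki sitelink via schema:about instead of reconstructing the URL from the label (the sitelink triples are part of the WDQS data):

```sparql
# Item, its English Wikipedia article URL, and its Twitter handle (P2002).
SELECT ?item ?article ?twitter
WHERE
{
  ?item wdt:P2002 ?twitter .
  ?article schema:about ?item ;
           schema:isPartOf <https://en.wikipedia.org/> .
}
```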
On Mon, Nov 9, 2015 at 11:47 AM, Hampton Snowball wrote:
> Thank you. Is there a way to export it though with the Wikipedia Article
>
> I'm a beginner at SPARQL and was happy I could get this result.
>
>
> On 9-11-2015 at 18:13, Hampton Snowball wrote:
>
> May
?Twitter = "ADFP_Peru") }
> {?item rdfs:label ?item_label filter (lang(?item_label) = "en") .}
> {
> ?article schema:about ?item .
> FILTER (SUBSTR(str(?article), 1, 25) = "https://en.wikipedia.org/")
> }
> }
>
>
>
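The beginning of the quoted query is cut off in the archive; a complete version consistent with the visible fragment might look like the following (ADFP_Peru is the handle from the original; the opening lines are reconstruction):

```sparql
SELECT ?item ?item_label ?article
WHERE
{
  ?item wdt:P2002 ?Twitter .
  FILTER (?Twitter = "ADFP_Peru")
  ?item rdfs:label ?item_label FILTER (lang(?item_label) = "en") .
  ?article schema:about ?item .
  FILTER (SUBSTR(str(?article), 1, 25) = "https://en.wikipedia.org/")
}
```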
> On Mon,
I think a lot of the Freebase handles may be out of date or incorrect,
based on what I previously saw.
On Mon, Nov 9, 2015 at 2:56 PM, Tom Morris wrote:
> Freebase has another 18,000 Twitter handles which are linked to IMDB, G+,
> etc which don't have English Wikipedia links (as well as 13K which are
Hello,
I am interested in a subset of Wikidata and I am trying to find the best
way to get it without getting a larger dataset than necessary.
Is there a way to just get the "bios" that appear on the wikidata pages
below the name of the person/organization, as well as the link to the
english wiki
a.org/wiki/H%C3%BClya
Thanks in advance!
On Sun, Jan 31, 2016 at 3:53 PM, Edgard Marx wrote:
> Hey,
> you can simply use RDFSlice (https://bitbucket.org/emarx/rdfslice/overview)
> directly on the dump file (
> https://dumps.wikimedia.org/wikidatawiki/entities/20160125/)
>
> *For English bios:*
>
> SELECT *
> WHERE
> {
>   ?s <http://schema.org/description> ?o .
>   FILTER(lang(?o) = 'en') .
> }
>
> *For all language bios:*
>
> SELECT *
> WHERE
> {
>   <http://www.wikidata.org/entity/Q1652291> <http://schema.org/description> ?o .
> }
Thanks.
I only plan on using a query to extract from all
English Wikidata "articles" (or all articles) anyway, so hopefully the
other queries will work.
On Mon, Feb 1, 2016 at 4:33 PM, Stas Malyshev
wrote:
> Hi!
>
> > *The first one, which seems to be only for 1 record, just as a test
> > see
>
> On Sun, Jan 31, 2016 at 7:43 PM, Hampton Snowball <
> hamptonsnowb...@gmail.com> wrote:
>
>> Hello,
>>
>> I am interested in a subset of wikidata and I am trying to find the best
>> way to get it without getting a larger dataset than necessary.
>
Of course I meant sorry if this is a dumb question :)
On Mon, Feb 1, 2016 at 7:13 PM, Hampton Snowball
wrote:
> Sorry if this is a dump question (I'm not a developer). To run the
> command it mentions (" java -jar rdfslice.jar
> -source | -pat
last number changes).
I thought it might be a memory issue, but I haven't had luck increasing
the memory with -Xmx2G (or 3G, 4G). Any tips would be
appreciated.
Thanks
On Mon, Feb 1, 2016 at 7:28 PM, Hampton Snowball
wrote:
> Of course I meant sorry if this is a dumb question :)
Hello,
I am trying to query the Wikidata service for the IMDb ID + the Wikipedia
article of the person. English-only Wikipedia is okay too.
I previously was able to use the following to include the
English Wikipedia in the query, which is included below. I've tried the
query below, but believe I am doing something wrong.
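The query itself did not survive in the archive; a hedged sketch of the kind of query described (P345 is the IMDb ID property; the schema:isPartOf pattern restricts to English Wikipedia):

```sparql
# Items with an IMDb ID (P345) and their English Wikipedia article.
SELECT ?item ?imdb ?article
WHERE
{
  ?item wdt:P345 ?imdb .
  ?article schema:about ?item ;
           schema:isPartOf <https://en.wikipedia.org/> .
}
```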
I took another query from this list and have been trying to modify it to
get stock symbols and their Wikipedia pages. But I am only getting 64
results. Am I doing something wrong that is filtering the results?
http://tinyurl.com/go8zpvy
Thanks in advance!
Hampton
tly, instead they use it as a qualifier on P414, eg
> https://www.wikidata.org/wiki/Q156578 or
> https://www.wikidata.org/wiki/Q156238
>
> These won't show up on a straight search. I'm not familiar with how
> searching for qualifiers is best done, but at least it narrows down
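Following that hint, a sketch of a query that catches P249 (ticker symbol) when it is used as a qualifier on P414 (stock exchange) rather than as a direct statement; p:/ps:/pq: are the standard WDQS statement and qualifier prefixes:

```sparql
# Ticker symbols (P249) stored as qualifiers on stock-exchange (P414) claims.
SELECT ?item ?itemLabel ?exchangeLabel ?symbol
WHERE
{
  ?item p:P414 ?statement .
  ?statement ps:P414 ?exchange ;
             pq:P249 ?symbol .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```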
Thank you Andrew. You are pointing out that simply trying to export P249
alone is not working properly, or needs a different query, I see.
On Thu, Oct 13, 2016 at 3:02 PM, Hampton Snowball wrote:
> When I go to the 'What links here' for P249, it seems like a lot more
> (hund
Hi Stas - Thank you so much for your response! It seems the difference
between 1 and 2 is that, for the same company name, there are multiple
symbols?
In terms of labeling the symbols - it'd be nice to have that label
information (NYSE) in another column, however I can get away with this as
is so th