[Wikidata] Re: Weekly Summary #645

2024-09-19 Thread Magnus Manske via Wikidata
The Human Cell Atlas presentation link appears to be broken; the correct
one is https://tiago.bio.br/phd_defense/

On Mon, Sep 16, 2024 at 4:41 PM Mohammed Sadat Abdulai <
mohammed.abdu...@wikimedia.de> wrote:

>
>
> * Here's your quick overview of what has been happening around Wikidata in
> the week leading up to 2024-09-16. Please help translate.
> Missed the previous one? See issue #644. *
>
> * Discussions *
>
>- New requests for permissions/Bot: Framabot 5
>- Task: fix a typographical error in the French description of homonym
>pages, seen on 1.
>- New request for comments: Additional rights for bureaucrats
>- The proposal suggests allowing Wikidata bureaucrats to remove admin
>rights, which they currently cannot do, to streamline processes, reduce
>reliance on stewards, and align with practices of other wikis.
>- Proposal: Mul labels - proposal of massive addition
>- The proposal suggests massively adding "mul" labels to Wikidata items for
>given and family names, using a bot to streamline the process and reduce
>redundant labels.
>
> * Events *
>
>- Next Linked Data for Libraries LD4 Wikidata Affinity Group
>session, 17 September 2024: We have our next LD4 Wikidata Affinity Group
>Session on Tuesday, 17 September 2024 at 9am PT / 12pm ET / 17:00 UTC /
>6pm CET (Time zone converter). Christa Strickler will
>be our first Project Series lead with her joint project with the Wikidata
>Religion & Theology Community of Practice to contribute biographical data
>to Wikidata from the IRFA database using the Mix'n'Match tool. We are
>excited to learn more about this project, provide a forum for discussion
>and shared learning, and lend a hand while building new skills. Event
>page: Session 2 (September 17) - Working session using Mix'n'Match to add
>Wikidata items
>- Wikidata Day 2024 (Seattle)
>- Agenda: Wikidata Twelfth Birthday, Training and Edit-a-thon. When:
>Saturday, October 26, from 12:30–4:30pm PDT
>
> * Press, articles, blog posts, videos *
>
>- Blogs
>   - Looking for Aotearoa's next roving Wikipedian, a Wikidata Te Papa
>   research expeditions publication & the Wikidata WikiProject IBC
>   follow-up workshop
>   - The Aotearoa Wikipedian at Large worked with multiple institutions in
>   2024, contributing to Wikidata by improving museum exhibition models,
>   creating articles, and collaborating on various projects, including
>   biological field trips and entomology, while also engaging with the
>   local Christchurch editing community.
>   - Wikimedians-in-residence assigned to add lexicographical data of
>   5 endangered languages of West Bengal
>   - The West Bengal Wikimedians User Group, in collaboration with Jadavpur
>   University, has appointed five linguistics students as
>   Wikimedians-in-residence to add lexicographical data for five endangered
>   languages of West Bengal to Wikidata, contributing to their preservation
>   and digital accessibility.
>   - Cooperation between National Library and Wikimedia CR was
>   presented at Wikimania 2024
>   - Wikimedia Czech Republic presented their long-standing collaboration
>   with the National Library at Wikimania 2024, highlighting joint
>   educational and community initiatives, along with additional sessions
>   on media education and successful campaigns during the event.
>   - Vacancy Wikimedian in Residence for Wikipedia on Aruba - Aruba on
>   Wikipedi

[Wikidata] Re: force a Mix'n'match catalog re-scraping

2022-05-30 Thread Magnus Manske via Wikidata
Hi Jean-Baptiste,

by default, suitable catalogs are re-scraped once every three months. I
will see if I can change it to one month in this case.

I have also manually started auto-scraping from this page:
https://mix-n-match.toolforge.org/#/jobs/1679
so the catalog should be updated soon.

Cheers,
Magnus

On Wed, May 18, 2022 at 7:49 AM Jean-Baptiste Pressac <
jean-baptiste.pres...@univ-brest.fr> wrote:

> Hello,
>
> Is there a way to force the re-scraping of a Mix'n'match catalog? The
> catalog of a database I manage
> (https://mix-n-match.toolforge.org/#/catalog/1679) has not been scraped
> for a month, while the website is regularly updated.
>
> Subsidiary question: Is there a way to set the frequency of the
> re-scraping of a catalog?
>
> Thank you,
>
> --
> Jean-Baptiste Pressac
>
> Conception, traitement et analyse de bases de données
> Production et diffusion de corpus numériques
>
> Centre de Recherche Bretonne et Celtique
> Unité d’Appui et de Recherche (UAR) 3554
> Bureau C310
> 20 rue Duquesne
> CS 93837
> 29238 Brest cedex 3
>
> tel : +33 (0)2 98 01 68 95
> fax : +33 (0)2 98 01 63 93
> ___
> Wikidata mailing list -- wikidata@lists.wikimedia.org
> To unsubscribe send an email to wikidata-le...@lists.wikimedia.org
>
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Wikidata] Re: [Wikimedia-l] Re: Toolhub 1.0 is launched! Discover software tools used at Wikimedia

2021-10-21 Thread Magnus Manske via Wikidata
Hi all,

I think toolhub is great!

I wrote a blog post about integrating some tools of mine that are missing
from the hub right now:
http://magnusmanske.de/wordpress/?p=658

Cheers,
Magnus

On Fri, Oct 15, 2021 at 10:12 PM Bryan Davis  wrote:

> Leila Zia wrote:
> > Hi Bodhisattwa,
> >
> > See below.
> >
> > On Fri, Oct 15, 2021 at 1:41 PM Bodhisattwa Mandal
> > <bodhisattwa.rgkmc(a)gmail.com> wrote:
> > >
> > >  Also, is there any way to add info on existing tools?
> > I learned from Bryan yesterday that this is possible via toolsadmin
> > (Bryan helped with one of our tools yesterday via
> > https://toolsadmin.wikimedia.org/tools/id/wikinav for that particular
> > tool). Toolhub then updates the list of tools every hour and you
> > should see your tool in toolhub. I hope this helps.
>
> Leila is correct that if a record is being imported from the list of
> toolinfo.json that Toolforge publishes at
> https://toolsadmin.wikimedia.org/tools/toolinfo/v1/toolinfo.json it can
> be updated using the https://toolsadmin.wikimedia.org/ interface.
>
> More generally, toolinfo records can be edited at the source they entered
> Toolhub from. This is either a toolinfo.json file read by Toolhub's web
> crawler component, or Toolhub's UI & API. The UI is an API client, so
> Toolhub does not differentiate here.
>
> >> While adding tools into the hub, it would be great if there remains an
> option to add the contributors of the tool in addition to the primary
> developer.
>
> I think this is similar to the question at <
> https://meta.wikimedia.org/wiki/Talk:Toolhub#Multiple_authors>.
>
> I would really appreciate additional questions being sent either on the <
> https://meta.wikimedia.org/wiki/Talk:Toolhub> talk page or filed as
> Phabricator tasks under <https://phabricator.wikimedia.org/tag/toolhub/>.
> I love that y'all are exploring the tool and asking good questions, but it
> is difficult to respond to messages cross posted to multiple mailing lists
> (some of which I'm not subscribed to).
>
> Bryan
> --
> Bryan Davis  Technical Engagement  Wikimedia Foundation
> Principal Software Engineer   Boise, ID USA
> [[m:User:BDavis_(WMF)]]  irc: bd808
> ___
> Wikidata mailing list -- wikidata@lists.wikimedia.org
> To unsubscribe send an email to wikidata-le...@lists.wikimedia.org
>
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


Re: [Wikidata] Property to encode the skills of a person

2020-10-10 Thread Magnus Sälgö
Interesting. I guess we should somehow coordinate what we do in this area with
skills and occupation codes...

I had a meeting yesterday (see
T264852<https://phabricator.wikimedia.org/T264852>) with a Swedish organization,
Jobtech<https://jobtechdev.se/en/>, and I presented Wikidata to see if it could
add value.

Jobtech is an initiative where the Swedish Public Employment
Service<https://arbetsformedlingen.se/other-languages/english-engelska> and
other organisations are looking into building a data-driven platform that should
help match people looking for work with employers looking for skills.

Jobtech has their own Taxonomy<https://jobtechdev.se/en/docs/apis/taxonomy/>,
and I will look into whether Wikidata can add value to them and/or whether to
create a Wikidata property for Jobtech.

As Thad points out, there are already some properties for skills, and last
week I added the Swedish one for occupation codes, SSYK
(Property:P8654<https://www.wikidata.org/wiki/Property:P8654>).

Anyone with thoughts in this area, please contact me. I can see that a global
labour market can gain from describing skills, occupations and education
with linked data, and maybe Wikidata can be part of this...

Regards
Magnus Sälgö
Stockholm, Sweden
User: salgo60<https://www.wikidata.org/wiki/User:Salgo60>
Linked in<https://www.linkedin.com/in/magnus-s%C3%A4lg%C3%B6-148890/>


From: Wikidata  on behalf of Wiljes, Cord 

Sent: Saturday, October 10, 2020 12:07 AM
To: Discussion list for the Wikidata project 
Subject: Re: [Wikidata] Property to encode the skills of a person


Hi Thad,



thank you for your very helpful answer and great advice. I would prefer not to 
use “interested in”, because that is often different from “having a skill” 
(though there certainly is some statistical correlation). I am interested in 
art, but have absolutely no skill in drawing. On the other hand, I have 
substantial skill in doing tax declarations, but absolutely no interest.



I understand that Wikidata aims to capture facts that can – or could -  be 
verified by external sources. Therefore, skills are only relevant for Wikidata 
if they are certified by an institution. However, “interested in” seems even 
more subjective and difficult to verify.



Best wishes,

Cord





Von: Wikidata [mailto:wikidata-boun...@lists.wikimedia.org] Im Auftrag von Thad 
Guidry
Gesendet: Freitag, 9. Oktober 2020 21:41
An: Discussion list for the Wikidata project 
Betreff: Re: [Wikidata] Property to encode the skills of a person



Hi Cord,



We do not have such a property, for various reasons: historically because of
fear of extra vandalism (which I don't completely agree with), the difficulty
of adding references to support the statement claims (I agree that's hard, and
that is why certified_as is being discussed below and why P4968 was added to
help), among others.



I would suggest looking at the following Property proposals:



https://www.wikidata.org/wiki/Wikidata:Property_proposal/certified_as

https://www.wikidata.org/wiki/Wikidata:Property_proposal/ESCO_Skill



as well as:



https://www.wikidata.org/wiki/Property:P4968

https://www.wikidata.org/wiki/Property:P1576

https://www.wikidata.org/wiki/Property:P101

https://www.wikidata.org/wiki/Property:P106

https://www.wikidata.org/wiki/Property:P8258

https://www.wikidata.org/wiki/Property:P2650


I think your closest ally to help with an immediate problem would be to reframe
it as "this person -> interested in
P2650<https://www.wikidata.org/wiki/Property:P2650> -> food toxicology", with
the idea that they are not only interested but also skilled or specialized in
some field of study or area of research. It's about the best you can do for
now... but perhaps the other listed properties above help you more, depending
on the context. For instance, a prominent, notable professor or researcher can
be said to be "skilled" in "ancient history", but it is more likely their
"field of work" or "field of training" is "ancient history".



For languages, you can already use languages spoken, written or signed
P1412<https://www.wikidata.org/entity/P1412> and native language
P103<https://www.wikidata.org/entity/P103>.



It's always best to look at the properties for this type
P1963<https://www.wikidata.org/wiki/Property:P1963> on any particular entity
type, such as looking and scrolling down on Q5
human<https://www.wikidata.org/wiki/Q5> or Q901
scientist<https://www.wikidata.org/wiki/Q901>, which already list many of those
properties above. Whatever is in properties for this
type<https://www.wikidata.org/wiki/Property:P1963> is further used as a
dropdown statement hint, and it's based on whether you apply an instance
of<https://www.wikidata.org/wiki/Property:P31> statement and fill in a more
specific type fo

Re: [Wikidata] a question of data modeling about family ties

2020-10-05 Thread Magnus Sälgö
1) The father of Djingis Khan is not in the tree.

Answer: he is in the query; isn't that enough?

2) The fact that there are two different properties, one for the child-father
link and one for the child-mother link, makes the query much more difficult in
this case.

Answer: you can model things in many ways; why not just use the child
property and skip the parent properties for mother/father? 😉
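
For the child-to-parent direction, the two parent properties need not block a
query either: a minimal sketch on Wikidata itself (assuming its father P22 and
mother P25; on FactGrid the pair P141/P142 would be substituted), using SPARQL
property-path alternation to walk both links in one step:

# Sketch: all recorded ancestors of Djingis Khan (Q720) via father or mother
SELECT ?ancestor ?ancestorLabel WHERE {
  wd:Q720 (wdt:P22|wdt:P25)+ ?ancestor .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}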

Regards
Magnus Sälgö
Stockholm, Sweden


From: Wikidata  on behalf of Bruno 
Belhoste 
Sent: Monday, October 5, 2020 7:10 PM
To: wikidata@lists.wikimedia.org 
Subject: Re: [Wikidata] a question of data modeling about family ties

It works well, but it's a descendant tree (the father of Djingis Khan is not in 
the tree). My question is how to explore family trees in the opposite 
direction, I mean from children to parents, and more generally to get all the 
relatives of a given person. The fact that there are two different properties, 
one for the child-father link and one for the child-mother link, makes the 
query much more difficult in this case.

On 05/10/2020 at 15:58, Magnus Sälgö wrote:
Have you tested gas:traversalDirection? Below we follow the child property
(wdt:P40) with gas:traversalDirection "Undirected".

example https://w.wiki/fDz

#People related to Djingis Khan
#defaultView:Graph
PREFIX gas: <http://www.bigdata.com/rdf/gas#>

SELECT ?item ?itemLabel ?pic ?WikiTreef ?FindAGravef ?linkTo {
  SERVICE gas:service {
gas:program gas:gasClass "com.bigdata.rdf.graph.analytics.SSSP" ;
gas:in wd:Q720 ;
gas:traversalDirection "Undirected" ;
gas:out ?item ;
gas:out1 ?depth ;
gas:maxIterations 4 ;
gas:linkType wdt:P40 .
  }
  OPTIONAL { ?item wdt:P40 ?linkTo }
  OPTIONAL { ?item wdt:P18 ?pic }
  SERVICE wikibase:label {bd:serviceParam wikibase:language "sv" }
}


From: Wikidata <wikidata-boun...@lists.wikimedia.org> on behalf of Olaf Simons
<olaf.sim...@pierre-marteau.com>
Sent: Monday, October 5, 2020 1:53 PM
To: Discussion list for the Wikidata project <wikidata@lists.wikimedia.org>
Subject: Re: [Wikidata] a question of data modeling about family ties

Dear Wikidata list,

one of our big contributors, Bruno Belhoste in Paris, just gave me a weird 
question which I feel I cannot answer that fast.

We, FactGrid, are a Wikidata child and so it came that we adopted structures 
like the father/mother differentiation. Bruno's question is now whether it 
would not have been wise to just create just a "parent" option instead of the 
gender fork that splits the lineages. Let me insert his mail:


> Dear Olaf,
>
> I have a question of data modeling in FactGrid concerning family ties.
> We have one property for children (P150) and also one for siblings
> (P203), but two different properties for parents: father (P141) and
> mother (P142). The problem is: how can you get all the members of the
> family of somebody? One have (1) to explore both the matrilinear trees
> and the patrilinear trees at each generation to get all the ancestors
> and (2) to get all the descents of these ancestors. It is very complex.
> It would be much simpler to have only one property including the father
> and the mother.
>
> In case we choose this solution, it would be easy to transform all the
> triples with P141 or P142 into triples with the new property. What do
> you think about that? Have you another solution for querying family links?
>
> All the best,
>
> Bruno

I wonder what people with more data knowledge think about the proposal (which 
would affect just our site, not Wikidata...) (so: no worries, we can do 
experimental things without affecting our parent - I hesitate to speak of a 
father or mother - Wikidata).

Best,
Olaf




Dr. Olaf Simons
Forschungszentrum Gotha der Universität Erfurt
Schloss Friedenstein, Pagenhaus
99867 Gotha
Büro: +49-361-737-1722
Mobil: +49-179-5196880
Privat: Hauptmarkt 17b/ 99867 Gotha

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata



___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] a question of data modeling about family ties

2020-10-05 Thread Magnus Sälgö
Have you tested gas:traversalDirection? Below we follow the child property
(wdt:P40) with gas:traversalDirection "Undirected".

example https://w.wiki/fDz

#People related to Djingis Khan
#defaultView:Graph
PREFIX gas: <http://www.bigdata.com/rdf/gas#>

SELECT ?item ?itemLabel ?pic ?WikiTreef ?FindAGravef ?linkTo {
  SERVICE gas:service {
gas:program gas:gasClass "com.bigdata.rdf.graph.analytics.SSSP" ;
gas:in wd:Q720 ;
gas:traversalDirection "Undirected" ;
gas:out ?item ;
gas:out1 ?depth ;
gas:maxIterations 4 ;
gas:linkType wdt:P40 .
  }
  OPTIONAL { ?item wdt:P40 ?linkTo }
  OPTIONAL { ?item wdt:P18 ?pic }
  SERVICE wikibase:label {bd:serviceParam wikibase:language "sv" }
}


From: Wikidata  on behalf of Olaf Simons 

Sent: Monday, October 5, 2020 1:53 PM
To: Discussion list for the Wikidata project. 
Subject: Re: [Wikidata] a question of data modeling about family ties

Dear Wikidata list,

one of our big contributors, Bruno Belhoste in Paris, just gave me a weird 
question which I feel I cannot answer that fast.

We, FactGrid, are a Wikidata child and so it came that we adopted structures 
like the father/mother differentiation. Bruno's question is now whether it 
would not have been wise to just create just a "parent" option instead of the 
gender fork that splits the lineages. Let me insert his mail:


> Dear Olaf,
>
> I have a question of data modeling in FactGrid concerning family ties.
> We have one property for children (P150) and also one for siblings
> (P203), but two different properties for parents: father (P141) and
> mother (P142). The problem is: how can you get all the members of the
> family of somebody? One have (1) to explore both the matrilinear trees
> and the patrilinear trees at each generation to get all the ancestors
> and (2) to get all the descents of these ancestors. It is very complex.
> It would be much simpler to have only one property including the father
> and the mother.
>
> In case we choose this solution, it would be easy to transform all the
> triples with P141 or P142 into triples with the new property. What do
> you think about that? Have you another solution for querying family links?
>
> All the best,
>
> Bruno

I wonder what people with more data knowledge think about the proposal (which 
would affect just our site, not Wikidata...) (so: no worries, we can do 
experimental things without affecting our parent - I hesitate to speak of a 
father or mother - Wikidata).

Best,
Olaf




Dr. Olaf Simons
Forschungszentrum Gotha der Universität Erfurt
Schloss Friedenstein, Pagenhaus
99867 Gotha
Büro: +49-361-737-1722
Mobil: +49-179-5196880
Privat: Hauptmarkt 17b/ 99867 Gotha

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Questions about Mix'n'match tool

2020-05-08 Thread Magnus Manske
These days, all Mix'n'match edits should contain a backlink to the
Mix'n'match entry in the edit summary (I think).

MnM will never "overwrite" human edits. It might add a second statement for
a property to an item, but for some catalogs/properties that's a valid
thing to do, so I'm hesitant to block such an option.

Also, the value on Wikidata might be wrong, and the MnM edit is correct. In
which case, the constraint violation will highlight the issue, and it can
be resolved on both Wikidata and MnM.

In related news, I have been collecting issues between Wikidata and MnM, in
case someone wants to help resolve them (some might be resolved already):
https://tools.wmflabs.org/mix-n-match/#/issues


On Thu, May 7, 2020 at 7:16 PM Tom Morris  wrote:

> Speaking of workflow, this Mix'n'Match report page
> <https://www.wikidata.org/wiki/User:Magnus_Manske/Mix%27n%27match_report/98> 
> says
> "*If you fix something from this list on Wikidata, please fix it on
> Mix'n'match as well, if applicable*" without giving any directions or
> hints as to how one might accomplish that. I just cleaned up "Multiple
> Wikidata items with external ID OL4859603A
> <https://openlibrary.org/authors/OL4859603A/Pierre_Vidal> : Pierre Vidal
> (Q18002076) <https://www.wikidata.org/wiki/Q18002076>, Pierre Vidal
> (Q3387281) <https://www.wikidata.org/wiki/Q3387281>".
>
> How would I tell Mix'n'Match about that? Or, better yet, why can't the
> tool be taught not to overwrite human edits (particularly reverts)? To be
> fair, the original confusion is understandable, because the two
> entries/gentlemen are very confusable.
>
> Tom
>
> On Wed, May 6, 2020 at 4:28 AM Magnus Manske 
> wrote:
>
>> Hi,
>>
>> I am the author of Mix'n'match, so I hope I can answer your questions.
>>
>> Match mode:
>> By default, "match mode" only shows unmatched entries, example:
>> https://tools.wmflabs.org/mix-n-match/#/random/473
>>
>> You can force pre-matched entries, but currently they won't show the
>> automatic predictions:
>> https://tools.wmflabs.org/mix-n-match/#/random/473/prematched
>>
>> If you/others like it, I can have predictions show for auto-matched as
>> well, and/or mix unmatched and pre-matched in results.
>>
>> "Next entry" (aka skip) will never change the bucket
>> (pre-matched/unmatched).
>>
>>
>> Mobile game:
>> The mobile game shows unmatched only. There is currently no override.
>> This would be easy to change as well, if desired, though I'd probably
>> have to show/highlight the pre-matched entry somehow.
>>
>> "Skip" will never change the bucket (pre-matched/unmatched).
>>
>>
>> Visual tool:
>> The visual tool will show entries from both pre-matched and unmatched.
>>
>> "Load another one" (aka "skip") will never change the bucket
>> (pre-matched/unmatched).
>>
>> I hope that answers your questions, please let me know if there is
>> anything else I can do!
>>
>> Cheers,
>> Magnus
>>
>> On Wed, May 6, 2020 at 7:26 AM Palonen, Tuomas Y E <
>> tuomas.palo...@helsinki.fi> wrote:
>>
>>> Hello,
>>>
>>> I am an information specialist at the National Library of Finland, where
>>> I am linking our General Finnish Ontology YSO (30,000+ concepts with terms
>>> in Finnish-Swedish-English) to Wikidata at the moment. I just joined this
>>> mailing list. I am currently using Mix'n'match tool and would have a couple
>>> of questions. I would be very happy for any answers or contact info to
>>> someone who might have the answers. Thank you very much! Here are my
>>> questions:
>>>
>>>- Is there a connection in Mix'n'match between the Preliminarily
>>>matched/Unmatched division and any of the three: Match mode, Mobile
>>>matching, Visual tool?
>>>- More precisely, are the link suggestions provided by Match mode /
>>>Mobile matching / Visual tool created only from Preliminary matched list,
>>>only from Unmatched list or from both?
>>>- Also, if I reject a link suggestion in Match mode / Mobile
>>>matching / Visual tool, will that concept be added to the Unmatched list?
>>>
>>> This would be important for my own (and possibly anybody else's)
>>> Mix'n'match work flow / method. If it was up to me, I would suggest
>>> rejected concepts not be added to t

Re: [Wikidata] Questions about Mix'n'match tool

2020-05-07 Thread Magnus Manske
Maybe best to continue on my talk page:
https://www.wikidata.org/wiki/User_talk:Magnus_Manske

On Thu, May 7, 2020 at 1:31 PM Palonen, Tuomas Y E <
tuomas.palo...@helsinki.fi> wrote:

> Hey,
>
> Many thanks for your answers, Magnus! For now, it's good to know that
> rejecting/skipping doesn't change the bucket. I'm testing all these tools
> at the moment, writing down general notions and development wishes. I could
> present these later, after I've put some time into this.
>
> On the other hand, being new to the game, should we continue this
> discussion here or should I contact you, Magnus, directly? In other words,
> is this mailing list the right place to discuss fine details of a specific
> Wikidata linking tool or no? I'd be happy to get any guidance and wouldn't
> want to trouble everyone with all too specific issues if such is the case.
> Thank you!
>
> Best,
> Tuomas / National Library of Finland
> --
> *From:* Wikidata  on behalf of
> Magnus Manske 
> *Sent:* Wednesday, 6 May 2020 11:00
> *To:* Discussion list for the Wikidata project <
> wikidata@lists.wikimedia.org>
> *Subject:* Re: [Wikidata] Questions about Mix'n'match tool
>
> Hi,
>
> I am the author of Mix'n'match, so I hope I can answer your questions.
>
> Match mode:
> By default, "match mode" only shows unmatched entries, example:
> https://tools.wmflabs.org/mix-n-match/#/random/473
>
> You can force pre-matched entries, but currently they won't show the
> automatic predictions:
> https://tools.wmflabs.org/mix-n-match/#/random/473/prematched
>
> If you/others like it, I can have predictions show for auto-matched as
> well, and/or mix unmatched and pre-matched in results.
>
> "Next entry" (aka skip) will never change the bucket
> (pre-matched/unmatched).
>
>
> Mobile game:
> The mobile game shows unmatched only. There is currently no override.
> This would be easy to change as well, if desired, though I'd probably have
> to show/highlight the pre-matched entry somehow.
>
> "Skip" will never change the bucket (pre-matched/unmatched).
>
>
> Visual tool:
> The visual tool will show entries from both pre-matched and unmatched.
>
> "Load another one" (aka "skip") will never change the bucket
> (pre-matched/unmatched).
>
> I hope that answers your questions, please let me know if there is
> anything else I can do!
>
> Cheers,
> Magnus
>
> On Wed, May 6, 2020 at 7:26 AM Palonen, Tuomas Y E <
> tuomas.palo...@helsinki.fi> wrote:
>
> Hello,
>
> I am an information specialist at the National Library of Finland, where I
> am linking our General Finnish Ontology YSO (30,000+ concepts with terms in
> Finnish-Swedish-English) to Wikidata at the moment. I just joined this
> mailing list. I am currently using Mix'n'match tool and would have a couple
> of questions. I would be very happy for any answers or contact info to
> someone who might have the answers. Thank you very much! Here are my
> questions:
>
>- Is there a connection in Mix'n'match between the Preliminarily
>matched/Unmatched division and any of the three: Match mode, Mobile
>matching, Visual tool?
>- More precisely, are the link suggestions provided by Match mode /
>Mobile matching / Visual tool created only from Preliminary matched list,
>only from Unmatched list or from both?
>- Also, if I reject a link suggestion in Match mode / Mobile matching
>/ Visual tool, will that concept be added to the Unmatched list?
>
> This would be important for my own (and possibly anybody else's)
> Mix'n'match work flow / method. If it was up to me, I would suggest
> rejected concepts not be added to the Unmatched list, but that's just me. I
> am planning to write a report of my linking project in the future, where I
> will most likely include some development suggestions / wishes. One
> immediate wish is that it would be great to tag or somehow put aside link
> suggestions that require more research and cannot be decided on the spot
> (for example, Wikidata items may include poor Finnish/Swedish terms and may
> require some corrections before I can do the linking).
>
> Thanks for your help!
> Best regards,
> Tuomas / National Library of Finland
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Questions about Mix'n'match tool

2020-05-06 Thread Magnus Manske
Hi,

I am the author of Mix'n'match, so I hope I can answer your questions.

Match mode:
By default, "match mode" only shows unmatched entries, example:
https://tools.wmflabs.org/mix-n-match/#/random/473

You can force pre-matched entries, but currently they won't show the
automatic predictions:
https://tools.wmflabs.org/mix-n-match/#/random/473/prematched

If you/others like it, I can have predictions show for auto-matched as
well, and/or mix unmatched and pre-matched in results.

"Next entry" (aka skip) will never change the bucket
(pre-matched/unmatched).


Mobile game:
The mobile game shows unmatched only. There is currently no override.
This would be easy to change as well, if desired, though I'd probably have
to show/highlight the pre-matched entry somehow.

"Skip" will never change the bucket (pre-matched/unmatched).


Visual tool:
The visual tool will show entries from both pre-matched and unmatched.

"Load another one" (aka "skip") will never change the bucket
(pre-matched/unmatched).

I hope that answers your questions, please let me know if there is anything
else I can do!

Cheers,
Magnus

On Wed, May 6, 2020 at 7:26 AM Palonen, Tuomas Y E <
tuomas.palo...@helsinki.fi> wrote:

> Hello,
>
> I am an information specialist at the National Library of Finland, where I
> am linking our General Finnish Ontology YSO (30,000+ concepts with terms in
> Finnish-Swedish-English) to Wikidata at the moment. I just joined this
> mailing list. I am currently using Mix'n'match tool and would have a couple
> of questions. I would be very happy for any answers or contact info to
> someone who might have the answers. Thank you very much! Here are my
> questions:
>
>- Is there a connection in Mix'n'match between the Preliminarily
>matched/Unmatched division and any of the three: Match mode, Mobile
>matching, Visual tool?
>- More precisely, are the link suggestions provided by Match mode /
>Mobile matching / Visual tool created only from Preliminary matched list,
>only from Unmatched list or from both?
>- Also, if I reject a link suggestion in Match mode / Mobile matching
>/ Visual tool, will that concept be added to the Unmatched list?
>
> This would be important for my own (and possibly anybody else's)
> Mix'n'match work flow / method. If it was up to me, I would suggest
> rejected concepts not be added to the Unmatched list, but that's just me. I
> am planning to write a report of my linking project in the future, where I
> will most likely include some development suggestions / wishes. One
> immediate wish is that it would be great to tag or somehow put aside link
> suggestions that require more research and cannot be decided on the spot
> (for example, Wikidata items may include poor Finnish/Swedish terms and may
> require some corrections before I can do the linking).
>
> Thanks for your help!
> Best regards,
> Tuomas / National Library of Finland
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] prosopographical data and insert performance

2020-04-12 Thread Magnus Manske
Hi,

I can't speak to the wikibase capabilities directly, but QS via API will
always take a bit of time.

One could adapt my Rust core of QuickStatements [1], which also comes with
an (experimental but quite advanced) QS syntax parser, generate the JSON
for each item, and manually insert it into the `page` table.
Parsing and JSON generation would be very fast, and the data addition could
be bulk SQL (eg, INSERT thousands of VALUE sets in one command).

Then you'd have to run the metadata update script that comes with
MediaWiki/wikibase to get all the links etc. correct and updated. Probably
something similar for the SPARQL. And some minor tweaks in the database I
guess.

Caveat: It would take a bit of fiddling to adapt the Rust QS to this. Might
be worth it as a general solution though, if people would be interested in
this.

Cheers,
Magnus

[1] https://github.com/magnusmanske/petscan_rs

On Sun, Apr 12, 2020 at 6:18 AM Dr. Jesper Zedlitz 
wrote:

> Hopefully this is the right mailing list for my topic.
>
> The German Verein für Computergenealogie is the largest genealogical
> society in Germany with more than 3,700 members. We are currently
> considering whether Wikibase is a suitable system for us. Most
> interesting is the use for our *prosopographical data*.
>
> Prosopographical data can be divided into three classes:
>
> a) well-known and well-studied personalities, typically authors
> b) lesser-known but well-studied personalities that can be clearly and
> easily identified in historical sources
> c) persons whose identifiability in various sources (such as church
> records, civil record, city directory) has to be established using
> (mostly manual) record linkage
>
> Data from (a) can be found in the GND of the German National Libarary.
> For data from class (b) systems such as FactGrid exists. The Verein für
> Computergenealogie mostly works with data from class (c). We have a huge
> amount of that kind of data, more than 40 million records. Currently it
> is stored in several MySQL and MongoDB databases.
>
> This leads me to the crucial question: Is the performance of Wikibase
> sufficient for such an amount of data? One record for a person will
> typically result in maybe ten statements in Wikibase. Using
> QuickStatements or the WDI library I have not been able to insert more
> than two or three statements per second. It would take month to import
> the data.
>
> Another question is whether the edit history of the entries can be
> preserved. For some data set the edit history goes back to 2004.
>
> I hope someone can give me hints on these questions.
>
> Best wishes
> Jesper
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] WDQS and SPARQL Endpoint Compatibility

2020-03-30 Thread Magnus Sälgö
They are, but it seems to be no priority; see the comments

  *   Nov 21 2019, 8:32 PM
  *   Feb 20 2020, 2:35 PM

from Gehel<https://phabricator.wikimedia.org/p/Gehel/> on my task
T234431#5682417<https://phabricator.wikimedia.org/T234431#5682417>:

Sorry again for the delay. We are focusing on stabilizing WDQS at the moment. 
We don't have a good understanding of the impact of federation on the stability 
of the service, especially since this 'SPARQL engine they have is "shaky"'. So 
it is unlikely we will add more complexity until we are in a much more stable
situation.

T234431 Get Tora on the SPARQL Federation
whitelist<https://phabricator.wikimedia.org/T234431#5682417>
Hi @Gehel, just an indication of whether we can plan on this working or not
would be of great help... I hope to focus on better connecting the Swedish
National Archive with Wikidata at the end of this week (see T237950: Feedback
errors in Wikidata <-> TORA), as I feel Wikidata has a better SPARQL engine
compared to the TORA project. I feel getting TORA on the whitelist will help
us a lot...

As the TORA connection will never be heavily used but is important when
cleaning data, it's sad... and I have given up.
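
For anyone following along, a federated query has this general shape (a sketch
only: the endpoint URL below is a placeholder, and WDQS will only run the
SERVICE block against endpoints on its whitelist):

SELECT ?item ?remoteValue WHERE {
  ?item wdt:P31 wd:Q5 .                    # some local Wikidata pattern
  SERVICE <https://example.org/sparql> {   # placeholder remote endpoint
    ?remoteThing ?remoteProperty ?remoteValue .
  }
}
LIMIT 10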

Regards
Magnus Sälgö
Stockholm, Sweden


From: Wikidata  on behalf of Maarten 
Dammers 
Sent: Monday, March 30, 2020 11:15 PM
To: wikidata@lists.wikimedia.org 
Subject: Re: [Wikidata] WDQS and SPARQL Endpoint Compatibility

Since Stas left last year, unfortunately nobody from the WMF has done
anything with
https://www.wikidata.org/wiki/Wikidata:SPARQL_federation_input . I don't
know if the new SPARQL people are even aware of this page.

My bot produces a weekly federation report at
https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/Federation_report


Maarten

On 30-03-20 22:41, Lucas Werkmeister wrote:
> The current whitelist is documented at
> https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual/SPARQL_Federation_endpoints
> and new additions can be proposed at
> https://www.wikidata.org/wiki/Wikidata:SPARQL_federation_input.
>
> Cheers,
> Lucas
>
> On 30.03.20 20:31, Kingsley Idehen wrote:
>> All,
>>
>> I am opening up this thread to discuss the generic support of SPARQL
>> endpoints by WDQS. Correct me if I am wrong, but right now it can use
>> SPARQL-FED against a select number of registered endpoints?
>>
>> As you all know, the LOD Cloud Knowledge Graph is a powerful repository
>> of loosely-coupled, data, information, and knowledge. One that could
>> really help humans and software agents in the collective quest to defeat
>> the COVID19 disease.
>>
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] can one create properties with QuickStatements?

2020-03-21 Thread Magnus Manske
Hi Olaf,

I don't think there is a way with the current code.

I am working on a Rust parser (instead of PHP) that could be extended more
easily.

Cheers,
Magnus

On Fri, Mar 20, 2020 at 2:03 PM Olaf Simons 
wrote:

> A questions for those who have created Wikibase installations
>
> When creating a new Wikibase: How do you create Properties en masse -
> especially useful if you want to implement all the properties of a partner
> project?
>
> Is there a way to do that with QuickStatements?
>
> Best,
> Olaf
>
>
>
>
> Dr. Olaf Simons
> Forschungszentrum Gotha der Universität Erfurt
> Schloss Friedenstein, Pagenhaus
> 99867 Gotha
>
> Büro: +49-361-737-1722
> Mobil: +49-179-5196880
>
> Privat: Hauptmarkt 17b/ 99867 Gotha
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Reduced loading times for Wikidata/Wikimedia Commons

2020-02-11 Thread Magnus Manske
Well done!

On Tue, Feb 11, 2020 at 11:07 AM Max Klemm  wrote:

> Hello all,
>
> While cleaning (reviewing and rewriting) the code of the Wikidata and
> Wikimedia Commons backend in October 2019, the Wikidata team at WMDE,
> together with WMF, worked on reducing the loading time of pages. We managed
> to reduce the loading time of every Wikidata page by about 0.1-0.2 seconds.
> This is due to a reduction of the modules (sets of code responsible for a
> certain function) that need to be loaded every time a page is opened by
> someone. Instead of 260 modules, which needed to be loaded before, only 85
> modules need to be loaded now when the page is called. This makes it
> easier to load Wikidata pages for people who only have a slow internet
> connection.
>
> Link to picture on Commons:
> https://commons.wikimedia.org/wiki/File:Reduced_loading_times_cut.png
>
> Description: Size decrease of the initialization loader on Wikidata pages (see
> on Grafana
> 
> )
>
> Reducing the number of modules called when loading the page amounts to a
> reduction of about 130 GB of network traffic for all users every day, or
> 47 TB per year. The reduction of network traffic translates into a reduction
> of electricity use; thus, this change is also good for the environment.
> Additionally, the interdependencies between the modules were reduced from
> 4 MB to 1 MB, which improved the loading time per page as well.
>
> Many thanks to everyone involved in this improvement! If you want to get
> more details about the actions we performed, you can have a look at the
> Phabricator board
> .
> If you are developing scripts or tools on top of the Wikidata UI, some
> documentation will walk you through the architecture of ResourceLoader,
> what page load performance is,
> and how to create module bundles with ResourceLoader.
>
>
> For further questions or feedback, feel free to contact us on this page
> .
>
> Cheers,
>
> Max for the Wikidata team
>
>
> --
> Max Klemm
> Working Student Community Communication for Technical Wishes
>
> Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> Phone: +49 (0)30 219 158 26-0https://wikimedia.de
>
> Imagine a world in which every single human being can freely share in the sum 
> of all knowledge. Help us to achieve our vision!https://spenden.wikimedia.de
>
> Wikimedia Deutschland – Gesellschaft zur Förderung Freien Wissens e. V. 
> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter 
> der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für 
> Körperschaften I Berlin, Steuernummer 27/029/42207.
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Status of Wikidata Query Service

2020-02-10 Thread Magnus Manske
Hi,

if you need some "Wikibase item diff" function, have a look at the Rust
crate I am co-authoring:
https://gitlab.com/tobias47n9e/wikibase_rs

It comes with diff code:
https://gitlab.com/tobias47n9e/wikibase_rs/-/blob/master/src/entity_diff.rs

Should not be too hard to build eg a simple diff command line tool from
that.

Cheers,
Magnus

On Fri, Feb 7, 2020 at 1:33 PM Guillaume Lederrey 
wrote:

> Hello all!
>
> First of all, my apologies for the long silence. We need to do better in
> terms of communication. I'll try my best to send a monthly update from now
> on. Keep me honest, remind me if I fail.
>
> First, we had a security incident at the end of December, which forced us
> to move from our Kafka based update stream back to the RecentChanges
> poller. The details are still private, but you will be able to get the full
> story soon on phabricator [1]. The RecentChange poller is less efficient
> and this is leading to high update lag again (just when we thought we had
> things slightly under control). We tried to mitigate this by improving the
> parallelism in the updater [2], which helped a bit, but not as much as we
> need.
>
> Another attempt to get update lag under control is to apply back pressure
> on edits, by adding the WDQS update lag to the Wikidata maxlag [6]. This is
> obviously less than ideal (at least as long as WDQS updates are lagging as
> often as they are), but does allow the service to recover from time to
> time. We probably need to iterate on this, provide better granularity,
> differentiate better between operations that have an impact on update lag
> and those which don't.
>
> On the slightly better news side, we now have a much better understanding
> of the update process and of its shortcomings. The current process does a
> full diff between each updated entity and what we have in blazegraph. Even
> if a single triple needs to change, we still read tons of data from
> Blazegraph. While this approach is simple and robust, it is obviously not
> efficient. We need to rewrite the updater to take a more event streaming /
> reactive approach, and only work on the actual changes. This is a big chunk
> of work, almost a complete rewrite of the updater, and we need a new
> solution to stream changes with guaranteed ordering (something that our
> kafka queues don't offer). This is where we are focusing our energy at the
> moment, this looks like the best option to improve the situation in the
> medium term. This change will probably have some functional impacts [3].
>
> Some misc things:
>
> We have done some work to get better metrics and better understanding of
> what's going on. From collecting more metrics during the update [4] to
> loading RDF dumps into Hadoop for further analysis [5] and better logging
> of SPARQL requests. We are not focusing on this analysis until we are in a
> more stable situation regarding update lag.
>
> We have a new team member working on WDQS. He is still ramping up, but we
> should have a bit more capacity from now on.
>
> Some longer term thoughts:
>
> Keeping all of Wikidata in a single graph is most probably not going to
> work long term. We have not found examples of public SPARQL endpoints with
> > 10 B triples and there is probably a good reason for that. We will
> probably need to split the graphs at some point. We don't know how yet
> (that's why we loaded the dumps into Hadoop, that might give us some more
> insight). We might expose a subgraph with only truthy statements. Or have
> language specific graphs, with only language specific labels. Or something
> completely different.
>
> Keeping WDQS / Wikidata as open as they are at the moment might not be
> possible in the long term. We need to think if / how we want to implement
> some form of authentication and quotas. Potentially increasing quotas for
> some use cases, but keeping them strict for others. Again, we don't know
> how this will look like, but we're thinking about it.
>
> What you can do to help:
>
> Again, we're not sure. Of course, reducing the load (both in terms of
> edits on Wikidata and of reads on WDQS) will help. But not using those
> services makes them useless.
>
> We suspect that some use cases are more expensive than others (a single
> property change to a large entity will require a comparatively insane
> amount of work to update it on the WDQS side). We'd like to have real data
> on the cost of various operations, but we only have guesses at this point.
>
> If you've read this far, thanks a lot for your engagement!
>
>   Have fun!
>
>   Guillaume
>
>
>
>
> [1] https://phabricator.wikimedia.org/T241410
> [2] https://pha

Re: [Wikidata] Nobel Prizes and consensus in Wikidata

2019-09-27 Thread Magnus Sälgö
FYI, we have SPARQL federation with Nobelprize.com; see
T200668<https://phabricator.wikimedia.org/T200668>

T200668 Set up Nobel Data as federated search with 
Wikidata<https://phabricator.wikimedia.org/T200668>
Feedback from Hans Mehlin - Nobel Media AB: "Fun! Wikidata is high up on my
wish list. I know I have my hands full with other things until mid-October.
After that, I hope to get a mandate to work more with our datasets."

And a Listeria list that compares Wikidata with Nobelprize.com every night:
https://www.wikidata.org/wiki/User:Salgo60/ListeriaNobelData3
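
As a quick check on the modelling question discussed below, a small sketch that
lists, for each prize item, which of the three properties links it to Nobel
Prize (Q7191):

SELECT ?prize ?prizeLabel ?linkProperty WHERE {
  VALUES ?linkProperty { wdt:P31 wdt:P279 wdt:P361 }
  ?prize ?linkProperty wd:Q7191 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}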

Regards
Magnus Sälgö
Stockholm, Sweden
salg...@msn.com


From: Wikidata  on behalf of Peter 
Patel-Schneider 
Sent: Saturday, September 28, 2019 6:36 AM
To: wikidata@lists.wikimedia.org 
Subject: Re: [Wikidata] Nobel Prizes and consensus in Wikidata

Indeed.   Thanks for the example.  I'll probably incorporate it in my
talk at WikidataCon.


As far as I know there is no general method for nudging towards
consensus for cases like these.  The onus appears to me to be on whoever
is entering the information to look for similar situations and model
them all the same.  (In this case it appears that a recent change to the
Nobel Peace Prize was made to remove it being a subclass of Nobel Prize,
actually reducing commonality.)

But what can be done in the future?  One way to go is to ask that
editors be more careful when editing items that might belong to a group,
and try to model them the same as other members of the group.  Another
way to go is to ask that editors be more careful when editing items that
have parts/instances/subclasses and check that all the other items are
modeled the same way.

I prefer something similar to the second way, where editors of classes
and properties (or just about anything that is going to be the common
target of a property, but instance and subclass and subproperty seem to
me to be the most important such properties) are asked to be careful to
specify the relationship between the class or property and the other
items that target it.  So whoever does major editing on Nobel Prize
should add a comment on the relationship between the various Nobel
Prizes and Nobel Prize. (Having such information is quite common for
concepts in Cyc.)

Actually Nobel Prize isn't the greatest example for my preference
because there don't seem to be any Wikidata items for even the
famous Nobel Prizes. Suppose there was a Wikidata item for Einstein's
Nobel Prize in Physics.  Then its relationship to Nobel Prize would
provide guidance for the relationship between the Nobel Prize in Physics
and Nobel Prizes itself.


I find modeling deficiencies like this in lots of places in Wikidata.
That's not a severe problem if you have the resources of Google to throw
at curating Wikidata information.  But if you don't have this level of
resources available for curating Wikidata information then these sorts
of infelicities are a significant barrier to using Wikidata.


Peter F. Patel-Schneider



On 9/27/19 12:34 PM, Aidan Hogan wrote:
> Hey all,
>
> Andra recently mentioned about finding laureates in Wikidata, and it
> reminded me that some weeks ago I was trying to come up with a SPARQL
> query to find all Nobel Prize Winners in Wikidata.
>
> What I ended up with was:
>
> SELECT ?winner
> WHERE {
>   ?winner wdt:P166 ?prize .
>   ?prize (wdt:P361|wdt:P31|wdt:P279) wd:Q7191 .
> }
>
>
> More specifically, looking into the data I found:
>
> Nobel Peace Prize (Q35637)
>  part of (P361)
>   Nobel Prize (Q7191) .
>
> Nobel Prize in Literature (Q37922)
>  subclass of (P279)
>   Nobel Prize (Q7191) .
>
> Nobel Prize in Economics (Q47170)
>  instance of (P31)
>Nobel Prize (Q7191) ;
>  part of (P361)
>Nobel Prize (Q7191) .
>
> Nobel Prize in Chemistry (Q44585)
>  instance of (P31)
>Nobel Prize (Q7191) ;
>  part of (P361)
>Nobel Prize (Q7191) .
>
> Nobel Prize in Physics (Q38104)
>  subclass of (P31)
>Nobel Prize (Q7191) ;
>  part of (P361)
>Nobel Prize (Q7191) .
>
> In summary, of the six types of Nobel prizes, three different
> properties are used in five different combinations to state that they
> "are", in fact, Nobel prizes. :)
>
> Now while it would be interesting to discuss the relative merits of
> P31 vs. P279 vs. P361 vs. some combination thereof in this case and
> similar such cases, I guess I am more interested in the general
> problem of the lack of consensus that such a case exhibits.
>
> What processes (be they social, technical, or some combination
> thereof) are currently in place to reach consensus in these cases in
> Wikidata?
>
> What could be put in place in future to highlight and reach consensus?

Re: [Wikidata] Proposal for the introduction of a practicable Data Quality Indicator in Wikidata (next round)

2019-08-27 Thread Magnus Sälgö
Uwe, I feel this is more and more important: quality and provenance, and also
communicating inside Wikidata the quality of our data.

I have added maybe the best source for biographies in Sweden, P3217, to 7,500
persons in Wikidata. Those 7,500 objects are used in Wikipedia in more than 200
different languages; we need to have a "layer" explaining that data confirmed
with P3217 ("SBL from Sweden") has very high trust.

See https://phabricator.wikimedia.org/T222142

I can also see this quality problem in that Nobelprize.org<http://Nobelprize.org>
and Wikidata have more than 30 differences, and it is sometimes difficult to
understand the quality of the sources in Wikidata; plus, the fact that
Nobelprize.com<http://Nobelprize.com> has no sources makes the equation
difficult.
https://phabricator.wikimedia.org/T200668

Regards
Magnus Sälgö
0046-705937579
salg...@msn.com<mailto:salg...@msn.com>

A blog post I wrote:
https://minancestry.blogspot.com/2018/04/wikidata-has-design-problem.html

On 28 Aug 2019 at 03:49, Uwe Jung <jung@gmail.com> wrote:

Hello,

many thanks for the answers to my contribution from 24.8.
I think that all four opinions contain important things to consider.

@David Abián
I have read the article and agree that in the end the users decide which data 
is good for them or not.

@GerardM
It is true that in a possible implementation of the idea, the aspect of 
computing load must be taken into account right from the beginning.

Please note that I have not given up on the idea yet. With regard to the 
acceptance of Wikidata, I consider a quality indicator of some kind to be 
absolutely necessary. There will be a lot of ordinary users who would like to 
see something like this.

At the same time I completely agree with David: (almost) every chosen indicator
is subject to a certain arbitrariness in the selection. There won't be one
easy-to-understand super-indicator.
So, let's approach things from the other side. Instead of a global indicator, a 
separate indicator should be developed for each quality dimension to be 
considered. With some dimensions this should be relatively easy. For others it 
could take years until we have agreed on an algorithm for their calculation.

Furthermore, the indicators should not represent discrete values but a 
continuum of values. No traffic light statements (i.e.: good, medium, bad) 
should be made. Rather, when displaying the qualifiers, the value could be 
related to the values of all other objects (e.g. the value x for the current 
data object in relation to the overall average for all objects for this 
indicator). The advantage here is that the total average can increase over 
time, meaning that the position of the value for an individual object can also 
decrease over time.

Another advantage: Users can define the required quality level themselves. If, 
for example, you have high demands on accuracy but few demands on the 
completeness of the statements, you can do this.

However, it remains important that these indicators (i.e. the evaluation of the 
individual item) must be stored together with the item and can be queried 
together with the data using SPARQL.

Greetings

Uwe Jung

On Sat, 24 Aug 2019 at 13:54, Uwe Jung <jung@gmail.com> wrote:
Hello,

As the importance of Wikidata increases, so do the demands on the quality of 
the data. I would like to put the following proposal up for discussion.

Two basic ideas:

  1.  Each Wikidata page (item) is scored after each edit. This score should 
express different dimensions of data quality in a quickly manageable way.
  2.  A property is created via which the item refers to the score value. 
Certain qualifiers can be used for a more detailed description (e.g. time of 
calculation, algorithm used to calculate the score value, etc.).

The score value can be calculated either within Wikibase after each data change
or "externally" by a bot. Among other things, the calculation can use: the
number of constraints, completeness of references, degree of completeness in
relation to the underlying ontology, etc. There are already some interesting
discussions on the question of data quality which can be used here (see
https://www.wikidata.org/wiki/Wikidata:Item_quality;
https://www.wikidata.org/wiki/Wikidata:WikiProject_Data_Quality, etc.).

Advantages

  *   Users get a quick overview of the quality of a page (item).
  *   SPARQL can be used to query only those items that meet a certain quality
level (see the sketch after this list).
  *   The idea would probably be relatively easy to implement.
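
A minimal sketch of such a query (the score property is hypothetical; P_SCORE
below is only a placeholder for whatever property would eventually be created,
and 0.8 stands for the chosen quality level):

SELECT ?item ?score WHERE {
  ?item wdt:P_SCORE ?score .   # hypothetical quality-score property
  FILTER(?score >= 0.8)        # keep only items above the chosen quality level
}
LIMIT 100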

Disadvantage:

  *   In a way, the data model is abused by generating statements that no 
longer describe the item itself, but make statements about the representation 
of this item in Wikidata.
  *   Additional computing power must be provided for the regular calculation 
of all changed i

Re: [Wikidata] "Wikidata item" link to be moved in the menu column on Wikimedia projects

2019-08-08 Thread Magnus Sälgö
Suggestion: display the Q number in the link, i.e. the user doesn't have to
click the link to see the Q number.

Motivation: more and more institutions are starting to use the Wikidata Q
number, and as we today display VIAF numbers, LCCN, GND numbers etc. for
authority data, I think we should make it easier to see that this Wikipedia
article has a specific Q number.

Regards
Magnus Sälgö
Stockholm, Sweden
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] [Wikitech-l] [Wikidata-tech] [Breaking change] Important for Wikidata tools maintainers: wb_terms table to be dropped at the end of May

2019-07-05 Thread Magnus Manske
Thank you!

On Fri, Jul 5, 2019 at 10:40 AM Alaa Sarhan 
wrote:

> Hello all,
>
> This is an update regarding the progress and dates of migration of wb_terms
> table replacement solution in Wikidata production environment.
>
> We have successfully put Wikidata production in the stage where property
> terms are stored in and written to both the old store (wb_terms table) and
> the new replacement store. Retrieving property terms is still being done
> using the old store.
>
>
> *The previously announced dates are no longer effective.* No changes to
> tools are needed yet. Tools can continue to read from the old store
> (wb_terms table) for the moment. There will be a later announcement
> regarding the date when tools have to switch to reading property terms from
> the new store.
>
> The next step will be to go to the stage of retrieving property terms from
> the new store, while we keep storing them in both stores. That step is
> blocked by a problem we discovered while testing that switch on beta
> cluster, and we are working on solving it at the moment (
> https://phabricator.wikimedia.org/T226008).
>
> As for item terms, in light of the new information about switching the
> master node to a better host (https://phabricator.wikimedia.org/T227062)
> that can actually host their migration until the end, we have also decided
> to put item terms migration on hold until after that failover is done and
> stable.
>
> The migration of all item terms will take weeks to finish, but it isn’t
> clear yet how long exactly. We will run it in several stages, and there will
> be separate announcements for those stages with information on how to deal
> with them, in case they affect your work.
>
> You can find more information regarding those dates and how to prepare for
> them in https://phabricator.wikimedia.org/T221765, and we have dedicated
> https://phabricator.wikimedia.org/project/view/4014 to receive and help
> with any questions from tool builders that need to update their tools
> accordingly.
>
> (email & wiki) In order to keep all discussions in one place, we kindly ask
> you to react or ask your questions at
> https://phabricator.wikimedia.org/T221764.
>
> Thanks,
>
> On Wed, 15 May 2019 at 15:06, Léa Lacroix 
> wrote:
>
> > Hello all,
> >
> > This is an update regarding the dates of the test environment and migration
> > of the wb_terms table replacement solution.
> >
> > Due to various complications that the developers in the Wikidata team
> have
> > been working on solving over the last few weeks, we unfortunately will
> have
> > to push the dates for when a test environment for tools builders will be
> > ready, which was supposed to be ready today, and the following dates for
> > starting migration of wb_terms data into the new schema in production.
> >
> > The new dates are:
> >
> >- 29th of May: Test environment for tool builders will be ready
> >- 12th of June: Property Terms migration starts
> >- 19th of June: Read property terms from new schema on Wikidata
> >- 26th of June: Item terms migration begins
> >- 3rd of July: Read item terms from one of the two schemas (as
> >explained in this task )
> >
> > You can find more information regarding those dates and how to prepare
> for
> > them in this task , and we
> > have dedicated a board
> >  to receive and
> help
> > with any questions from tool builders that need to update their tools
> > accordingly.
> >
> > In order to keep all discussions in one place, we kindly ask you to react
> > or ask your questions on Phabricator
> > .
> >
> > As a reminder, if you want to discuss with the developers, ask questions
> > and get help in order to update your tools, you can join the IRC
> Mediawiki
> > meeting, today at 15:00 UTC on #wikimedia-tech.
> >
> > Thanks,
> >
> > Léa
> >
> > On Wed, 24 Apr 2019 at 16:30, Léa Lacroix 
> > wrote:
> >
> >> Hello all,
> >>
> >> This is an important announcement for all the tool builders and
> >> maintainers who access Wikidata’s data by *querying directly Labs
> >> database replicas*.
> >>
> >> In May-June 2019, the Wikidata development team will drop the wb_terms
> >> table from the database in favor of a new optimized schema. Over years,
> >> this table has become too big, causing various issues.
> >>
> >> This change requires the tools using wb_terms to be updated. Developers
> >> and maintainers will need to *adapt their code* to the new schema before
> >> the migration starts and switch to the new code when the migration
> starts.
> >>
> >> The migration will start on *May 29th*. On May 15th, a test system will
> >> be

Re: [Wikidata] Shape Expressions arrive on Wikidata on May 28th

2019-05-29 Thread Magnus Sälgö
Very interesting. As Wikidata starts being part of more external data “flows”, I 
would also like to see that we can easily tell:
* this is the schema we use inside Wikidata
* these are external related schemas other organisations have created for this 
type of data
* how this WD schema relates to an external schema, what parts we map, etc.

In Sweden the government speaks about basic data (Swedish: grunddata) that they 
will define, which I hope the data in Wikidata can “plug in” to and add value to.

Regards
Magnus Sälgö
0046-705937579
salg...@msn.com

On 29 May 2019 at 09:44, Léa Lacroix <lea.lacr...@wikimedia.de> wrote:

Thanks for your feedback! There is already a ticket about adding a new data 
type that allows linking EntitySchemas from statements: 
https://phabricator.wikimedia.org/T214884
If we don't encounter any major technical issue, this could be done in the 
coming weeks.

On Tue, 28 May 2019 at 19:18, James Heald <jpm.he...@gmail.com> wrote:
Hi Léa,

Thanks to all the team for this.

I've proposed a property,

https://www.wikidata.org/wiki/Wikidata:Property_proposal/Shape_Expression_for_class

To make this work, is it possible to have a Shape Expression as the
value of a statement on Wikidata (and the RDF dump, and WDQS) ?

Is there a timescale in which this should become possible ?

Thanks,

James.




On 28/05/2019 17:04, Léa Lacroix wrote:
> Hello all,
>
> As previously announced, we just released shape expressions on Wikidata.
> You can for example have a look at E10, the shape for human
> <https://www.wikidata.org/wiki/EntitySchema:E10>, or create a new
> EntitySchema <https://www.wikidata.org/wiki/Special:NewEntitySchema>.
>
> A few useful links:
>
> - WikiProject ShEx
> <https://www.wikidata.org/wiki/Wikidata:WikiProject_ShEx>
> - introduction to ShEx <http://shex.io/shex-primer/>
> - more details about the language <http://shex.io/shex-semantics/>
> - More information about how to create a Schema
> 
> <https://www.wikidata.org/wiki/Wikidata:WikiProject_ShEx/How_to_get_started%3F>
> - Phabricator tag: shape-expressions
> <https://phabricator.wikimedia.org/tag/shape_expressions/>
> - User script
> <https://www.wikidata.org/wiki/User:Zvpunry/EntitySchemaHighlighter.js>
> to highlight items and properties in the schema code and turn the IDs into
> links
>
> If you have any question or encounter issues, feel free to ping me. Cheers,
>
> Léa
>
> On Sun, 19 May 2019 at 15:32, Léa Lacroix <lea.lacr...@wikimedia.de> wrote:
>
>> Hello all,
>>
>> After several months of development and testing together with the WikiProject
>> ShEx <https://www.wikidata.org/wiki/Wikidata:WikiProject_ShEx>, Shape
>> Expressions are about to be enabled on Wikidata.
>> *First of all, what are Shape Expressions?*
>>
>> ShEx (Q29377880) <https://www.wikidata.org/wiki/Q29377880> is a concise,
>> formal modeling and validation language for RDF structures. Shape
>> Expressions can be used to define shapes within the RDF graph. In the case
>> of Wikidata, this would be sets of properties, qualifiers and references
>> that describe the domain being modeled.
>>
>> See also:
>>
>> - a short video about ShEx
>> <https://www.youtube.com/watch?v=AR75KhEoRKg> made by community
>> members during the Wikimedia hackathon 2019
>> - introduction to ShEx <http://shex.io/shex-primer/>
>> - more details about the language <http://shex.io/shex-semantics/>
>>
>> *What can it be used for?*
>>
>> On Wikidata, the main goal of Shape Expressions would be to describe what
>> the basic structure of an item would be. For example, for a human, we
>> probably want to have a date of birth, a place of birth, and many other
>> important statements. But we would also like to make sure that if a
>> statement with the property “children” exists, the value(s) of this
>> property should be humans as well. Schemas will describe in detail what is
>> expected in the structure of items, statements and values of these
>> statements.
>>
>> Once Schemas are created for various types of items, it is possible to
>> test some existing items against the Schema, and highlight possible errors
>> or lack of information. Subsets of the Wikidata graph can be tested to see
>> whether or not they conform to a specific shape through the use of
>> validation tools. Therefore, Schemas will be very useful to help the
>> editors improving the data quality. We imagine this to be especially useful
>> for wiki projects to more easily discuss and ensure 

Re: [Wikidata] Birth dates in Wikidata

2019-03-13 Thread Magnus Manske
On Tue, Mar 12, 2019 at 6:36 PM Nicolas VIGNERON 
wrote:

> Hi,
>
> I'm not sure where the error come from.
> It doesn't come from the source, Trismegistos doesn't say that this person
> is born in 1999 (1999 is a publication date here, at least in the
> interface).
> I'm not even sure it comes from Mix'n'match (TM 7726 seems to be WD Q55088584,
> user:Tagishsimon did the match last June[1] and it seems correct).
> The problem is probably elsewhere. Probably when the extraction of
> Trismegistos was done, the birthdate and the publication date were mixed
> up...
>
> The best way is to contact Magnus who runs Mix'n'match and the bot
> importing data (or even better, the person who extracted Trismegistos but I
> don't know how to find them).
>
> I have removed the birth date "flags" from the Mix'n'match catalog 991.
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Gender statistics on the Danish Wikipedia

2019-03-05 Thread Magnus Manske
Update: dawiki category "Personer" seems to have some category tree cycles
in higher depths. Here are articles from that category (one layer deep)
with no P31 in the item:
http://petscan.wmflabs.org/?psid=8128462


On Tue, Mar 5, 2019 at 1:00 PM Magnus Manske 
wrote:

> If you make the gender optional, you also get the items without gender:
> http://tinyurl.com/yygze9da
>
> "People on Danish Wikipedia but not on Wikidata" is either:
> * a subset of "Danish Wikipedia articles not on Wikidata". You can get all
> of these (currently, 91) via my tool:
> https://tools.wmflabs.org/wikidata-todo/duplicity.php?wiki=dawiki&mode=list
> * "people on Danish Wikipedia with an item but no P31". Using dawiki
> category "Personer", I am currently running PetScan but it's slow, will
> keep you posted
> (for all dawiki items without P31 or P279, see http://tinyurl.com/y4u6lwyj
> )
>
> On Tue, Mar 5, 2019 at 11:55 AM  wrote:
>
>> Dear any Wikidata Query Service expert,
>>
>>
>> In connection with an editathon, I have made statistics of the number of
>> women and men on the Danish Wikipedia. I have used WDQS for that and the
>> query is listed below:
>>
>> SELECT ?count ?gender ?genderLabel
>> WITH {
>>SELECT ?gender (COUNT(*) AS ?count) WHERE {
>>  ?item wdt:P31 wd:Q5 .
>>  ?item wdt:P21 ?gender .
>>  ?article schema:about ?item.
>>  ?article schema:isPartOf <https://da.wikipedia.org/>
>>}
>>GROUP BY ?gender
>> } AS %results
>> WHERE {
>>INCLUDE %results
>>SERVICE wikibase:label { bd:serviceParam wikibase:language "da,en". }
>> }
>> ORDER BY DESC(?count)
>> LIMIT 25
>>
>> http://tinyurl.com/y8twboe5
>>
>> As the statistics could potentially create some discussion (and ready
>> seems to have) I am wondering whether there are some experts that could
>> peer review the SPARQL query and tell me if there are any issues. I hope
>> I have not made a blunder...
>>
>> The minor issues I can think of are:
>>
>> - Missing gender in Wikidata. We have around 360 of these.
>>
>> - People on the Danish Wikipedia not on Wikidata. Probably tens-ish or
>> hundreds-ish!?
>>
>> - People not being humans. The gendered items I sampled were all
>> fictional humans.
>>
>>
>> We previously reached 17.2% females. Now we are below 17% due to
>> mass-import of Japanese football players, - as far as we can see.
>>
>>
>> best regards
>> Finn Årup Nielsen
>> http://people.compute.dtu.dk/faan/
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Gender statistics on the Danish Wikipedia

2019-03-05 Thread Magnus Manske
If you make the gender optional, you also get the items without gender:
http://tinyurl.com/yygze9da
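
For reference, the only change needed is wrapping the gender triple in OPTIONAL.
A stripped-down sketch (not necessarily identical to the query behind the short
link, and without the label service):

SELECT ?gender (COUNT(*) AS ?count) WHERE {
  ?item wdt:P31 wd:Q5 .                   # humans
  OPTIONAL { ?item wdt:P21 ?gender . }    # gender, if stated
  ?article schema:about ?item ;
           schema:isPartOf <https://da.wikipedia.org/> .
}
GROUP BY ?gender
ORDER BY DESC(?count)

Items without a gender statement show up as a row with ?gender unbound.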

"People on Danish Wikipedia but not on Wikidata" is either:
* a subset of "Danish Wikipedia articles not on Wikidata". You can get all
of these (currently, 91) via my tool:
https://tools.wmflabs.org/wikidata-todo/duplicity.php?wiki=dawiki&mode=list
* "people on Danish Wikipedia with an item but no P31". Using dawiki
category "Personer", I am currently running PetScan but it's slow, will
keep you posted
(for all dawiki items without P31 or P279, see http://tinyurl.com/y4u6lwyj )
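
For reference, such a "no P31 or P279" check can be written roughly like this
(sketch, possibly differing in detail from the query behind the short link):

SELECT ?item ?article WHERE {
  ?article schema:about ?item ;
           schema:isPartOf <https://da.wikipedia.org/> .   # has a dawiki article
  FILTER NOT EXISTS { ?item wdt:P31 [] }                   # no "instance of"
  FILTER NOT EXISTS { ?item wdt:P279 [] }                  # no "subclass of"
}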

On Tue, Mar 5, 2019 at 11:55 AM  wrote:

> Dear any Wikidata Query Service expert,
>
>
> In connection with an editathon, I have made statistics of the number of
> women and men on the Danish Wikipedia. I have used WDQS for that and the
> query is listed below:
>
> SELECT ?count ?gender ?genderLabel
> WITH {
>SELECT ?gender (COUNT(*) AS ?count) WHERE {
>  ?item wdt:P31 wd:Q5 .
>  ?item wdt:P21 ?gender .
>  ?article schema:about ?item.
>  ?article schema:isPartOf <https://da.wikipedia.org/>
>}
>GROUP BY ?gender
> } AS %results
> WHERE {
>INCLUDE %results
>SERVICE wikibase:label { bd:serviceParam wikibase:language "da,en". }
> }
> ORDER BY DESC(?count)
> LIMIT 25
>
> http://tinyurl.com/y8twboe5
>
> As the statistics could potentially create some discussion (and ready
> seems to have) I am wondering whether there are some experts that could
> peer review the SPARQL query and tell me if there are any issues. I hope
> I have not made a blunder...
>
> The minor issues I can think of are:
>
> - Missing gender in Wikidata. We have around 360 of these.
>
> - People on the Danish Wikipedia not on Wikidata. Probably tens-ish or
> hundreds-ish!?
>
> - People not being humans. The gendered items I sampled were all
> fictional humans.
>
>
> We previously reached 17.2% females. Now we are below 17% due to
> mass-import of Japanese football players, - as far as we can see.
>
>
> best regards
> Finn Årup Nielsen
> http://people.compute.dtu.dk/faan/
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] library catalog entries in Wikidata

2018-12-12 Thread Magnus Sälgö
Please read this
https://www.wikidata.org/wiki/Wikidata:WikiCite/Roadmap#Four_scenarios_for_the_future_of_WikiCite

and the Book Project thoughts
https://www.wikidata.org/wiki/Wikidata:WikiProject_Books


From WikiCite 2017 you have a good overview of the problems/challenges:
https://media.ed.ac.uk/media/Wikidata+and+Books+-+Andrea+Zanni+at+WikiCite+2017/1_5w2jzi4o
(Link preview: "Wikidata and Books", a talk by Andrea Zanni at WikiCite 2017, on the University of Edinburgh Media Hopper Create platform.)




I guess you are speaking about the Kosovo National Library ??!?!? Do you have any 
more info?
Format: ?
Plans for BIBFRAME: ?
Number of books: ?
Number of books already in Wikidata: ?

My advice is to wait and see what people decide is best for Wikidata.

Regards
Magnus Sälgö
Stockholm, Sweden

twitter: salgo60<https://twitter.com/salgo60>




From: Wikidata  on behalf of Arianit 
Dobroshi 
Sent: Wednesday, December 12, 2018 3:54 PM
To: wikidata@lists.wikimedia.org
Subject: [Wikidata] library catalog entries in Wikidata

Hi, I'm new here.

Would it make sense to import the national library catalog entries into 
Wikidata?

Thanks,
Arianit




___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Indexing all item properties in ElasticSearch

2018-07-27 Thread Magnus Manske
Hi, and thanks for working on this!

My subjective view:
* We don't need P2860/P1433 indexed, at least not at the moment
* I would really like dates (mainly born/died), especially if they work
for "greater units", that is, I search for a year and get an item back,
even though the statement is month- or day-precise.

Cheers,
Magnus

On Thu, Jul 26, 2018 at 10:48 PM Stas Malyshev 
wrote:

> Hi!
>
> Today we are indexing in ElasticSearch almost all string properties
> (except a few) and select item properties (P31 and P279). We've been
> asked to extend this set and index more item properties
> (https://phabricator.wikimedia.org/T199884). We did not do it from the
> start because we did not want to add too much data to the index at once,
> and wanted to see how the index behaves. To evaluate what this change
> would mean, some statistics:
>
> All usage of item properties in statements is about 231 million uses
> (according to sqid tool database). Of those, about 50M uses are
> "instance of" which we are already indexing. Another 98M uses belong to
> two properties - published in (P1433) and cites (P2860). Leaving about
> 86M for the rest of the properties.
>
> So, if we index all the item properties except P2860 and P1433, we'll be
> a little more than doubling the amount of data we're storing for this
> field, which seems OK. But if we index those too, we'll be essentially
> quadrupling it - which may be OK too, but is bigger jump and one that
> may potentially cause some issues.
>
> So, we have two questions:
> 1. Do we want to enable indexing for all item properties? Note that if
> you just want to find items with certain statement values, Wikidata
> Query Service matches this use case best. It's only in combination with
> actual fulltext search where on-wiki search is better.
>
> 2. Do we need to index P2860 and P1433 at all, and if so, would it be ok
> if we omit indexing for now?
>
> Would be glad to hear thoughts on the matter.
>
> Thanks,
> --
> Stas Malyshev
> smalys...@wikimedia.org
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] digital personal bibliography

2018-05-25 Thread Magnus Manske
QuickStatements does fully automated edits, including item creation, when
you are absolutely sure that the items you create do not already exist in
Wikidata.

Mix'n'match is a hybrid manual/automated process that is used if items
might already exist in Wikidata. It is much slower and more manually
involved, and doesn't add most metadata, but it helps to avoid duplicate
items.

On Fri, May 25, 2018 at 10:33 AM Andreas Dittrich 
wrote:

> @Andra Waagmeester: Thank you for your hint to the wikicite community: I
> got some useful answers there.
> @Magnus Manske: Is mix'n'match more or less doing the same as the
> quickstatements <https://tools.wmflabs.org/quickstatements> tool? I think
> I didn't get it, sorry. (My 'catalog'/bibliography is not yet online.)
>
> Thank you both for your answers!
>
> Am Do., 24. Mai 2018 um 12:20 Uhr schrieb Magnus Manske <
> magnusman...@googlemail.com>:
>
>> You could make a catalog of all her works in Mix'n'match [1] (import at
>> [2]), to avoid creating duplicate items.
>>
>> FWIW, this is what we have so far for her:
>> http://tinyurl.com/yc8tujs5
>>
>> [1] https://tools.wmflabs.org/mix-n-match/
>> [2] https://tools.wmflabs.org/mix-n-match/import.php
>>
>>
>> On Thu, May 24, 2018 at 11:13 AM Andra Waagmeester 
>> wrote:
>>
>>> I would recommend posting this also to the Wikicite mailinglist, which
>>> is community that discusses this on Wikidata : wikicite-discuss@
>>> wikimedia.org
>>>
>>> On Thu, May 24, 2018 at 11:41 AM, Andreas Dittrich 
>>> wrote:
>>>
>>>> Dear List,
>>>>
>>>> currently I am working on a digital bibliography of all published texts
>>>> by the Austrian writer Ilse Aichinger. My intention is to make this list
>>>> publicly available as a digitally structured database. I found that
>>>> wikidata could be the right place for this. (Am I right?)
>>>>
>>>> At first I wanted to use the FRBR vocabulary to describe the relations
>>>> between the texts, but librarians recommended to use BIBFRAME, as this
>>>> seems to get to be the standard vocabulary – and wikidata also uses this
>>>> vocabulary, as I learned from the wikidata-project group
>>>> Wikidata:WikiProject_Books
>>>> <https://www.wikidata.org/wiki/Wikidata:WikiProject_Books>. ---
>>>> Currently I am recording the bibliographic units locally in BIBTEX format
>>>> (because I started in this format). Later on I will transform the dataset
>>>> to BIBFRAME and the wikidata-bibframe-vocabulary so that I can load it to
>>>> wikidata.
>>>>
>>>> Now I would like to ask you: Are there comparable, perhaps even
>>>> exemplary projects for bibliographies on wikidata? Which ones? Should I
>>>> suggest or announce the project somewhere? Do you have recommendations
>>>> regarding the workflow (e.g. is quickstatements
>>>> <https://tools.wmflabs.org/quickstatements>-tool the best way to push
>>>> my data to wikidata)? Is someone here with experience regarding building a
>>>> bibliography in wikidata?
>>>>
>>>> With best regards,
>>>>  Andrew
>>>>
>>>> ___
>>>> Wikidata mailing list
>>>> Wikidata@lists.wikimedia.org
>>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>>
>>>>
>>> ___
>>> Wikidata mailing list
>>> Wikidata@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] digital personal bibliography

2018-05-24 Thread Magnus Manske
You could make a catalog of all her works in Mix'n'match [1] (import at
[2]), to avoid creating duplicate items.

FWIW, this is what we have so far for her:
http://tinyurl.com/yc8tujs5

[1] https://tools.wmflabs.org/mix-n-match/
[2] https://tools.wmflabs.org/mix-n-match/import.php


On Thu, May 24, 2018 at 11:13 AM Andra Waagmeester  wrote:

> I would recommend posting this also to the Wikicite mailinglist, which is
> community that discusses this on Wikidata : wikicite-disc...@wikimedia.org
>
>
> On Thu, May 24, 2018 at 11:41 AM, Andreas Dittrich 
> wrote:
>
>> Dear List,
>>
>> currently I am working on a digital bibliography of all published texts
>> by the Austrian writer Ilse Aichinger. My intention is to make this list
>> publicly available as a digitally structured database. I found that
>> wikidata could be the right place for this. (Am I right?)
>>
>> At first I wanted to use the FRBR vocabulary to describe the relations
>> between the texts, but librarians recommended to use BIBFRAME, as this
>> seems to get to be the standard vocabulary – and wikidata also uses this
>> vocabulary, as I learned from the wikidata-project group
>> Wikidata:WikiProject_Books
>> . ---
>> Currently I am recording the bibliographic units locally in BIBTEX format
>> (because I started in this format). Later on I will transform the dataset
>> to BIBFRAME and the wikidata-bibframe-vocabulary so that I can load it to
>> wikidata.
>>
>> Now I would like to ask you: Are there comparable, perhaps even exemplary
>> projects for bibliographies on wikidata? Which ones? Should I suggest or
>> announce the project somewhere? Do you have recommendations regarding the
>> workflow (e.g. is quickstatements
>> -tool the best way to push my
>> data to wikidata)? Is someone here with experience regarding building a
>> bibliography in wikidata?
>>
>> With best regards,
>>  Andrew
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] How to find the Dbpedia data for a Wikidata item?

2018-04-23 Thread Magnus Knuth
DBpedia contains these links. You can use the DBpedia SPARQL endpoint to get 
the DBpedia URI with a simple query.

SELECT ?s { ?s owl:sameAs <http://www.wikidata.org/entity/Q465> } [1]


[1] 
http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT+%3Fs+%7B%3Fs+owl%3AsameAs+%3Chttp%3A%2F%2Fwww.wikidata.org%2Fentity%2FQ465%3E%7D

> On 23 Apr 2018, at 06:41, PWN wrote:
> 
> If one knows the Q code (or URI) for an entity on Wikidata, how can one find 
> the Dbpedia Id and the information linked to it?
> Thank you.
> 
> Sent from my iPad
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
> 

-- 
Magnus Knuth

Universität Leipzig, Institut für Angewandte Informatik (InfAI), DBpedia 
Association

mail: kn...@informatik.uni-leipzig.de
tel: +49 177 3277537
webID: http://magnus.13mm.de/


___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Generating info-boxes from Wikidata: the importance of values!

2018-03-13 Thread Magnus Knuth
Hey all,

thanks for sharing the paper, this is an interesting topic. I just wanted to 
point to some (own) prior work on entity summarization which is related to what 
you have done:
https://link.springer.com/chapter/10.1007/978-3-642-35173-0_24

All the best
Magnus

> On 8 Mar 2018, at 19:00, Aidan Hogan wrote:
> 
> Hey Raphaël,
> 
> Thanks for the comments and the reference! And sorry we missed discussion of 
> your paper (which indeed looks at largely the same problem in a slightly 
> different context). If there's a next time, we will be sure to include it in 
> the related work.
> 
> I am impressed btw to see a third-party evaluation of a Google tool. Also it 
> seems Google has room for improvement. :)
> 
> Cheers,
> Aidan
> 
> On 07-03-2018 13:43, Raphaël Troncy wrote:
>> Hey Aidan,
>> Great work, I loved it! You may want to (cite and) look at what we did 4 
>> years ago where we tried to reverse engineer a bit what Google is doing when 
>> choosing properties (and values) to show in its rich panels alongside 
>> popular entities.
>> The paper is entitled "What Are the Important Properties
>> of an Entity? Comparing Users and Knowledge Graph Point of View", 
>> https://www.eurecom.fr/~troncy/Publications/Assaf_Troncy-eswc14.pdf
>> ... and the code is on github to replicate: https://github.com/ahmadassaf/KBE
>>   Raphaël
>> Le 07/03/2018 à 05:53, Aidan Hogan a écrit :
>>> Hi all,
>>> 
>>> Tomás and I would like to share a paper that might be of interest to the 
>>> community. It presents some preliminary results of a work looking at fully 
>>> automated methods to generate Wikipedia info-boxes from Wikidata. The main 
>>> focus is on deciding what information from Wikidata to include, and in what 
>>> order. The results are based on asking users (students) to rate some 
>>> prototypes of generated info-boxes.
>>> 
>>> Tomás Sáez, Aidan Hogan "Automatically Generating Wikipedia Infoboxes from 
>>> Wikidata". In the Proceedings of the Wiki Workshop at WWW 2018, Lyon, 
>>> France, April 24, 2018.
>>> 
>>> - Link: http://aidanhogan.com/docs/infobox-wikidata.pdf
>>> 
>>> We understand that populating info-boxes is an important goal of Wikidata 
>>> and hence we thought we'd share some lessons learned.
>>> 
>>> Obviously a lot of work is being put into populating info-boxes from 
>>> Wikidata, but the main methods at the moment seem to be template-based and 
>>> require a lot of manual labour; plus the definition of these templates 
>>> seems to be a difficult problem for classes such as person (where different 
>>> information will have different priorities for people of different 
>>> professions, notoriety, etc.).
>>> 
>>> We were just interested to see how far we could get with a fully automated 
>>> approach using some generic ranking methods. Also we thought that something 
>>> like this could perhaps be used to generate a "default" info-box for 
>>> articles with no info-box and no associated template mapping. The paper 
>>> presents preliminary results along those lines.
>>> 
>>> One interesting result is that a major factor in the evaluation of the 
>>> generated info-boxes was the importance of the value. For example, Barack 
>>> Obama has lots of awards, but perhaps only something like the Nobel Peace 
>>> Prize might be of relevance to show in the info-box (<- being intended as 
>>> an illustrative example rather than a concrete assertion of course!). 
>>> Another example is that sibling might not be an important attribute in a 
>>> lot of cases, but when that sibling is Barack Obama, then that deserves to 
>>> be in the info-box (<- how such cases could be expressed in a purely 
>>> template-based approach, we are not sure, but it would seem difficult).
>>> 
>>> We assess the importance of values with PageRank. Assessing the importance 
>>> not only of attributes, but of values, turned out to be a major influence 
>>> on how highly our evaluators assessed the quality of the generated 
>>> info-boxes.
>>> 
>>> This initial/isolated observation might be interesting since, to the best 
>>> of our understanding, the current wisdom on populating info-boxes from 
>>> Wikidata focuses on what attributes to present and in which order, but does 
>>> not consider the importance of values (aside from the Wikidata rank 
>>> feature, which we believe is more intended to assess relevance/ti

Re: [Wikidata] About OCLC and DBpedia Links

2018-03-06 Thread Magnus Sälgö
If DBpedia has “same as” then Wikidata doesn’t have to duplicate that 
information; you can ask DBpedia what is the same as Q7724.

Regards
Magnus Sälgö
Stockholm, Sweden

On 6 March 2018 at 19:49, Ettore RIZZA <ettoreri...@gmail.com> wrote:

First of all, thank you all for your answers.

@Magnus and Thad: that's more or less what I suspected. Since the URL to WorldCat 
can be rebuilt from the Library of Congress authority ID, I guess someone thought 
it would be a duplicate.

But 1) I'm not sure that there is a 1-to-1 mapping between all WorldCat 
Identities and the Library of Congress; 2) it would be rather strange that a 
Library of Congress ID would also serve as an ID for a "competitor" (i.e. OCLC, 
which maintains WorldCat and VIAF); 3) one would then wonder why Wikipedia 
provides both links to the Library of Congress Authority ID and WorldCat 
Identities.

With respect to the fact that Wikidata already contains links to VIAF and that 
VIAF contains links to WorldCat Identities: this transitivity reasoning could 
apply to many other authority IDs, I think.

@Sebastian: It would be great! I'll follow this project closely, just as I'm 
already following your 
papers <https://content.iospress.com/articles/semantic-web/sw277>.
And it is precisely because I know that there is a desire for "rapprochement" 
on both sides that I asked why there is absolutely nothing in Wikidata that 
links to DBpedia (or Yago), whereas DBpedia contains a lot of owl:sameAs to 
Wikidata. All this must have been discussed somewhere, I suppose. Still, I do 
not even find a property proposal for "DBpedia link".

2018-03-06 18:59 GMT+01:00 Sebastian Hellmann <hellm...@informatik.uni-leipzig.de>:

Hi Ettore,

we just released a very early prototype of the new DBpedia:

http://88.99.242.78/hdt/en_wiki_de_sv_nl_fr-replaced.nt.bz2

I attached the first 1000 triples. The data is a merge of Wikidata + 5  
DBpedias from the 5 largest Wikipedia versions. Overall, there are many issues, 
but we have a test-driven data engineering process combined with Scrum and 
biweekly releases, next one is on March 15th. The new IDs are also stable by 
design.

We discussed how to effectively reuse all technologies we have for Wikidata and 
also Wikipedia and are applying with this project at the moment: 
https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/GlobalFactSync

(Endorsements on the main page and comments on the talk page are welcome).

We really hope that the project gets accepted, so we can deploy the 
technologies behind DBpedia to the Wikiverse, e.g. we found over 900k 
triples/statements with references in the English Wikipedia's Infoboxes alone.

We still have to do documentation and hosting of the new releases, but then it 
would indeed be a good time to add the links to DBpedia, if nobody objects. 
Also some people mentioned that we could load the DBpedia Ontology into 
Wikidata to provide an alternate class hierarchy. In DBpedia we loaded 5 or 6 
classification schemes (Yago, Umbel, etc.), which are useful for different kind 
of queries.


All the best,
Sebastian




On 06.03.2018 18:14, Ettore RIZZA wrote:
Dear all,

I asked myself a series of questions about the links between Wikidata and other 
knowledge/data bases, namely those of OCLC and DBpedia. For example:

- Why does Wikidata have no property "Worldcat 
Identities" <http://worldcat.org/identities/>, while the English edition of 
Wikipedia systematically mentions this identity (when it exists) in its 
section "Authority control"?

- Why do VIAF links to all editions of Wikipedia, but not (simply) to Wikidata ?

- Why is there no link to DBpedia when the opposite is true ?

These questions may seem very different from

Re: [Wikidata] About OCLC and DBpedia Links

2018-03-06 Thread Magnus Sälgö
If you have an external identifier you can often find the record you want.

WorldCat can use the LCCN property.

See http://worldcat.org/identities/lccn-n79005597/
That is Wikidata Q7724 <https://www.wikidata.org/wiki/Q7724>, which has the property
P244 (lcauth, the Library of Congress authority ID):
https://m.wikidata.org/wiki/Property:P244

If you check VIAF you see the Wikidata record at https://viaf.org/viaf/54154627/ 
and also LCCN 79005597:
https://viaf.org/processed/LC%7Cn%20%2079005597

You can also use the LCCN number to find the Wikidata record:
https://tools.wmflabs.org/sqid/#/view?find=P244:n79005597
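
The same lookup also works on the Wikidata Query Service; a minimal sketch:

SELECT ?item ?itemLabel WHERE {
  ?item wdt:P244 "n79005597" .    # Library of Congress authority ID
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}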

Hope it makes sense; it's a loosely coupled system, so maybe it is not the 
best structure.

Regards
Magnus Sälgö
Stockholm, Sweden

On 6 March 2018 at 18:15, Ettore RIZZA <ettoreri...@gmail.com> wrote:

Dear all,

I asked myself a series of questions about the links between Wikidata and other 
knowledge/data bases, namely those of OCLC and DBpedia. For example:

- Why does Wikidata have no property "Worldcat 
Identities" <http://worldcat.org/identities/>, while the English edition of 
Wikipedia systematically mentions this identity (when it exists) in its 
section "Authority control"?

- Why do VIAF links to all editions of Wikipedia, but not (simply) to Wikidata ?

- Why is there no link to DBpedia when the opposite is true ?

These questions may seem very different from each other, but they ultimately 
concern a common subject and are all very basic. I suspect they had to be 
discussed somewhere, maybe at the dawn of Wikidata. However, I find nothing in 
the archives of this Newsletter, nor in the discussions on Wikidata.

Could someone point me to some documentation on these issues ?

Cheers,

Ettore Rizza
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] GlobalFactSync

2018-02-16 Thread Magnus Knuth
Hello,

yes these classes and properties [1] are currently missing in the mappings. 
Though there is currently some ongoing work which is not pushed to the mappings 
wiki yet.
Nevertheless, the more challenging part of the project will be mapping the 
template parameters and Wikidata properties.

Best regards
Magnus

[1] http://mappings.dbpedia.org/server/ontology/wikidata/missing/

> On 5 Feb 2018, at 21:49, Ettore RIZZA wrote:
> 
> Hi,
> 
> Our plan here is to map all Wikidata properties to the DBpedia Ontology and 
> then have the info to compare coverage of Wikidata with all infoboxes across 
> languages.
> 
> This is a really exciting project that would improve both Wikidata and 
> DBpedia. I would be interested to know more, especially on what has already 
> been done in terms of mapping and what remains to do.
> 
> I see, for example, that DBpedia has a list of missing properties and 
> classes, but I don't know if it's up to date.
> 
> Best regards,
> 
> Ettore Rizza
> 
> 
> 2018-01-15 19:57 GMT+01:00 Magnus Knuth :
> Dear all,
> 
> last year, we applied for a Wikimedia grant to feed qualified data from 
> Wikipedia infoboxes (i.e. missing statements with references) via the DBpedia 
> software into Wikidata. The evaluation was already quite good, but some parts 
> were still missing and we would like to ask for your help and feedback for 
> the next round. The new application is here: 
> https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/GlobalFactSync
> 
> The main purpose of the grant is:
> 
> - Wikipedia infoboxes are quite rich, are manually curated and have 
> references. DBpedia is already extracting that data quite well (i.e. there is 
> no other software that does it better). However, extracting references is not 
> a priority on our agenda. They would be very useful to Wikidata, but there 
> are no user requests for this from DBpedia users.
> 
> - DBpedia also has all the infos of all infoboxes of all Wikipedia editions 
> (>10k pages), so we also know quite well, where Wikidata is used already and 
> where information is available in Wikidata or one language version and 
> missing in another.
> 
> - side-goal: bring the Wikidata, Wikipedia and DBpedia communities closer 
> together
> 
> Here is a diff between the old an new proposal:
> 
> - extraction of infobox references will still be a goal of the reworked 
> proposal
> 
> - we have been working on the fusion and data comparison engine (the part of 
> the budget that came from us) for a while now and there are first results:
> 
> 6823 birthDate_gain_wiki.nt
> 3549 deathDate_gain_wiki.nt
>   362541 populationTotal_gain_wiki.nt
>   372913 total
> 
> We only took three properties for now and showed the gain where no Wikidata 
> statement was available. birthDate/deathDate is already quite good. Details 
> here: 
> https://drive.google.com/file/d/1j5GojhzFJxLYTXerLJYz3Ih-K6UtpnG_/view?usp=sharing
> 
> Our plan here is to map all Wikidata properties to the DBpedia Ontology and 
> then have the info to compare coverage of Wikidata with all infoboxes across 
> languages.
> 
> - we will remove the text extraction part from the old proposal (which is 
> here for you reference: 
> https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/CrossWikiFact). This 
> will still be a focus during our work in 2018, together with Diffbot and the 
> new DBpedia NLP department, but we think that it distracted from the core of 
> the proposal. Results from the Wikipedia article text extraction can be added 
> later once they are available and discussed separately.
> 
> - We proposed to make an extra website that helps to synchronize all 
> Wikipedias and Wikidata with DBpedia as its backend. While the external 
> website is not an ideal solution, we are lacking alternatives. The Primary 
> Sources Tool is mainly for importing data into Wikidata, not so much 
> synchronization. The MediaWiki instances of the Wikipedias do not seem to 
> have any good interfaces to provide suggestions and pinpoint missing info. 
> Especially to this part, we would like to ask for your help and suggestions, 
> either per mail to the list or on the talk page: 
> https://meta.wikimedia.org/wiki/Grants_talk:Project/DBpedia/GlobalFactSync
> 
> We are looking forward to a fruitful collaboration with you and we thank you 
> for your feedback!
> 
> All the best
> Magnus

-- 
Magnus Knuth

Universität Leipzig
Institut für Informatik
Abt. Betriebliche Informationssysteme, AKSW/KILT
Augustusplatz 10
04109 Leipzig DE

mail: kn...@informatik.uni-leipzig.de
tel: +49 177 3277537
webID: http://magnus.13mm.de/


___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Call for support: GlobalFactSync

2018-01-29 Thread Magnus Knuth
Dear Wikidatans,

last year, we applied for a Wikimedia grant to feed qualified data from 
Wikipedia infoboxes (i.e. missing statements with references) via the DBpedia 
software into Wikidata. The evaluation was already quite good, but some parts 
were still missing. Now we apply again with a modified though similar proposal 
and we would like to ask for your support and feedback.

GlobalFactSync will produce a tool that identifies and visualises differences 
in infobox facts across available language versions, it suggests changes to 
Wikipedia and Wikidata editors when redacting infoboxes and Wikidata items 
according to data available from other Wikimedia projects. The project grant 
proposal can be found here: 
https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/GlobalFactSync

Of course, any comment is more than welcome on the discussion page. The 
proposal is still in a draft form (deadline January 31st), we would be very 
happy to hear your comments.

All the best,
Magnus, Sebastian and Julia

-- 
Magnus Knuth

Universität Leipzig
Institut für Informatik
Abt. Betriebliche Informationssysteme, AKSW/KILT
Augustusplatz 10
04109 Leipzig DE

mail: kn...@informatik.uni-leipzig.de
tel: +49 177 3277537
webID: http://magnus.13mm.de/


___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] [Wikitech-l] Wikidata vandalism dashboard (for Wikipedians)

2018-01-28 Thread Magnus Manske
Quick note: Looks great, but "Changes in descriptions" is always on, even
after being clicked off...

On Sun, Jan 28, 2018 at 5:54 PM Amir Ladsgroup  wrote:

> Hello,
> People usually ask me how they can patrol edits that affect their Wikipedia
> or their language. The proper way to do so is by using the watchlist and
> recent changes (with the "Show Wikidata edits" option enabled) on Wikipedias,
> but sometimes they show too many unrelated changes.
>
> Also, it would be good to patrol edits for languages you know, because the
> descriptions are shown and editable in the Wikipedia app, making them
> vulnerable to vandalism (a lot of vandalism in this area goes unnoticed for
> a while and sometimes gets fixed by another reader, which is suboptimal).
>
> So Lucas [1] and I had a pet project to allow you to see unpatrolled edits
> related to a language in Wikidata. It has some basic integration with ORES,
> and if you see a good edit and mark it as patrolled, it goes away from this
> list. What I usually do is check this page twice a day for the Persian
> language, which, given its size, is enough.
>
> It's in https://tools.wmflabs.org/wdvd/index.php the source code is in
> https://github.com/Ladsgroup/Vandalism-dashboard and you can report
> issues/bug/feature requests in
> https://github.com/Ladsgroup/Vandalism-dashboard/issues
>
> Please spread the word and any feedback about this tool is very welcome :)
>
> [1]: 
> https://www.wikidata.org/wiki/User:Lucas_Werkmeister_(WMDE)
>
> 
> Best
> ___
> Wikitech-l mailing list
> wikitec...@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] GlobalFactSync

2018-01-15 Thread Magnus Knuth
Dear all,

last year, we applied for a Wikimedia grant to feed qualified data from 
Wikipedia infoboxes (i.e. missing statements with references) via the DBpedia 
software into Wikidata. The evaluation was already quite good, but some parts 
were still missing and we would like to ask for your help and feedback for the 
next round. The new application is here: 
https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/GlobalFactSync

The main purpose of the grant is:

- Wikipedia infoboxes are quite rich, are manually curated and have references. 
DBpedia is already extracting that data quite well (i.e. there is no other 
software that does it better). However, extracting references is not a priority 
on our agenda. They would be very useful to Wikidata, but there are no user 
requests for this from DBpedia users.

- DBpedia also has all the infos of all infoboxes of all Wikipedia editions 
(>10k pages), so we also know quite well, where Wikidata is used already and 
where information is available in Wikidata or one language version and missing 
in another.

- side-goal: bring the Wikidata, Wikipedia and DBpedia communities closer 
together

Here is a diff between the old an new proposal:

- extraction of infobox references will still be a goal of the reworked proposal

- we have been working on the fusion and data comparison engine (the part of 
the budget that came from us) for a while now and there are first results:

6823 birthDate_gain_wiki.nt
3549 deathDate_gain_wiki.nt
  362541 populationTotal_gain_wiki.nt
  372913 total

We only took three properties for now and showed the gain where no Wikidata 
statement was available. birthDate/deathDate is already quite good. Details 
here: 
https://drive.google.com/file/d/1j5GojhzFJxLYTXerLJYz3Ih-K6UtpnG_/view?usp=sharing

Our plan here is to map all Wikidata properties to the DBpedia Ontology and 
then have the info to compare coverage of Wikidata with all infoboxes across 
languages.

- we will remove the text extraction part from the old proposal (which is here 
for you reference: 
https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/CrossWikiFact). This 
will still be a focus during our work in 2018, together with Diffbot and the 
new DBpedia NLP department, but we think that it distracted from the core of 
the proposal. Results from the Wikipedia article text extraction can be added 
later once they are available and discussed separately.

- We proposed to make an extra website that helps to synchronize all Wikipedias 
and Wikidata with DBpedia as its backend. While the external website is not an 
ideal solution, we are lacking alternatives. The Primary Sources Tool is mainly 
for importing data into Wikidata, not so much synchronization. The MediaWiki 
instances of the Wikipedias do not seem to have any good interfaces to provide 
suggestions and pinpoint missing info. Especially to this part, we would like 
to ask for your help and suggestions, either per mail to the list or on the 
talk page: 
https://meta.wikimedia.org/wiki/Grants_talk:Project/DBpedia/GlobalFactSync

We are looking forward to a fruitful collaboration with you and we thank you 
for your feedback!

All the best
Magnus

-- 
Magnus Knuth

Universität Leipzig
Institut für Informatik
Abt. Betriebliche Informationssysteme, AKSW/KILT
Augustusplatz 10
04109 Leipzig DE

mail: kn...@informatik.uni-leipzig.de
tel: +49 177 3277537
webID: http://magnus.13mm.de/


___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix n Match question

2017-10-17 Thread Magnus Manske
Well, you could have started by telling me *which* catalog is yours, but I
guess it's "Photographers' Identities Catalog" (602).

It looks like instead of the property (P2750) you entered "Q23892012" for
some reason. Fixed now.

I have synchronized between Wikidata and the catalog, but you seem to be
missing essentially all the entries already on Wikidata. That is on
purpose, I assume.

You also appear to have broken all entry names with non-ASCII characters
(you have "Neurdein Fr" instead of "Neurdein Frères").

*sigh* I can fix it if you send me the file...

Cheers,
Magnus

On Tue, Oct 17, 2017 at 3:47 AM David Lowe  wrote:

> It uploaded properly, but I've got a couple of questions:
> When I confirm a match, I get an "Invalid snak property" error.
> Also, what is to happen with these matches? I have a property for my PIC
> IDs (P2750), but there seems to be no way to update the wikidata entry with
> my IDs. I fear I've done something wrong!
>
>
> *David Lowe | The New York Public Library**Specialist II, Photography
> Collection*
>
> *Photographers' Identities Catalog <http://pic.nypl.org>*
>
> On Mon, Oct 16, 2017 at 6:26 PM, David Lowe  wrote:
>
>> Many thanks, Magnus! I look forward to working/playing with this.
>>
>> Best,
>> David
>>
>>
>> *David Lowe | The New York Public Library**Specialist II, Photography
>> Collection*
>>
>> *Photographers' Identities Catalog <http://pic.nypl.org>*
>>
>> On Mon, Oct 16, 2017 at 5:26 PM, Magnus Manske <
>> magnusman...@googlemail.com> wrote:
>>
>>> Hi David,
>>>
>>> the upload page at
>>> https://tools.wmflabs.org/mix-n-match/import.php
>>> won't take your matches, but they can be imported from Wikidata with a
>>> click.
>>>
>>> If the upload is too big for the page, mail me the file and I'll do it
>>> the old-fashioned way ;-)
>>>
>>> Cheers,
>>> Magnus
>>>
>>> On Mon, Oct 16, 2017 at 10:19 PM David Lowe  wrote:
>>>
>>>> Magnus, or anyone else who may be able to advise:
>>>>
>>>> I'd like to add the Photographers' Identities Catalog (PIC) entries to
>>>> Mix n Match. I have about 128,000 entries for photographers in PIC, of
>>>> which I already have matched ~14,000 to Wikidata entries. My PIC IDs are
>>>> already in the corresponding Wikidata entries. I assume I should remove
>>>> these from the file before I upload it to Mix n Match, but wanted to check
>>>> first.
>>>>
>>>> Thanks in advance,
>>>> David
>>>>
>>>>
>>>> *David Lowe | The New York Public Library**Specialist II, Photography
>>>> Collection*
>>>>
>>>> *Photographers' Identities Catalog <http://pic.nypl.org>*
>>>> ___
>>>> Wikidata mailing list
>>>> Wikidata@lists.wikimedia.org
>>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>>
>>>
>>> ___
>>> Wikidata mailing list
>>> Wikidata@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>
>>>
>>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix n Match question

2017-10-16 Thread Magnus Manske
Hi David,

the upload page at
https://tools.wmflabs.org/mix-n-match/import.php
won't take your matches, but they can be imported from Wikidata with a
click.

If the upload is too big for the page, mail me the file and I'll do it the
old-fashioned way ;-)

Cheers,
Magnus

On Mon, Oct 16, 2017 at 10:19 PM David Lowe  wrote:

> Magnus, or anyone else who may be able to advise:
>
> I'd like to add the Photographers' Identities Catalog (PIC) entries to Mix
> n Match. I have about 128,000 entries for photographers in PIC, of which I
> already have matched ~14,000 to Wikidata entries. My PIC IDs are already in
> the corresponding Wikidata entries. I assume I should remove these from the
> file before I upload it to Mix n Match, but wanted to check first.
>
> Thanks in advance,
> David
>
>
> *David Lowe | The New York Public Library**Specialist II, Photography
> Collection*
>
> *Photographers' Identities Catalog <http://pic.nypl.org>*
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Linking to wikidata pages

2017-10-09 Thread Magnus Manske
Oh and Leyla, I'm in Sulston building on Genome campus :-)

On Mon, Oct 9, 2017 at 2:21 PM Magnus Manske 
wrote:

> Hi Leyla,
>
> you don't need permission just for linking to Wikidata!
>
> And the query looks fine, I just ran it without limit, ~200K results. No
> problem.
>
> Cheers,
> Magnus
>
> On Mon, Oct 9, 2017 at 2:16 PM Leyla Garcia  wrote:
>
>> Hi all,
>>
>> We would like to link from our pages to wikidata pages. Who should we
>> contact in this regard? We would need a contact other than the mailing
>> list, if possible.
>>
>> I also want to make sure we will not disturb the SPARQL endpoint service
>> with our query.  We want to retrieve all pages pointing to a UniProt
>> entry regardless of the taxon. So far we have this query
>>
>> SELECT ?item ?itemLabel ?UniProt_ID ?taxonID WHERE {
>>?item wdt:P352 ?UniProt_ID ;
>>  wdt:P703 ?taxon .
>>?taxon wdt:P31 wd:Q16521 ;
>>   wdt:P685 ?taxonID .
>>
>>SERVICE wikibase:label { bd:serviceParam wikibase:language
>> "[AUTO_LANGUAGE],en". }
>> }
>>
>> I tried it with a limit of 100 and it worked fine but wondering what
>> would be the recommended way if we want them all.
>>
>> By the way, what would be the way to get that query using the query
>> helper? I did not manage, so I wrote it manually.
>>
>> Regards,
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Linking to wikidata pages

2017-10-09 Thread Magnus Manske
Hi Leyla,

you don't need permission just for linking to Wikidata!

And the query looks fine, I just ran it without limit, ~200K results. No
problem.

Cheers,
Magnus

On Mon, Oct 9, 2017 at 2:16 PM Leyla Garcia  wrote:

> Hi all,
>
> We would like to link from our pages to wikidata pages. Who should we
> contact in this regard? We would need a contact other than the mailing
> list, if possible.
>
> I also want to make sure we will not disturb the SPARQL endpoint service
> with our query.  We want to retrieve all pages pointing to a UniProt
> entry regardless of the taxon. So far we have this query
>
> SELECT ?item ?itemLabel ?UniProt_ID ?taxonID WHERE {
>?item wdt:P352 ?UniProt_ID ;
>  wdt:P703 ?taxon .
>?taxon wdt:P31 wd:Q16521 ;
>   wdt:P685 ?taxonID .
>
>SERVICE wikibase:label { bd:serviceParam wikibase:language
> "[AUTO_LANGUAGE],en". }
> }
>
> I tried it with a limit of 100 and it worked fine, but I am wondering what
> would be the recommended way if we want them all.
>
> By the way, what would be the way to get that query using the query
> helper? I did not manage it, so I wrote it manually.
>
> Regards,
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Identifying tool edits in Wikidata revision comments

2017-10-05 Thread Magnus Manske
Also, [[User:QuickStatementsBot]] acts as an agent for edits submitted
through the QuickStatements tool, as well as some of my automated bot edits
("...invoked by...").

On Thu, Oct 5, 2017 at 12:55 AM Tilman Bayer  wrote:

> Hi Andrew,
>
> I would also look at edit tags [1], which capture many edits from
> OAuth-based tools that don't leave an identifying word in the edit
> summary field (example: reCh [2]).
> You can query these using the change tag table [3].
>
> [1] https://www.wikidata.org/wiki/Special:Tags (may take some time to
> load)
>
> [2] e.g.
>
> https://www.wikidata.org/w/index.php?limit=50&title=Special%3AContributions&contribs=user&target=%D4%B1%D5%B7%D5%A2%D5%B8%D5%BF&namespace=&tagfilter=&year=2017&month=2
>
> [3] https://www.mediawiki.org/wiki/Manual:Change_tag_table
>
> On Thu, Sep 28, 2017 at 7:34 PM, Andrew Hall  wrote:
> > Hello,
> >
> > I’m doing some analyses in which I want to identify Wikidata edits done
> via
> > editing tools (e.g. via QuickStatements, etc…). To identify these edits,
> > I've first flagged and removed bot edits and then I’ve generated a list
> of
> > the 1000 most popular revision comment words (ignoring case and some
> > punctuation characters as part of this process). Within this list of
> words,
> > I've identified 15 words that I believe indicate tool edits. I’ve
> included
> > these 15 words below.
> >
> > Does anyone know of tool edits that would be missed if I search for
> > revisions that contain one of these 15 words in their comments? Put
> another
> > way, are there editing tools not listed below? If so, can I identify
> edits
> > from those tools from revision comments?
> >
> > #quickstatements
> > #petscan
> > #autolist2
> > autoedit
> > nameguzzler
> > labellister
> > #itemcreator
> > #dragrefjs
> > [[useryms/lc|lcjs]]
> > #wikidatagame
> > [[wikidataprimary
> > #mix'n'match
> > mix'n'match
> > #distributedgame
> > [[userjitrixis/nameguzzlerjs|nameguzzler]]
> >
> >
> > Thanks in advance,
> > Andrew Hall
> >
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> >
>
>
>
> --
> Tilman Bayer
> Senior Analyst
> Wikimedia Foundation
> IRC (Freenode): HaeB
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Which external identifiers are worth covering?

2017-09-08 Thread Magnus Manske
Is anyone working on an "auto-resolve" bot? If you have VIAF (but nothing
else), you can resolve other identifiers via the VIAF site; similarly, if
you have only GND, you could try to reverse-lookup VIAF.

I think a list of items that have zero external identifiers, ordered by
"importance" (incoming wikidata links, number of statements etc) would also
be helpful.
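
A rough first cut at such a list can even be sketched directly in SPARQL,
using the sitelink count as a stand-in for "importance" (this assumes the
wikibase:ExternalId property type and the wikibase:sitelinks count exposed by
the query service, is restricted to humans, and is capped, since an
unrestricted run would almost certainly time out):

SELECT ?item ?sitelinks WHERE {
  ?item wdt:P31 wd:Q5 ;
        wikibase:sitelinks ?sitelinks .
  FILTER NOT EXISTS {
    ?prop wikibase:propertyType wikibase:ExternalId ;
          wikibase:directClaim ?claim .
    ?item ?claim ?externalId .
  }
}
ORDER BY DESC(?sitelinks)
LIMIT 100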

On Fri, Sep 8, 2017 at 10:52 AM Jane Darnell  wrote:

> As a basic rule for "which external identifiers are worth covering", I
> would begin with any  national identifiers we have for people (politicians,
> artists, writers, theologians, scientists, etc), then national identifiers
> for organizations (government-related, GNP-related businesses, nonprofits,
> educational institutions, etc), then national identifiers for places
> (census-defined population centers, battle-scenes, etc)
>
> In my opinion, the question should not be "which identifier has the most
> coverage" but "which items have the most identifiers"
>
>
> On Thu, Sep 7, 2017 at 9:26 PM, Andrew Gray 
> wrote:
>
>> Hi Marco,
>>
>> I guess this depends what you mean by "exhaustive". Exhaustive in that
>> every Wikidata item has ID X, or exhaustive in that we have every
>> instance of ID X in Wikidata?
>>
>> The first is probably not going to happen, as the vast majority of
>> external identifiers have a defined scope for what they identify. Some
>> are pretty broad - VIAF is essentially "everyone who exists in a
>> library catalogue as an author or subject" - but still have a limit.
>> We're never really going to reach a situation where there is a single
>> identifier type that covers everyone, unless we're linking across to
>> another Wikidata-type comprehensive knowledgebase, and even then we'd
>> need to ensure we're in a position where they already cover everything
>> in Wikidata.
>>
>> The second can (and has) been done - the largest one I know of offhand
>> for people is the Oxford DNB (60k items) but for non-people we have
>> complete coverage of eg Swedish district codes, P1841 (160k items).
>> It's a bit of a slog to get these completed and then maintained, since
>> the last 5-10% tend to be more challenging complicated cases, but one
>> or two determined people can make it happen. And of course it's not
>> appropriate for many identifiers, as they may issue IDs for things
>> that we don't intend to have in Wikidata, so we will never completely
>> cover them.
>>
>> I should quickly plug the "expected completeness" property which is
>> really useful for identifiers - P2429 - as this can quickly show
>> whether something is a) completely on Wikidata; b) not complete yet
>> but eventually might be; or c) probably never will be. Not very widely
>> rolled out yet, though...
>>
>> Andrew.
>>
>>
>> On 7 September 2017 at 19:51, Marco Fossati 
>> wrote:
>> > Hi everyone,
>> >
>> > As a data quality addict, I've been investigating the coverage of
>> external
>> > identifiers linked to Wikidata items about people.
>> >
>> > Given the numbers on SQID [1] and some SPARQL queries [2, 3], it seems
>> that
>> > even the second most used ID (VIAF) only covers *25%* of people items
>> circa.
>> > Then, there is a long tail of IDs that are barely used at all.
>> >
>> > So here is my question:
>> > *which external identifiers deserve an effort to achieve exhaustive
>> > coverage?*
>> >
>> > Looking forward to your valuable feedback.
>> > Cheers,
>> >
>> > Marco
>> >
>> > [1] https://tools.wmflabs.org/sqid/#/browse?type=properties "Select
>> > datatype" set to "ExternalId", "Used for class" set to "human Q5"
>> > [2] total people: http://tinyurl.com/ybvcm5uw
>> > [3] people with a VIAF link: http://tinyurl.com/ya6dnpr7
>> >
>> > ___
>> > Wikidata mailing list
>> > Wikidata@lists.wikimedia.org
>> > https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
>>
>> --
>> - Andrew Gray
>>   and...@generalist.org.uk
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Some Mix'n'match mappings not stored in Wikidata?

2017-08-21 Thread Magnus Manske
On Mon, Aug 21, 2017 at 12:37 PM Osma Suominen 
wrote:

> Thanks Magnus, I hadn't noticed this sync function.
>
> When I open the sync page, it says "24 connections on Wikidata, but not
> here". I clicked on "Update Mix'n'match", and after a while it says
> "Done", but when I refresh the page, it still says there are 24
> connections. When I started it said 27, then after one update it was 26,
> then 24, but now I can't get it any lower. I wonder what's going on?
>

There may be items with YSO IDs on Wikidata that are not in Mix'n'match.


>
> It also says "3 connections here, but not on Wikidata". When I click on
> Update Wikidata, I get to the Quick Statements tool, with three
> statements that I suppose would add the missing connections. But when I
> click on "Do it" nothing happens. I also checked the 3 generated YSO ID
> statements, but they were already in Wikidata, added several days ago!
>

When you say "nothing happens", did you scroll down? There's a bug that
shows the manual again, so the edits are going on "below the fold".

The "diff" functionality for sync uses the Wikidata SPARQL interface, which
might lack the odd statement. Or the Q item in Mix'n'match is now deleted,
or a redirect, in which case editing won't work.
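
If you want to cross-check the Wikidata side yourself, all current YSO ID
(P2347) statements can be pulled with a minimal query (a sketch; comparing
that list against the catalog's confirmed matches should surface the same
handful of discrepancies):

SELECT ?item ?ysoId WHERE {
  ?item wdt:P2347 ?ysoId .
}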


>
> So something seems fishy here, the numbers don't quite add up and
> apparently not everything can be synced between Mix'n'match and
> Wikidata. But I think the important information got synced, so all is
> (relatively) well.
>

Automation can only cope with so much human activity :-(


> Part of the reason may be that there are a few YSO concepts that are
> linked to from Wikidata entities using the same YSO ID property, but
> these are not included in the "YSO Places" catalog in Mix'n'match
> because they are not places. I suspect at least some of those "24
> connections on Wikidata, but not here" may be like that - they don't
> match any of the IDs in the YSO Places catalog.
>
> -Osma
>
> Magnus Manske wrote on 21.08.2017 at 12:25:
> > There is a sync function per catalog, in your case:
> >
> > https://tools.wmflabs.org/mix-n-match/#/sync/473
> >
> > This is also linked from the "Action" drop-down menu for the respective
> > catalog.
> >
> > I am running it now, so should be low numbers (14 not on Wikidata when I
> > got there).
> >
> > On Mon, Aug 21, 2017 at 10:07 AM Osma Suominen
> > mailto:osma.suomi...@helsinki.fi>> wrote:
> >
> > Hi,
> >
> > We're more than halfway through mapping YSO places to Wikidata. Most
> of
> > the remaining are places that don't exist in Wikidata, and adding
> them
> > is quite labor-intensive so we will have to consider our strategy.
> >
> > Anyway, I did some checking of what remains unmapped and noticed a
> > potential problem: some mappings for places that we have mapped using
> > Mix'n'match have not actually been stored in Wikidata. For example
> Q36
> > Poland ("Puola" in YSO Places) is such a case. In Mix'n'match it is
> > shown as manually matched (see attached screenshot), but in Wikidata
> the
> > corresponding YSO ID property doesn't actually exist for the entity.
> I
> > checked the change history of the Q36 entity and couldn't find
> anything
> > relevant there, so it seems that the mapping was never stored in
> > Wikidata. Maybe there was a transient error of some kind?
> >
> > Another such case was Q1754 Stockholm ("Tukholma" in YSO places). But
> > for that one we removed the existing mapping in Mix'n'match and set
> it
> > again, and now it is properly stored in Wikidata.
> >
> > Mix'n'match currently reports 4228 mappings for YSO places, while a
> > SPARQL query for the Wikidata endpoint returns 4221 such mappings.
> So I
> > suspect that this only affects a small number of entities.
> >
> > Is it possible to compare the Mix'n'match mappings with what actually
> > exists in Wikidata, and somehow re-sync them? Or just to get the
> > mappings out from Mix'n'match and compare them with what exists in
> > Wikidata, so that the few missing mappings may be added there
> manually?
> >
> > Thanks,
> > Osma
> >
> >
> > --
> > Osma Suominen
> > D.Sc. (Tech), Information Systems Specialist
> > National Library of Finland

Re: [Wikidata] Some Mix'n'match mappings not stored in Wikidata?

2017-08-21 Thread Magnus Manske
There is a sync function per catalog, in your case:

https://tools.wmflabs.org/mix-n-match/#/sync/473

This is also linked from the "Action" drop-down menu for the respective
catalog.

I am running it now, so should be low numbers (14 not on Wikidata when I
got there).

On Mon, Aug 21, 2017 at 10:07 AM Osma Suominen 
wrote:

> Hi,
>
> We're more than halfway through mapping YSO places to Wikidata. Most of
> the remaining are places that don't exist in Wikidata, and adding them
> is quite labor-intensive so we will have to consider our strategy.
>
> Anyway, I did some checking of what remains unmapped and noticed a
> potential problem: some mappings for places that we have mapped using
> Mix'n'match have not actually been stored in Wikidata. For example Q36
> Poland ("Puola" in YSO Places) is such a case. In Mix'n'match it is
> shown as manually matched (see attached screenshot), but in Wikidata the
> corresponding YSO ID property doesn't actually exist for the entity. I
> checked the change history of the Q36 entity and couldn't find anything
> relevant there, so it seems that the mapping was never stored in
> Wikidata. Maybe there was a transient error of some kind?
>
> Another such case was Q1754 Stockholm ("Tukholma" in YSO places). But
> for that one we removed the existing mapping in Mix'n'match and set it
> again, and now it is properly stored in Wikidata.
>
> Mix'n'match currently reports 4228 mappings for YSO places, while a
> SPARQL query for the Wikidata endpoint returns 4221 such mappings. So I
> suspect that this only affects a small number of entities.
>
> Is it possible to compare the Mix'n'match mappings with what actually
> exists in Wikidata, and somehow re-sync them? Or just to get the
> mappings out from Mix'n'match and compare them with what exists in
> Wikidata, so that the few missing mappings may be added there manually?
>
> Thanks,
> Osma
>
>
> --
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529 <+358%2050%203199529>
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] High dispatch lag

2017-07-20 Thread Magnus Manske
I could limit batches in version 2, but version 1 just runs in the browser,
so it would be hard to limit it to one instance.

Maybe per-user rate limiting should rather be done on the Wikidata API
level, if at all?

On Thu, Jul 20, 2017 at 2:19 PM Sjoerd de Bruin 
wrote:

> I hope some developer can say something about that. I think you should
> make it impossible to run multiple instances of QuickStatements at once,
> that is causing most issues. And please check if editing rate is limited.
>
> Greetings,
>
> Sjoerd de Bruin
> sjoerddebr...@me.com
>
> Op 20 jul. 2017, om 15:16 heeft Magnus Manske 
> het volgende geschreven:
>
> Hi,
>
> is it known why exactly this is happening? Wikidata editing too fast?
> en.wp updating too slow?
>
> I could deactivate both versions of quickstatements, but not for specific
> "items with enwp sitelinks", only all-or-nothing; I hope it's not extreme
> enough for that yet?
>
> On Thu, Jul 20, 2017 at 1:29 PM Sjoerd de Bruin 
> wrote:
>
>> Hello everyone,
>>
>> There has been a higher dispatch lag[1] since the end of last month, see
>> https://grafana.wikimedia.org/dashboard/db/wikidata-dispatch. The
>> English Wikipedia is now three days behind Wikidata, according to
>> https://www.wikidata.org/wiki/Special:DispatchStats. I would like to ask
>> everyone to keep mass edits to items with sitelinks to a minimum until
>> the dispatch lag is back at a reasonable level.
>>
>> Greetings,
>>
>> Sjoerd de Bruin
>> sjoerddebr...@me.com
>>
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] High dispatch lag

2017-07-20 Thread Magnus Manske
Hi,

is it known why exactly this is happening? Wikidata editing too fast? en.wp
updating too slow?

I could deactivate both versions of quickstatements, but not for specific
"items with enwp sitelinks", only all-or-nothing; I hope it's not extreme
enough for that yet?

On Thu, Jul 20, 2017 at 1:29 PM Sjoerd de Bruin 
wrote:

> Hello everyone,
>
> There has been a higher dispatch lag[1] since the end of last month, see
> https://grafana.wikimedia.org/dashboard/db/wikidata-dispatch. The English
> Wikipedia is now three days behind Wikidata, according to
> https://www.wikidata.org/wiki/Special:DispatchStats. I would like to ask
> everyone to keep mass edits to items with sitelinks to a minimum until
> the dispatch lag is back at a reasonable level.
>
> Greetings,
>
> Sjoerd de Bruin
> sjoerddebr...@me.com
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] New step towards structured data for Commons is now available: federation

2017-07-06 Thread Magnus Manske
Fantastic news!

Now if you could set up a SPARQL instance for those two...

(the reward for doing good work is more work!)


On Thu, Jul 6, 2017 at 2:10 PM Léa Lacroix  wrote:

> Hello all,
>
> As you may know, WMF, WMDE and volunteers are working together on the 
> structured
> data for Commons
>  project.
> We’re currently working on a lot of technical groundwork for this project.
> One big part of that is allowing the use of Wikidata’s items and properties
> to describe media files on Commons. We call this feature federation. We
> have now developed the necessary code for it and you can try it out on a
> test system and give feedback.
>
> We have one test wiki that represents Commons (
> http://structured-commons.wmflabs.org) and another one simulating
> Wikidata (http://federated-wikidata.wmflabs.org). You can see an example
>  where the
> statements use items and properties from the faked Wikidata. Feel free to
> try it by adding statements to to some of the files on the test system.
> (You might need to create some items on
> http://federated-wikidata.wmflabs.org if they don’t exist yet. We have
> created a few for testing.)
> If you have any questions or concern, please let us know.
> Thanks,
>
> --
> Léa Lacroix
> Project Manager Community Communication for Wikidata
>
> Wikimedia Deutschland e.V.
> Tempelhofer Ufer 23-24
> 10963 Berlin
> www.wikimedia.de
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
>
> Registered in the register of associations at the Amtsgericht
> Berlin-Charlottenburg under number 23855 Nz. Recognised as charitable by the
> Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Weird Wikidata entity descriptions in Mix'n'match

2017-06-27 Thread Magnus Manske
Reload once, should be fixed now.

On Tue, Jun 27, 2017 at 12:41 PM Osma Suominen 
wrote:

> Hi Magnus!
>
> Thanks for confirming. It's no big deal if you know about it, but is
> very confusing when you see it happening for the first time.
>
> -Osma
>
> Magnus Manske wrote on 27.06.2017 at 14:34:
> > That is an issue with the rendering system I have encountered before. I
> > thought I had it fixed. The Wikidata entries are fine, it just shows
> > "old" information.
> >
> > Until I fix it, reload the page and it will show correctly.
> >
> > On Tue, Jun 27, 2017 at 8:17 AM Osma Suominen  > <mailto:osma.suomi...@helsinki.fi>> wrote:
> >
> > Hi,
> >
> > While doing mappings for YSO places in Mix'n'match, my colleague
> noticed
> > that the descriptions shown for the Wikidata entities are mismatched
> and
> > come from completely different entities. See the attached screenshot.
> >
> > Here are some examples of mismatches:
> >
> > Mariehamn - Island group in Norway ??
> > Manchuria - Province of Finland ??
> > Menkijärvi - Lake in Stockholm County ??
> > Nordic countries - Island in Dodecanese, Greece ??
> >
> > It appears that the descriptions come from different entities from
> those
> > shown. The problem seems to be related to the pagination
> functionality -
> > when switching between pages, sometimes only part of the information
> is
> > updated - either the Wikidata entity names or their descriptions, but
> > not always both.
> >
> > The browser was Firefox 45.2.0 running on Windows 7. I could also see
> > the same problem on Firefox 54.0 running on Ubuntu 16.04. Just
> choosing
> > "Automatically matched" and then flipping between the pages a few
> times
> > seems to be enough to trigger the problem.
> >
> > -Osma
> >
> > --
> > Osma Suominen
> > D.Sc. (Tech), Information Systems Specialist
> > National Library of Finland
> > P.O. Box 26 (Kaikukatu 4)
> > 00014 HELSINGIN YLIOPISTO
> > Tel. +358 50 3199529 <+358%2050%203199529> 
> > osma.suomi...@helsinki.fi <mailto:osma.suomi...@helsinki.fi>
> > http://www.nationallibrary.fi
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org <mailto:Wikidata@lists.wikimedia.org>
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> >
> >
> >
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> >
>
>
> --
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529 <+358%2050%203199529>
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Weird Wikidata entity descriptions in Mix'n'match

2017-06-27 Thread Magnus Manske
That is an issue with the rendering system I have encountered before. I
thought I had it fixed. The Wikidata entries are fine, it just shows "old"
information.

Until I fix it, reload the page and it will show correctly.

On Tue, Jun 27, 2017 at 8:17 AM Osma Suominen 
wrote:

> Hi,
>
> While doing mappings for YSO places in Mix'n'match, my colleague noticed
> that the descriptions shown for the Wikidata entities are mismatched and
> come from completely different entities. See the attached screenshot.
>
> Here are some examples of mismatches:
>
> Mariehamn - Island group in Norway ??
> Manchuria - Province of Finland ??
> Menkijärvi - Lake in Stockholm County ??
> Nordic countries - Island in Dodecanese, Greece ??
>
> It appears that the descriptions come from different entities from those
> shown. The problem seems to be related to the pagination functionality -
> when switching between pages, sometimes only part of the information is
> updated - either the Wikidata entity names or their descriptions, but
> not always both.
>
> The browser was Firefox 45.2.0 running on Windows 7. I could also see
> the same problem on Firefox 54.0 running on Ubuntu 16.04. Just choosing
> "Automatically matched" and then flipping between the pages a few times
> seems to be enough to trigger the problem.
>
> -Osma
>
> --
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529 <+358%2050%203199529>
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-19 Thread Magnus Manske
On Mon, Jun 19, 2017 at 12:16 PM Osma Suominen 
wrote:

>
> I couldn't see the "not on Wikidata" button that was mentioned in the
> manual in any of the modes. Has it been removed? It would be useful to
> be able to mark that something is not (yet) in Wikidata, though I
> suppose it could be added by someone else at any time, so this type of
> information may become obsolete over time.
>
That was indeed removed, as it takes a long time to finish large catalogs
(years), and by that time new items may have been created, so all the "not
in Wikidata" entries have to be checked again.

My official policy now is to create a new item if one does not exist; the
fact that there is an entry in a (good) third-party catalog alone makes
them notable on Wikidata, but villages and lakes etc. are also notable by
default.
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-19 Thread Magnus Manske
For "casual matching", try the game mode:
https://tools.wmflabs.org/mix-n-match/#/random/473

On Mon, Jun 19, 2017 at 10:16 AM Osma Suominen 
wrote:

> Hi Magnus!
>
> It's even higher now - 45%. Thanks a lot! This helps a lot with the
> verifying.
>
> Also matching of names with parenthetical qualifiers works better now. I
> see that "Ala-Malmi (Helsinki)" was automatched to "Ala-Malmi". However,
> "Ahjo (Kerava)" was not matched to "Ahjo (Kerava)" (Q11849902) but to
> Q1368573 (which is "Ahjo" in Finnish but means a type of metalworking
> workshop, not a specific place). Neither Wikidata entity has a type
> statement; the latter has a "subclass of" statement.
>
> In any case, I think this is now good enough for serious work, so we
> will start verifying the suggested matches. 2.5% (173) already done...
>
> -Osma
>
>
> Magnus Manske wrote on 19.06.2017 at 12:02:
> > I fiddled with it a bit, now 35% automatched.
> >
> > Will try some more, but there are some sanity constraints on the
> > matching. If it finds more than one match for the name, it does not set
> > any match, because random matches on the same name were annoying in the
> > past. There is also a type constraint, which might skip some Wikidata
> > items without appropriate instance/subclass.
> >
> > On Mon, Jun 19, 2017 at 8:09 AM Osma Suominen  > <mailto:osma.suomi...@helsinki.fi>> wrote:
> >
> > Hi Magnus, all,
> >
> > I've been looking a bit closer at the YSO places catalog [1] in
> > Mix'n'match and I'm wondering why only 20% of the places were
> > automatically matched.
> >
> > For example, Nepal (http://www.yso.fi/onto/yso/p107682) was
> > automatically matched to Nepal (Q837).
> >
> > But:
> >
> > Accra (http://www.yso.fi/onto/yso/p138653) was not matched to Accra
> > (Q3761).
> >
> > Aceh (http://www.yso.fi/onto/yso/p147889) was not matched to Aceh
> > (Q1823).
> >
> > Akkunusjoki (http://www.yso.fi/onto/yso/p109251) was not matched to
> > Akkunusjoki (Q12253027).
> >
> > There are many more cases like this. So the precision of the
> automatic
> > matching seems good (all but one were correct so far), but the
> recall is
> > rather low, and even in cases where the label is identical a match
> has
> > not been suggested. Is there anything that could be done about this?
> >
> >
> > Somewhat related to this, it seems that none of the places with
> > parenthetical qualifiers in their names were matched. For example
> "Ahjo
> > (Kerava)" could have been matched to Q11849902 (which has a Finnish
> > label that is identical) and "Ala-Malmi (Helsinki)" could have been
> > matched to Q2829441 ("Ala-Malmi"). Since almost 60% of the place
> names
> > include parenthetical qualifiers - to make them unique despite
> different
> > places having identical names - this means that a lot of potential
> > matches are missing. Could something be done to improve the
> situation?
> >
> >
> > If Mix'n'match is incapable of automatically matching cases like
> this,
> > would it help if I did an automatic matching externally using some
> other
> > tool, and then gave the potential matches as e.g. a CSV file that
> could
> > then be imported into Mix'n'match so that they can be verified there?
> >
> > -Osma
> >
> > [1] https://tools.wmflabs.org/mix-n-match/#/catalog/473
> >
> >
> > Osma Suominen wrote on 17.06.2017 at 13:13:
> >  > Hi Magnus,
> >  >
> >  > Thanks a lot, that was fast! And the results look very good!
> >  >
> >  > I confirmed a couple dozen automated mappings and fixed an
> > incorrect one
> >  > ("Amerikka" was matched to USA, but I changed it to "Americas").
> > Then I
> >  > started hitting rate limit errors. I guess it would be possible
> > to avoid
> >  > those with some extra permissions?
> >  >
> >  > About 20% of the places were automatically matched. Probably most
> > of the
> >  > remaining ones - around 5000 - do not exist in Wikidata because
> > they are
> >  > e.g. towns and villages in Finland. Would it be fair game to
> > create all
> >  > of them in Wikidata?
> >  >
>

Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-19 Thread Magnus Manske
I fiddled with it a bit, now 35% automatched.

Will try some more, but there are some sanity constraints on the matching.
If it finds more than one match for the name, it does not set any match,
because random matches on the same name were annoying in the past. There is
also a type constraint, which might skip some Wikidata items without
appropriate instance/subclass.
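
For a specific entry you can test that kind of constraint yourself with an ASK
along these lines (just a sketch; Q17334923, "locations" as mentioned
elsewhere in this thread, is one plausible root class for a places catalog,
and the real check may use something narrower):

ASK {
  wd:Q12253027 wdt:P31/wdt:P279* wd:Q17334923 .
}

If that comes back false for an entry like Akkunusjoki, the type constraint
rather than the label would be the likely reason it was skipped.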

On Mon, Jun 19, 2017 at 8:09 AM Osma Suominen 
wrote:

> Hi Magnus, all,
>
> I've been looking a bit closer at the YSO places catalog [1] in
> Mix'n'match and I'm wondering why only 20% of the places were
> automatically matched.
>
> For example, Nepal (http://www.yso.fi/onto/yso/p107682) was
> automatically matched to Nepal (Q837).
>
> But:
>
> Accra (http://www.yso.fi/onto/yso/p138653) was not matched to Accra
> (Q3761).
>
> Aceh (http://www.yso.fi/onto/yso/p147889) was not matched to Aceh (Q1823).
>
> Akkunusjoki (http://www.yso.fi/onto/yso/p109251) was not matched to
> Akkunusjoki (Q12253027).
>
> There are many more cases like this. So the precision of the automatic
> matching seems good (all but one were correct so far), but the recall is
> rather low, and even in cases where the label is identical a match has
> not been suggested. Is there anything that could be done about this?
>
>
> Somewhat related to this, it seems that none of the places with
> parenthetical qualifiers in their names were matched. For example "Ahjo
> (Kerava)" could have been matched to Q11849902 (which has a Finnish
> label that is identical) and "Ala-Malmi (Helsinki)" could have been
> matched to Q2829441 ("Ala-Malmi"). Since almost 60% of the place names
> include parenthetical qualifiers - to make them unique despite different
> places having identical names - this means that a lot of potential
> matches are missing. Could something be done to improve the situation?
>
>
> If Mix'n'match is incapable of automatically matching cases like this,
> would it help if I did an automatic matching externally using some other
> tool, and then gave the potential matches as e.g. a CSV file that could
> then be imported into Mix'n'match so that they can be verified there?
>
> -Osma
>
> [1] https://tools.wmflabs.org/mix-n-match/#/catalog/473
>
>
> Osma Suominen wrote on 17.06.2017 at 13:13:
> > Hi Magnus,
> >
> > Thanks a lot, that was fast! And the results look very good!
> >
> > I confirmed a couple dozen automated mappings and fixed an incorrect one
> > ("Amerikka" was matched to USA, but I changed it to "Americas"). Then I
> > started hitting rate limit errors. I guess it would be possible to avoid
> > those with some extra permissions?
> >
> > About 20% of the places were automatically matched. Probably most of the
> > remaining ones - around 5000 - do not exist in Wikidata because they are
> > e.g. towns and villages in Finland. Would it be fair game to create all
> > of them in Wikidata?
> >
> > -Osma
> >
>
> --
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529 <+358%2050%203199529>
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-16 Thread Magnus Manske
Now at https://tools.wmflabs.org/mix-n-match/#/catalog/473

Location data as well, example:
https://tools.wmflabs.org/mix-n-match/#/entry/22733305


On Fri, Jun 16, 2017 at 2:40 PM Osma Suominen 
wrote:

> Hi Magnus!
>
> That's excellent news! Thanks a lot!
>
> I'm currently preparing a CSV dump of YSO places. Most of the entries
> have coordinates. I will send it to you soon for inclusion as a catalog
> in Mix'n'match.
>
> -Osma
>
> Magnus Manske wrote on 16.06.2017 at 00:00:
> > Just to update everyone in this thread, I have added location support
> > for Mix'n'match. This will show on entries with a location, e.g.:
> >
> > https://tools.wmflabs.org/mix-n-match/#/entry/1655814
> >
> > All Mix'n'match locations (just short of half a million at the moment)
> > can be seen as a layer in WikiShootMe, e.g.:
> >
> > https://goo.gl/kqfjoj
> >
> > Cheers,
> > Magnus
> >
> > On Tue, Jun 13, 2017 at 5:52 PM Neubert, Joachim  > <mailto:j.neub...@zbw.eu>> wrote:
> >
> > Hi Osma,
> >
> > sorry for jumping in late. I've been at ELAG last week, talking
> > about a very similar topic (Wikidata as authority linking hub,
> > https://hackmd.io/p/S1YmXWC0e). Our use case was porting an existing
> > mapping between RePEc author IDs and GND IDs into Wikidata (and
> > furtheron extending it there). In that course, we had to match as
> > many persons as possible on the GND as well as on the RePEc side
> > (via Mix'n'match), before creating new items. The code used for
> > preparing the (quickstatements2) insert statements is linked from
> > the slides.
> >
> > Additionally, I've added ~12,000 GND IDs to Wikidata via their
> > existing VIAF identifiers (derived from a federated query on a
> > custom VIAF endpoint and the public WD endpoint -
> >
> https://github.com/zbw/sparql-queries/blob/master/viaf/missing_gnd_id_for_viaf.rq
> ).
> > This sounds very similar to your use case; also another query which
> > can derive future STW ID properties from the existing STW-GND
> > mapping
> > (
> https://github.com/zbw/sparql-queries/blob/master/stw/wikidata_mapping_candidates_via_gnd.rq
> > - currently hits a timeout at the WD subquery, but worked before). I
> > would be happy if that could be helpful.
> >
> > The plan to divide the m'n'm catalogs (places vs. subjects) makes
> > sense for me, we plan the same for STW. I'm not sure, if a
> > restriction to locations (Q17334923, or something more specific)
> > will match also all subclasses, but Magnus could perhaps take care
> > of that when you send him the files.
> >
> > Cheers, Joachim
> >
> >  > -Ursprüngliche Nachricht-
> >  > Von: Wikidata [mailto:wikidata-boun...@lists.wikimedia.org
> > <mailto:wikidata-boun...@lists.wikimedia.org>] Im Auftrag von
> >  > Osma Suominen
> >  > Gesendet: Dienstag, 6. Juni 2017 12:19
> >  > An: Discussion list for the Wikidata project.
> >  > Betreff: [Wikidata] Mix'n'Match with existing (indirect) mappings
> >  >
> >  > Hi Wikidatans,
> >  >
> >  > After several delays we are finally starting to think seriously
> > about mapping the
> >  > General Finnish Ontology YSO [1] to Wikidata. A "YSO ID"
> >  > property (https://www.wikidata.org/wiki/Property:P2347) was
> added to
> >  > Wikidata some time ago, but it has been used only a few times so
> far.
> >  >
> >  > Recently some 6000 places have been added to "YSO Places" [2], a
> new
> >  > extension of YSO, which was generated from place names in YSA and
> > Allärs,
> >  > our earlier subject indexing vocabularies. It would probably make
> > sense to map
> >  > these places to Wikidata, in addition to the general concepts in
> > YSO. We have
> >  > already manually added a few links from YSA/YSO places to
> > Wikidata for newly
> >  > added places, but this approach does not scale if we want to link
> > the thousands
> >  > of existing places.
> >  >
> >  > We also have some indirect sources of YSO/Wikidata mappings:
> >  >
> >  > 1. YSO is mapped to LCSH, and Wikidata also to LCSH (using P244,
> > LC/NACO
> >  > Authority File ID).

Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-15 Thread Magnus Manske
Just to update everyone in this thread, I have added location support for
Mix'n'match. This will show on entries with a location, e.g.:

https://tools.wmflabs.org/mix-n-match/#/entry/1655814

All Mix'n'match locations (just short of half a million at the moment) can
be seen as a layer in WikiShootMe, e.g.:

https://goo.gl/kqfjoj
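
Something similar can be done on the Wikidata side with a radius search on
coordinate location (P625), which is handy for eyeballing candidate items near
an entry's coordinates (a rough sketch using the query service's geo search;
the centre point is just an arbitrary example near Helsinki and the radius is
in kilometres):

SELECT ?place ?placeLabel ?coord WHERE {
  SERVICE wikibase:around {
    ?place wdt:P625 ?coord .
    bd:serviceParam wikibase:center "Point(24.93 60.17)"^^geo:wktLiteral ;
                    wikibase:radius "5" .
  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "fi,en" . }
}
LIMIT 50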

Cheers,
Magnus

On Tue, Jun 13, 2017 at 5:52 PM Neubert, Joachim  wrote:

> Hi Osma,
>
> sorry for jumping in late. I've been at ELAG last week, talking about a
> very similar topic (Wikidata as authority linking hub,
> https://hackmd.io/p/S1YmXWC0e). Our use case was porting an existing
> mapping between RePEc author IDs and GND IDs into Wikidata (and further
> extending it there). In that course, we had to match as many persons as
> possible on the GND as well as on the RePEc side (via Mix'n'match), before
> creating new items. The code used for preparing the (quickstatements2)
> insert statements is linked from the slides.
>
> Additionally, I've added ~12,000 GND IDs to Wikidata via their existing
> VIAF identifiers (derived from a federated query on a custom VIAF endpoint
> and the public WD endpoint -
> https://github.com/zbw/sparql-queries/blob/master/viaf/missing_gnd_id_for_viaf.rq).
> This sounds very similar to your use case; also another query which can
> derive future STW ID properties from the existing STW-GND mapping (
> https://github.com/zbw/sparql-queries/blob/master/stw/wikidata_mapping_candidates_via_gnd.rq
> - currently hits a timeout at the WD subquery, but worked before). I would
> be happy if that could be helpful.
>
> The plan to divide the m'n'm catalogs (places vs. subjects) makes sense
> for me, we plan the same for STW. I'm not sure if a restriction to
> locations (Q17334923, or something more specific) will match also all
> subclasses, but Magnus could perhaps take care of that when you send him
> the files.
>
> Cheers, Joachim
>
> > -Original Message-
> > From: Wikidata [mailto:wikidata-boun...@lists.wikimedia.org] On behalf of
> > Osma Suominen
> > Sent: Tuesday, 6 June 2017 12:19
> > To: Discussion list for the Wikidata project.
> > Subject: [Wikidata] Mix'n'Match with existing (indirect) mappings
> >
> > Hi Wikidatans,
> >
> > After several delays we are finally starting to think seriously about
> mapping the
> > General Finnish Ontology YSO [1] to Wikidata. A "YSO ID"
> > property (https://www.wikidata.org/wiki/Property:P2347) was added to
> > Wikidata some time ago, but it has been used only a few times so far.
> >
> > Recently some 6000 places have been added to "YSO Places" [2], a new
> > extension of YSO, which was generated from place names in YSA and Allärs,
> > our earlier subject indexing vocabularies. It would probably make sense
> to map
> > these places to Wikidata, in addition to the general concepts in YSO. We
> have
> > already manually added a few links from YSA/YSO places to Wikidata for
> newly
> > added places, but this approach does not scale if we want to link the
> thousands
> > of existing places.
> >
> > We also have some indirect sources of YSO/Wikidata mappings:
> >
> > 1. YSO is mapped to LCSH, and Wikidata also to LCSH (using P244, LC/NACO
> > Authority File ID). I dug a bit into both sets of mappings and found
> that
> > approximately 1200 YSO-Wikidata links could be generated from the
> > intersection of these mappings.
> >
> > 2. The Finnish broadcasting company Yle has also created some mappings
> > between KOKO (which includes YSO) and Wikidata. Last time I looked at
> those,
> > we could generate at least 5000 YSO-Wikidata links from them.
> > Probably more nowadays.
> >
> >
> > Of course, indirect mappings are a bit dangerous. It's possible that
> there are
> > some differences in meaning, especially with LCSH which has a very
> different
> > structure (and cultural context) than YSO. Nevertheless I think these
> could be a
> > good starting point, especially if a tool such as Mix'n'Match could be
> used to
> > verify them.
> >
> > Now my question is, given that we already have or could easily generate
> > thousands of Wikidata-YSO mappings, but the rest would still have to be
> semi-
> > automatically linked using Mix'n'Match, what would be a good way to
> > approach this? Does Mix'n'Match look at existing statements (in this
> case YSO
> > ID / P2347) in Wikidata when you load a new catalog, or ignore them?
> >
> > I can think of at least these approach

Re: [Wikidata] Multilingual and synonym support for M'n'm / was: Mix'n'Match with existing (indirect) mappings

2017-06-14 Thread Magnus Manske
On Tue, Jun 13, 2017 at 6:25 PM Neubert, Joachim  wrote:

> Hi Magnus, Osma,
>
>
>
> I suppose the scenario Osma pointed out is quite common for knowledge
> organization systems and in particular thesauri: Matching could take
> advantage of multilingual labels and also of synonyms, which are defined in
> the KOS.
>
>
>
> For populating the STW Thesaurus for Economics ID (P3911), my preliminary
> plan was to match with all multilingual labels and synonyms as search
> string in a custom WD endpoint (Fuseki, with full text search support), and
> display in the ranked SPARQL results of the search with a column with a
> valid insert statement that can be copied and pasted into QuickStatements2.
>
>
>
> Since Stas just announced an extension for WDQS with fulltext search (if I
> haven’t misunderstood his mail of 2017-06-12), it is perhaps now possible
> to do this kind of matching in WDQS.
>
>
>
> It would be great if such an extended matching could be integrated into
> M’n’m.
>
To clarify, Mix'n'match already searches in a language-neutral way, e.g. for
automatching.

Storing multiple labels per entry in the Mix'n'match database, and then
checking all-against-all, would require some large-scale rewiring.
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-07 Thread Magnus Manske
I won't be getting into coordinate cleanup ;-)

Coordinates would have to be compatible with
https://www.wikidata.org/wiki/Property:P625

On Wed, Jun 7, 2017 at 12:10 PM Susanna Ånäs  wrote:

> We will also need a coordinate transformation since all official Finnish
> coordinates are in EPSG:3067. Before or in MixnMatch.
>
> Susanna
>
> 2017-06-07 14:03 GMT+03:00 Osma Suominen :
>
>> On 07.06.2017 at 13:10, Magnus Manske wrote:
>>
>>> Does that imply coordinates in Mix'n'match? Because there is no support
>>> for that yet, though I could add it. Do you have an example catalog
>>> (existing or to-be-created)?
>>>
>>
>> For YSO places, it would be possible to create a Mix'n'Match catalog
>> where the majority of places have coordinates. YSO places doesn't itself
>> contain coordinates, but the Finnish places within it have been mapped to
>> the Place Name Registry (Paikannimirekisteri) maintained by National Land
>> Survey of Finland (Maanmittauslaitos), which includes point coordinates for
>> all places. So it would be possible to pick the coordinates from there for
>> the 4400 or so places that have been mapped, if that helps with the linking
>> in Mix'n'Match.
>>
>>
>> -Osma
>>
>> --
>> Osma Suominen
>> D.Sc. (Tech), Information Systems Specialist
>> National Library of Finland
>> P.O. Box 26 (Kaikukatu 4)
>> 00014 HELSINGIN YLIOPISTO
>> Tel. +358 50 3199529
>> osma.suomi...@helsinki.fi
>> http://www.nationallibrary.fi
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Tools for artworks and Depicts property

2017-06-07 Thread Magnus Manske
Can do, but this can get quite complicated. Example:
https://www.wikidata.org/wiki/Q2917717#P180

There has to be a "target search" (e.g. "bull"), zero to many qualifiers
(some qualifier properties may be used several times in a single statement,
like "applies to part"), some of these should be offered by default because
they occur often ("applies to part"), with optional default values
("foreground", "right"), while allowing arbitrary values ("sky")...

Any development should also take the upcoming Commons Wikibase into
account, so the code can be re-used to annotate any Commons image.

This all calls for a separate tool, rather than shoehorning it into e.g.
the Distributed Game. Some planning, on-wiki, with "community buy-in" might
not be too outrageous a suggestion, surely?
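
For reference, the statement structure behind that example can be inspected
with a query along these lines (a sketch; it assumes "applies to part" is
P518 and uses the p:/ps:/pq: statement prefixes). Any editing tool would
essentially have to write statements of this shape: a target value plus
optional, possibly repeated, qualifiers:

SELECT ?depicted ?depictedLabel ?part ?partLabel WHERE {
  wd:Q2917717 p:P180 ?statement .
  ?statement ps:P180 ?depicted .
  OPTIONAL { ?statement pq:P518 ?part . }  # pq:P518 = "applies to part" (assumption)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}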

On Tue, Jun 6, 2017 at 8:22 PM Pharos  wrote:

> Hi Wikidata-niks,
>
> As part of the Metropolitan Museum of Art project, I am interested in
> facilitating more public editing of Wikidata items for artwork through
> external tools, including that by relative newbies who might have an
> interest in art history.
>
> One basic property for artworks that is particularly suited for this field
> is Depicts (P180), for example saying that a particular painting depicts a
> particular person (or building, or mountain, or divinity, or type of
> clothing).
>
> We can do this to some extent now with Listeria and its 'wdedit' option,
> but this requires js customization and significant wiki background on the
> user's part.
>
> I was thinking something like the Wikidata Distributed Game might be
> interesting and broadly accessible to the public, but that tool currently
> only allows multiple-choice edits, and doesn't have a text entry box option.
>
> Would it be possible to have some WiDaR-sh tool that could fill this niche
> for artworks?
>
> I think it could be of very broad usefulness and interest to art
> communities.
>
> Thanks,
> Pharos
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-07 Thread Magnus Manske
Does that imply coordinates in Mix'n'match? Because there is no support for
that yet, though I could add it. Do you have an example catalog (existing
or to-be-created)?

On Tue, Jun 6, 2017 at 6:30 PM Susanna Ånäs  wrote:

> I thought of something like this:
> https://drive.google.com/file/d/0BxuJSZymOK8-R1Q0SXpmVGk3dkE/view
>
> Susanna
>
> 2017-06-06 19:21 GMT+03:00 Alex Stinson :
>
>> @Sandra: are you suggesting another layer on top of something like
>> https://tools.wmflabs.org/wikishootme/ ?
>>
>> Cheers,
>>
>> Alex
>>
>> On Tue, Jun 6, 2017 at 10:22 AM, Susanna Ånäs 
>> wrote:
>>
>>> Would anyone be interested in creating a map interface for matching
>>> places in Mix'n'Match?
>>>
>>> Just a thought...
>>>
>>> Susanna
>>>
>>> 2017-06-06 17:17 GMT+03:00 Osma Suominen :
>>>
>>>> Magnus Manske wrote on 06.06.2017 at 17:06:
>>>>
>>>>> By the way, we also have multilingual labels that could perhaps
>>>>> improve
>>>>> the automatic matching. YSO generally has fi/sv/en, YSO places has
>>>>> fi/sv. Can you make use of these too if I provided them in
>>>>> additional
>>>>> columns?
>>>>>
>>>>> Sorry, mix'n'match only does single language labels.
>>>>>
>>>>
>>>> Ok, then I have to think which language to pick for Mix'n'Match use.
>>>> For YSO, Finnish and Swedish labels are generally the best quality, but
>>>> probably wouldn't produce as many automated hits as the English ones. Also
>>>> it depends on who is going to do the manual matching.
>>>>
>>>> Any advice on this?
>>>>
>>>> It does redirect like this already. See e.g.
>>>>> http://www.yso.fi/onto/yso/p138653
>>>>>
>>>>> Great! So you could bunch the "old" ones and the new places into one
>>>>> list?
>>>>>
>>>>
>>>> In principle yes, but in practice, I think it would make sense to use
>>>> two lists, because the places are quite different from the general
>>>> concepts. Also the matching could be more focused for the places - don't
>>>> try to match with any Wikidata entity that is not a place.
>>>>
>>>>
>>>> -Osma
>>>>
>>>> --
>>>> Osma Suominen
>>>> D.Sc. (Tech), Information Systems Specialist
>>>> National Library of Finland
>>>> P.O. Box 26 (Kaikukatu 4)
>>>> 00014 HELSINGIN YLIOPISTO
>>>> Tel. +358 50 3199529
>>>> osma.suomi...@helsinki.fi
>>>> http://www.nationallibrary.fi
>>>>
>>>> ___
>>>> Wikidata mailing list
>>>> Wikidata@lists.wikimedia.org
>>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>>
>>>
>>>
>>> ___
>>> Wikidata mailing list
>>> Wikidata@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>
>>>
>>
>>
>> --
>> Alex Stinson
>> GLAM-Wiki Strategist
>> Wikimedia Foundation
>> Twitter:@glamwiki/@sadads
>>
>> Learn more about how the communities behind Wikipedia, Wikidata and other
>> Wikimedia projects partner with cultural heritage organizations:
>> http://glamwiki.org
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-06 Thread Magnus Manske
On Tue, Jun 6, 2017 at 2:44 PM Osma Suominen 
wrote:

> Hi Magnus!
>
> Thanks for your quick response. Comments inline.
>
> Magnus Manske wrote on 06.06.2017 at 15:57:
> > * If you want to "seed" Mix'n'match with third-party/indirect IDs
> > already in Wikidata, best to not create the catalog yourself, but mail
> > me the data instead
>
> Okay, great! What's the best format? The same as for creating catalogs,
> but with an additional Wikidata ID column with values from the existing
> mappings?
>
That would work fine.

>
> By the way, we also have multilingual labels that could perhaps improve
> the automatic matching. YSO generally has fi/sv/en, YSO places has
> fi/sv. Can you make use of these too if I provided them in additional
> columns?
>
Sorry, mix'n'match only does single language labels.

>
> > * If you want "YSO places" in Wikidata, we will need a new property for
> > that, unless the P2347 formatter URL would redirect automatically to
> > "/yso-paikat/"
>
> It does redirect like this already. See e.g.
> http://www.yso.fi/onto/yso/p138653
>
> Great! So you could bunch the "old" ones and the new places into one list?


> > * You can create a Mix'n'match catalog before there is a property, and
> > link them up later. The catalog will then synchronize
>
> I don't think we need an additional property, but good to know anyway.
>
> -Osma
>
> --
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529 <+358%2050%203199529>
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Mix'n'Match with existing (indirect) mappings

2017-06-06 Thread Magnus Manske
Hi Osma,

just a few remarks:

* If you want to "seed" Mix'n'match with third-party/indirect IDs already
in Wikidata, best to not create the catalog yourself, but mail me the data
instead

* If you want "YSO places" in Wikidata, we will need a new property for
that, unless the P2347 formatter URL would redirect automatically to
"/yso-paikat/"

* You can create a Mix'n'match catalog before there is a property, and link
them up later. The catalog will then synchronize

Cheers,
Magnus

On Tue, Jun 6, 2017 at 11:19 AM Osma Suominen 
wrote:

> Hi Wikidatans,
>
> After several delays we are finally starting to think seriously about
> mapping the General Finnish Ontology YSO [1] to Wikidata. A "YSO ID"
> property (https://www.wikidata.org/wiki/Property:P2347) was added to
> Wikidata some time ago, but it has been used only a few times so far.
>
> Recently some 6000 places have been added to "YSO Places" [2], a new
> extension of YSO, which was generated from place names in YSA and
> Allärs, our earlier subject indexing vocabularies. It would probably
> make sense to map these places to Wikidata, in addition to the general
> concepts in YSO. We have already manually added a few links from YSA/YSO
> places to Wikidata for newly added places, but this approach does not
> scale if we want to link the thousands of existing places.
>
> We also have some indirect sources of YSO/Wikidata mappings:
>
> 1. YSO is mapped to LCSH, and Wikidata also to LCSH (using P244, LC/NACO
> Authority File ID). I dug a bit into both sets of mappings and found
> that approximately 1200 YSO-Wikidata links could be generated from the
> intersection of these mappings.
>
> 2. The Finnish broadcasting company Yle has also created some mappings
> between KOKO (which includes YSO) and Wikidata. Last time I looked at
> those, we could generate at least 5000 YSO-Wikidata links from them.
> Probably more nowadays.
>
>
> Of course, indirect mappings are a bit dangerous. It's possible that
> there are some differences in meaning, especially with LCSH which has a
> very different structure (and cultural context) than YSO. Nevertheless I
> think these could be a good starting point, especially if a tool such as
> Mix'n'Match could be used to verify them.
>
> Now my question is, given that we already have or could easily generate
> thousands of Wikidata-YSO mappings, but the rest would still have to be
> semi-automatically linked using Mix'n'Match, what would be a good way to
> approach this? Does Mix'n'Match look at existing statements (in this
> case YSO ID / P2347) in Wikidata when you load a new catalog, or ignore
> them?
>
> I can think of at least these approaches:
>
> 1. First import the indirect mappings we already have to Wikidata as
> P2347 statements, then create a Mix'n'Match catalog with the remaining
> YSO concepts. The indirect mappings would have to be verified separately.
>
> 2. First import the indirect mappings we already have to Wikidata as
> P2347 statements, then create a Mix'n'Match catalog with ALL the YSO
> concepts, including the ones for which we already have imported a
> mapping. Use Mix'n'Match to verify the indirect mappings.
>
> 3. Forget about the existing mappings and just create a Mix'n'Match
> catalog with all the YSO concepts.
>
> Any advice?
>
> Thanks,
>
> -Osma
>
> [1] http://finto.fi/yso/
>
> [2] http://finto.fi/yso-paikat/
>
> --
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529 <+358%2050%203199529>
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Get "subject links" via Wikidata API

2017-04-12 Thread Magnus Manske
Just say "wd:Q12345" (the author) instead of "?author" ?

The backlinks thing works, but is tedious. You'll need to load the items
via action=wbgetentities to check if that link actually means "author", or
some other property.

On Wed, Apr 12, 2017 at 4:52 PM  wrote:

>
> To get the works that a person has written I would use SPARQL with
> something like "SELECT * WHERE { ?work wdt:P50 ?author }".
>
> I could also get the authors of a work via Wikidata MediaWiki API.
>
> My question is whether it is possible to get the works of an author
> given the author. With my knowledge of the API, I would say it is not
> possible, except if you do something like "Special:WhatLinksHere"
> (list=backlinks) and process/filter all the results.
>
>
> Finn Årup Nielsen
> http://people.compute.dtu.dk/faan/
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Full Text Search in Query Service

2017-03-31 Thread Magnus Manske
I suppose Gerard means this:
https://en.wikipedia.org/wiki/MediaWiki:Wdsearch.js

Last time I checked, it was enabled by default for everyone on it.wp.
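
On the wbsearchentities question further down in the quoted thread: statements are not returned by the search itself, but a two-step call against the existing API does the job — a rough Python sketch ("Douglas Adams" is an arbitrary example search):

import requests

API = "https://www.wikidata.org/w/api.php"

# Step 1: text search; wbsearchentities only returns ids, labels, descriptions.
search = requests.get(API, params={
    "action": "wbsearchentities", "search": "Douglas Adams",
    "language": "en", "format": "json"}).json()
ids = [hit["id"] for hit in search["search"]]

# Step 2: fetch the full statements for the matched entities.
entities = requests.get(API, params={
    "action": "wbgetentities", "ids": "|".join(ids),
    "props": "claims", "format": "json"}).json()
for qid, entity in entities["entities"].items():
    print(qid, len(entity.get("claims", {})), "statement groups")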

On Fri, Mar 31, 2017 at 3:58 PM Thad Guidry  wrote:

> Hi Gerard,
>
> Can you point me to a URL that describes that functionality by Magnus ?
>
> Thanks,
> -Thad
> +ThadGuidry <https://www.google.com/+ThadGuidry>
>
>
> On Fri, Feb 17, 2017 at 4:54 PM Gerard Meijssen 
> wrote:
>
> Hoi,
> Thad, there has been for a long long time functionality by Magnus that
> does exactly that. I use it for instance on my en.wp profile and will find
> Valerie Sutton..
> Thanks,
>  GerardM
>
> On 17 February 2017 at 23:38, Thad Guidry  wrote:
>
> Daniel & Stas,
>
> Btw, it would be AWESOME if wbsearchentities had a parameter option
> (&allDetails) to also list statements about an entity.
> Right now it only seems to mention a few bits of metadata and not much
> else.
>
> We had that similar parameter available in the Freebase API
>
> -Thad
> +ThadGuidry <https://www.google.com/+ThadGuidry>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Problems installing Wikibase repository and client in the same site

2017-03-25 Thread Magnus Manske
I literally had that problem yesterday myself. You do need "matching"
versions of MediaWiki and Wikibase. Stable/master does NOT work, but
master/master will work fine (but is a bit more fiddly to set up).

On Fri, Mar 24, 2017 at 2:27 PM Lydia Pintscher <
lydia.pintsc...@wikimedia.de> wrote:

> On Fri, Mar 24, 2017 at 3:04 PM, Iván Hernández Cazorla
>  wrote:
> > Hi Lydia!
> > I don't have access to my computer at the moment, but I think that I
> > installed the latest version.
> >
> > Thank you for your response.
>
> Ok that might be the issue. If you are using Wikibase master then you
> for now also need MediaWiki master.
>
>
> Cheers
> Lydia
>
> --
> Lydia Pintscher - http://about.me/lydia.pintscher
> Product Manager for Wikidata
>
> Wikimedia Deutschland e.V.
> Tempelhofer Ufer 23-24
> 10963 Berlin
> www.wikimedia.de
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
>
> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
> unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das
> Finanzamt für Körperschaften I Berlin, Steuernummer 27/029/42207.
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] OAuth issue

2017-03-10 Thread Magnus Manske
Never mind, a JSON escape issue with wbeditentity. The API error message
was somewhat misleading ;-)
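
For anyone hitting the same thing: the safest way to avoid escape problems is to let a JSON library serialize the "data" parameter — a minimal sketch (authentication omitted; Q4115189 is the Wikidata sandbox item):

import json
import requests

API = "https://www.wikidata.org/w/api.php"

# json.dumps() handles quotes, backslashes and non-ASCII in the payload,
# so the "data" parameter arrives as valid JSON.
data = {"labels": {"en": {"language": "en", "value": 'Says "hi" \\ again'}}}
payload = {
    "action": "wbeditentity",
    "id": "Q4115189",        # sandbox item
    "data": json.dumps(data),
    "token": "+\\",          # placeholder; a real edit needs a CSRF token + auth
    "format": "json",
}
print(requests.post(API, data=payload).json())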

On Fri, Mar 10, 2017 at 8:23 PM Magnus Manske 
wrote:

> My trusty WiDaR OAuth-based tool is throwing errors since a few days:
>
> "The authorization headers in your request are not valid: Invalid
> signature"
>
> I did see the breaking change at
>
> https://lists.wikimedia.org/pipermail/mediawiki-api-announce/2017-February/000125.html
> but I am not using lgpassword or lgtoken here.
>
> Did something else change? I notice I have
> 'oauth_signature_method' => 'HMAC-SHA1'
> Did someone perhaps change that after the SHA1 foobar?
>
> Anything else it could be? I know I haven't fiddled with my code...
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] OAuth issue

2017-03-10 Thread Magnus Manske
My trusty WiDaR OAuth-based tool is throwing errors since a few days:

"The authorization headers in your request are not valid: Invalid signature"

I did see the breaking change at
https://lists.wikimedia.org/pipermail/mediawiki-api-announce/2017-February/000125.html
but I am not using lgpassword or lgtoken here.

Did something else change? I notice I have
'oauth_signature_method' => 'HMAC-SHA1'
Did someone perhaps change that after the SHA1 foobar?

Anything else it could be? I know I haven't fiddled with my code...
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Label gaps on Wikidata

2017-02-22 Thread Magnus Manske
Relevant: https://arxiv.org/pdf/1702.06235.pdf
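
The copy-the-label idea from the thread below could be scripted roughly like this — a sketch only (it blindly copies an English label to French when the target is empty, which is exactly where the exceptions Gerard describes will bite; auth/token handling omitted):

import requests

API = "https://www.wikidata.org/w/api.php"

def copy_label(qid, source="en", target="fr", token=None):
    # Read the existing labels for the item.
    entity = requests.get(API, params={
        "action": "wbgetentities", "ids": qid,
        "props": "labels", "format": "json"}).json()["entities"][qid]
    labels = entity.get("labels", {})
    if source in labels and target not in labels:
        # Write the copied label (requires an authenticated session + CSRF token).
        requests.post(API, data={
            "action": "wbsetlabel", "id": qid, "language": target,
            "value": labels[source]["value"], "token": token, "format": "json"})

# e.g. copy_label("Q42", token=my_csrf_token)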

On Wed, Feb 22, 2017 at 6:57 AM Gerard Meijssen 
wrote:

> Hoi,
> You know, typically you are right. In the last few days I added members of
> the chamber of deputies of Haiti. I used names from the English Wikipedia
> but I am not sure that the names are correct. In one instance I found that
> the first name was at the back for others I am not sure that we have it
> right.
>
> The problem with rules are the exceptions and for automated approaches you
> have to seriously consider these.
> Thanks,
>GerardM
>
> On 22 February 2017 at 07:46, Konstantinos Stampoulis 
> wrote:
>
> Indeed in many cases a translation is needed, but for some languages and
> specific types of entities what is needed is just a transcription if not
> just a copy from the original language. For example names of humans or
> settlements. I guess for some languages with the same script, one can just
> copy the label, f.e. for a british person from en to fr.
>
>
>
> Konstantinos Stampoulis
> ger...@geraki.gr
> http://www.geraki.gr
>
> 
> Contribute to Wikipedia. https://el.wikipedia.org
> ---
> The above opinions are personal and represent only me. This message is to be
> considered confidential only if I have explicitly asked for that; otherwise
> you may use it in any public discussion. I have nothing to hide. :-)
>
>
> 2017-02-19 18:16 GMT+02:00 Smolenski Nikola :
>
> Citiranje Romaine Wiki :
> > If you look in the recent changes, most items have labels in English and
> > those are shown in the recent changes and elsewhere (so we know what the
> > item is about without opening first). But not all items have labels, and
> > these items without English label are often items with only a label in
> > Chinese, Arabic, Cyrillic script, Hebrew, etc. This forms a significant
> gap.
> >
> > Is there a way to easily make a transcription from one language to
> another?
> > Or alternatively if there is a database that has such transcriptions?
>
> There is in many cases, however there are some problems associated with
> it. You
> may not know what is the original language to transcribe from, you might
> need a
> translation rather than transcription, if there are multiple labels you
> have no
> way to choose between them.
>
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Resolver for Sqid?

2017-02-20 Thread Magnus Manske
On Fri, Feb 17, 2017 at 10:27 PM Kingsley Idehen 
wrote:

> On 2/17/17 4:51 PM, Magnus Manske wrote:
>
>
>
> On Fri, Feb 17, 2017 at 8:57 PM Kingsley Idehen 
> wrote:
>
> On 2/16/17 5:52 PM, Magnus Manske wrote:
> > I have extended the resolver to include squid and reasonator as targets:
> >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054&project=sqid
> >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054&project=reasonator
>
> Very cool!
>
> Question: What would it take to have DBpedia URIs added to the 100
> external identifier cross references? As you know, there are owl:sameAs
> relations in DBpedia that have Wikidata URIs as objects. We should
> really make the mutually beneficial nature of DBpedia and Wikidata
> clearer [1][2][3], at every turn :)
>
>
> Not quite sure I follow. Do you want to
> * query Wikidata but open the respective DBpedia page instead?
> * query DBpedia but open the respective Wikidata page instead?
> * query and open DBpedia?
>
>
> Much simpler than all of that.
>
> My point is that pages like https://www.wikidata.org/wiki/Q44461, don't
> have any DBpedia cross references in the Identifiers section. Once those
> relations are added to Wikidata I would expect DBpedia URIs to simply show
> up in the pages emitted by Wikidata tools (e.g., Reasonator, Squid etc.).
>
Yes, they would. But that's an issue of creating the respective Wikidata
property, and filling in the values.

What does it have to do with the resolver tool?

> Naturally, if there was an option for Reasonator, Squid etc. to work with
> SPARQL endpoints, generically, that would be a huge bonus too :)
>
> --
> Regards,
>
> Kingsley Idehen   
> Founder & CEO
> OpenLink Software   (Home Page: http://www.openlinksw.com)
>
> Weblogs (Blogs):
> Legacy Blog: http://www.openlinksw.com/blog/~kidehen/
> Blogspot Blog: http://kidehen.blogspot.com
> Medium Blog: https://medium.com/@kidehen
>
> Profile Pages:
> Pinterest: https://www.pinterest.com/kidehen/
> Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
> Twitter: https://twitter.com/kidehen
> Google+: https://plus.google.com/+KingsleyIdehen/about
> LinkedIn: http://www.linkedin.com/in/kidehen
>
> Web Identities (WebID):
> Personal: http://kingsley.idehen.net/dataspace/person/kidehen#this
> : 
> http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Resolver for Sqid?

2017-02-17 Thread Magnus Manske
On Fri, Feb 17, 2017 at 8:57 PM Kingsley Idehen 
wrote:

> On 2/16/17 5:52 PM, Magnus Manske wrote:
> > I have extended the resolver to include squid and reasonator as targets:
> >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054&project=sqid
> >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054&project=reasonator
>
> Very cool!
>
> Question: What would it take to have DBpedia URIs added to the 100
> external identifier cross references? As you know, there are owl:sameAs
> relations in DBpedia that have Wikidata URIs as objects. We should
> really make the mutually beneficial nature of DBpedia and Wikidata
> clearer [1][2][3], at every turn :)
>

Not quite sure I follow. Do you want to
* query Wikidata but open the respective DBpedia page instead?
* query DBpedia but open the respective Wikidata page instead?
* query and open DBpedia?
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Resolver for Sqid?

2017-02-17 Thread Magnus Manske
Done.

On Fri, Feb 17, 2017 at 8:40 AM Markus Kroetzsch <
markus.kroetz...@tu-dresden.de> wrote:

> Thanks, Magnus. Could you also change the SQID target URL to the stable
> version (remove "dev/" from URL)?
>
> Best,
>
> Markus
>
> On 16.02.2017 23:52, Magnus Manske wrote:
> > I have extended the resolver to include squid and reasonator as targets:
> >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054&project=sqid
> >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054&project=reasonator
> >
> > On Thu, Feb 16, 2017 at 10:32 PM Markus Kroetzsch
> > mailto:markus.kroetz...@tu-dresden.de>>
> > wrote:
> >
> > Hi Joachim,
> >
> > Thanks for the suggestion. We are at it. Preview:
> >
> > http://tools.wmflabs.org/sqid/dev/#/view?quick=VIAF:12307054
> >
> > But which of these special keys (like "VIAF") should the service
> > support? (note that there are 1100+ external id properties ...)
> >
> > The other syntax with prop=P227&value=120434059 is not implemented
> yet.
> >
> > Best regards,
> >
> > Markus
> >
> > On 07.02.2017 16:00, Neubert, Joachim wrote:
> > > For wikidata, there exists a resolver at
> > > https://tools.wmflabs.org/wikidata-todo/resolver.php, which allows
> > me to
> > > build URLs such as
> > >
> > >
> > >
> > >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054
> > > , or
> > >
> > >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?prop=P227&value=120434059
> > >
> > >
> > >
> > > in order to address wikidata items directly from their external
> > identifiers.
> > >
> > >
> > >
> > > Squid is more appealing for viewing the items. Does a similar
> > mechanism
> > > exist there?
> > >
> > >
> > >
> > > Cheers, Joachim
> > >
> > >
> > >
> > >
> > >
> > > ___
> > > Wikidata mailing list
> > > Wikidata@lists.wikimedia.org <mailto:Wikidata@lists.wikimedia.org>
> > > https://lists.wikimedia.org/mailman/listinfo/wikidata
> > >
> >
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org <mailto:Wikidata@lists.wikimedia.org>
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> >
> >
> >
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> >
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Resolver for Sqid?

2017-02-16 Thread Magnus Manske
I have extended the resolver to include squid and reasonator as targets:

https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054&project=sqid

https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054&project=reasonator
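
The same lookup can also be done directly against the query service, if anyone needs it programmatically — a sketch using P214 (VIAF ID) and the VIAF number from the example URLs:

import requests

# Find the item(s) carrying a given external identifier value.
query = 'SELECT ?item WHERE { ?item wdt:P214 "12307054" . }'
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": query, "format": "json"})
for b in r.json()["results"]["bindings"]:
    print(b["item"]["value"])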

On Thu, Feb 16, 2017 at 10:32 PM Markus Kroetzsch <
markus.kroetz...@tu-dresden.de> wrote:

> Hi Joachim,
>
> Thanks for the suggestion. We are at it. Preview:
>
> http://tools.wmflabs.org/sqid/dev/#/view?quick=VIAF:12307054
>
> But which of these special keys (like "VIAF") should the service
> support? (note that there are 1100+ external id properties ...)
>
> The other syntax with prop=P227&value=120434059 is not implemented yet.
>
> Best regards,
>
> Markus
>
> On 07.02.2017 16:00, Neubert, Joachim wrote:
> > For wikidata, there exists a resolver at
> > https://tools.wmflabs.org/wikidata-todo/resolver.php, which allows me to
> > build URLs such as
> >
> >
> >
> > https://tools.wmflabs.org/wikidata-todo/resolver.php?quick=VIAF:12307054
> > , or
> >
> >
> https://tools.wmflabs.org/wikidata-todo/resolver.php?prop=P227&value=120434059
> >
> >
> >
> > in order to address wikidata items directly from their external
> identifiers.
> >
> >
> >
> > Squid is more appealing for viewing the items. Does a similar mechanism
> > exist there?
> >
> >
> >
> > Cheers, Joachim
> >
> >
> >
> >
> >
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> >
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Embedding Wikidata references in Wikipedia pages

2017-02-15 Thread Bart Magnus
Dear all,

I learned from Asaf's Tech Talk
<https://www.youtube.com/watch?v=eVrAx3AmUvA>last week how to embed
statements from Wikidata in Wikipedia pages, which of course I immediately
tried myself. E.g.:
{{#statements:P485|from=Q158840}} on the Dutch page of James Ensor
<https://nl.wikipedia.org/wiki/James_Ensor#Archief>.

Is there a similar way to embed the reference that comes with this
statement (see P:485 (archives at) on https://www.wikidata.org/wiki/Q158840)?
That would enable me to immediately link to the description of the archive.
For now I had to do this by manually adding references to the Wikipedia page,
although the information is also part of Wikidata.

Thanks in advance for your help and advice!

Best,

Bart Magnus

PACKED vzw - Expertisecentrum Digitaal Erfgoed
Rue Delaunoystraat 58 #23
B-1080 Brussel
België

e b...@packed.be
t ++32 2 217 14 05
skype: bartmagnus
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Wikidata reconciliation service and Ope Refine

2017-01-27 Thread Magnus Manske
Hi Antonin,

mix'n'match is designed to work with almost any dataset, thus uses the
common denominator, which is names, for matching.

There are mechanisms to match on other properties, but writing an interface
for public consumption for this would be a task that could easily keep an
entire team of programmers busy :-)

If you can give me the whole list to download, I will see what I can do in
terms of auxiliary data matching. Maybe a combination of that, manual
matches (or at least confirmations on name matches), and the OpenRefine
approach will give us maximum coverage.
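
Just to sketch what I mean by auxiliary data matching, here is roughly how a URL column could be checked against "official website" (P856) — the column name and delimiter below are guesses, and the comparison is exact-match only, so trailing slashes etc. would need normalising:

import csv
import requests

# Hypothetical input layout: tab-separated, with a "url" column.
with open("aligned_institutions.txt") as f:
    rows = [r for r in csv.DictReader(f, delimiter="\t") if r.get("url")]

values = " ".join("<%s>" % r["url"] for r in rows[:100])
query = """
SELECT ?item ?website WHERE {
  VALUES ?website { %s }
  ?item wdt:P856 ?website .
}
""" % values
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": query, "format": "json"})
for b in r.json()["results"]["bindings"]:
    print(b["website"]["value"], "->", b["item"]["value"])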

It appears Kunstenpunt has no Wikidata property yet. Maybe Romaine could
start setting one up? That would help in terms of synchronisation, I believe.

Cheers,
Magnus



On Thu, Jan 26, 2017 at 4:44 PM Antonin Delpeuch (lists) <
li...@antonin.delpeuch.eu> wrote:

> Hi Magnus,
>
> Mix'n'match looks great and I do have a few questions about it. I'd like
> to use it to import a dataset, which looks like this (these are the 100
> first lines):
> http://pintoch.ulminfo.fr/34f8c4cf8a/aligned_institutions.txt
>
> I see how to import it in Mix'n'match, but given all the columns I have
> in this dataset, I think that it is a bit sad to resort to matching on
> the name only.
>
> Do you see any way to do some fuzzy-matching on, say, the URLs provided
> in the dataset against the "official website" property? I think that it
> would be possible with the (proposed) Wikidata interface for OpenRefine
> (if I understand the UI correctly).
>
> In this context, I think it might even be possible to confirm matches
> automatically (when the matches are excellent on multiple columns). As
> the dataset is rather large (400,000 lines) I would not really want to
> validate them one after the other with the web interface. So I would
> need a sort of batch edit. How would you do that?
>
> Finally, once matches are found, it would be great if statements
> corresponding to the various columns could be created in the items (if
> these statements don't already exist). With the appropriate reference to
> the dataset, ideally.
>
> I realise this is a lot to ask - maybe I should just write a bot.
>
> Alina, sorry to hijack your thread. I hope my questions were general
> enough to be interesting for other readers.
>
> Cheers,
> Antonin
>
>
> On 26/01/2017 16:01, Magnus Manske wrote:
> > If you want to match your list to Wikidata, to find which entries
> > already exist, have you considered Mix'n'match?
> > https://tools.wmflabs.org/mix-n-match/
> >
> > You can upload your names and identifiers at
> > https://tools.wmflabs.org/mix-n-match/import.php
> >
> > There are several mechanisms in place to help with the matching. Please
> > contact me if you need help!
> >
> > On Thu, Jan 26, 2017 at 3:58 PM Magnus Manske
> > mailto:magnusman...@googlemail.com>>
> wrote:
> >
> > Alina, I just found your bug report, which you filed under the wrong
> > issue tracker. The git repo (source code, issue tracker etc.) are
> here:
> > https://bitbucket.org/magnusmanske/reconcile
> >
> > The report says it "keeps hanging", which is so vague that it's
> > impossible to debug, especially since the example linked on
> > https://tools.wmflabs.org/wikidata-reconcile/
> > works perfectly fine for me.
> >
> > Does it not work at all for you? Does it work for a time, but then
> > stops? Does it "break" reproducibly on specific queries, or at
> > random? Maybe it breaks for specific "types" only? At what rate are
> > you hitting the tool? Do you have an example query, preferably one
> > that breaks?
> >
> > Please note that this is not an "official" WMF service, only parts
> > of the API are implemented, and there are currently other technical
> > limitations on it.
> >
> > Cheers,
> > Magnus
> >
> > On Thu, Jan 26, 2017 at 3:35 PM Antonin Delpeuch (lists)
> > mailto:li...@antonin.delpeuch.eu>>
> wrote:
> >
> > Hi,
> >
> > I'm also very interested in this. How did you configure your
> > OpenRefine
> > to use Wikidata? (Even if it does not currently work, I am
> > interested in
> > the setup.)
> >
> > There is currently an open issue (with a nice bounty) to improve
> the
> > integration of Wikidata in OpenRefine:
> > https://github.com/OpenRefine/OpenRefine/issues/805
> >
> > Best regards,
>

Re: [Wikidata] Wikidata reconciliation service and Ope Refine

2017-01-26 Thread Magnus Manske
If you want to match your list to Wikidata, to find which entries already
exist, have you considered Mix'n'match?
https://tools.wmflabs.org/mix-n-match/

You can upload your names and identifiers at
https://tools.wmflabs.org/mix-n-match/import.php

There are several mechanisms in place to help with the matching. Please
contact me if you need help!

On Thu, Jan 26, 2017 at 3:58 PM Magnus Manske 
wrote:

> Alina, I just found your bug report, which you filed under the wrong issue
> tracker. The git repo (source code, issue tracker etc.) are here:
> https://bitbucket.org/magnusmanske/reconcile
>
> The report says it "keeps hanging", which is so vague that it's impossible
> to debug, especially since the example linked on
> https://tools.wmflabs.org/wikidata-reconcile/
> works perfectly fine for me.
>
> Does it not work at all for you? Does it work for a time, but then stops?
> Does it "break" reproducibly on specific queries, or at random? Maybe it
> breaks for specific "types" only? At what rate are you hitting the tool? Do
> you have an example query, preferably one that breaks?
>
> Please note that this is not an "official" WMF service, only parts of the
> API are implemented, and there are currently other technical limitations on
> it.
>
> Cheers,
> Magnus
>
> On Thu, Jan 26, 2017 at 3:35 PM Antonin Delpeuch (lists) <
> li...@antonin.delpeuch.eu> wrote:
>
> Hi,
>
> I'm also very interested in this. How did you configure your OpenRefine
> to use Wikidata? (Even if it does not currently work, I am interested in
> the setup.)
>
> There is currently an open issue (with a nice bounty) to improve the
> integration of Wikidata in OpenRefine:
> https://github.com/OpenRefine/OpenRefine/issues/805
>
> Best regards,
> Antonin
>
> On 26/01/2017 12:22, Alina Saenko wrote:
> > Hello everyone,
> >
> > I have a question for people who are using the Wikidata reconciliation
> > service: https://tools.wmflabs.org/wikidata-reconcile/ It was working
> > perfectly in my Open Refine in November 2016, but since December it has
> > stopped working. I have already contacted Magnus Manske, but he hasn’t
> > responded yet. Does anyone else experience problems with the service and
> > know how to fix it?
> >
> > I’m using this service to link big lists of Belgian artists (37.000) and
> > performance art organisations (1.000) to Wikidata as a preparation to
> > upload contextual data about these persons and organisations to
> > Wikidata. This data will come from the Kunstenpunt database
> > (http://data.kunsten.be/people). Wikimedia user Romaine
> > (https://meta.wikimedia.org/wiki/User:Romaine) is helping us with this
> > project.
> >
> > Best regards,
> > Alina
> >
> >
> > --
> > Aanwezig ma, di, wo, do
> >
> > PACKED vzw - Expertisecentrum Digitaal Erfgoed
> > Rue Delaunoystraat 58 bus 23
> > B-1080 Brussel
> > Belgium
> >
> > e al...@packed.be <mailto:al...@packed.be>
> > t: +32 (0)2 217 14 05 <+32%202%20217%2014%2005>
> > w www.packed.be <http://www.packed.be/>
> >
> >
> >
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> >
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Wikidata reconciliation service and Ope Refine

2017-01-26 Thread Magnus Manske
Alina, I just found your bug report, which you filed under the wrong issue
tracker. The git repo (source code, issue tracker etc.) are here:
https://bitbucket.org/magnusmanske/reconcile

The report says it "keeps hanging", which is so vague that it's impossible
to debug, especially since the example linked on
https://tools.wmflabs.org/wikidata-reconcile/
works perfectly fine for me.

Does it not work at all for you? Does it work for a time, but then stops?
Does it "break" reproducibly on specific queries, or at random? Maybe it
breaks for specific "types" only? At what rate are you hitting the tool? Do
you have an example query, preferably one that breaks?

Please note that this is not an "official" WMF service, only parts of the
API are implemented, and there are currently other technical limitations on
it.

Cheers,
Magnus

On Thu, Jan 26, 2017 at 3:35 PM Antonin Delpeuch (lists) <
li...@antonin.delpeuch.eu> wrote:

> Hi,
>
> I'm also very interested in this. How did you configure your OpenRefine
> to use Wikidata? (Even if it does not currently work, I am interested in
> the setup.)
>
> There is currently an open issue (with a nice bounty) to improve the
> integration of Wikidata in OpenRefine:
> https://github.com/OpenRefine/OpenRefine/issues/805
>
> Best regards,
> Antonin
>
> On 26/01/2017 12:22, Alina Saenko wrote:
> > Hello everyone,
> >
> > I have a question for people who are using the Wikidata reconciliation
> > service: https://tools.wmflabs.org/wikidata-reconcile/ It was working
> > perfectly in my Open Refine in November 2016, but since December it has
> > stopped working. I have already contacted Magnus Manske, but he hasn’t
> > responded yet. Does anyone else experience problems with the service and
> > know how to fix it?
> >
> > I’m using this service to link big lists of Belgian artists (37.000) and
> > performance art organisations (1.000) to Wikidata as a preparation to
> > upload contextual data about these persons and organisations to
> > Wikidata. This data will come from the Kunstenpunt database
> > (http://data.kunsten.be/people). Wikimedia user Romaine
> > (https://meta.wikimedia.org/wiki/User:Romaine) is helping us with this
> > project.
> >
> > Best regards,
> > Alina
> >
> >
> > --
> > Aanwezig ma, di, wo, do
> >
> > PACKED vzw - Expertisecentrum Digitaal Erfgoed
> > Rue Delaunoystraat 58 bus 23
> > B-1080 Brussel
> > Belgium
> >
> > e al...@packed.be <mailto:al...@packed.be>
> > t: +32 (0)2 217 14 05 <+32%202%20217%2014%2005>
> > w www.packed.be <http://www.packed.be/>
> >
> >
> >
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> >
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Linked data fragment enabled on the Query Service

2016-12-22 Thread Magnus Manske
FWIW, WiFi/DSL access from Germany...

"Brad Pitt": 128 results in 11.2s

"Antarctic rivers": Silently fails with an internal server error, time
keeps running:
https://query.wikidata.org/bigdata/ldf?subject=http%3A%2F%2Fwww.wikidata.org%2Fentity%2FQ58596&object=http%3A%2F%2Fwww.wikidata.org%2Fentity%2FQ355304


On Thu, Dec 22, 2016 at 9:37 AM Ruben Verborgh 
wrote:

> Hi Kingsley,
>
> >> will see a substantial increase in server costs
> >> when they try to host that same data as a public SPARQL HTTP service.
> >
> > Again subjective.
>
> No, that's not subjective, that's perfectly measurable.
> And that's exactly what we did in our research.
>
> The problem with the SPARQL protocol as an API
> is that the per-request cost is a) higher
> and b) much more variable than any other API.
>
> Everywhere else on the Web,
> APIs shield data consumers from the backend,
> limiting the per-request complexity.
> That's why they thrive and SPARQL endpoints don't.
>
> Don't get me wrong, I'm happy with every
> highly available SPARQL endpoint out there.
> Wikidata and DBpedia are awesome.
> It's just that there are too few
> and I see cost as a major factor there.
>
> > You are implying that cost vs benefit analysis don't
> > drive decisions to put services on the Web, of course they do.
>
> Quite the contrary, I am arguing that—and this is subjective—
> because cost/benefit analyses drive decisions on the Web,
> we will never have substantially more SPARQL endpoints
> on the public Web than we have now. They're just too expensive.
>
> >> Federation is where I think public SPARQL endpoints will fail,
> >> so it will be worthwhile to see what happens.
> >
> > Really, then you will ultimately be surprised on that front too!
>
> I really really hope so.
> If one day, machines can execute queries on the Web
> as well as we can, I'd be really happy.
> My way to reach that is lightweight interfaces,
> but if it is possible with heavyweight interfaces,
> all the better.
>
> Best,
>
> Ruben
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Data import hub, data preperation instructions and import workflow for muggles

2016-11-21 Thread Magnus Manske
There are other options to consider:
* Curated import/sync via mix'n'match
* Batch-based import via QuickStatements (also see rewrite plans at
https://www.wikidata.org/wiki/User:Magnus_Manske/quick_statements2 )
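
To illustrate the QuickStatements route: the input is just tab-separated commands, one statement per line, which can be generated from any table — a rough sketch (the CSV columns here are invented for the example, and P2347 is used as an arbitrary external-ID property):

import csv

# Hypothetical input: a CSV with "qid" and "external_id" columns.
with open("mappings.csv") as f:
    for row in csv.DictReader(f):
        # QuickStatements V1: item <TAB> property <TAB> value (string values quoted).
        print('%s\tP2347\t"%s"' % (row["qid"], row["external_id"]))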

On Mon, Nov 21, 2016 at 3:11 PM john cummings 
wrote:

> Dear all
>
>
> Myself and Navino Evans have been working on a bare bone as possible
> workflow and instructions for making importing data into Wikidata available
> to muggles like me. We have written instructions up to the point where
> people would make a request on the 'bot requests' page to import the data
> into Wikidata.
>
>
> Please take a look and share your thoughts
>
>
> https://www.wikidata.org/wiki/User:John_Cummings/Dataimporthub
>
> https://www.wikidata.org/wiki/User:John_Cummings/wikidataimport_guide
>
>
> Thanks very much
>
>
> John
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] use of wdq and wqd create-a-query interface

2016-10-24 Thread Magnus Manske
You could just have asked...

Live version, no funny newlines:
https://tools.wmflabs.org/listeria/query_list.php


On Mon, Oct 24, 2016 at 2:48 PM Jan Dittrich 
wrote:

> If you want to CSV-Import the list to something, you can use "cat
> listeriaQueries.csv| sed ':a;N;$!ba;s/\\\n/ /g' > test.csv" to write a csv
> file which has no linebreaks in the queries. Some garbage \ may remain, but
> it it the best transformation my regex-savvy colleague and me could come up
> with.
>
> Jan
>
> 2016-10-21 17:45 GMT+02:00 Thad Guidry :
>
> The SPARQL ones might also start with CONSTRUCT
> (We at Schema.org are having fun with that one currently)
>
> Thad
> +ThadGuidry 
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
>
>
> --
> Jan Dittrich
> UX Design/ User Research
>
> Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> Phone: +49 (0)30 219 158 26-0
> http://wikimedia.de
>
> Imagine a world, in which every single human being can freely share in the
> sum of all knowledge. That‘s our commitment.
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
> der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
> Körperschaften I Berlin, Steuernummer 27/029/42207.
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] use of wdq and wqd create-a-query interface

2016-10-21 Thread Magnus Manske
Jan, Stas, Markus, all,

the run is now complete. I copied the ~13K queries, and the wikis/pages
they are used on, to

https://dl.dropboxusercontent.com/u/23027995/s52532__listeria_bot..21-10-2016
(128KB, bzip2)

The queries can be either WDQ or SPARQL; I guess SPARQL ones will start
with SELECT ;-)
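
If anyone wants a rough split of the dump, something like this should do once the file has been flattened to one query per line (file name below is a placeholder):

# A rough split of the query dump; assumes one query per line.
with open("listeria_queries.txt") as f:
    queries = [line.strip() for line in f if line.strip()]

sparql = [q for q in queries if q.upper().startswith(("SELECT", "PREFIX"))]
print(len(sparql), "look like SPARQL,", len(queries) - len(sparql), "presumably WDQ")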

Note: I do auto-replace "simple" (by heuristic) WDQ with SPARQL via the
wdq2sparql tool, so there is some bias there.

Cheers,
Magnus

On Fri, Oct 21, 2016 at 11:23 AM Jan Dittrich 
wrote:

> That’s great!
> I wonder how Stas and Markus want to classify the data, but if our
> interest goes in the same direction, we could pool brains and resources to
> find a classification or at least some nice ggplot vizs :-)
>
> Jan
>
> 2016-10-20 14:42 GMT+02:00 Magnus Manske :
>
> Jan, my Listeria bot currently manages ~13K lists on various WMF wikis. I
> just changed my update bot to collect all the queries that are used for
> those lists. It will take a day or so to collect them, then I'll forward
> them to you. Should be a good data set to estimate baseline demand, in
> terms of what complexity is required.
>
> On Thu, Oct 20, 2016 at 1:27 PM Jan Dittrich 
> wrote:
>
> Hi Magnus,
> Hi List,
>
> Thanks for your assessment!
>
> > Since "proper" wikidata lists would be powered by SPARQL, which is much
> more expressive, it will be difficult to wrap it into a flowchart-like
> interface, without overloading it with options, which in turn makes it
> harder to use.
>
> Yes, my goal would be not to graphically wrap the whole SPARQL; like you, I
> noticed it would be too overloaded. In any new interface, I would also only
> support a subset – but a useful one (specifically, useful for querying to
> generate Wikipedia lists, at least for now).
>
> For example, one of my current drafts  allows querying in this very
> limited way: *Prop:Value AND (Prop:Value OR Prop:Value) AND*… – so
> - only one Type (Query:Value, which also should work for Qualifiers,
> though),
> - one first level Operator (AND),
> - one second level Operator (OR).
>
> So I aim for:
>
> 1. Finding out which  instructions are most useful in practice, to see
> what might be included in such a subset (possibly like: "In almost all
> queries, I use the -Operator, but I never understood what _ was
> for".)
>
> and, which is why I asked about wqd,
>
> 2. What did work well about the using the graphical interface of wdq?
> (possible things: How scary/friendly is it compared to SPARQL? How hard is
> finding the needed instructions compared to SPARQL etc.)
>
> Kind Regards,
>  Jan
>
>
>
>
>
> 2016-10-20 13:55 GMT+02:00 Magnus Manske :
>
> Hi Jan,
>
> as the author of wdq and its query builder, I recommend against using it
> as a model.
>
> The wdq query builder does work, to some degree, because the instruction
> set of wdq is very limited, and predictable. Since "proper" wikidata lists
> would be powered by SPARQL, which is much more expressive, it will be
> difficult to wrap it into a flowchart-like interface, without overloading
> it with options, which in turn makes it harder to use.
>
> Personally, I would let people just paste SPARQL into the list
> definitions, and let them construct and test the queries elsewhere.
> query.wikidata.org already has nice features, such as CTRL-space
> replacing of free text by Wikidata entities, searchable examples, etc.
> Combine that with a place where people can get help with query building,
> and IMHO it would serve the community well.
>
> There also was a natural-language-to-SPARQL tool somewhere, but I can't
> find the URL...
>
> This is not to say that we shouldn't have, at some point down the road, a
> query-building interface that is tailored to our needs!
>
> My two Eurocent,
> Magnus
>
>
> On Thu, Oct 20, 2016 at 12:30 PM Jan Dittrich 
> wrote:
>
> Hello everyone,
>
> I continue to work on creating an interface that allows easy querying of
> Wikidata for generating lists for wikipedia (and, possibly, beyond).
>
> An existing, interface based query builder is http://wdq.wmflabs.org/
>
> If you use it (or have used it), I would be interested in hearing about the
> advantages or disadvantages you perceived in regards to the functions of
> wqd and their interface.
>
> Kind Regards,
>  Jan
>
> PS: In my experience, examples ("I tried to…") or context ("because I
> wanted to…") can greatly ease understanding, so if giving them makes sense
> for you, don't hesitate to include them.
>
> --
> Jan Dittrich
> UX Design/ User Research
>
> Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin

Re: [Wikidata] use of wdq and wqd create-a-query interface

2016-10-20 Thread Magnus Manske
Jan, my Listeria bot currently manages ~13K lists on various WMF wikis. I
just changed my update bot to collect all the queries that are used for
those lists. It will take a day or so to collect them, then I'll forward
them to you. Should be a good data set to estimate baseline demand, in
terms of what complexity is required.

On Thu, Oct 20, 2016 at 1:27 PM Jan Dittrich 
wrote:

> Hi Magnus,
> Hi List,
>
> Thanks for your assessment!
>
> > Since "proper" wikidata lists would be powered by SPARQL, which is much
> more expressive, it will be difficult to wrap it into a flowchart-like
> interface, without overloading it with options, which in turn makes it
> harder to use.
>
> Yes, my goal would be not to graphically wrap the whole SPARQL; like you, I
> noticed it would be too overloaded. In any new interface, I would also only
> support a subset – but a useful one (specifically, useful for querying to
> generate Wikipedia lists, at least for now).
>
> For example, one of my current drafts  allows querying in this very
> limited way: *Prop:Value AND (Prop:Value OR Prop:Value) AND*… – so
> - only one Type (Query:Value, which also should work for Qualifiers,
> though),
> - one first level Operator (AND),
> - one second level Operator (OR).
>
> So I aim for:
>
> 1. Finding out which  instructions are most useful in practice, to see
> what might be included in such a subset (possibly like: "In almost all
> queries, I use the -Operator, but I never understood what _ was
> for".)
>
> and, which is why I asked about wqd,
>
> 2. What did work well about the using the graphical interface of wdq?
> (possible things: How scary/friendly is it compared to SPARQL? How hard is
> finding the needed instructions compared to SPARQL etc.)
>
> Kind Regards,
>  Jan
>
>
>
>
>
> 2016-10-20 13:55 GMT+02:00 Magnus Manske :
>
> Hi Jan,
>
> as the author of wdq and its query builder, I recommend against using it
> as a model.
>
> The wdq query builder does work, to some degree, because the instruction
> set of wdq is very limited, and predictable. Since "proper" wikidata lists
> would be powered by SPARQL, which is much more expressive, it will be
> difficult to wrap it into a flowchart-like interface, without overloading
> it with options, which in turn makes it harder to use.
>
> Personally, I would let people just paste SPARQL into the list
> definitions, and let them construct and test the queries elsewhere.
> query.wikidata.org already has nice features, such as CTRL-space
> replacing of free text by Wikidata entities, searchable examples, etc.
> Combine that with a place where people can get help with query building,
> and IMHO it would serve the community well.
>
> There also was a natural-language-to-SPARQL tool somewhere, but I can't
> find the URL...
>
> This is not to say that we shouldn't have, at some point down the road, a
> query-building interface that is tailored to our needs!
>
> My two Eurocent,
> Magnus
>
>
> On Thu, Oct 20, 2016 at 12:30 PM Jan Dittrich 
> wrote:
>
> Hello everyone,
>
> I continue to work on creating an interface that allows easy querying of
> Wikidata for generating lists for wikipedia (and, possibly, beyond).
>
> An existing, interface based query builder is http://wdq.wmflabs.org/
>
> If you use it (or have used it), I would be interested in hearing about the
> advantages or disadvantages you perceived in regards to the functions of
> wqd and their interface.
>
> Kind Regards,
>  Jan
>
> PS: In my experience, examples ("I tried to…") or context ("because I
> wanted to…") can greatly ease understanding, so if giving them makes sense
> for you, don't hesitate to include them.
>
> --
> Jan Dittrich
> UX Design/ User Research
>
> Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> Phone: +49 (0)30 219 158 26-0
> http://wikimedia.de
>
> Imagine a world, in which every single human being can freely share in the
> sum of all knowledge. That‘s our commitment.
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
> der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
> Körperschaften I Berlin, Steuernummer 27/029/42207.
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>

Re: [Wikidata] use of wdq and wqd create-a-query interface

2016-10-20 Thread Magnus Manske
Thanks Thomas!

"Execute this query" points to the wrong server though...

On Thu, Oct 20, 2016 at 1:16 PM Thomas PT  wrote:

>
> > There also was a natural-language-to-SPARQL tool somewhere, but I can't
> find the URL...
> http://tools.wmflabs.org/ppp-sparql/#Where%20is%20Paris%3F (it's very
> alpha-ish).
>
> Thomas
>
> > Le 20 oct. 2016 à 13:55, Magnus Manske  a
> écrit :
> >
> > Hi Jan,
> >
> > as the author of wdq and its query builder, I recommend against using it
> as a model.
> >
> > The wdq query builder does work, to some degree, because the instruction
> set of wdq is very limited, and predictable. Since "proper" wikidata lists
> would be powered by SPARQL, which is much more expressive, it will be
> difficult to wrap it into a flowchart-like interface, without overloading
> it with options, which in turn makes it harder to use.
> >
> > Personally, I would let people just paste SPARQL into the list
> definitions, and let them construct and test the queries elsewhere.
> query.wikidata.org already has nice features, such as CTRL-space
> replacing of free text by Wikidata entities, searchable examples, etc.
> Combine that with a place where people can get help with query building,
> and IMHO it would serve the community well.
> >
> > There also was a natural-language-to-SPARQL tool somewhere, but I can't
> find the URL...
> >
> > This is not to say that we shouldn't have, at some point down the road,
> a query-building interface that is tailored to our needs!
> >
> > My two Eurocent,
> > Magnus
> >
> >
> > On Thu, Oct 20, 2016 at 12:30 PM Jan Dittrich 
> wrote:
> > Hello everyone,
> >
> > I continue to work on creating an interface that allows easy querying of
> Wikidata for generating lists for wikipedia (and, possibly, beyond).
> >
> > An existing, interface based query builder is http://wdq.wmflabs.org/
> >
> > If you use it (or have used it), I would be interested in hearing about the
> advantages or disadvantages you perceived in regards to the functions of
> wqd and their interface.
> >
> > Kind Regards,
> >  Jan
> >
> > PS: In my experience, examples ("I tried to…") or context ("because I
> wanted to…") can greatly ease understanding, so if giving them makes sense
> for you, don't hesitate to include them.
> >
> > --
> > Jan Dittrich
> > UX Design/ User Research
> >
> > Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> > Phone: +49 (0)30 219 158 26-0
> > http://wikimedia.de
> >
> > Imagine a world, in which every single human being can freely share in
> the sum of all knowledge. That‘s our commitment.
> >
> > Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
> der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
> Körperschaften I Berlin, Steuernummer 27/029/42207.
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] use of wdq and wqd create-a-query interface

2016-10-20 Thread Magnus Manske
Hi Jan,

as the author of wdq and its query builder, I recommend against using it as
a model.

The wdq query builder does work, to some degree, because the instruction
set of wdq is very limited, and predictable. Since "proper" wikidata lists
would be powered by SPARQL, which is much more expressive, it will be
difficult to wrap it into a flowchart-like interface, without overloading
it with options, which in turn makes it harder to use.

Personally, I would let people just paste SPARQL into the list definitions,
and let them construct and test the queries elsewhere. query.wikidata.org
already has nice features, such as CTRL-space replacing of free text by
Wikidata entities, searchable examples, etc. Combine that with a place
where people can get help with query building, and IMHO it would serve the
community well.

There also was a natural-language-to-SPARQL tool somewhere, but I can't
find the URL...

This is not to say that we shouldn't have, at some point down the road, a
query-building interface that is tailored to our needs!

My two Eurocent,
Magnus


On Thu, Oct 20, 2016 at 12:30 PM Jan Dittrich 
wrote:

> Hello everyone,
>
> I continue to work on creating an interface that allows easy querying of
> Wikidata for generating lists for wikipedia (and, possibly, beyond).
>
> An existing, interface based query builder is http://wdq.wmflabs.org/
>
> If you use it (or have used it), I would be interested in hearing about the
> advantages or disadvantages you perceived in regards to the functions of
> wqd and their interface.
>
> Kind Regards,
>  Jan
>
> PS: In my experience, examples ("I tried to…") or context ("because I
> wanted to…") can greatly ease understanding, so if giving them makes sense
> for you, don't hesitate to include them.
>
> --
> Jan Dittrich
> UX Design/ User Research
>
> Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> Phone: +49 (0)30 219 158 26-0
> http://wikimedia.de
>
> Imagine a world, in which every single human being can freely share in the
> sum of all knowledge. That‘s our commitment.
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
> der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
> Körperschaften I Berlin, Steuernummer 27/029/42207.
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Quick Statements Tool: How to add publication date of sources

2016-10-18 Thread Magnus Manske
As with all my tools, development happens when both of these conditions
coincide:
* I have time
* I am in the mood
;-)

On Tue, Oct 18, 2016 at 1:07 PM Estermann Beat 
wrote:

> Dear Magnus,
>
>
>
> What do you suggest we do in the meanwhile?
>
>
>
> -  Wait with the data ingestion process until the overhaul has
> been done? - Do you have an idea when this could be the case?
>
>
>
> -  Ingest the data, providing the source without its “publication
> date”? – Do you think it will be possible to add the “publication date” of
> the source at a later stage by re-running the tool when it has been
> improved? Or is it better to leave out the source altogether for the time
> being?
>
>
>
> Cheers,
>
> Beat
>
>
>
>
>
>
>
> *From:* Wikidata [mailto:wikidata-boun...@lists.wikimedia.org] *On Behalf
> Of *Magnus Manske
> *Sent:* Dienstag, 18. Oktober 2016 13:56
> *To:* Discussion list for the Wikidata project.
> *Subject:* Re: [Wikidata] Quick Statements Tool: How to add publication
> date of sources
>
>
>
> The tool is missing several functions, and requires a general
> overhaul/rewrite. Haven't gotten around to it.
>
>
>
> On Tue, Oct 18, 2016 at 12:52 PM Estermann Beat 
> wrote:
>
> Hi,
>
>
>
> I’ve recently tried in vain to add a “publication date” qualifier to a
> reference, using the Quick Statements Tool. I’ve posted the issue on
> Magnus’ Talk Page, but haven’t gotten any response so far:
>
>
> https://en.wikipedia.org/wiki/User_talk:Magnus_Manske#Quick_Statements_Tool:_Adding_publication_date_of_sources
>
>
>
> Does anyone know how to do this? Or can you suggest any other tools, work
> arounds etc. to properly add the source, including the publication date, in
> the course of a larger data ingestion job?
>
>
>
> I’m not a software programmer; thus low-threshold tools or newbie-proof
> instructions would be appreciated. ;-)
>
>
>
> Cheers,
>
> Beat
>
>
>
> _
>
>  [image: OpenGLAM.ch_Logo.jpg]
>
> Beat Estermann
> Coordinator OpenGLAM CH Working Group
> http://openglam.ch
>
> Berne University of Applied Sciences
> E-Government Institute
> Brückenstrasse 73
> CH-3005 Bern
>
> beat.esterm...@openglam.ch
>
> Phone +41 31 848 34 38 <+41%2031%20848%2034%2038>
>
>
>
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Quick Statements Tool: How to add publication date of sources

2016-10-18 Thread Magnus Manske
The tool is missing several functions, and requires a general
overhaul/rewrite. Haven't gotten around to it.

On Tue, Oct 18, 2016 at 12:52 PM Estermann Beat 
wrote:

> Hi,
>
>
>
> I’ve recently tried in vain to add a “publication date” qualifier to a
> reference, using the Quick Statements Tool. I’ve posted the issue on
> Magnus’ Talk Page, but haven’t gotten any response so far:
>
>
> https://en.wikipedia.org/wiki/User_talk:Magnus_Manske#Quick_Statements_Tool:_Adding_publication_date_of_sources
>
>
>
> Does anyone know how to do this? Or can you suggest any other tools, work
> arounds etc. to properly add the source, including the publication date, in
> the course of a larger data ingestion job?
>
>
>
> I’m not a software programmer; thus low-threshold tools or newbie-proof
> instructions would be appreciated. ;-)
>
>
>
> Cheers,
>
> Beat
>
>
>
> _
>
>  [image: OpenGLAM.ch_Logo.jpg]
>
> Beat Estermann
> Coordinator OpenGLAM CH Working Group
> http://openglam.ch
>
> Berne University of Applied Sciences
> E-Government Institute
> Brückenstrasse 73
> CH-3005 Bern
>
> beat.esterm...@openglam.ch
>
> Phone +41 31 848 34 38 <+41%2031%20848%2034%2038>
>
>
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Terms - search for corresponding WD-item and WP-article

2016-10-10 Thread Magnus Manske
You could try this (example:"Cambridge"):
https://quarry.wmflabs.org/query/13025

Not sure if your terms will work though; "Aerial photograph" does not
exist, for example. You can replace
term_type='label'
with
term_type IN ('label','alias')
to get more hits.
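
The query service is another route, and it gives the Wikipedia URL directly — a sketch for one term and one target wiki (swap the label language and the site URL as needed):

import requests

term, wiki = "Aberration of light", "https://de.wikipedia.org/"
query = """
SELECT ?item ?article WHERE {
  ?item rdfs:label "%s"@en .
  ?article schema:about ?item ;
           schema:isPartOf <%s> .
}
""" % (term, wiki)
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": query, "format": "json"})
for b in r.json()["results"]["bindings"]:
    print(term, b["item"]["value"], b["article"]["value"])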

On Mon, Oct 10, 2016 at 7:14 AM Markus Bärlocher <
markus.baerloc...@lau-net.de> wrote:

> Dear Wikidata specialists,
>
> I have a list with 5000 English terms,
> which are translated to several languages (including the corresponding
> WP-language-shortcut).
> Now I look for the corresponding WP-article (as URL).
>
> How can I search the corresponding *WD-item*?
>
> As result I need a table with the columns:
> - Term in English
> - WD-item
>
> How can I build the *WP-URL* for the language-specific WP-article?
> Input:
> - Term in Englisch
> - WD-item
> - WP-language-shortcut
>
> Best regards,
> Markus
>
> Examples:
> Aberration of light
> Abyssal hills
> Aerial photograph
> Age of diurnal inequality
> Aperture of antenna
>
> Languages can be all WP-languages.
>
>
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] [Analytics] SPARQL power users and developers

2016-10-03 Thread Magnus Manske
Using custom HTTP headers would, of course, complicate calls for the tool
authors (i.e., myself). $.ajax instead of $.get and all that. I would be
less inclined to change to that.

On Mon, Oct 3, 2016 at 10:42 AM Guillaume Lederrey 
wrote:

> On Mon, Oct 3, 2016 at 12:40 AM, Stas Malyshev 
> wrote:
> > Hi!
> >
> >> This thread is missing some background context info as to what the
> >> issues are,  if you could forward it it will be great.
> >
> > Well, I'm not talking about specific issues, except for the general need
> > of identifying which tool is responsible for which queries. Basically,
> > there are several ways of doing it:
> >
> > 1. Adding comments to the query itself
> > 2. Adding query parameters
> > 3. Adding query headers, specifically:
> > a) distinct User-Agent
> > b) distinct X-Analytics header
> > c) custom headers
> >
> > I think that 3a is good for statistics purposes, though 1 could be more
> > efficient when we need to find out who sent a particular query. 3b may
> > be superior to 3a, but I admit I don't know enough about it :)
>
> I'm a bit late to the discussion, but still...
>
> I think that as much as possible metadata about a query should be done
> via HTTP headers. This way, they are not coupled to SPARQL itself and
> can be analysed with generic tools already in place. Setting a
> user-agent is a standard best practice and seems to be part of the
> Mediawiki API guidelines [1], we should use the same guidelines, no
> reason to reinvent them.
>
> X-Analytics header might allow for more fine grained information, but
> I'm not sure this is actually needed (and using X-Analytics should not
> preclude from having a sensible user-agent).
>
>
> [1] https://www.mediawiki.org/wiki/API:Main_page#Identifying_your_client
>
>
> > --
> > Stas Malyshev
> > smalys...@wikimedia.org
> >
> > ___
> > Wikidata mailing list
> > Wikidata@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
>
> --
> Guillaume Lederrey
> Operations Engineer, Discovery
> Wikimedia Foundation
> UTC+2 / CEST
>
> ___
> Analytics mailing list
> analyt...@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/analytics
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] SPARQL power users and developers

2016-10-01 Thread Magnus Manske
I'll try to throw in a #TOOL: comment where I can remember using SPARQL,
but I'll be bound to forget a few...
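
For anyone wondering what that looks like in practice, a sketch; the "#TOOL:"
format is informal, any SPARQL comment that identifies the tool in the request
logs will do (tool name and URL below are placeholders):

# A query tagged for the request logs via a leading comment.
TAGGED_QUERY = """#TOOL: my-tool (https://example.org/my-tool)
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q5 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""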

On Fri, 30 Sep 2016, 21:21 Stas Malyshev,  wrote:

> Hi!
>
> > Would it help if I add the following header to every large batch of
> queries?
>
> I think having a distinct User-Agent header (maybe with URL linking to
> the rest of the info) would be enough. This is recorded by the request
> log and can be used later in processing.
>
> In general, every time you create a bot which does large number of
> processing it is a good practice to send distinct User-Agent header so
> people on the other side would know what's going on.
>
> --
> Stas Malyshev
> smalys...@wikimedia.org
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Dynamic Lists, Was: Re: List generation input

2016-09-18 Thread Magnus Manske
On Fri, Sep 16, 2016 at 10:05 PM Luca Martinelli 
wrote:

> 2016-09-15 14:09 GMT+02:00 Jan Dittrich :
> > 2016-09-14 11:47 GMT+02:00 Magnus Manske :
> >> I found people opposed to Listeria lists (in article namespace) for two
> main
> >> reasons:
> >>
> >> * The list is wikitext, so it /looks/ like one can edit it, but then it
> gets
> >> overwritten by a bot. If the wikitext representation of a list were
> just a
> >> parser function or extension tag, that problem would not appear
> (nothing to
> >> edit)
> >
> > Thanks! This is very helpful for me.
> >
> > The wikitext overwriting is a good point and it is easy to understand
> that
> > this leads to confusion.
>
> Still, there's a notice that warns people that "Edits made within the
> list area will be removed on the next update!". Moreover, this is not
> entirely "new" to, say, Italian Wikipedia, where we already have
> "automated pages", i.e. lists filled in by bots. I'm not
> underestimating the possible confusion, I'm just weighing pros and
> cons, and the former outweigh the latter IMHO.
>
> Let me do an example: do we *really* think this is a problem, say,
> with the list of Prime Ministers of the Kingdom of Sardinia (a title
> bestowed from 1848 to 1861)? Editing such a list would be virtually
> useless, except for a bunch of coherence edits (adding an image, fixing
> a link) that ListeriaBot can do on its own every $time. Not just on
> one wiki, but on ALL wikis.
>
> The only problem that comes to my mind is a table with wrong data in
> it -- this is not a problem that ListeriaBot can solve, but a problem
> that it can help bring to the surface, so again... profit?
>

The overwriting of human edits was the main "violation of the rules" that
led to the banning of Listeria bot on German Wikipedia for the article
namespace. I think someone actually made an IP edit just to have it
overwritten on the next update, then pointed to the "evil bot action". Ah,
such are luddites.


>
> 2016-09-15 16:53 GMT+02:00 Navino Evans :
> > The main issue that comes up for me with Listeria is with the 'section
> by property' feature. There is currently no control over how it deals with
> multiple values, so a simple list of people sectioned by occupation can
> lead to very misleading results.
> > Every item appears only once on the list, so someone with two
> occupations will just end up in one section or the other.
>
> I found another problem: if I do a query on query.wikidata about, say,
> ministers of Transport of the Kingdom of Italy, the query would show
> the exact list of ministers, repeating correctly how many times John
> Smith has been minister, with data and all. But with the automated
> list, ALL John Smith's multiple terms would be "compressed" in just
> one entry. Is it possible to "convince" ListeriaBot that the same
> value may occur more than once?
>
I *think* you could do that if you use the SPARQL variables directly
instead of the properties in the column headings, but you'd need to make
"fake" item IDs (e.g. Q123.a, Q123.b or something). Internally, everything
is wired to list one item per row, and it would be hard to fiddle with it.
It was always a first attempt, not the final product...
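
For context, the kind of query Luca describes (one row per term served, so the
same person can legitimately appear several times) would look roughly like
this; the position item is a stand-in chosen for illustration, not the actual
Kingdom of Italy ministry:

# Hedged sketch: one result row per "position held" statement, with optional
# start/end qualifiers, so a person who served two terms shows up twice
# (Grover Cleveland, for instance, would appear in two rows). Listeria
# currently folds these into a single row per item.
TERMS_QUERY = """
SELECT ?person ?personLabel ?start ?end WHERE {
  ?person p:P39 ?statement .
  ?statement ps:P39 wd:Q11696 .             # Q11696 = President of the USA (stand-in)
  OPTIONAL { ?statement pq:P580 ?start . }  # start time
  OPTIONAL { ?statement pq:P582 ?end . }    # end time
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY ?personLabel ?start
"""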

>
> --
> Luca "Sannita" Martinelli
> http://it.wikipedia.org/wiki/Utente:Sannita
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Dynamic Lists, Was: Re: List generation input

2016-09-14 Thread Magnus Manske
On Wed, Sep 14, 2016 at 10:07 AM Luca Martinelli 
wrote:

> On 5 Sep 2016 at 11:43, "Jan Dittrich"  wrote:
> > - What are current workflows with Listeria? I think I remember that
> people in our interviews (thanks to everyone who participated!) mentioned
> that the generation process currently needs to be triggered manually. Is
> that just because there is no automatic way or does it provide you with
> advantages, too?
>
> Just one thing on this issue. Based on my short experience, the necessity
> of a manual trigger is only half true: ListeriaBot updates every automated
> list every 24 hours or so. If you change things and want to see those
> changes immediately, THEN you can manually trigger an update. I don't see
> this as much of a disadvantage, actually.
>
> Yes, that's how I set it up.

I contemplated having a bot look through recent changes to update manually
changed lists (e.g. when the query was changed), but it would probably take a
few minutes for some lists to update, which might be more confusing.

A proper extension would likely regenerate the list as part of the edit, so
this wouldn't be an issue.

I found people opposed to Listeria lists (in article namespace) for two
main reasons:

* The list is wikitext, so it /looks/ like one can edit it, but then it
gets overwritten by a bot. If the wikitext representation of a list were
just a parser function or extension tag, that problem would not appear
(nothing to edit)

* Handcrafted lists. A lot of time went into some of the lists, and while
the raw data can often be regenerated from Wikidata, some manually curated
lists have fields for notes, special formats for coordinates etc. that are
hard to replicate. Allowing custom templates can solve the bespoke
rendering, but especially the "notes" column is essentially not
reproducible with Wikidata, as it is. A tag-based MediaWiki extension could
offer notes per item, maybe, in wikitext.

My 2 Eurocent,
Magnus
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] List of WP-languages and language short code

2016-09-07 Thread Magnus Sälgö
Manske tweeted
https://twitter.com/MagnusManske/status/755399343532892160

==> P424 is Wikimedia language codes ==> 
https://www.wikidata.org/wiki/Property:P424

==> his SPARQL query gives today 293 results

http://tinyurl.com/j2fgxf8

q,code,label
http://www.wikidata.org/entity/Q27811,aa,Qafár af
http://www.wikidata.org/entity/Q5111,ab,§AC0 1K7HÙ0
http://www.wikidata.org/entity/Q27683,ace,Bahsa Acèh
http://www.wikidata.org/entity/Q27776,ady,4K3017M
http://www.wikidata.org/entity/Q56240,aeb,Tounsi
http://www.wikidata.org/entity/Q14196,af,Afrikaans
http://www.wikidata.org/entity/Q28026,ak,Akan
http://www.wikidata.org/entity/Q28244,am, -›
http://www.wikidata.org/entity/Q8765,an,Idioma aragonés
http://www.wikidata.org/entity/Q42365,ang,Ænglisc sprãc
http://www.wikidata.org/entity/Q28378,anp,?>
http://www.wikidata.org/entity/Q13955,ar,'D91(J)
http://www.wikidata.org/entity/Q28602,arc, +" *!
http://www.wikidata.org/entity/Q29919,arz,'DD:G 'DE51JG 'D-/J+G
http://www.wikidata.org/entity/Q29401,as,…¸®À¯¼¾ ­¾·¾
http://www.wikidata.org/entity/Q29507,ast,asturianu
http://www.wikidata.org/entity/Q29561,av,03I0@C; <0FI
http://www.wikidata.org/entity/Q4627,ay,Aymar aru
http://www.wikidata.org/entity/Q9292,az,AzYrbaycan dili
http://www.wikidata.org/entity/Q13389,ba,0H¡>@B B5;5
http://www.wikidata.org/entity/Q33284,bcl,Bikol Sentral
http://www.wikidata.org/entity/Q9091,be,15;0@CA:0O <>20
http://www.wikidata.org/entity/Q7918,bg,J;30@A:8 578:
http://www.wikidata.org/entity/Q33268,bho,-K*A0@ ->7>
http://www.wikidata.org/entity/Q35452,bi,Bislama
http://www.wikidata.org/entity/Q33151,bjn,Bahasa Banjar
http://www.wikidata.org/entity/Q33243,bm,Bamanankan
http://www.wikidata.org/entity/Q9610,bn,¬¾‚²¾ ­¾·¾
http://www.wikidata.org/entity/Q34271,bo,V|QfQ

http://www.wikidata.org/entity/Q37059,bpy,¬¿·Í£ÁªÍ°¿¯¼¾ ®£¿ªÁ°À
http://www.wikidata.org/entity/Q12107,br,Brezhoneg
http://www.wikidata.org/entity/Q9303,bs,Bosanski jezik
http://www.wikidata.org/entity/Q33190,bug, 
http://www.wikidata.org/entity/Q33120,bxr,C@O04 EM;M=
http://www.wikidata.org/entity/Q7026,ca,català
http://www.wikidata.org/entity/Q33281,cbk-zam,Chavacano
http://www.wikidata.org/entity/Q36455,cdo,Mìng-d$ng-ngs
http://www.wikidata.org/entity/Q33350,ce,>EG89 <>BB
http://www.wikidata.org/entity/Q33239,ceb,Sinugboanon
http://www.wikidata.org/entity/Q33262,ch,Fino' Chamoru
http://www.wikidata.org/entity/Q32979,cho,Chahta Anumpa
http://www.wikidata.org/entity/Q33388,chr,㳩 §ì¯Í×
http://www.wikidata.org/entity/Q33265,chy,Tsêhesenêstsestôtse
http://www.wikidata.org/entity/Q36811,ckb,©H1/ÌÌ F'HÕF/Ì
http://www.wikidata.org/entity/Q33111,co,Lingua corsa
http://www.wikidata.org/entity/Q33390,cr,(+ 
(§Ð
http://www.wikidata.org/entity/Q33357,crh,Q1r1mtatar tili
http://www.wikidata.org/entity/Q9056,cs,čeština
http://www.wikidata.org/entity/Q33690,csb,Kaszëbsczi jãzëk
http://www.wikidata.org/entity/Q35499,cu,!;>2c=LA:J iAQ:J
http://www.wikidata.org/entity/Q33348,cv,'20H G;E8
http://www.wikidata.org/entity/Q9309,cy,Cymraeg
http://www.wikidata.org/entity/Q9035,da,dansk
http://www.wikidata.org/entity/Q188,de,Deutsch
http://www.wikidata.org/entity/Q10199,diq,Zazaki
http://www.wikidata.org/entity/Q13286,dsb,Dolnoserbaina
http://www.wikidata.org/entity/Q32656,dv,‹¨ˆ¬€¨
http://www.wikidata.org/entity/Q33081,dz,b«|DA
http://www.wikidata.org/entity/Q30005,ee,E‹egbe
http://www.wikidata.org/entity/Q36510,el,µ»»·½¹º® ³»ÎÃñ
http://www.wikidata.org/entity/Q242648,eml,Langua emiglièna-rumagnôla
http://www.wikidata.org/entity/Q1860,en,English
http://www.wikidata.org/entity/Q44676,en-ca,Canadian English
http://www.wikidata.org/entity/Q7979,en-gb,British English
http://www.wikidata.org/entity/Q143,eo,Esperanto
http://www.wikidata.org/entity/Q1321,es,español
http://www.wikidata.org/entity/Q9072,et,Eesti keel
http://www.wikidata.org/entity/Q8752,eu,Euskara
http://www.wikidata.org/entity/Q30007,ext,Luenga estremeña
http://www.wikidata.org/entity/Q9168,fa,2('F A'13Ì
http://www.wikidata.org/entity/Q33454,ff,Fulfulde
http://www.wikidata.org/entity/Q1412,fi,suomi
http://www.wikidata.org/entity/Q13357,fit,Meänkieli
http://www.wikidata.org/entity/Q32762,fiu-vro,Võro kiil
http://www.wikidata.org/entity/Q33295,fj,Na vosa vaka-Viti
http://www.wikidata.org/entity/Q25258,fo,Føroyskt mál
http://www.wikidata.org/entity/Q150,fr,français
http://www.wikidata.org/entity/Q15087,frp,Arpetan
http://www.wikidata.org/entity/Q28224,frr,Nordfriisk
http://www.wikidata.org/entity/Q33441,fur,Lenghe furlane
http://www.wikidata.org/entity/Q27175,fy,Frysk
http://www.wikidata.org/entity/Q9142,ga,An Ghaeilge
http://www.wikidata.org/entity/Q33457,gag,Gagauz dili
http://www.wikidata.org/entity/Q33475,gan,ž
http://www.wikidata.org/entity/Q9314,gd,Gàidhlig na h-Alba
http://www.wikidata.org/entity/Q9307,gl,Lingua galega
http://www.wikidata.org/entity/Q33657,glk,¯JD©J
http://www.wikidata.org/entity/Q35876,gn,Avañe'½
http://www.wikidata.org/entity/Q35722,got,2?D0B0630
http://www.wikidata.org/ent
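
The shortened link does not show the query itself; presumably it was something
along these lines, a guess based on the q,code,label columns above:

# Hedged reconstruction of the kind of query behind the tinyurl link: items
# carrying a Wikimedia language code (P424), with a native-language label
# where one is available. The original query may well have differed in detail.
P424_QUERY = """
SELECT ?q ?code ?label WHERE {
  ?q wdt:P424 ?code .
  OPTIONAL { ?q rdfs:label ?label . FILTER(LANG(?label) = ?code) }
}
ORDER BY ?code
"""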

Re: [Wikidata] Wikidata - Property Proposal

2016-09-07 Thread Magnus Manske
One "typical" approach for a data set this type and size is Mix'n'match:
https://tools.wmflabs.org/mix-n-match/

If you get a list of IDs and names, let me know.
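
To make that concrete: the "list of IDs and names" can be as simple as a
tab-separated file, one locality per line. The exact columns a Mix'n'match
catalogue import expects are documented on the tool's import page; this sketch
only shows the raw data-gathering side, with made-up IDs and names:

import csv

# Hypothetical ID/name/description rows scraped or exported from the source
# database; the IDs here are placeholders, not real JewishGen locality IDs.
entries = [
    ("524980", "Some locality", "Town in region X"),
    ("524981", "Another locality", "Village in region Y"),
]

with open("localities.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    for ext_id, name, description in entries:
        writer.writerow([ext_id, name, description])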


On Tue, Sep 6, 2016 at 7:17 PM Brill Lyle  wrote:

> Hi Wikidatans,
>
> After going past my 500th edit on Wikidata #Whee! I was hoping to dip my
> toe into doing something that would involve a larger scale project, like
> adding database information to Wikidata.
>
> There's a database I use all the time that is excellent, rich, deep, and
> well-deployed -- at JewishGen.org
>
> main search page: http://www.jewishgen.org/Communities/Search.asp
> example page:
> http://data.jewishgen.org/wconnect/wc.dll?jg~jgsys~community~-524980
>
> I started a Property proposal here:
>
>
> https://www.wikidata.org/wiki/Wikidata:Property_proposal/Place#JewishGen_Locality_ID_English
>
> I have also contacted the folks over at JewishGen to ask if they might
> provide me with raw data, initially even just with the locality page IDs,
> then hopefully more rich / fuller data that's in the database.
>
> I was wondering if this is
>
> (a) the typical approach people use when importing data
> (b) if you have any advice / best practices to share
> (c) also, if I should try and do a wget to scrub for this data (if that's
> even possible)? do people do this to grab data?
>
> This information, I envision being used as part of a unique identifier
> that could be built into infoboxes, and might also be a sort of templatized
> box even (although I don't hugely love the issue of restricted / redirected
> editing away from Wikipedia). But I would really like to see this
> information in a pathway to Wikipedia. I think it would improve a lot of
> these town pages, a lot of which are stubs.
>
> Best -- and thanks in advance for any advice,
>
> Erika
>
>
> *Erika Herzog*
> Wikipedia *User:BrillLyle *
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] SPARQL query increased timeouts?

2016-09-07 Thread Magnus Manske
FWIW, I also noticed the SERVICE label being much slower than using
rdfs:label. Not sure if that's a recent development, but I switched to
avoid timeouts.
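
For anyone wanting to compare, the two variants differ only in how labels are
fetched; a sketch, with the occupation used purely as an arbitrary example:

# Variant 1: the label service, convenient but reportedly slower at the time.
WITH_LABEL_SERVICE = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P106 wd:Q169470 .   # occupation: physicist, as an example
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

# Variant 2: plain rdfs:label with a language filter, the workaround
# mentioned above.
WITH_RDFS_LABEL = """
SELECT ?item ?label WHERE {
  ?item wdt:P106 wd:Q169470 ;
        rdfs:label ?label .
  FILTER(LANG(?label) = "en")
}
"""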

On Wed, Sep 7, 2016 at 7:36 AM Markus Kroetzsch <
markus.kroetz...@tu-dresden.de> wrote:

> On 07.09.2016 03:05, Stas Malyshev wrote:
> > Hi!
> >
> >> I bet wikibase:label has to be reimplemented in some other way to prove
> >> efficient...
> >
> > Yes, label service may be inefficient sometimes. I'll look into how it
> > can be improved.
> >
>
> However, the query without the counting but with the labels included
> also works. Probably we need to do two queries instead of one ...
>
> Markus
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Serialization

2016-08-22 Thread Magnus Manske
Never mind, worked immediately after sending this mail ;-)

On Mon, Aug 22, 2016 at 8:37 PM Magnus Manske 
wrote:

> I am trying to create new items by supplying a large-ish JSON structure,
> but I keep getting "The serialization is invalid". Sadly, that does not
> tell me which part is invalid.
>
> Can anyone see what's wrong? (I suspect the references, but I don't want
> to create unreferenced testing items, and test.wikidata.org has different
> properties apparently...)
>
> https://tools.wmflabs.org/paste/view/7b2aab11
>
> Cheers,
> Magnus
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Serialization

2016-08-22 Thread Magnus Manske
I am trying to create new items by supplying a large-ish JSON structure,
but I keep getting "The serialization is invalid". Sadly, that does not
tell me which part is invalid.

Can anyone see what's wrong? (I suspect the references, but I don't want to
create unreferenced testing items, and test.wikidata.org has different
properties apparently...)

https://tools.wmflabs.org/paste/view/7b2aab11

Cheers,
Magnus
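
The paste link has long since expired, so for context only: a minimal sketch of
the shape action=wbeditentity&new=item expects, with the property IDs chosen
just as examples. "The serialization is invalid" usually points at a snak whose
structure or datavalue type does not match the property's datatype.

import json

# Hedged sketch of a minimal wbeditentity payload with one referenced claim.
# P31 = instance of, Q5 = human, P854 = reference URL.
data = {
    "labels": {"en": {"language": "en", "value": "Test item"}},
    "claims": [
        {
            "type": "statement",
            "rank": "normal",
            "mainsnak": {
                "snaktype": "value",
                "property": "P31",
                "datavalue": {
                    "type": "wikibase-entityid",
                    "value": {"entity-type": "item", "numeric-id": 5},
                },
            },
            "references": [
                {
                    "snaks": {
                        "P854": [
                            {
                                "snaktype": "value",
                                "property": "P854",
                                "datavalue": {"type": "string",
                                              "value": "http://example.org/"},
                            }
                        ]
                    }
                }
            ],
        }
    ],
}

# This JSON string is what goes into the "data" parameter of the API call.
print(json.dumps(data, indent=2))
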
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] first Commons prototype is live \o/

2016-07-28 Thread Magnus Manske
YES!

On Thu, Jul 28, 2016 at 5:02 PM Lydia Pintscher <
lydia.pintsc...@wikimedia.de> wrote:

> Hey everyone :)
>
> I just posted exciting news about structured data support for Commons
> at
> https://commons.wikimedia.org/wiki/Commons_talk:Structured_data#It.27s_alive.21
> *SPOILER* There is a first demo system now! *SPOILER*
>
> Cheers
> Lydia
>
> --
> Lydia Pintscher - http://about.me/lydia.pintscher
> Product Manager for Wikidata
>
> Wikimedia Deutschland e.V.
> Tempelhofer Ufer 23-24
> 10963 Berlin
> www.wikimedia.de
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
>
> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
> unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das
> Finanzamt für Körperschaften I Berlin, Steuernummer 27/029/42207.
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Drag'n'drop Gadget functionality issues

2016-07-13 Thread Magnus Manske
You can start using it right now! It's been robust enough in my tests, YMMV.

On Wed, Jul 13, 2016 at 11:21 PM Info WorldUniversity <
i...@worlduniversityandschool.org> wrote:

> When might CC WUaS begin to use robustly this great tool, Magnus?
>
> Thank you!
> Scott
>
> On Jul 13, 2016 3:08 PM, "Magnus Manske" 
> wrote:
>
>> Hi all,
>>
>> I have tried to replicate the issue in Firefox, Chrome, and Safari. It
>> appears to work for me; however, API calls occasionally took a significant
>> amount of time. This was the case for a few minutes, and then "fixed"
>> itself. So some of the described effect might be due to "Wikidata weather".
>>
>> If anyone else has issues with this gadget, please report them to me,
>> ideally with detailed "story" (action performed, browser, item, etc.) I
>> need to be able to reproduce the issue reliably to fix any problems that
>> may be in the code.
>>
>> Also, if you know JavaScript, please have a look and help improve the
>> gadget for more and better reference import. (You can also drag Commons
>> images, coordinates, and Wikipedia links as new statements!).
>>
>> Cheers,
>> Magnus
>>
>> On Wed, Jul 13, 2016 at 5:14 PM Brill Lyle 
>> wrote:
>>
>>> Hello --
>>>
>>> There is a new *Drag'n'drop gadget* available in Wikidata > Preferences
>>> > Gadgets
>>>
>>> https://www.wikidata.org/wiki/Special:Preferences#mw-prefsection-gadgets
>>>
>>> *   Drag'n'drop*: Add statements and references from Wikidata or
>>> Wikipedia by dragging and dropping them.
>>>
>>> Please note that there are issues with the gadget.
>>>
>>> I would like to evangelize this gadget but for my purposes it is not
>>> functioning
>>> --->   i.e., I drag the reference and get a shadow image of text and
>>> even after waiting 5 minutes the reference does not apply; refresh clears
>>> the attempted addition
>>>
>>> It might be a browser issue, but I have tried it on both Mac and PC, as
>>> well as on Chrome, Firefox, Safari, and SeaMonkey with exactly the same
>>> unsuccessful results. I have talked with Magnus about this (he is not
>>> having the problems I have had). It might also be a Wikidata response
>>> issue, but I think that issue was resolved.
>>>
>>> Currently if you use Wiki Markup and want to use one of the four Cite
>>> templates via the RefToolbar these templates I don't think will transfer
>>> - Cite books: not configured. Will get error message
>>> - I have not tested Cite web, Cite journal, Cite news, but assume none
>>> of these are configured to be captured by this tool.
>>>
>>> Obviously to configure the tool to work with these RefToolbar citation
>>> templates would be significant amount of time & effort.
>>>
>>> But if this tool was fully functional and robust -- and was able to
>>> transfer ALL of the piped data -- it would allow for a great
>>> interoperability of citations between Wikipedia and Wikidata.
>>>
>>> I don't build citations without using templates, as my assumption is
>>> that templates are more machine readable and more useful -- and are more
>>> consistent -- but obviously others are using different approaches. I assume
>>> bare urls are probably the most transferrable. But those didn't work for me
>>> either, so.
>>>
>>> I really appreciate the fact that this gadget is available and the hard
>>> work it took to create it. Magnus has been very patient and kind offlist
>>> trying to problem-solve the issues I have had.
>>>
>>> I just wanted to follow up and provide this information, as it seems an
>>> important tool for us citation-focused editors.
>>>
>>> Best,
>>>
>>> - Erika
>>>
>>> *Erika Herzog*
>>> Wikipedia *User:BrillLyle
>>> <https://en.wikipedia.org/wiki/User:BrillLyle>*
>>> ___
>>> Wikidata mailing list
>>> Wikidata@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Drag'n'drop Gadget functionality issues

2016-07-13 Thread Magnus Manske
Hi all,

I have tried to replicate the issue in Firefox, Chrome, and Safari. It
appears to work for me; however, API calls occasionally took a significant
amount of time. This was the case for a few minutes, and then "fixed"
itself. So some of the described effect might be due to "Wikidata weather".

If anyone else has issues with this gadget, please report them to me,
ideally with detailed "story" (action performed, browser, item, etc.) I
need to be able to reproduce the issue reliably to fix any problems that
may be in the code.

Also, if you know JavaScript, please have a look and help improve the
gadget for more and better reference import. (You can also drag Commons
images, coordinates, and Wikipedia links as new statements!).

Cheers,
Magnus

On Wed, Jul 13, 2016 at 5:14 PM Brill Lyle  wrote:

> Hello --
>
> There is a new *Drag'n'drop gadget* available in Wikidata > Preferences >
> Gadgets
>
> https://www.wikidata.org/wiki/Special:Preferences#mw-prefsection-gadgets
>
> *   Drag'n'drop*: Add statements and references from Wikidata or
> Wikipedia by dragging and dropping them.
>
> Please note that there are issues with the gadget.
>
> I would like to evangelize this gadget but for my purposes it is not
> functioning
> --->   i.e., I drag the reference and get a shadow image of text and even
> after waiting 5 minutes the reference does not apply; refresh clears the
> attempted addition
>
> It might be a browser issue, but I have tried it on both Mac and PC, as
> well as on Chrome, Firefox, Safari, and SeaMonkey with exactly the same
> unsuccessful results. I have talked with Magnus about this (he is not
> having the problems I have had). It might also be a Wikidata response
> issue, but I think that issue was resolved.
>
> Currently if you use Wiki Markup and want to use one of the four Cite
> templates via the RefToolbar these templates I don't think will transfer
> - Cite books: not configured. Will get error message
> - I have not tested Cite web, Cite journal, Cite news, but assume none of
> these are configured to be captured by this tool.
>
> Obviously to configure the tool to work with these RefToolbar citation
> templates would be significant amount of time & effort.
>
> But if this tool was fully functional and robust -- and was able to
> transfer ALL of the piped data -- it would allow for a great
> interoperability of citations between Wikipedia and Wikidata.
>
> I don't build citations without using templates, as my assumption is that
> templates are more machine readable and more useful -- and are more
> consistent -- but obviously others are using different approaches. I assume
> bare urls are probably the most transferrable. But those didn't work for me
> either, so.
>
> I really appreciate the fact that this gadget is available and the hard
> work it took to create it. Magnus has been very patient and kind offlist
> trying to problem-solve the issues I have had.
>
> I just wanted to follow up and provide this information, as it seems an
> important tool for us citation-focused editors.
>
> Best,
>
> - Erika
>
> *Erika Herzog*
> Wikipedia *User:BrillLyle <https://en.wikipedia.org/wiki/User:BrillLyle>*
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Verifiability and living people

2016-07-07 Thread Magnus Manske
Sure, maybe it gets more people involved in coding as well. Having things
show up in a "proper" way after dropping would be nice, but I'd have no
idea where to start.
Speaking of: Do I suggest that new gadget on Phabricator, or a Wikidata
page?

Actually, I made that script a while ago, but it broke after a UI change;
that's what I meant by "fixed". I even have YouTube HOWTOs:
https://www.youtube.com/watch?v=NRYEjmoDkLQ
https://www.youtube.com/watch?v=jP-qJIkjPf0
https://www.youtube.com/watch?v=ew7oGEhtTPI



On Thu, Jul 7, 2016 at 7:34 PM Lydia Pintscher 
wrote:

> On Thu, Jul 7, 2016 at 8:06 PM, Magnus Manske
>  wrote:
> > OK, this thread seems appropriate, so I just fixed up one of my scripts,
> it
> > lets you
> > *drag'n'drop references between statements
> > *drag'n'drop URL references from Wikipedia (sidebar preview) onto
> statements
> > *drag'n'drop Wikipedia links as new statements (asks for a property to
> use)
> >
> > https://www.wikidata.org/wiki/User:Magnus_Manske/dragref.js
>
> \o/
>
> > Maybe it helps, a little.
>
> It definitely does!
> Time to make it a gadget maybe?
>
>
> Cheers
> Lydia
>
> --
> Lydia Pintscher - http://about.me/lydia.pintscher
> Product Manager for Wikidata
>
> Wikimedia Deutschland e.V.
> Tempelhofer Ufer 23-24
> 10963 Berlin
> www.wikimedia.de
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
>
> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
> unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das
> Finanzamt für Körperschaften I Berlin, Steuernummer 27/029/42207.
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Verifiability and living people

2016-07-07 Thread Magnus Manske
On Thu, Jul 7, 2016 at 7:25 PM Lydia Pintscher 
wrote:

> On Thu, Jul 7, 2016 at 7:16 PM, Magnus Manske
>  wrote:
> > The Visual editor has a whole UI team behind it, who've been working on
> it
> > for years. Yes, citations are only a small part of it, but there is
> nothing
> > equivalent in WMF or German chapter for Wikidata, AFAIK. The Wikidata UI
> is
> > improved constantly, but I don't think there is anyone, let alone a whole
> > team, developing massive new UI features.
>
> Katie is currently working on a gadget for citoid on Wikidata. I hope
> we can get it into the next sprint to then have a really basic first
> version for people to try. Tracking at
> https://phabricator.wikimedia.org/T131661 I am confident this will get
> us in the right direction.
>
>
Three cheers for the Wikidata team!
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Verifiability and living people

2016-07-07 Thread Magnus Manske
OK, this thread seems appropriate, so I just fixed up one of my scripts, it
lets you
*drag'n'drop references between statements
*drag'n'drop URL references from Wikipedia (sidebar preview) onto statements
*drag'n'drop Wikipedia links as new statements (asks for a property to use)

https://www.wikidata.org/wiki/User:Magnus_Manske/dragref.js

Maybe it helps, a little.

On Thu, Jul 7, 2016 at 6:16 PM Magnus Manske 
wrote:

> The Visual editor has a whole UI team behind it, who've been working on it
> for years. Yes, citations are only a small part of it, but there is nothing
> equivalent in WMF or German chapter for Wikidata, AFAIK. The Wikidata UI is
> improved constantly, but I don't think there is anyone, let alone a whole
> team, developing massive new UI features.
>
>
> On Thu, Jul 7, 2016 at 5:14 PM Brill Lyle  wrote:
>
>> These are great, important suggestions. But like with the library science
>> resources, it seems like Wikidata is again trying to re-invent an existing
>> wheel. Why are the existing tools in Wikipedia not being migrated and/or
>> pathway'd into Wikidata? The stripping of Wikipedia citations to push it to
>> Wikidata often / always (?) denudes the information of what is a very
>> rigorous requirement on Wiki.
>>
>> I love the Citoid option, or whatever has been deployed for the Wiki
>> Markup editor when you put an ISBN number, OCLC number, or NYTimes URL into
>> the lookups there. There is no interchangeability with Wikidata though?
>>
>> #WikiCite :-)
>>
>> Agree on the {{citation needed}} button, Finn! ha!
>>
>> - Erika
>>
>>
>> *Erika Herzog*
>> Wikipedia *User:BrillLyle <https://en.wikipedia.org/wiki/User:BrillLyle>*
>>
>> On Thu, Jul 7, 2016 at 10:12 AM, Magnus Manske <
>> magnusman...@googlemail.com> wrote:
>>
>>> While the proposal of all statements requiring citation is obviously
>>> overshooting, I believe we all agree that more/better citations improve
>>> Wikidata.
>>> One component here would be a social one, namely that it first becomes
>>> good practice, then the default, to cite statements.
>>> For that, improved technology and new approaches are required.
>>> Suggestions include:
>>> * Open a blank reference box when adding a statement in the default
>>> editor, thus subtly prompting a reference
>>> * Show a "smart field" for reference adding, e.g. just paste a URL, and
>>> it registers it's an URL, suggests a title from the page at the URL, adds
>>> access date, suggests other data that can be inferred from the URL or the
>>> linked page, shows likely other fields (e.g. "author" or such) for easy
>>> fill-in
>>> * Automatically add references for statements via external IDs. I have a
>>> bot that does that to some degree, but it could use productizing
>>> * Tools to "migrate" Wikipedia references to the actual sources. (Again,
>>> I have some, but...)
>>> * "Reference mode", to quickly add references to statements. (I have a
>>> drag'n'drop script, but that breaks on every Wikidata UI update)
>>> * A list of items/statements that are in "priority need" for
>>> referencing. For example, death dates of the recently deceased should be
>>> simple, while they are still in the news.
>>> * Dedicated drives to complete a "set" (e.g. all women chemists), that
>>> is, have all statements referenced in those items
>>> * Special watchlist for new statements without reference, especially on
>>> otherwise "completely referenced" items
>>>
>>> Magnus
>>>
>>> On Thu, Jul 7, 2016 at 2:56 PM Brill Lyle 
>>> wrote:
>>>
>>>> *blanket, not blanked...
>>>>
>>>>
>>>> ___
>>>> Wikidata mailing list
>>>> Wikidata@lists.wikimedia.org
>>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>>
>>>
>>> ___
>>> Wikidata mailing list
>>> Wikidata@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>
>>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Verifiability and living people

2016-07-07 Thread Magnus Manske
The Visual editor has a whole UI team behind it, who've been working on it
for years. Yes, citations are only a small part of it, but there is nothing
equivalent in WMF or German chapter for Wikidata, AFAIK. The Wikidata UI is
improved constantly, but I don't think there is anyone, let alone a whole
team, developing massive new UI features.


On Thu, Jul 7, 2016 at 5:14 PM Brill Lyle  wrote:

> These are great, important suggestions. But like with the library science
> resources, it seems like Wikidata is again trying to re-invent an existing
> wheel. Why are the existing tools in Wikipedia not being migrated and/or
> pathway'd into Wikidata? The stripping of Wikipedia citations to push it to
> Wikidata often / always (?) denudes the information of what is a very
> rigorous requirement on Wiki.
>
> I love the Citoid option, or whatever has been deployed for the Wiki
> Markup editor when you put an ISBN number, OCLC number, or NYTimes URL into
> the lookups there. There is no interchangeability with Wikidata though?
>
> #WikiCite :-)
>
> Agree on the {{citation needed}} button, Finn! ha!
>
> - Erika
>
>
> *Erika Herzog*
> Wikipedia *User:BrillLyle <https://en.wikipedia.org/wiki/User:BrillLyle>*
>
> On Thu, Jul 7, 2016 at 10:12 AM, Magnus Manske <
> magnusman...@googlemail.com> wrote:
>
>> While the proposal of all statements requiring citation is obviously
>> overshooting, I believe we all agree that more/better citations improve
>> Wikidata.
>> One component here would be a social one, namely that it first becomes
>> good practice, then the default, to cite statements.
>> For that, improved technology and new approaches are required.
>> Suggestions include:
>> * Open a blank reference box when adding a statement in the default
>> editor, thus subtly prompting a reference
>> * Show a "smart field" for reference adding, e.g. just paste a URL, and
>> it registers it's an URL, suggests a title from the page at the URL, adds
>> access date, suggests other data that can be inferred from the URL or the
>> linked page, shows likely other fields (e.g. "author" or such) for easy
>> fill-in
>> * Automatically add references for statements via external IDs. I have a
>> bot that does that to some degree, but it could use productizing
>> * Tools to "migrate" Wikipedia references to the actual sources. (Again,
>> I have some, but...)
>> * "Reference mode", to quickly add references to statements. (I have a
>> drag'n'drop script, but that breaks on every Wikidata UI update)
>> * A list of items/statements that are in "priority need" for referencing.
>> For example, death dates of the recently deceased should be simple, while
>> they are still in the news.
>> * Dedicated drives to complete a "set" (e.g. all women chemists), that
>> is, have all statements referenced in those items
>> * Special watchlist for new statements without reference, especially on
>> otherwise "completely referenced" items
>>
>> Magnus
>>
>> On Thu, Jul 7, 2016 at 2:56 PM Brill Lyle  wrote:
>>
>>> *blanket, not blanked...
>>>
>>>
>>> ___
>>> Wikidata mailing list
>>> Wikidata@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Verifiability and living people

2016-07-07 Thread Magnus Manske
While the proposal of all statements requiring citation is obviously
overshooting, I believe we all agree that more/better citations improve
Wikidata.
One component here would be a social one, namely that it first becomes good
practice, then the default, to cite statements.
For that, improved technology and new approaches are required. Suggestions
include:
* Open a blank reference box when adding a statement in the default editor,
thus subtly prompting a reference
* Show a "smart field" for reference adding, e.g. just paste a URL, and it
registers it's an URL, suggests a title from the page at the URL, adds
access date, suggests other data that can be inferred from the URL or the
linked page, shows likely other fields (e.g. "author" or such) for easy
fill-in
* Automatically add references for statements via external IDs. I have a
bot that does that to some degree, but it could use productizing (a rough
sketch of the idea follows below this list)
* Tools to "migrate" Wikipedia references to the actual sources. (Again, I
have some, but...)
* "Reference mode", to quickly add references to statements. (I have a
drag'n'drop script, but that breaks on every Wikidata UI update)
* A list of items/statements that are in "priority need" for referencing.
For example, death dates of the recently deceased should be simple, while
they are still in the news.
* Dedicated drives to complete a "set" (e.g. all women chemists), that is,
have all statements referenced in those items
* Special watchlist for new statements without reference, especially on
otherwise "completely referenced" items

Magnus
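
As a rough illustration of the "references via external IDs" point above (a
sketch only, not the actual bot): given a statement GUID on an item that also
carries, say, a VIAF ID, one could attach a "stated in" reference with
wbsetreference. The property and item IDs are assumptions made for the
example, and a real bot would first log in and fetch a CSRF token.

import json
import requests

# Hedged sketch: attach a "stated in: VIAF" reference, together with the VIAF
# ID used, to an existing statement.
# Assumptions for illustration: P248 = stated in, P214 = VIAF ID,
# Q54919 = Virtual International Authority File.
statement_guid = "Q42$AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE"  # placeholder GUID
viaf_id = "113230702"  # example VIAF ID

reference_snaks = {
    "P248": [{
        "snaktype": "value",
        "property": "P248",
        "datavalue": {"type": "wikibase-entityid",
                      "value": {"entity-type": "item", "numeric-id": 54919}},
    }],
    "P214": [{
        "snaktype": "value",
        "property": "P214",
        "datavalue": {"type": "string", "value": viaf_id},
    }],
}

requests.post("https://www.wikidata.org/w/api.php", data={
    "action": "wbsetreference",
    "statement": statement_guid,
    "snaks": json.dumps(reference_snaks),
    "token": "+\\",   # placeholder; a real edit needs a proper CSRF token
    "format": "json",
})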

On Thu, Jul 7, 2016 at 2:56 PM Brill Lyle  wrote:

> *blanket, not blanked...
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Looking for list article editors

2016-06-16 Thread Magnus Manske
Worth reading, for context:
https://de.wikipedia.org/wiki/Wikipedia:L%C3%B6schkandidaten/11._September_2015#Liste_der_Gem.C3.A4lde_von_Jacob_van_Ruisdael_.28erl..2C_schnellgel.C3.B6scht.29
(in German)

On Thu, Jun 16, 2016 at 6:59 AM Biyanto Rebin 
wrote:

> Dear Lydia,
>
> You can contact me. I'm working on a project on language, ethnicity, and
> administrative divisions in Indonesia.
> I usually import the data from the idwiki into Wikidata using PetScan too.
>
> Regards,
>
> 2016-06-16 3:01 GMT+07:00 Lydia Pintscher :
>
>> Hey folks :)
>>
>> We're starting concept work for automated list generation on Wikipedia
>> and the other Wikimedia projects based on queries to Wikidata. As a first
>> step I'd like to get a better understanding of the current state of things.
>> For this Jan (our UX person) and me would like to have a chat with a few
>> editors from different Wikipedias and other Wikimedia projects who are
>> maintaining list articles. If that is you or you know someone who fits
>> please let me know.
>>
>> Cheers
>> Lydia
>> --
>> Lydia Pintscher - http://about.me/lydia.pintscher
>> Product Manager for Wikidata
>>
>> Wikimedia Deutschland e.V.
>> Tempelhofer Ufer 23-24
>> 10963 Berlin
>> www.wikimedia.de
>>
>> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
>>
>> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
>> unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das Finanzamt
>> für Körperschaften I Berlin, Steuernummer 27/029/42207.
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
>
>
> --
>
> Biyanto Rebin | Ketua Umum (*Chair*) 2016-2018
> Wikimedia Indonesia
> Nomor Ponsel: +62 8989 037379
> Surel: biyanto.re...@wikimedia.or.id
> 
>
> Dukung upaya kami membebaskan pengetahuan:
> http://wikimedia.or.id/wiki/Wikimedia_Indonesia:Donasi
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


  1   2   >