Re: [Wikimedia-l] Wikidata now officially has more total edits than English language Wikipedia

2019-03-20 Thread Emilio J. Rodríguez-Posada
On Wed., 20 Mar. 2019 at 7:48, Ariel Glenn WMF wrote:

> Only 45 minutes later, the gap is already over 2000 revisions:
>
> [ariel@bigtrouble wikidata-huge]$ python3 ./compare_sizes.py
> Last enwiki revid is 888606979 and last wikidata revid is 888629401
> 2019-03-20 06:46:03: diff is 22422
>
>
This is escape velocity; I think that Wikipedia will never surpass
Wikidata again.

The singularity is near.
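
(The comparison script itself was not posted to the list. Below is a
minimal sketch of what such a check could look like against the public
MediaWiki API; the output format is copied from the run quoted above, and
everything else -- endpoint parameters, User-Agent string -- is an
assumption, not the actual compare_sizes.py.)

#!/usr/bin/env python3
# Sketch of a compare_sizes.py-style check (hypothetical reconstruction).
# It asks each wiki's API for the newest revision ID and prints the gap.
import datetime
import json
import urllib.request

def last_revid(api_url):
    """Return the revision ID of the most recent edit on a MediaWiki wiki."""
    query = ("?action=query&list=recentchanges&rctype=edit%7Cnew"
             "&rcprop=ids&rclimit=1&format=json")
    req = urllib.request.Request(
        api_url + query,
        headers={"User-Agent": "compare-sizes-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["query"]["recentchanges"][0]["revid"]

enwiki = last_revid("https://en.wikipedia.org/w/api.php")
wikidata = last_revid("https://www.wikidata.org/w/api.php")
stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
print(f"Last enwiki revid is {enwiki} and last wikidata revid is {wikidata}")
print(f"{stamp}: diff is {wikidata - enwiki}")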
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
New messages to: Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


Re: [Wikimedia-l] LsJbot and geonames

2015-09-06 Thread Emilio J. Rodríguez-Posada
Congratulations on the stub creation; they are good (and better than those
handmade stubs in other languages).

About the Wikidata placeholder project, it sounds very interesting.

2015-09-06 2:40 GMT+02:00 Anders Wennersten :

> Geonames [1] is a database which holds around 9 M entries of geographically
> related items from all over the world.
>
> Lsjbot is now generating articles from a subset of it, after several
> months of extensive research on its quality, Wikidata relations and
> notability issues. While the quality in some regions is substandard (and
> these will not be generated) it was seen as very good in most areas. In
> the discussion I was intrigued to learn that identical Arabic names should
> be transcribed differently depending on their geographic location. And I was
> fascinated by the question of the notability of wells in the Bahrain desert
> (which in the end were excluded, mostly because we knew too little of that
> reality).
>
> In this run Lsjbot has extended its functionality even further than when
> it generated articles for species. It looks for relevant geographical items
> close to the actual one: a lake close by, a mountain, where the nearest
> major town is, etc.
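
(Lsjbot's own code was not posted here. As a toy illustration of this kind
of nearest-neighbour lookup, the sketch below finds the closest item from a
coordinate list using the haversine great-circle distance; the place names
and coordinates are invented for the example.)

from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def nearest(place, candidates):
    """Return (name, distance_km) of the candidate closest to place."""
    lat, lon = place
    name, (clat, clon) = min(
        candidates.items(),
        key=lambda kv: haversine_km(lat, lon, kv[1][0], kv[1][1]))
    return name, haversine_km(lat, lon, clat, clon)

# Invented sample data: which lake is closest to a given village?
lakes = {"Lake A": (41.03, 20.71), "Lake B": (40.90, 21.03)}
print(nearest((41.00, 20.80), lakes))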
>
> Macedonia can be taken as one example. Lsjbot generated over 1
> articles (and 5000 disambiguation pages), an order of magnitude more than
> what exists in enwp. Also for a well-defined type like villages, almost 50%
> as many have been generated as exist in enwp. One example [2] shows what
> has been generated (and note the reuse of a relevant figure existing in
> frwp). Please compare the corresponding articles in other languages in
> this case, many having less information than the bot-generated one.
>
> The generation is still at an early stage [3] but has already pushed the
> article count for svwp past 2 M today. But it will take many months more
> before it is completed, and perhaps more M marks will be passed before it
> is through. If you want to give feedback you are welcome to enter it at [4].
>
> Anders
> (with all credits for the Lsjbot to be given to Sverker, its owner, I am
> just one of the many supporters of him and his bot on svwp)
>
> [1]
> http://www.geonames.org/about.html
>
> [2]
> https://sv.wikipedia.org/wiki/Polaki_%28ort_i_Makedonien%29
>
> [3]
> https://sv.wikipedia.org/wiki/Kategori:Robotskapade_geografiartiklar
>
> [4]
>
> https://sv.wikipedia.org/wiki/Anv%C3%A4ndardiskussion:Lsjbot/Projekt_alla_platser
>
>
>
>
> ___
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> Wikimedia-l@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> 
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


Re: [Wikimedia-l] LsJbot and geonames

2015-09-06 Thread Emilio J. Rodríguez-Posada
2015-09-06 13:22 GMT+02:00 Steinsplitter Wiki :

> Hoi,
>
> "Article Placeholders are automatically generated content pages in
> Wikipedia or other mediawiki projects displaying data from Wikidata."
>  Seriously? RobotWiki? Do we really want this? Quality, not quantity.
>
>
Yeah. I REALLY want this.


> > From: gerard.meijs...@gmail.com
> > Date: Sun, 6 Sep 2015 11:35:31 +0200
> > To: wikimedia-l@lists.wikimedia.org
> > Subject: Re: [Wikimedia-l] LsJbot and geonames
> >
> > Hoi,
> > As always I have been a big fan of the wonderful work that has been done.
> > My reaction was very much to what I perceived as a negative reaction from
> > Ricordisamoa. Telling you to stop and become part of Wikidata is a bit off.
> > Asking for collaboration and work towards a common goal, a goal that you
> > very much want to share as I perceive it in your reply, is most wonderful
> > and most welcome.
> >
> > When your data is at a quality level where you create stubs, it is very
> > much at the level where we should have it in Wikidata. Obviously it is
> > for the Swedish community to have the stubs or experiment with cached
> > articles based on Wikidata data. Obviously, we are at a point where we can
> > create the stubs and where caching concepts are technically feasible but
> > not something we have done so far.
> >
> > What does it take to have such an experiment?
> > Thanks,
> >  GerardM
> >
> > On 6 September 2015 at 11:23, Anders Wennersten <
> m...@anderswennersten.se>
> > wrote:
> >
> > > At svwp we work closely with Wikidata and see it as the natural base
> > > for our article substance. And we follow Phabricator closely and are
> > > eager to implement it as soon as it is feasible. And Lsjbot is in no
> > > way counteractive to these. It will be easy to exchange Lsjbot articles
> > > with Phabricator-generated ones when the time is right.
> > >
> > > But I believe you miss the point of what Lsjbot is doing now. The
> > > extensive research etc. done on the data in Geonames is one of the
> > > crucial efforts. And in a way this whole generation project is research
> > > on the viability of using this data in full in all language versions.
> > > If it is still seen as viable we could extend our article coverage for
> > > geographical entities by a factor of 10 in all versions. And this
> > > research is a must even independently of which technique is used to
> > > generate the articles.
> > >
> > > The other crucial effort is the extended intelligence built into the
> > > generation of facts in the articles. Finding nearby physical objects
> > > by clever algorithms is an intellectual effort of the highest dignity.
> > > When bot generation was first introduced, it was more or less a mapping
> > > of items from input to items in output (in articles). We now see how
> > > more information is created from information only implicitly existing
> > > in the input, combined with external (map) data.
> > >
> > > I cannot stress enough how impressed I am by Sverker's outstanding
> > > intellectual effort and his creativity in implementing and running
> > > software that is of great help in reaching our common vision, "free
> > > knowledge for all".
> > >
> > >  Anders
> > >
> > >
> > >
> > >
> > >
> > > On 2015-09-06 at 08:50, Gerard Meijssen wrote:
> > >
> > >> Hoi,
> > >> PLEASE reconsider. A Wikidata based solution is not superior because
> it
> > >> started from Wikidata.
> > >>
> > >> PLEASE consider collaboration. It will be so much more powerful when
> > >> LSJBOT
> > >> and people at Wikidata collaborate. It will get things right the first
> > >> time. It does not have to be perfect from the start as long as it gets
> > >> better over time. As long as we always work on improving the data.
> > >>
> > >> PLEASE consider text generation based on Wikidata. There are the
> > >> scripts LSJBOT uses; they can help us improve the text when more or
> > >> better information becomes available.
> > >> Thanks,
> > >>   GerardM
> > >>
> > >> On 6 September 2015 at 08:25, Ricordisamoa <
> ricordisa...@openmailbox.org>
> > >> wrote:
> > >>
> > >> Proper data-based stubs are being worked on:
> > >>> https://phabricator.wikimedia.org/project/profile/1416/
> > >>> Lsjbot, you have no chance to survive make your time.
> > >>>
> > >>>
> > >>> On 06/09/2015 02:40, Anders Wennersten wrote:
> > >>>
> > >>> Geonames [1] is a database which holds around 9 M entries of
> geographical
> >  related items from all over the world.
> > 
> >  Lsjbot is now generating articles from a subset of it, after several
> >  months of extensive research on its quality, Wikidata relations and
> >  notability issues. While the quality in some regions is substandard
> (and
> >  these will not be generated) it was seen as very good in most
> areas.  In
> >  the discussion  I was intrigued to learn that identical Arabic names
> >  should
> >  be 

Re: [Wikimedia-l] Fwd: Wikimania 15 videos

2015-09-05 Thread Emilio J. Rodríguez-Posada
Thanks for this Ivan!

The last time I enjoyed Wikimania talks was Wikimania Argentina (2009),
because there were some in Spanish too. I can write in English but
listening is a bit hard for me.

Subtitles are necessary to make the talks accessible to everybody, not only
non-English speakers but also deaf people.

If I can help with transcribing or whatever, just contact me.

2015-09-05 5:46 GMT+02:00 Ivan Martínez :

> Sorry for the crosspost.
> Thanks,
>
> -- Forwarded message --
> From: Ivan Martínez 
> Date: 2015-09-04 22:38 GMT-05:00
> Subject: Wikimania 15 videos
> To: "Wikimania general list (open subscription)" <
> wikimani...@lists.wikimedia.org>
>
>
> Hi everyone, the featured speakers' videos are up now, both on Commons and
> YouTube. We have two versions for each speaker: the original audio and the
> simultaneous translation into English or Spanish, as the case may be.
>
> https://commons.wikimedia.org/wiki/Wikimania_2015_presentation_videos
>
> https://www.youtube.com/playlist?list=PLxdLXCagb6RAiK9w2y0DsDUzWF7dgj7j-
>
> Our volunteers will work in the coming months on the closed captions of
> each talk. It will be very nice if we can have translations in as many
> languages as possible.
>
> Only the publication of the Wikimania 15 documentary is still pending; it
> will be premiered soon at the National Film Archive of Mexico (Cineteca
> Nacional) and on TV UNAM, public digital TV with national reach, and then
> shared on all our channels. Stay tuned!
>
> Thanks again for the great moments we lived in Mexico City.
>
> Cheers,
>
> --
> *Iván Martínez*
>
>
> *President - Wikimedia México A.C. · User:ProtoplasmaKid · @protoplasmakid*
>
> We have created the greatest collection of shared knowledge. Help
> protect Wikipedia, donate now:
> https://donate.wikimedia.org
>
>
>
> --
> *Iván Martínez*
>
>
> *President - Wikimedia México A.C. · User:ProtoplasmaKid · @protoplasmakid*
>
> We have created the greatest collection of shared knowledge. Help
> protect Wikipedia, donate now:
> https://donate.wikimedia.org
> ___
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> Wikimedia-l@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> 
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


Re: [Wikimedia-l] Future of Wikipedia

2015-07-15 Thread Emilio J. Rodríguez-Posada
It is interesting to see the reactions, but it just shows the change in how
information is saved, disseminated and consumed, from the analog to the
digital medium.

I am more worried about how many encyclopedias have closed in recent
years. We are moving to a world where Wikipedia is the de facto
encyclopedia. This has evolved faster than the concentration of media
ownership,[1] and it is dangerous in my opinion. Furthermore, references
are links to published works, and who decides what is published or not? The
big media and publishing companies.

[1] https://en.wikipedia.org/wiki/Concentration_of_media_ownership

2015-07-14 22:22 GMT+02:00 Renata St renataw...@gmail.com:

 Hi.

 So I saw this YouTube video yesterday about kids reacting to a printed
 encyclopedia: https://www.youtube.com/watch?v=X7aJ3xaDMuM&noredirect=1

 It made me sad. And very fearful of the future of Wikipedia.

 These kids do not appreciate knowledge and information because they grew up
 with its abundance. When I was growing up (and I am only 30), a printed
 encyclopedia was the only research tool. These kids will never know the
 frustration when you tried looking something up in those dusty volumes only
 to find minimal information (a stub) or, worse yet, nothing on the topic.
 And the nagging feeling it left you with because your curiosity was not
 satisfied and you thirsted for more, but there was nothing else! And so
 when Wikipedia came around it was this wondrous thing where information was
 seemingly limitless and endless. And it was expanding at dizzying speeds.
 And you could add more! It was the answer to my childhood fantasy of having
 the limitless encyclopedia that answered every question. And it filled my
 heart with joy and satisfaction not unlike the joy of a child in a candy
 store (yes, I am a geek).

 Those kids, never deprived of knowledge and information, will never know how
 precious it is. They will not have the same love that is required to edit
 Wikipedia and write quality articles. And it makes me sad.

 Renata
 ___
 Wikimedia-l mailing list, guidelines at:
 https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Wiki Loves Cinema editathon

2015-02-10 Thread Emilio J. Rodríguez-Posada
I like cinema and I'm really happy to see this.

This list needs some help
https://en.wikipedia.org/wiki/Lists_of_film_archives

2015-02-09 17:04 GMT+01:00 Ivan Martínez gala...@gmail.com:

 Dear all, we are very pleased to announce our first editathon, Wikipedia ama
 el cine (Wikipedia Loves Cinema), an editathon focused on improving articles
 related to the cinema of Mexico on the Spanish Wikipedia. Of course, all
 support from the whole Wikimedia community to write about the theme will be
 very welcome!


 https://es.wikipedia.org/wiki/Wikipedia:Encuentros/Editat%C3%B3n_Wikipedia_ama_el_cine

 This is the first activity as part of a new agreement with Cineteca
 Nacional, the main film archive of our country.

 Best,

 --
 *Iván Martínez*



 *Wikimanía 2015 Chief Coordinator · User:ProtoplasmaKid · @protoplasmakid*
 http://wikimania2015.wikimedia.org

 We have created the greatest collection of shared knowledge. Help
 protect Wikipedia, donate now:
 https://donate.wikimedia.org
 ___
 Wikimedia-l mailing list, guidelines at:
 https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Rarest records

2014-08-04 Thread Emilio J. Rodríguez-Posada
Related:
https://www.youtube.com/watch?v=SwXayHbUQ2o
https://en.wikipedia.org/wiki/Record-Rama


2014-08-04 16:11 GMT+02:00 Andy Mabbett a...@pigsonthewing.org.uk:

 This is a good read in its own right:


 http://www.slate.com/articles/arts/books/2014/07/amanda_petrusich_s_do_not_sell_at_any_price_reviewed_by_sarah_o_holla.html

 but the thesis that some 78rpm records constitute the only surviving
 example of a particular recording, with no master in an archive
 somewhere, sent chills up my spine.

 --
 Andy Mabbett
 @pigsonthewing
 http://pigsonthewing.org.uk

 ___
 Wikimedia-l mailing list, guidelines at:
 https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] How Wikimedia could help languages to survive

2014-04-22 Thread Emilio J. Rodríguez-Posada
How many languages exist?
 |_ How many languages have written works?
 |_ How many languages have Unicode support?

That is the max number of Wikisource projects we can create :-P


2014-04-22 15:12 GMT+02:00 Milos Rancic mill...@gmail.com:

 On Tue, Apr 22, 2014 at 3:12 PM, Milos Rancic mill...@gmail.com wrote:
  On Tue, Apr 22, 2014 at 3:09 PM, Milos Rancic mill...@gmail.com wrote:
  That means that it's the best starting point to raise that number from
  157 per million to ~1000 per million. If WM UK would be successful in
  achieving that goal, we'd know that it's possible. And we'll have some
  ideas how to do that.
 
  In real numbers: We need there 100 active editors.

 Sorry, 70.

 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Dells are backdoored

2013-12-29 Thread Emilio J. Rodríguez-Posada
2013/12/29 Leslie Carr lc...@wikimedia.org

 On Sun, Dec 29, 2013 at 5:17 AM, geni geni...@gmail.com wrote:
  On 29 December 2013 12:55, James Salsman jsals...@gmail.com wrote:
 
  Can we please stop paying the Microsoft and NSA taxes
 
 
  The WMF doesn't.
 
 
 
  and start buying
  datacenter equipment which costs a lot less? Cubieboard/Cubietrucks for
  instance?
 
  Ref.:
 
 
 http://www.spiegel.de/international/world/catalog-reveals-nsa-has-back-doors-for-numerous-devices-a-940994.html
 
  Best regards,
  James
 
 
 
  Using non-standard data center equipment is a great way to add costs.
 

 Naw, it's a great idea.  Let's switch to building our own ARM-based
 servers (by the way, which have already been a flop commercially),
 using only unproven, low-volume available motherboards and having to
 buy and assemble all of the rest of the components.  And then of
 course, we need to design our own cases... and since these have such a
 low performance, we'll need to have a lot more rack and datacenter
 space, of course which comes with a cost... and we'll have to figure
 out how to run our caching layers which require large amounts of
 memory... and our storage layers which require large amounts of disk
 space.  At that point we'll probably need to redesign those boards
 which are incapable of doing these things, so we'll need a team of
 hardware engineers, plus a deal with a manufacturing plant.

 So... I think with about a 100 million dollar per year research budget
 we can do this.  Who's ponying up? ;)


Funny huh?

If we use free software, I don't see why we can't move to open-source
hardware ASAP.



  As for security: given the limited resources the WMF has, whenever GCHQ,
  FSB and MSS have wanted to get in they have, and there is nothing we can do
  about this.
  ___
  Wikimedia-l mailing list
  Wikimedia-l@lists.wikimedia.org
  Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe



 --
 Leslie Carr
 Wikimedia Foundation
 AS 14907, 43821
 http://as14907.peeringdb.com/

 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] The Old Wikipedia logo is still widely used - what can we do?

2013-12-21 Thread Emilio J. Rodríguez-Posada
Put a big red notice on the old images' page descriptions saying that the
logo is deprecated and that the new logo is available at [link to the new
version]. Then, when people click on them from Google, they will read it.


2013/12/20 Amir E. Aharoni amir.ahar...@mail.huji.ac.il

 Hallo.

 The old version of the puzzle globe Wikipedia logo was retired in 2010.
 It had wrongly painted letters in several languages as well as some
 graphical imperfections. The current logo has correct letters and improved
 vector graphics quality.

 Despite the replacement having taken place almost four years ago, I still
 see the old logo in news stories about Wikipedia in many languages every
 few days. I find it very frustrating.

 One likely reason for this is that Google image search shows mostly images
 of the old logo when queried for "wikipedia logo".

 Can we do anything about it? My SEO skills are about non-existent. A
 Facebook friend suggested changing the title of the Commons file
 File:Wikipedia-logo.png to File:Wikipedia-logo-v1.png and
 File:Wikipedia-logo-v2.svg to File:Wikipedia_logo.svg. This sounds
 reasonable, though it may have considerable technical implications.

 Is there anything else we can do?

 --
 Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
 http://aharoni.wordpress.com
 ‪“We're living in pieces,
 I want to live in peace.” – T. Moore‬
 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

[Wikimedia-l] The British Library releases 1 million images

2013-12-15 Thread Emilio J. Rodríguez-Posada
Quote from full announcement
http://britishlibrary.typepad.co.uk/digital-scholarship/2013/12/a-million-first-steps.html

We have released over a million images
(http://www.flickr.com/photos/britishlibrary) onto Flickr Commons
for anyone to use, remix and repurpose. These images
 were taken from the pages of 17th, 18th and 19th century books digitised
 by Microsoft
 (http://pressandpolicy.bl.uk/Press-Releases/The-British-Library-19th-Century-Book-Digitisation-Project-343.aspx)
 who then generously gifted the scanned images to us, allowing us to release
 them back into the Public Domain. The images themselves cover a startling
 mix of subjects: There are maps, geological diagrams, beautiful
 illustrations, comical satire, illuminated and decorative letters,
 colourful illustrations, landscapes, wall-paintings and so much more that
 even we are not aware of.


Flickr account http://www.flickr.com/photos/britishlibrary
Example of image http://www.flickr.com/photos/britishlibrary/11307195524/
Example of all images from a book
http://www.flickr.com/photos/britishlibrary/tags/sysnum002660292
Stuff for coders https://github.com/BL-Labs/imagedirectory

So... :-)
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Internet.org and Wikipedia Zero ?

2013-08-23 Thread Emilio J. Rodríguez-Posada
Looks like NSA has bought some new hard drives and needs moar data.


2013/8/23 Gerard Meijssen gerard.meijs...@gmail.com

 Hoi,
 But when they provide the infrastructure that allows our content to be seen
 by many more people, they do us a service.

 In the end it is what we are about. Last thing I heard we were first of all
 about getting the knowledge out there.
 Thanks,
   GerardM


 On 23 August 2013 12:14, Jens Best jens.b...@wikimedia.de wrote:

  Nothing good comes from people like Mark Zuckerberg or Peter Thiel; they
  don't share our vision of a *really* free and open internet. So, actually,
  Emmanuel, I couldn't care less in which direction they are going to make
  their next moves. It will all be a disguise of what they really attempt
  and with whom they really cooperate.
 
  It's time to realize that there isn't a shared vision of the web between
  Silicon Valley and Wikimedia. Their words are empty. When they speak of
  freedom, they speak of the freedom of money and control. Just because they
  use the word "internet", they don't speak of the same thing we do.
 
  Jens
 
  2013/8/23 Emmanuel Engelhart kel...@kiwix.org
 
   On 23/08/2013 10:59, Kul Wadhwa wrote:
I have my concerns as well, so we're watching how things unfold for now.
Perhaps to add to Teemu's question (if I could be so bold): how would
internet.org need to evolve to make it worth our time and effort to be
involved?
  
   If what I fear becomes real, then I would be sad that our movement joins
   such a dishonest project.
  
   If they want to give access to a subset of Internet services and adapt
   their communication (honesty about the product), then we face a dilemma.
   A dilemma between our wish to give access to our content (a tactical
   move) and the one of having a free, neutral and un-clustered Internet (a
   strategic view)... Big discussions in view, but we have already done it
   with Wikipedia Zero and I know the WMF tends to be pragmatic in such
   situations ;)
  
   If they really want to help give neutral access to the internet... then
   this is really a dream we should be part of!
  
   But this is all speculation...
  
   I just wanted to explain why this launch doesn't sound good to my ears.
   But I know nothing about their real intentions and concrete projects.
   That's why it's IMO urgent to wait... and see in which direction they
   will make the next moves.
  
   Emmanuel
   --
   Kiwix - Wikipedia Offline & more
   * Web: http://www.kiwix.org
   * Twitter: https://twitter.com/KiwixOffline
   * more: http://www.kiwix.org/wiki/Communication
  
   ___
   Wikimedia-l mailing list
   Wikimedia-l@lists.wikimedia.org
   Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
   mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe
  
 
 
 
  --
  --
  Jens Best
  Präsidium
  Wikimedia Deutschland e.V.
  web: http://www.wikimedia.de
  mail: jens.best@wikimedia.de
 
  Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
  Registered in the register of associations of the Amtsgericht
  Berlin-Charlottenburg under number 23855 B. Recognised as charitable
  by the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
  ___
  Wikimedia-l mailing list
  Wikimedia-l@lists.wikimedia.org
  Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
  mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe
 
 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] A proposal towards a multilingual Wikipedia

2013-08-07 Thread Emilio J. Rodríguez-Posada
This may work very well for short stubs about repetitive stuff, like the
introductions for cities (location, population, foundation date, country,
etc.). But how will that work for the rest of the sections of Berlin
(history, geography, politics...)? https://en.wikipedia.org/wiki/Berlin


2013/8/7 Denny Vrandečić denny.vrande...@wikimedia.de

 I have been thinking about this for a while, and now finally managed to
 write it down as a proposal. Details are on meta at the following link;
 below is the intro to the proposal:

 
 http://meta.wikimedia.org/wiki/A_proposal_towards_a_multilingual_Wikipedia
 

 I tried to anticipate some possible questions and provide answers on the
 page. Besides that, I obviously hope that Wikimania could provide a place
 to start this conversation. And yes, I am aware that the proposal would
 lead to a very restrictive solution, but imagine what good it already could
 achieve! And since it is not meant to replace anything, but to enrich our
 current projects... well, read for yourself.

 Cheers,
 Denny


 Wikipedia provides knowledge in more than 200 languages. Whereas a small
 number of languages are fortunate enough to have a large Wikipedia, many of
 the language editions are far away from providing a comprehensive
 encyclopedia by any measure. There are several approaches towards closing
 this gap, mostly focusing on increasing the number of contributors to the
 small language editions or to improve the provision of automatic or
 semi-automatic translations of articles. Both are viable. In the following
 we present a proposal for a different approach, which is based on the idea
 of multilingual Wikipedia.

 Imagine a small extension to the template system, where a template call
 like *{{F12}}* would not be expanded by a call to the template
 Template:F12, but rather to Template:F12/en, i.e. the template name with
 the selected language code of the reader of the page. A template call such
 as *{{F12:Q64|Q5119|Q183}}* can be expanded by Template:F12/en into
 *“Berlin
 is the capital of Germany.”* and by Template:F12/de into *“Berlin ist die
 Hauptstadt Deutschlands.”* (in the example, the template parameters Q5119,
 Q64 and Q183 refer to the Wikidata items for capital, Berlin and Germany
 respectively, which the templates query for the label in the respective
 language). Sentence by sentence could be created in order to provide for a
 simple article.
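
(As an illustrative toy only -- this is not code from the proposal -- the
frame mechanism described above can be mimicked in a few lines: frames are
per-language sentence patterns, and item labels are looked up in the
reader's language before the pattern is filled in. The FRAMES and LABELS
structures and the handling of the German genitive are my assumptions.)

# Toy sketch of per-language frame expansion.
FRAMES = {
    ("F12", "en"): "{city} is the capital of {country}.",
    ("F12", "de"): "{city} ist die Hauptstadt {country_gen}.",
}
LABELS = {
    ("Q64", "en"): "Berlin", ("Q64", "de"): "Berlin",
    ("Q183", "en"): "Germany", ("Q183", "de"): "Deutschland",
    # A real system would need grammatical forms too, e.g. the German
    # genitive; here it is simply stored as an extra label.
    ("Q183", "de-gen"): "Deutschlands",
}

def expand(frame, city, country, lang):
    """Render one 'sentence' of content in the requested language."""
    pattern = FRAMES[(frame, lang)]
    return pattern.format(
        city=LABELS[(city, lang)],
        country=LABELS.get((country, lang), ""),
        country_gen=LABELS.get((country, lang + "-gen"), ""))

print(expand("F12", "Q64", "Q183", "en"))  # Berlin is the capital of Germany.
print(expand("F12", "Q64", "Q183", "de"))  # Berlin ist die Hauptstadt Deutschlands.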

 That wiki would consist of *content*, i.e. the article pages, possibly just
 a simple series of template calls, and *frames*, i.e. the templates that
 lexicalize the parameters of a given template call into a sentence (Note
 that “sentence” here should not be considered literally. It could be a
 table, an image, anything). The implementation of the frames can be done in
 normal wiki template syntax, in Lua, in a novel mechanism, or a mix of
 these. This would be up to the communities creating them.

 Read the rest here:
 
 http://meta.wikimedia.org/wiki/A_proposal_towards_a_multilingual_Wikipedia
 

 --
 Project director Wikidata
 Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
 Tel. +49-30-219 158 26-0 | http://wikimedia.de

 Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
 Registered in the register of associations of the Amtsgericht
 Berlin-Charlottenburg under number 23855 B. Recognised as charitable by the
 Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] A proposal towards a multilingual Wikipedia

2013-08-07 Thread Emilio J. Rodríguez-Posada
Most of the time the best approach is a combination of several approaches.

Perhaps we can use Denny's system for the short introductions of articles
(for example: geography, biographies) and optional automatic translation
for the rest of the article.

I mean, if you follow a red link on a small Wikipedia, it loads the i18n
template + Wikidata bits, so you have a brief summary of the topic. Then
you can save that live-generated stub and expand it (using
autotranslation from another Wikipedia).


2013/8/7 Anders Wennersten m...@anderswennersten.se

 Thanks for sharing your very interesting ideas. While I do not fully
 support your idea of implementation, I share your basic view of the need
 and think some of the concepts you introduce have a very high potential to
 better utilize the power of our having many versions.

 I have put my feedback on the talk page and hope there will be a
 possibility to evolve this concept further in some type of workgroup. I
 also see an interesting relation to the talk of machine translation, where
 I believe we can do a lot very quickly if we limit the vocabulary to be
 included in such a tool.

 Anders


 Denny Vrandečić wrote 2013-08-07 02:20:

  I have been thinking about this for a while, and now finally managed to
 write it down as a proposal. Details are on meta on the following link,
 below is the intro to the proposal:

 http://meta.wikimedia.org/wiki/A_proposal_towards_a_multilingual_Wikipedia
 

 I tried to anticipate some possible questions and provide answers on the
 page. Besides that, I obviously hope that Wikimania could provide a place
 to start this conversation. And yes, I am aware that the proposal would
 lead to a very restrictive solution, but imagine what good it already
 could
 achieve! And since it is not meant to replace anything, but enrich our
 current projects... well, read for yourself.

 Cheers,
 Denny


 Wikipedia provides knowledge in more than 200 languages. Whereas a small
 number of languages are fortunate enough to have a large Wikipedia, many
 of
 the language editions are far away from providing a comprehensive
 encyclopedia by any measure. There are several approaches towards closing
 this gap, mostly focusing on increasing the number of contributors to the
 small language editions or to improve the provision of automatic or
 semi-automatic translations of articles. Both are viable. In the following
 we present a proposal for a different approach, which is based on the idea
 of multilingual Wikipedia.

 Imagine a small extension to the template system, where a template call
 like *{{F12}}* would not be expanded by a call to the template
 Template:F12, but rather to Template:F12/en, i.e. the template name with
 the selected language code of the reader of the page. A template call such
 as *{{F12:Q64|Q5119|Q183}}* can be expanded by Template:F12/en into
 *“Berlin
 is the capital of Germany.”* and by Template:F12/de into *“Berlin ist die
 Hauptstadt Deutschlands.”* (in the example, the template parameters Q5119,
 Q64 and Q183 refer to the Wikidata items for capital, Berlin and Germany
 respectively, which the templates query for the label in the respective
 language). Sentence by sentence could be created in order to provide for a
 simple article.

 That wiki would consist of *content*, i.e. the article pages, possibly
 just
 a simple series of template calls, and *frames*, i.e. the templates that
 lexicalize the parameters of a given template call into a sentence (Note
 that “sentence” here should not be considered literally. It could be a
 table, an image, anything). The implementation of the frames can be done
 in
 normal wiki template syntax, in Lua, in a novel mechanism, or a mix of
 these. This would be up to the communities creating them.

 Read the rest here:
 http://meta.wikimedia.org/wiki/A_proposal_towards_a_multilingual_Wikipedia
 



 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] NSA

2013-08-01 Thread Emilio J. Rodríguez-Posada
It is funny (but also sad) to see how people thought that Internet privacy
was respected in the Western world. Almost 99% worried only about China/Iran
Internet monitoring and censorship, but here we had the most comprehensive
spy system, logging every site you read.

Wake up!
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Picturing Canada: historic Canadian photography now on Commons

2013-07-01 Thread Emilio J. Rodríguez-Posada
Very interesting. Congrats!


2013/7/1 Andrew Gray andrew.g...@dunelm.org.uk

 Hi all,

 Today, the British Library announced the Picturing Canada project to
 mark Canada Day (1st July). Those of you who were at GLAM-Wiki in
 April may remember this collection: it's a digitisation of the
 Canadian Copyright Collection, 1895-1924, covering photographs
 deposited for copyright registration in Canada during this period.
 There's currently about 2,000 photographs, many of which are
 composites of multiple images stuck together; all are available as
 full-resolution TIFFs and JPEGs.

 There are more files still trickling up - including some interesting
 aerial photographs, panoramas, and a collection of official
 photographs from WWI - but almost all of the general images are now
 online, and we're now just adding the oddities. Including the official
 photographs, this will total around 4,000 works. Please do take a look
 - there's some marvellous material in there.

 WMF: http://blog.wikimedia.org/2013/07/01/picturing-canada/ (in
 English and French; translation by Benoit Rochon)
 BL:
 http://britishlibrary.typepad.co.uk/americas/2013/07/happy-canada-day.html
 Commons: http://commons.wikimedia.org/wiki/Commons:Picturing_Canada

 Thanks to Wikimedia UK and the Eccles Centre for American Studies for
 funding this, and to Phil Hatfield at the British Library for
 championing the collection!

 Andrew.

 --
 - Andrew Gray
   andrew.g...@dunelm.org.uk

 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Fwd: Fwd: Re: Swedish Wikipedia reach 1 million (with support of bots)

2013-06-18 Thread Emilio J. Rodríguez-Posada
Seriously, are we discussing "bot stubs yes, bot stubs no" again?

Those users who want to submit complete 10-page articles can move to the
defunct Nupedia or the 'vibrant' Citizendium.

This eternal discussion is so boring.


2013/6/18 Lydia Pintscher lydia.pintsc...@wikimedia.de

 On Tue, Jun 18, 2013 at 9:17 AM, Nikola Smolenski smole...@eunet.rs
 wrote:
  On 18/06/13 01:04, Martin Rulsch wrote:
 
  As far as I know, that's even planned by the Wikidata team.
 
 
  It isn't exactly planned by the Wikidata team; a volunteer would need to
  do it.

 What exactly are you talking about being planned by the team? I'm not
 sure we're all talking about the same thing.


 Cheers
 Lydia

 --
 Lydia Pintscher - http://about.me/lydia.pintscher
 Community Communications for Technical Projects

 Wikimedia Deutschland e.V.
 Obentrautstr. 72
 10963 Berlin
 www.wikimedia.de

 Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.

 Registered in the register of associations of the Amtsgericht
 Berlin-Charlottenburg under number 23855 Nz. Recognised as charitable by
 the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.

 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l


Re: [Wikimedia-l] Swedish Wikipedia reach 1 million (with support of bots)

2013-06-16 Thread Emilio J. Rodríguez-Posada
Impressive numbers. Congrats.

Can we have those species stubs and lakes in English Wikipedia?


2013/6/16 Anders Wennersten m...@anderswennersten.se

 Yesterday sv:wp reached 1 M articles. The one that did the passing was a
 bot-generated article about a butterfly:
 http://sv.wikipedia.org/wiki/Erysichton_elaborata

 The bot behind this article is Lsjbot, which creates articles from the
 Catalogue of Life database
 (http://www.catalogueoflife.org/services/res/2011AC_26July.zip) which,
 complemented by other databases, holds data on around 1.5
 million species. The bot generates about 5000 new articles per day, has
 generated just under 400 000 of sv:wp's million, and continues...

 The guy who runs the bot is a member of the Swedish chapter's board
 (http://se.wikimedia.org/wiki/Kandidater_2013/Sverker_Johansson) and
 is in his civil life a university teacher. In this capacity he is also
 a guest lecturer at the University of the Philippines, where he has stayed
 the last couple of months (and the bot was on hold). He is active there in
 the Cebuano Wikipedia (http://ceb.wikipedia.org/wiki/Unang_Panid),
 supporting the local community, and he is now running his bot on
 their Wikipedia as well as on the Waray-Waray Wikipedia
 (http://war.wikipedia.org/wiki/Syahan_nga_Pakli).
 So perhaps at the end of the year these two language versions will also
 pass the 1 million mark!

 Anders
 PS our other major bot-generating effort, all the lakes in Sweden, is also
 making very nice progress, with 25% of all done

 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l


Re: [Wikimedia-l] endangered languages project

2013-05-27 Thread Emilio J. Rodríguez-Posada
Idea: Wiki Loves Languages (similar to WLM)


2013/5/23 phoebe ayers phoebe.w...@gmail.com

 Perhaps of interest to many Wikimedians: the Endangered Languages project
 recently launched a new layout, making it easier to find and submit
 information on languages that are in the catalog of endangered languages
 that they are building. Worth a look.
 http://www.endangeredlanguages.com/

 -- phoebe

 --
 * I use this address for lists; send personal messages to phoebe.ayers at
 gmail.com *
 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l


Re: [Wikimedia-l] All the lakes in Sweden

2013-05-19 Thread Emilio J. Rodríguez-Posada
Thanks. I have found some estimates of lake counts for more countries, but
no datasets.

Belarus: 11,000+
Canada: 31,752+ larger than 3 km² (2-3 million in total)
Estonia: 1,000+
Finland: 187,888 lakes 500 m² or larger (56,000 over 10,000 m²)
Latvia: 3,000+
Lithuania: 3,000+
Norway: 450,000+


2013/5/17 Anders Wennersten m...@anderswennersten.se

 Sorry, no; and what we have learned is how hard it is to define what a
 lake is.

 In Sweden some mention there are 200 000 lakes, and the hydrology
 authority has data on over 150 000, but fewer than 75 000 have a name,
 many with just rudimentary data. The lakes with environmental impact data
 number 7 233, and in this project data with 100 data items per lake are
 collected from three authorities, and articles are generated on sv:wp
 for 57 648 (not 30 000) lakes. Only around a few percent of these are
 without a name, and the most common name is used by 645 distinct lakes...
 (just taking care of the disambiguation pages is a science in its own right)

 Anders



 Emilio J. Rodríguez-Posada wrote 2013-05-17 18:06:

 Very cool project. Do you know how many lakes there are in other European
 countries (or websites/datasets)?

 It is for this
 http://en.wikipedia.org/wiki/User:Emijrp/All_human_knowledge#Lakes

 2013/5/16 Anders Wennersten m...@anderswennersten.se

  A new major bot-generating effort is now under way on sv:wp. All lakes
 (30 000, all down to ones of pond size with no name) are now produced based
 on lake data from the Swedish meteorology institute and all lake environment
 data from a newly set up authority demanded by the EU, in order to register
 and track all data on lakes in all of Europe.

 The articles are generated by AWB, with some manual effort to take care
 of text in existing articles and a major effort taking care of all with the
 same name (Little Lake, Black Lake etc.)

 Examples:
 http://sv.wikipedia.org/wiki/%C3%96ren,_Sm%C3%A5land
 http://sv.wikipedia.org/wiki/Bunn


 Also on maps from Google or Bing you can localize all lakes if you do not
 know the name (and the ones missing a name):
 https://maps.google.com/maps?q=http://toolserver.org/~para/cgi-bin/kmlexport%3Fproject%3Dsv%26article%3DKategori%253AInsj%2525C3%2525B6ar_i_J%2525C3%2525B6nk%2525C3%2525B6pings_kommun
 
 or
 http://www.bing.com/maps/?mapurl=http%3A%2F%2Ftoolserver.org%2F%7Epara%2Fcgi-bin%2Fkmlexport%3Fproject%3Dsv%26article%3DKategori%253AInsj%2525C3%2525B6ar_i_J%2525C3%2525B6nk%2525C3%2525B6pings_kommun
 


 As you can see the text is substantial, around 5000 characters, and even
 things like fish species in the lakes are generated from databases.

 We are now starting to contemplate putting this type of base data in
 Wikidata (we are experimenting with this for some administrative units), and
 generating the articles from that as a base instead of from databases
 extracted from the authorities and put on personal PCs.

 We have excellent relations with the involved authorities, which are really
 happy with the result, and we are now being approached by other authorities
 who want to follow and get data from their databases used to
 generate qualified Wp articles (like all runestones, all archaeological
 excavation sites, all artworks placed on official grounds etc.)

 Are there any other efforts like this going on, especially has anyone come
 further in establishing links from authorities' databases to Wikidata?

 Anders



 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l

Re: [Wikimedia-l] All the lakes in Sweden

2013-05-17 Thread Emilio J. Rodríguez-Posada
Very cool project. Do you know how many lakes there are in other European
countries (or websites/datasets)?

It is for this
http://en.wikipedia.org/wiki/User:Emijrp/All_human_knowledge#Lakes

2013/5/16 Anders Wennersten m...@anderswennersten.se

 A new major bot-generating effort is now under way on sv:wp. All lakes
 (30 000, all down to ones of pond size with no name) are now produced based
 on lake data from the Swedish meteorology institute and all lake environment
 data from a newly set up authority demanded by the EU, in order to register
 and track all data on lakes in all of Europe.

 The articles are generated by AWB, with some manual effort to take care
 of text in existing articles and a major effort taking care of all with the
 same name (Little Lake, Black Lake etc.)
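
(As a minimal sketch of the general technique -- one wikitext stub per
database row -- the snippet below is illustrative only: the column names,
sample values and the infobox template name are invented, not taken from
the actual svwp setup.)

import csv
import io

# Hypothetical CSV export from the lake database; fields are invented.
SAMPLE = """name,municipality,area_km2,elevation_m
Exempelsjön,Jönköpings kommun,9.5,226
"""

STUB = ("'''{name}''' är en sjö i {municipality}.\n"
        "{{{{Insjöfakta|area={area_km2} km²|höjd={elevation_m} m}}}}\n")

for row in csv.DictReader(io.StringIO(SAMPLE)):
    # A real bot run would also merge with existing article text and
    # handle the many lakes that share the same name.
    print(STUB.format(**row))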

 Examples:
 http://sv.wikipedia.org/wiki/%C3%96ren,_Sm%C3%A5land
 http://sv.wikipedia.org/wiki/Bunn

 Also on maps from Google or Bing you can localize all lakes if you do not
 know the name (and the ones missing a name):
 https://maps.google.com/maps?q=http://toolserver.org/~para/cgi-bin/kmlexport%3Fproject%3Dsv%26article%3DKategori%253AInsj%2525C3%2525B6ar_i_J%2525C3%2525B6nk%2525C3%2525B6pings_kommun
 or
 http://www.bing.com/maps/?mapurl=http%3A%2F%2Ftoolserver.org%2F%7Epara%2Fcgi-bin%2Fkmlexport%3Fproject%3Dsv%26article%3DKategori%253AInsj%2525C3%2525B6ar_i_J%2525C3%2525B6nk%2525C3%2525B6pings_kommun

 As you can see the text is substantial, around 5000 characters, and even
 things like fish species in the lakes are generated from databases.

 We are now starting to contemplate putting this type of base data in
 Wikidata (we are experimenting with this for some administrative units), and
 generating the articles from that as a base instead of from databases
 extracted from the authorities and put on personal PCs.

 We have excellent relations with the involved authorities, which are really
 happy with the result, and we are now being approached by other authorities
 who want to follow and get data from their databases used to
 generate qualified Wp articles (like all runestones, all archaeological
 excavation sites, all artworks placed on official grounds etc.)

 Are there any other efforts like this going on, especially has anyone come
 further in establishing links from authorities' databases to Wikidata?

 Anders



 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l