Re: [Wikitech-l] Facebook's HHVM performance sprint retrospective highlights MediaWiki gains

2015-06-12 Thread Gerard Meijssen
Hoi,
And good news it is :)
Thanks Ori :) Were people from the WMF involved in this?
Thanks,
 GerardM

On 12 June 2015 at 06:08, Ori Livneh o...@wikimedia.org wrote:

 On Thu, Jun 11, 2015 at 9:02 PM, Ori Livneh o...@wikimedia.org wrote:

  a parse of the Barack Obama article.
 

 A parse of *English Wikipedia's* Barack Obama article. I am trying, I am
 trying... :)
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] XML dumps

2015-05-29 Thread Gerard Meijssen
hear hear
Gerard

On 29 May 2015 at 01:52, Lars Aronsson l...@aronsson.se wrote:

 The XML database dumps are missing all through May, apparently
 because of a memory leak that is being worked on, as described
 here,
 https://phabricator.wikimedia.org/T98585

 However, that information doesn't reach the person who wants to
 download a fresh dump and looks here,
 http://dumps.wikimedia.org/backup-index.html

 I think it should be possible to make a regular schedule for
 when these dumps should be produced, e.g. once each month or
 once every second month, and treat any delay as a bug. The
 process to produce them has been halted by errors many times
 in the past, and even when it runs as intended the interval
 is unpredictable. Now when there is a bug, all dumps are
 halted, i.e. much delayed. For a user of the dumps, this is
 extremely frustrating. With proper release management, it
 should be possible to run the old version of the process
 until the new version has been tested, first on some smaller
 wikis, and gradually on the larger ones.


 --
   Lars Aronsson (l...@aronsson.se)
   Aronsson Datateknik - http://aronsson.se



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikimedia-l] ContentTranslation gets to 2000

2015-05-03 Thread Gerard Meijssen
Hoi,
What does it take for the WMF to provide continued support for important
tools like Apertium? If there is one movement that will benefit from the
development of Apertium, it is ours...
Thanks,
GerardM

On 1 May 2015 at 15:57, David Cuenca dacu...@gmail.com wrote:

 Amir,

 First of all, a big thank you, as a speaker of Catalan and fervent advocate
 of minority languages.

 OTOH, I would like to bring awareness to the topic of the translation engine.
 Apertium is no longer supported as a GSoC project. I guess the project will
 stay alive, but it worries me that downstream we reap the benefits without
 considering that the upstream projects might need support too.

 I wish that the conversation thread that was started long ago about
 supporting upstream projects would now also include open-source translation
 tools because, as your numbers show, they seem very relevant.

 Best regards,
 Micru



 On Thu, Apr 30, 2015 at 8:04 PM, Steven Walling steven.wall...@gmail.com
 wrote:

  -Wikimedia-l
 
  Amir, this is awesome. Glad to see it's taking off.
 
  On Thu, Apr 30, 2015 at 6:24 AM Amir E. Aharoni 
  amir.ahar...@mail.huji.ac.il wrote:
  
   * In all the Wikipedias in which ContentTranslation is deployed, it is
   currently defined as a Beta feature, which means that it is only
  available
   to logged-in users who opted into it in the preferences.
  
 
  Regarding Beta feature status: what would it take to enable this as a
  default? You mentioned this in plans for the coming months.
 
  That deletion rate (60 out of 2000 = 3%?) actually looks a lot better than
  pretty OK. According to historical stats, it's basically equivalent to
  deletion rates for article creators with more than a month of
  experience.[1]

  It seems like the only risk in taking this out of beta status and making it
  a default is UI clutter for monolingual users who can't ever make use of
  the feature? Maybe it's unobtrusive enough that you don't need to do this,
  but perhaps you could enable it as a default for only those users who have
  substantively edited more than one language edition of Wikipedia? Either
  that, or we could consider adding a "languages I edit in" section to
  the Internationalisation section of user preferences?
 
  I'm sure you've thought about this before, but I'd love to hear more
 about
  the rollout plan.
 
  1. http://meta.wikimedia.org/wiki/Research:Wikipedia_article_creation
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l




 --
 Etiamsi omnes, ego non
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [RfC] Feature to watch categorylinks

2015-04-02 Thread Gerard Meijssen
Hoi,
Would it be possible to trigger a function that might add statements to the
items for the articles involved?
Thanks,
 GerardM

On 2 April 2015 at 11:54, Kai Nissen kai.nis...@wikimedia.de wrote:

 I already noticed the extension, but decided not to enhance it.

 As you already said, it hasn't been touched in a long time. In fact,
 there are some minor issues in the code that would need to be fixed.

 Apart from that, it's using a different hook (OnArticleSave) and a diff
 mechanism to determine the added/removed categories. It's also sending
 notification mails on its own instead of using Extension:Echo or the job
 queue.

 The feature for having personal categories (Category:Automatically
 watched by USER_NAME) seems quite useful for custom MediaWiki
 installations, but shouldn't be available in Wikimedia projects.

 If we went ahead and changed all this, there wouldn't be much left of
 what it was. The extension would also depend on Echo, so people who are
 using it in their own MediaWiki installations would need to install Echo
 in order to be able to upgrade.

 Cheers,
 Kai


 On 30.03.2015 at 19:05, Aran wrote:
  The CategoryWatch extension was created a few years ago for this,
  but hasn't been updated in a long time, so it may need some minor fixes by
  now.
  https://www.mediawiki.org/wiki/Extension:CategoryWatch
 
  On 30/03/15 12:40, Kai Nissen wrote:
  Hi there,
 
  I created a Request for comments[1] and a corresponding Phabricator
  Task[2] regarding a feature that enables users to watch categories for
  page additions and removals.
 
  If you are interested in this topic or somehow involved in
  Extension:Echo or watchlist features, feel free to participate in the
  discussion.
 
  Cheers,
  Kai
 
 
  [1]:
 https://www.mediawiki.org/wiki/Requests_for_comment/Watch_Categorylinks
  [2]: https://phabricator.wikimedia.org/T94414
 
 
 
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 



 --
 Kai Nissen
 Software Developer

 Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
 Tel. (030) 219 158 26-0
 http://wikimedia.de

 Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
 Registered in the register of associations of the Amtsgericht
 Berlin-Charlottenburg under number 23855 B. Recognised as a charitable
 organisation by the Finanzamt für Körperschaften I Berlin, tax number
 27/681/51985.



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Maps

2015-03-19 Thread Gerard Meijssen
Hoi

What is meant by "real hardware"?
Can we have some at Labs please?
Thanks,
 GerardM


  And will this service be hosted on
  Wikimedia Labs?
 

 No, on real hardware.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Maps

2015-03-19 Thread Gerard Meijssen
Hoi,
Yes, and that is why the Labs hardware should not be talked down. Either it
is real or it is not, and the truth apparently is in the eye of the beholder.
Thanks,
 GerardM

On 19 March 2015 at 07:15, Pine W wiki.p...@gmail.com wrote:

 Heh, I'm not sure if I should laugh or groan. IIRC, Erik, Yuvi and others
 are prioritizing Labs reliability improvements for the next FY. I hope so.

 Pine
 On Mar 18, 2015 11:03 PM, Gerard Meijssen gerard.meijs...@gmail.com
 wrote:

  Hoi
 
  What is meant by real hardware ?
  Can we have some at Labs please?
  Thanks,
   GerardM
 
 
And will this service be hosted on
Wikimedia Labs?
   
  
   No, on real hardware.
  
  
  
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Starting conversion of LiquidThreads to Flow at mediawiki.org

2015-03-19 Thread Gerard Meijssen
Hoi,
Sorry if you feel that I am condescending. What you said was not clear at
all. For me it was not put in a way that made me understand that you were
proposing a solution. It came across as: look how great wiki syntax is; we
have templates...

I hate templates with a vengeance. They obfuscate what is being said, they
do not transcend the limits of a wiki, and they make people believe that the
templates in use are best practice. Maybe for them, but not when you have a
profile on over 100 wikis like me.
Thanks,
 GerardM

On 19 March 2015 at 17:17, John phoenixoverr...@gmail.com wrote:

 OK, stop being condescending and dismissive. I was proposing that type of
 functionality be added to Flow as an option for dealing with reply-level
 excess. It is a simple graphic to de-indent a discussion without much
 disruption, and it improves readability.

 On Thursday, March 19, 2015, Gerard Meijssen gerard.meijs...@gmail.com
 wrote:

  Hoi,
  It does not scale in two ways:
 
 - you do not get everybody to know about such trivia; there are more
 relevant things to learn
 - it may work on one wiki but how about the next ?
 
  Remember this is Wikimedia-l not en.wikipedia-l
 
  Thanks,
 
GerardM
 
  On 19 March 2015 at 17:05, John phoenixoverr...@gmail.com
  wrote:
 
   What do you mean it doesn't scale?
  
   On Thursday, March 19, 2015, Gerard Meijssen 
 gerard.meijs...@gmail.com
   wrote:
  
Hoi,
REALLY .. never ever heard about that template ... and it does not
  scale
Thanks,
 GerardM
   
On 19 March 2015 at 16:39, John phoenixoverr...@gmail.com
wrote:
   
 I think my post might have gotten lost. But what is normal on wiki
  when
we
 hit a reasonable indentation level is to use a template like {{od}}
   that
 has a return and an arrow showing that the conversation is
 continued
 unindented
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Starting conversion of LiquidThreads to Flow at mediawiki.org

2015-03-19 Thread Gerard Meijssen
Hoi,
It does not scale in two ways:

   - you do not get everybody to know about such trivia; there are more
   relevant things to learn
   - it may work on one wiki, but how about the next?

Remember, this is Wikimedia-l, not en.wikipedia-l.

Thanks,

  GerardM

On 19 March 2015 at 17:05, John phoenixoverr...@gmail.com wrote:

 What do you mean it doesn't scale?

 On Thursday, March 19, 2015, Gerard Meijssen gerard.meijs...@gmail.com
 wrote:

  Hoi,
  REALLY .. never ever heard about that template ... and it does not scale
  Thanks,
   GerardM
 
  On 19 March 2015 at 16:39, John phoenixoverr...@gmail.com
  wrote:
 
   I think my post might have gotten lost. But what is normal on wiki when
  we
   hit a reasonable indentation level is to use a template like {{od}}
 that
   has a return and an arrow showing that the conversation is continued
   unindented
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Starting conversion of LiquidThreads to Flow at mediawiki.org

2015-03-19 Thread Gerard Meijssen
Hoi,
If you want to have a real-world population of editors moving over to Flow,
try translatewiki.net when Flow is ready. It is exclusively LQT, and it has
localisation in every language, always. You will not have any religious rants
about wikitext; there is none, and you will not have official bias.
Thanks,
  GerardM

On 19 March 2015 at 16:18, Risker risker...@gmail.com wrote:

 On 19 March 2015 at 11:08, Jon Robson jdlrob...@gmail.com wrote:

  On 19 Mar 2015 7:55 am, Brad Jorsch (Anomie) bjor...@wikimedia.org
  wrote:
  
   On Wed, Mar 18, 2015 at 9:42 PM, Danny Horn dh...@wikimedia.org
 wrote:
  
Brad: unfortunately, it's really hard to tell very much from a
  conversation
with messages like 3: Post C: reply to Post A. You could do that
 with
  the
old model, the new model or the perfect magic Nobel-Prize-winning
discussion threading still to be discovered, and it would probably
 look
like nonsense in all three.
   
  
   I shouldn't have used both numbers and post-names, but once I realized
  that
   it was already a bit late and it won't let me edit those posts. Someone
   with appropriate permissions is free to go back and edit them all to
  remove
   the number prefix and let the alphabetical order of the post-names
  suffice
   to indicate the chronological order of the postings, if that would make
  it
   less confusing for you.
  
   The point is the structure you're displaying doesn't make any sense,
 not
   that the content of my messages isn't anything that might make sense on
  its
   own. My content is explicitly simplified to illustrate the failings
 in
   the displayed structure. Structure should *facilitate* understanding,
 but
   in your demo I'd find that understanding the underlying structure of
 the
   conversation would be *despite* the broken display-structure.
  
   Nor is the point that people can screw up wikitext talk pages in ways
  that
   are even more confusing. That's a given, but Flow is supposed to do
  better.
   Right now it's worse than a well-formatted wikitext talk page (which
 has
   the advantage that human users can *fix* the structure when a newbie
  breaks
   it).
  
   Comparing http://flow-tests.wmflabs.org/wiki/Wikitext version of
   Topic:Sdrqdcffddyz0jeo to
  
 
 
 http://flow-tests.wmflabs.org/wiki/Wikitext_version_of_Topic:Sdrqdcffddyz0jeo
  ,
   I find it much easier in the latter to see what is a reply to what.
  
  
We've tried in our testing to pretend that we're having real
  conversations,
so we could see whether there's any logical way to get to eight
 levels
  of
nested threading. It's not easy to organize make-believe
 conversations,
but if you want to start a thread, I'd be happy to fire up a few
sockpuppets and pretend to talk about something with you.
   
  
   No thanks. Pretend real conversations are ok for a first assessment
 at
   usability, but by nature they're likely to be vapid and unlikely to
 have
   the inter-post complexity of actual conversations on-wiki where people
  are
   concentrating on actually communicating rather than on forcing a
   conversation for the sake of testing.
  
 
  Let's all be happy, then, that we are replacing an unloved, broken talk
  extension with Flow on a wiki where we have real conversations...? :)
  Actually, dogfooding will make it much easier for us to communicate errors
  to the Flow team and help improve the software.

  I truly hope that soon we can get to a point where we can enable Flow on
  all pages on mediawiki.org, and this seems like the obvious first step.
 
 
 The dogfooding has been happening for a while on WMF's own office-wiki.  We
 haven't heard any results about that.  Is the system being used more than
 the wikitext system?  (i.e., are there more talk page comments now than
 there were before?)  Have users expressed satisfaction/dissatisfaction with
 the system?  Have they been surveyed?  Do they break down into groups
 (e.g., engineering loves it, grants hates it, etc...)?  I hear some stories
 (including stories that suggest some groups of staff have pretty much
 abandoned talk pages on office-wiki and are now reverting to emails
 instead) but without any documentary evidence or analysis it's unreasonable
 to think that it is either a net positive OR a net negative.



 Risker/Anne
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Starting conversion of LiquidThreads to Flow at mediawiki.org

2015-03-19 Thread Gerard Meijssen
Hoi,
REALLY... never ever heard about that template... and it does not scale.
Thanks,
 GerardM

On 19 March 2015 at 16:39, John phoenixoverr...@gmail.com wrote:

 I think my post might have gotten lost. But what is normal on wiki when we
 hit a reasonable indentation level is to use a template like {{od}} that
 has a return and an arrow showing that the conversation is continued
 unindented
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Starting conversion of LiquidThreads to Flow at mediawiki.org

2015-03-17 Thread Gerard Meijssen
Hoi,
Sorry, but wikitext is of such a nature that I avoid using it as much as
possible. LiquidThreads was a huge step forward, and I still find it
astonishing that it was left to rot for such a long time. I really welcome
the move towards Flow.

Flow is not an inadequate substitute; it is what will, hopefully soon, replace
wikitext for discussions. I cannot wait.
Thanks,
  GerardM

On 17 March 2015 at 06:43, Kevin Wayne Williams kwwilli...@kwwilliams.com
wrote:

 Nick Wilson (Quiddity) wrote on 2015/03/16 at 17:51:

 LiquidThreads (LQT) has not been well-supported in a long time. Flow
 is in active development, and more real-world use-cases will help
 focus attention on the higher-priority features that are needed. To
 that end, LQT pages at mediawiki.org will start being converted to
 Flow in the next couple of weeks.


 I assume that the intention is to further increase the divide between
 Wikipedia editors and the Wikimedia Foundation? It would be nice if the WMF
 would focus on becoming good with the tools that editors use instead of
 attempting to convince them that inadequate substitutes are adequate.

 KWW



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia REST content API is now available in beta

2015-03-12 Thread Gerard Meijssen
Hoi,
In what way will we know how useful this is? Will we have usage statistics?
Thanks,
  GerardM

On 10 March 2015 at 23:23, Gabriel Wicke gwi...@wikimedia.org wrote:

 Hello all,

 I am happy to announce the beta release of the Wikimedia REST Content API
 at

 https://rest.wikimedia.org/

 Each domain has its own API documentation, which is auto-generated from
 Swagger API specs. For example, here is the link for the English Wikipedia:

 https://rest.wikimedia.org/en.wikipedia.org/v1/?doc

 At present, this API provides convenient and low-latency access to article
 HTML, page metadata and content conversions between HTML and wikitext.
 After extensive testing we are confident that these endpoints are ready for
 production use, but have marked them as 'unstable' until we have also
 validated this with production users. You can start writing applications
 that depend on it now, if you aren't afraid of possible minor changes
 before transitioning to 'stable' status. For the definition of the terms
 ‘stable’ and ‘unstable’ see https://www.mediawiki.org/wiki/API_versioning
 .
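 As a quick illustration, here is a minimal Python sketch (not from the
 announcement itself) that fetches the Parsoid HTML of an article through
 this API, assuming the GET /page/html/{title} endpoint listed in the
 per-domain docs:

 import urllib.parse
 import urllib.request

 # Fetch the stored Parsoid HTML for one article (endpoint per the docs above).
 title = urllib.parse.quote('Barack_Obama')
 url = 'https://rest.wikimedia.org/en.wikipedia.org/v1/page/html/' + title
 with urllib.request.urlopen(url) as resp:
     html = resp.read().decode('utf-8')
 print(len(html), 'bytes of Parsoid HTML')

 Other domains work the same way, with the domain swapped into the path.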

 While general and not specific to VisualEditor, the selection of endpoints
 reflects this release's focus on speeding up VisualEditor. By storing
 private Parsoid round-trip information separately, we were able to reduce
 the HTML size by about 40%. This in turn reduces network transfer and
 processing times, which will make loading and saving with VisualEditor
 faster. We are also switching from a cache to actual storage, which will
 eliminate slow VisualEditor loads caused by cache misses. Other users of
 Parsoid HTML like Flow, HTML dumps, the OCG PDF renderer or Content
 translation will benefit similarly.

 But, we are not done yet. In the medium term, we plan to further reduce the
 HTML size by separating out all read-write metadata. This should allow us
 to use Parsoid HTML with its semantic markup
 https://www.mediawiki.org/wiki/Parsoid/MediaWiki_DOM_spec directly for
 both views and editing without increasing the HTML size over the current
 output. Combined with performance work in VisualEditor, this has the
 potential to make switching to visual editing instantaneous and free of any
 scrolling.

 We are also investigating a sub-page-level edit API for micro-contributions
 and very fast VisualEditor saves. HTML saves don't necessarily have to wait
 for the page to re-render from wikitext, which means that we can
 potentially make them faster than wikitext saves. For this to work we'll
 need to minimize network transfer and processing time on both client and
 server.

 More generally, this API is intended to be the beginning of a multi-purpose
 content API. Its implementation (RESTBase
 http://www.mediawiki.org/wiki/RESTBase) is driven by a declarative
 Swagger API specification, which helps to make it straightforward to extend
 the API with new entry points. The same API spec is also used to
 auto-generate the aforementioned sandbox environment, complete with handy
 try it buttons. So, please give it a try and let us know what you think!

 This API is currently unmetered; we recommend that users not perform more
 than 200 requests per second and may implement limitations if necessary.

 I also want to use this opportunity to thank all contributors who made this
 possible:

 - Marko Obrovac, Eric Evans, James Douglas and Hardik Juneja on the
 Services team worked hard to build RESTBase, and to make it as extensible
 and clean as it is now.

 - Filippo Giunchedi, Alex Kosiaris, Andrew Otto, Faidon Liambotis, Rob
 Halsell and Mark Bergsma helped to procure and set up the Cassandra storage
 cluster backing this API.

 - The Parsoid team with Subbu Sastry, Arlo Breault, C. Scott Ananian and
 Marc Ordinas i Llopis is solving the extremely difficult task of converting
 between wikitext and HTML, and built a new API that lets us retrieve and
 pass in metadata separately.

 - On the MediaWiki core team, Brad Jorsch quickly created a minimal
 authorization API that will let us support private wikis, and Aaron Schulz,
 Alex Monk and Ori Livneh built and extended the VirtualRestService that
 lets VisualEditor and MediaWiki in general easily access external services.

 We welcome your feedback here:
 https://www.mediawiki.org/wiki/Talk:RESTBase
 - and in Phabricator
 
 https://phabricator.wikimedia.org/maniphest/task/create/?projects=RESTBasetitle=Feedback
 :
 .

 Sincerely --

 Gabriel Wicke

 Principal Software Engineer, Wikimedia Foundation
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Labs-l] Partial (but dramatic) labs outage on Tuesday: 2015-02-24 1500UTC-1800UTC -- finished

2015-02-24 Thread Gerard Meijssen
Hoi,
Andrew, Coren, Yuvi, thank you... You have not been lucky but you certainly
get my vote for the great effort you have put in.
Thanks,
 GerardM

On 24 February 2015 at 19:43, Andrew Bogott abog...@wikimedia.org wrote:

 This is done.  Instances were largely back up and running half an hour
 ago, and Coren and Yuvi have now prodded various Tools jobs and services
 back to life.

 As always, please email or contact us on IRC if your particular tool or
 instance is still misbehaving.

 -Andrew



 On 2/19/15 1:00 PM, Andrew Bogott wrote:

 It is with a heavy heart that I must share the news of an upcoming Labs
 maintenance window.

 The labs NFS store (which you probably know as /data/project) is filling
 up rapidly and we need to add more drives.  By weird coincidence the actual
 physical space for that server in the datacenter is ALSO filling up, so
 Chris Johnson has graciously agreed to spend his day re-shuffling servers
 in order to make space for the new diskshelf.  This involves lots of
 unplugging and replugging and amounts to the fact that the NFS server will
 need to be turned off for several hours.

 During this window Chris will take care of another long-deferred
 maintenance task -- he's putting more RAM into the labs puppet master,
 virt1000.

 What will break:

 - Shared storage for all labs and tools instances.  That includes volumes
 like /data/project, /public/dumps, /data/scratch, /home

 - Logins to all instances running ubuntu Precise.  (Trusty hosts will
 /probably/ still support logins.)

 - Login to wikitech and manipulation of instances.

 What won't break:

 - Labs instances will continue to run

 - Tasks running on instances will continue to run; those that don't rely
 on shared storage should be fine.

 - Web proxies should keep working, if the services they support aren't
 relying on shared storage.

 What will get better:

 - More storage space!

  - Fewer problems with dumps filling up NFS (which is basically the same
  as 'more storage space').

 - More reliable puppet runs and fewer outages with miscellaneous
 OpenStack services (which also run on virt1000)

 I apologize in advance for this downtime.  Don't hesitate to contact me
 or Coren either here or on IRC with advice about how to harden your tool
 against this upcoming outage.  We will also be available on IRC during and
 after the outage to help revive things that are angry about the timeouts.

 -Andrew



 ___
 Labs-l mailing list
 lab...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/labs-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikitech-ambassadors] Global user pages deployed to all wikis

2015-02-21 Thread Gerard Meijssen
Hoi,
When the local language is not among the selected languages, it helps to
show a level 0 for the local language.

What do you think, is this feasible?
Thanks,
GerardM

On 20 February 2015 at 20:32, Ryan Kaldari rkald...@wikimedia.org wrote:

 This is awesome! When do we get a global language pref? ;)

 On Wed, Feb 18, 2015 at 5:06 PM, Legoktm legoktm.wikipe...@gmail.com
 wrote:

  Hello!
 
  Global user pages have now been deployed to all public wikis for users
  with CentralAuth accounts. Documentation on the feature is available at
  mediawiki.org[1], and if you notice any bugs please file them in
  Phabricator[2].
 
  Thanks to all the people who helped with the creation and deployment
  (incomplete, and in no particular order): Jack Phoenix  ShoutWiki,
 Isarra,
  MZMcBride, Nemo, Quiddity, Aaron S, Matt F, James F, and everyone who
  helped with testing it while it was in beta.
 
  [1] https://www.mediawiki.org/wiki/Help:Extension:GlobalUserPage
  [2] https://phabricator.wikimedia.org/maniphest/task/create/?
  projects=PHID-PROJ-j536clyie42uptgjkft7
 
 
  ___
  Wikitech-ambassadors mailing list
  wikitech-ambassad...@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-ambassadors
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Global user pages deployed to all wikis

2015-02-20 Thread Gerard Meijssen
Hoi,
Babel templates are replaced by the #Babel functionality... The only
problem I have with the Babel functionality on Meta is that they decided to
have everything in green...

However, check it out on my profile.
Thanks,
 GerardM

On 20 February 2015 at 11:22, David Gerard dger...@gmail.com wrote:

 Nice one!

 Will anyone be porting all the Babel templates? That's a seriously
 useful thing that should go there, and they aren't on Meta yet.

 On 19 February 2015 at 01:06, Legoktm legoktm.wikipe...@gmail.com wrote:
  Hello!
 
  Global user pages have now been deployed to all public wikis for users
 with
  CentralAuth accounts. Documentation on the feature is available at
  mediawiki.org[1], and if you notice any bugs please file them in
  Phabricator[2].
 
  Thanks to all the people who helped with the creation and deployment
  (incomplete, and in no particular order): Jack Phoenix & ShoutWiki, Isarra,
  MZMcBride, Nemo, Quiddity, Aaron S, Matt F, James F, and everyone who
 helped
  with testing it while it was in beta.
 
  [1] https://www.mediawiki.org/wiki/Help:Extension:GlobalUserPage
  [2]
 
 https://phabricator.wikimedia.org/maniphest/task/create/?projects=PHID-PROJ-j536clyie42uptgjkft7
 
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] E-mail login to wiki - needs feedback

2015-02-20 Thread Gerard Meijssen
Hoi,
I have been at Meta... I do not see it, I do not understand it... What
should I do to enable this?
Thanks,
 GerardM

On 20 February 2015 at 18:53, Bryan Davis bd...@wikimedia.org wrote:

 On Fri, Feb 20, 2015 at 9:52 AM, devunt dev...@gmail.com wrote:
  We should consider some edge cases like:
 
  * More than two accounts with exactly same email and password.
  - In this case, which account should be chosen for logged-in? Maybe
  account selector could be one of the answers.
 
  * If there's a 42 accounts with same email.
  - Should mediawiki try to check password forty two times? It will
  takes _very_ long time as enough to cause gateway timeout. Which means
  nobody can log in to that account.
  - To avoid timing attack completely, should mediawiki calculate hash
  of all users forty two times as same as above user?

 Minimum viable product assumption:

 Given that authentication is attempted with an (email, password) pair
 When more than one account matches email
 Then perform one data load and hash comparison to mitigate timing attacks
 and fail authentication attempt
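 In rough Python-flavoured pseudocode, that invariant might look like the
 sketch below (hypothetical data layout, not MediaWiki's actual password
 handling):

 import hashlib
 import hmac

 def attempt_email_login(candidates, password):
     # candidates: account rows matching the given email, each holding a
     # salt and a stored password hash (hypothetical structure).
     if len(candidates) != 1:
         # Zero or multiple matches: still do exactly one hash computation
         # and comparison against a dummy value, so response time does not
         # leak how many accounts share the email, then fail.
         dummy = hashlib.pbkdf2_hmac('sha256', password.encode(), b'dummy-salt', 10000)
         hmac.compare_digest(dummy, dummy)
         return None
     row = candidates[0]
     computed = hashlib.pbkdf2_hmac('sha256', password.encode(), row['salt'], 10000)
     return row['user_id'] if hmac.compare_digest(computed, row['hash']) else None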

 A community education campaign could easily be launched to notify
 users that this invariant will hold for email-based authentication and
 to give instructions on how to change the email associated with an
 account. The target audience for email-based authentication (newer
 users who think of email addresses as durable tokens of their
 identity) will likely not be affected by, or even aware of, the
 multiple-account disambiguation problem.

 Bryan
 --
 Bryan Davis      Wikimedia Foundation      bd...@wikimedia.org
 [[m:User:BDavis_(WMF)]]      Sr Software Engineer      Boise, ID USA
 irc: bd808      v:415.839.6885 x6855

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] New feature: tool edit

2015-02-14 Thread Gerard Meijssen
Hoi,
"S" or "T"... what language? What do you propose for Cyrillic?
Thanks,
 GerardM

On 13 February 2015 at 18:22, Pine W wiki.p...@gmail.com wrote:

 I would go with S or T depending on what you choose to call these edits. I
 lean towards S.

 This can be changed later of there is an upswell of bikeshedding, but I
 hope that people will be grateful either way. (:

 Pine
 On Feb 13, 2015 3:22 AM, Petr Bena benap...@gmail.com wrote:

  What would be the letter for it then? s? t? a?
 
  On Thu, Feb 12, 2015 at 8:31 PM, Pine W wiki.p...@gmail.com wrote:
   I think bot edits are most closely aligned with fully automated
 editing.
   Perhaps semi-automated edit would work.
  
   Pine
   On Feb 12, 2015 10:34 AM, Marc A. Pelletier m...@uberbox.org
 wrote:
  
   On 15-02-12 01:13 AM, Pine W wrote:
  
   What would it take to implement a new tool edit flag userright
  
  
   In the time-honored spirit of bikeshedding, I'd suggest that the right
   name for this is automated edit rather than tool edit.
  
   -- Marc
  
  
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] New feature: tool edit

2015-02-11 Thread Gerard Meijssen
Hoi,
What does this have to do with the English Wikipedia? It is useful
everywhere... Why limit the scope?
Thanks,
 GerardM

On 11 February 2015 at 10:33, Petr Bena benap...@gmail.com wrote:

 Yes, the question is, however: if this passed consensus on the English
 Wikipedia and I made a patch for MediaWiki, assuming the code were correct,
 would it be merged into MediaWiki core, or is there any other requirement?
 Does it actually even need to pass consensus on Wikipedia? I think this
 would be useful for other wikis as well. We already have the bot flag, and
 most smaller wikis have no bots.

 On Wed, Feb 11, 2015 at 10:10 AM, Pine W wiki.p...@gmail.com wrote:
  Definitely worth discussing. For ENWP, I suggest bringing this up on
 VP:T.
 
  Thanks,
  Pine
  On Feb 11, 2015 12:45 AM, Petr Bena benap...@gmail.com wrote:
 
  Hi,
 
  I think I proposed this once but I forgot the outcome.
 
  I would like to implement a new feature called "tool edit". It would be
  pretty much the same as a bot edit, but with the following differences:

  -- Every registered user would be able to flag an edit as a tool edit (a
  bot needs a special user group)
  -- The flag wouldn't be intended for use by robots, but by regular users
  who used some automated tool in order to make the edit
  -- Users could optionally mark any edit as a tool edit, through the API only

  The rationale is pretty clear: there are a number of tools, like AWB
  and many others, that produce incredible amounts of edits every day.
  They are spamming the recent changes page -
  https://en.wikipedia.org/wiki/Special:RecentChanges - they can't be filtered
  out, and most regular users are not interested in them. This would
  make it possible to filter them out, and it would also make it easier
  to figure out how many real edits a user has made, compared to
  automated edits made by tools.
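  Purely as an illustration, marking such an edit via the API could look
  roughly like this in Python (the 'tooledit' parameter is hypothetical and
  does not exist; it is modelled on the existing 'bot' parameter of
  action=edit):

  import urllib.parse

  # Hypothetical request body; login and CSRF token handling are omitted.
  params = {
      'action': 'edit',
      'title': 'Project:Sandbox',
      'appendtext': '\ntest edit',
      'tooledit': 1,   # proposed flag, analogous to bot=1
      'token': 'CSRF_TOKEN_HERE',
      'format': 'json',
  }
  print(urllib.parse.urlencode(params))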
 
  Is it worth implementing? I think yes, but not so sure.
 
  Thanks
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki Developer Summit: The future of Language Converter

2015-01-28 Thread Gerard Meijssen
Hoi,
It has always been very clear that the script conversion for Serbian and
Chinese is very important. Every time it has been stressed that it needs to
remain available. The fact that technically you have a challenge does not
take anything away from it.

I am quite confident that the Language Committee is opposed to even the
suggestion to do without this.

Some provocative questions of my own:

* When MediaWiki is not able to provide script conversion, will we then not
implement the software that prevents this?

Thanks,
   GerardM

On 27 January 2015 at 18:55, C. Scott Ananian canan...@wikimedia.org
wrote:

 Another late addition to the developer summit:  let's talk about language
 converter, content translation tools, and other ways to bridge language
 differences between projects.

 2:45pm PST today (Tuesday) in Robertson 3.

 Task: https://phabricator.wikimedia.org/T87652

 Here's a list of all the wikis currently using (or considering to use)
 Language Converter:
 https://www.mediawiki.org/wiki/Writing_systems#LanguageConverter

 Some provocative questions to spur your attendance:

 1. Should zhwiki split for mainland China and Taiwan?  What about Malay?

 2. Should we be using LanguageConverter on enwiki to handle
 British/Australian/Indian/American spelling differences?

 3. Perhaps Swiss German and High German should be different wikis? Or use
 LanguageConverter? Or use Content Translation tools?

 4. How do we best edit content which has slight localized differences?
 Templates?  Interface strings?  Can we scale our tools across this range?

 See you there!
   --scott



 --
 (http://cscott.net)
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Stance on Social Media

2015-01-12 Thread Gerard Meijssen
Hoi,
The great thing about personal opinions is that they make great arguments;
they have their point. When they are presented well they are compelling...
HOWEVER, we are in the habit of testing many of these arguments.

This is a test that is bound to be interesting to many of us.
Thanks,
  GerardM

On 12 January 2015 at 06:35, MZMcBride z...@mzmcbride.com wrote:

 Tim Starling wrote:
 Yes, there's a risk we could end up with an alphabetical list of
 hundreds of social networks to share an article on, like
 Special:Booksources. Better than nothing, I guess.

 You guess wrong. As quiddity points out, we don't want
 https://i.imgur.com/XGJHLvW.png or equivalent on our sites. Such a mess
 of icons is ugly and tacky and awful. It would be worse than nothing.

 Adding social media icons is a lot of pain for very little gain, in my
 opinion. People who want to tweet about an article or post about a page on
 Facebook can (and do!) copy and paste the URL. What problem are we trying
 to solve here? If the answer is that we want to make it painless to submit
 noise into the ether and our users are only capable of clicking a colorful
 icon and using a pre-filled tweet or Facebook post or e-mail body, we
 probably need to re-evaluate what we're trying to accomplish and why.

 MZMcBride



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Numbering table rows

2015-01-01 Thread Gerard Meijssen
Hoi,
That would be useful when the table is static. When the numbers are not
static, it is only useful for one-time usage. Typically, it should be
possible to add numbers for one-time usage.
Thanks,
 GerardM

On 1 January 2015 at 13:02, Bináris wikipo...@gmail.com wrote:

 Hi folks,
 is there any hidden feature or any plan to implement a feature to be able
 to automatically number the rows of a wikitable?

 --
 Bináris
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Numbering table rows

2015-01-01 Thread Gerard Meijssen
Hoi,
The number is to represent what? Or do you only want to know the number
of records presented?
Thanks,
  GerardM

On 1 January 2015 at 13:40, Bináris wikipo...@gmail.com wrote:

 2015-01-01 13:31 GMT+01:00 Gerard Meijssen gerard.meijs...@gmail.com:

  Hoi,
  That would be useful when the table is static. When numbers are not
 static
  it is only useful for one time usage. Typically it should be possible to
  add numbers for one time usage.
 

 I am afraid I don't understand you. Why is it such a big deal to change
 some magic word or symbol to a dynamic row number when rendering the table?
 My thought is that MW has to translate wikitables to HTML, thus it has to
 generate tr tags that are countable. Well, rowspan may cause some
 headache.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Numbering table rows

2015-01-01 Thread Gerard Meijssen
Hoi,
What you describe is static data, not dynamic data. For dynamic data you
will not have the benefits that you seek.
Thanks,
  GerardM

On 1 January 2015 at 14:01, Bináris wikipo...@gmail.com wrote:

 I think numbering table rows is a natural need everywhere in life. Not the
 number of the rows, but the number of each row alone, e.g. a ranking.
 Let's have a look at this:

 https://hu.wikipedia.org/w/index.php?title=Magyarorsz%C3%A1g_legnagyobb_telep%C3%BCl%C3%A9sei_k%C3%B6zigazgat%C3%A1si_ter%C3%BClet_szerint&curid=115741&diff=15476065&oldid=15256125
 Do we really need to do this manually, and if a new row is inserted into
 the middle, manually renumber the whole table from that point again? OK, I
 can do some dirty workarounds in Excel (where generating numbers is a mouse
 motion only) if I need to, and I definitely would in such cases for 114
 rows, but I am more educated in Excel than the average editor. Wouldn't
 it be nice to have a natural wiki-conform way, either a {{#}}-like magic
 symbol in the first cell of each row, or perhaps a new class that generates
 a first column by itself?

 2015-01-01 13:49 GMT+01:00 Gerard Meijssen gerard.meijs...@gmail.com:

  Hoi,
  The number is to represent what ?? Or do you only want to know the number
  of records presented ?
  Thanks,
GerardM
 
  On 1 January 2015 at 13:40, Bináris wikipo...@gmail.com wrote:
 
   2015-01-01 13:31 GMT+01:00 Gerard Meijssen gerard.meijs...@gmail.com
 :
  
Hoi,
That would be useful when the table is static. When numbers are not
   static
it is only useful for one time usage. Typically it should be possible
  to
add numbers for one time usage.
   
  
   I am afraid I don't understand you. Why is it such a big deal to change
   some magic word or symbol to a dynamic row number when rendering the
  table?
   My thought is that MW has to translate wikitables to HTML, thus it has
 to
   generate tr tags that are countable. Well, rowspan may cause some
   headache.
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 



 --
 Bináris
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] search engine integration of wikimedia projects

2014-12-02 Thread Gerard Meijssen
Hoi,
Actually, we already have cross-Wikimedia-project search. We already serve
this to many Wikipedias that I know of. I blogged about it several times, and
it does allow you to find people who have an article in another project.

... it does help us provide access to the other parts of the sum of all
knowledge that is not locally available. So far I find a severe lack of
interest preventing people from implementing this. It is in use, for
instance, on the Tamil Wikipedia among many others... Search for Valerie
Sutton for instance :)
Thanks,
  GerardM

On 2 December 2014 at 23:18, svetlana svetl...@fastmail.com.au wrote:

 I'm calling for developers for this idea:
 https://duck.co/ideas/idea/4655/wikimedia-projects-integration

 Polyglots-
 Please vote for this idea and spread it at your local wiki, and also call
 for developers.

 --
 svetlana

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Phabricator outage due to network issues, 11/29

2014-11-30 Thread Gerard Meijssen
Hoi,
Right? So it is Thanksgiving et al... Be thankful that it is seen. It is
not Wikipedia or any of the projects... so relax, eat some leftover
turkey...
Thanks,
 GerardM

On 30 November 2014 at 09:59, K. Peachey p858sn...@gmail.com wrote:

 ASAP? When it's already hitting approx. five hours of downtime?

 On 30 November 2014 at 18:14, Erik Moeller e...@wikimedia.org wrote:

  As noted in the server admin log [1], Phabricator is currently down due
 to
  a network outage impacting one of our racks in the Ashburn data-center.
  We're investigating and will aim to restore service ASAP.
 
  Erik
 
  [1] https://wikitech.wikimedia.org/wiki/Server_Admin_Log
 
  --
  Erik Möller
  VP of Product & Strategy, Wikimedia Foundation
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Phabricator outage due to network issues, 11/29

2014-11-30 Thread Gerard Meijssen
Hoi,
The argument about non-working hours is problematic. When the only thing
that counts is the working hours of staff in the USA, you may be right. As
it is, Wikimedia Germany has staff working at other times, and they are
affected. The non-professionals are affected as well...

My advice: do not go there. It is broken and it needs fixing.
Thanks,
  GerardM

On 30 November 2014 at 19:02, Brian Wolff bawo...@gmail.com wrote:

 Thanksgiving is only celebrated at this time in the US. Many of us don't
 celebrate it.

 That said, downtime happens, and it's a non-essential service during
 non-working hours. While it may be frustrating, it's not the end of the
 world. If anyone is desperately looking for a bug to fix, they can ask on
 IRC; I'm sure the regulars can think of a hundred bugs off the top of their
 head.

 I don't think Peachey was grumbling so much as looking for an accurate time
 frame for the solution.

 Re svetlana's comment about distributing the db: there are standard ways of
 doing that (e.g. the simplest would be to just use db replication, and switch
 master on failure); I'm not sure if Phab is important enough to warrant
 that. I would probably lean towards no, it isn't, personally. Obviously that
 would be an operations call.

 --bawolff
 On Nov 30, 2014 5:13 AM, Gerard Meijssen gerard.meijs...@gmail.com
 wrote:
 
  Hoi,
  Right ? so it is thanksgiving et al.. Be thankful that it is seen, It is
  not Wikipedia or any of the projects...so relax.. eat some left over
  turkey..
  Thanks,
   GerardM
 
  On 30 November 2014 at 09:59, K. Peachey p858sn...@gmail.com wrote:
 
   ASAP? when it's already hitting approx. five hours of down time?
  
   On 30 November 2014 at 18:14, Erik Moeller e...@wikimedia.org wrote:
  
As noted in the server admin log [1], Phabricator is currently down
 due
   to
a network outage impacting one of our racks in the Ashburn
 data-center.
We're investigating and will aim to restore service ASAP.
   
Erik
   
[1] https://wikitech.wikimedia.org/wiki/Server_Admin_Log
   
--
Erik Möller
VP of Product  Strategy, Wikimedia Foundation
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC for media file request counts, please chime in

2014-11-26 Thread Gerard Meijssen
Hoi,
Pageviews are associated with articles. Articles are associated with items.
Have experiments been done to learn the page views per item?
Thanks,
   GerardM

On 26 November 2014 at 19:06, Erik Zachte ezac...@wikimedia.org wrote:

 Since 2008 Wikimedia collects pageview counts for most pages on nearly all
 wikis. A longstanding request of stakeholders (editors, researchers, GLAM
 advocates) has been to publish similar counts for media files: images,
 sounds, videos. A major obstacle to effectuate this was the existing
 traffic
 data collecting software. Webstatscollector simply couldn't be scaled up
 further without incurring huge costs. In 2014 WMF engineers rolled out a
 new
 Hadoop based infrastructure, which makes it possible to collect raw request
 counts for media files. So a few months after releasing extended pageview
 counts (with mobile/zero added), the time has come to produce similar data
 dumps for media files.




 https://www.mediawiki.org/wiki/Requests_for_comment/Media_file_request_counts



 Please comment onwiki.



 Thanks



 Erik Zachte

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Identifying potential cross-language contributions

2014-11-17 Thread Gerard Meijssen
Hoi,
As you know all articles about the same subject are linked together thanks
to Wikidata. Many articles refer to the same subjects. This concept cloud
includes what was considered important when writing these articles. The
content of this concept cloud can be seen from the Reasonator for any
subject. It is on the right under all the links the article refers to. This
[1] is the concept cloud for Mr Carl Sanders, a former governor of Georgia.

As you know, Reasonator will show you all information about a subject. All
it takes are the labels used for *your* language. It is available now and
it beats waiting for an article that may never come in *your* language.
Adding labels is easy once Widar has been set up; you do it from the
Reasonator itself.
Thanks,
  GerardM

[1] https://tools.wmflabs.org/wikidata-todo/cloudy_concept.php?q=Q888924
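
As an aside, checking which languages already have an article for a given
item is a one-call question against the Wikidata API; a minimal Python
sketch, using the Carl Sanders item from [1]:

import json
import urllib.parse
import urllib.request

# Ask Wikidata which wikis have an article linked to this item.
params = urllib.parse.urlencode({
    'action': 'wbgetentities',
    'ids': 'Q888924',
    'props': 'sitelinks',
    'format': 'json',
})
url = 'https://www.wikidata.org/w/api.php?' + params
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
sitelinks = data['entities']['Q888924']['sitelinks']
print(sorted(sitelinks))  # wikis not in this list are candidates for translation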

On 16 November 2014 23:49, Chas Leichner c...@chas.io wrote:

 I have noticed that there are a lot of pages which are extremely well
 developed in one language and not particularly well developed in other
 languages. I have been thinking about making tools to help identify and
 translate these articles. What tools and approaches have been developed to
 address this problem already? Have there been any projects to this effect?

 Regards,
 Chas
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Announcement: Yuvi Panda joins Ops

2014-11-06 Thread Gerard Meijssen
Hoi,
I am really happy for Yuvi. He will continue to do what he does and will
become even more awesome at it.

At the same time, I find it sad that he is moving to the USA, because he was
invaluable when the other Labs people were not around. As such he provided a
much-needed service exactly because he lives in India.
Thanks,
 Gerard

On 7 November 2014 06:44, Ashish Dubey ashish.dube...@gmail.com wrote:

 Awesome. Congrats Yuvi!

 On Fri, Nov 7, 2014 at 1:14 AM, Derric Atzrott 
 datzr...@alizeepathology.com
  wrote:

  Agreed! Congrats!
 
   Awesome Yuvi. Congratulations! :D
  
   On Nov 6, 2014, at 10:47 PM, Danese Cooper dan...@gmail.com wrote:
  
   Woot!  Way to go, Yuvi!
  
   D
  
   On Thu, Nov 6, 2014 at 4:55 PM, Mark Bergsma m...@wikimedia.org
  wrote:
  
   Hi all,
  
   I'm very pleased to announce that as of this week, Yuvi Panda is part
  of
   the Wikimedia Technical Operations team, to work on our Wikimedia
 Labs
   infrastructure. Yuvi originally joined the Wikimedia Foundation
 Mobile
  team
   in December 2011, where he has been lead development for the original
   Wikipedia App and its rewrite, amongst many other projects.
  
   Besides his work in Mobile, Yuvi has been volunteering for Ops work
 in
   Wikimedia Labs for a long time now. One of the notable examples of
 his
  work
   is a seamlessly integrated Web proxy system that allows public web
  requests
   from the Internet to be proxied to Labs instances on private IPs
  without
   requiring public IP addresses for each instance. This very user
  friendly
   system, which he built on top of NGINX, LUA, redis, sqlite and the
   OpenStack API, sees a lot of usage and has dramatically reduced the
  need
   for Labs users to request (scarce) public IP address resources via a
  manual
   approval process.
  
   Another example of his work that has made a big difference is the
   initiation of the Labs-Vagrant project; bringing the virtues of the
   Mediawiki:Vagrant project to Wikimedia Labs, and allowing anyone to
  bring a
   MediaWiki development environment up in Labs with great ease. More
  recently
   Yuvi has been working on our much needed infrastructure in Labs for
   monitoring metrics (Graphite) and service availability (Shinken). We
  expect
   this will give us a lot more insight into the internals and
  availability of
   software and services running in Wikimedia Labs and its many
 projects,
  and
   we should be able to deploy it in Production as well.
  
   Of course all of this work didn't go unnoticed, and about half a year
  ago
   we've asked Yuvi if he was interested to move to Ops. With his
  extensive
   development experience and his demonstrated ability to join this with
  solid
   Ops work to create stable and highly useful solutions, we think he's
 a
   great fit for this role.
  
   Yuvi recently had his VISA application accepted, and is planning to
  move to
   San Francisco in March 2015. Until then he will be working with us
  remotely
   from India.
  
   Please join me in congratulating Yuvi!
  
   --
   Lead Operations Architect
   Director of Technical Operations
   Wikimedia Foundation
 
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 



 --
 Ashish Dubey
 http://dash1291.github.io
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] WikiGrok deployed

2014-11-03 Thread Gerard Meijssen
Hoi,
I do not get it. It says, it will eventually put this information into
Wikidata ... WHY NOT NOW ?
Thanks,
 GerardM

On 3 November 2014 23:16, Max Semenik maxsem.w...@gmail.com wrote:

 Hi, WikiGrok[0] has been successfully deployed to the English Wikipedia.
 Along with it a first campaign[1] was deployed. Now database is being
 slowly populated with suggestions:

 MariaDB [enwiki_p] select page_title, pp_value from page, page_props where
 pp_page=page_id and pp_propname='wikigrok_questions_v1' limit 100;

 +-------------------------+------------------------------------------------------------------------------------------+
 | page_title              | pp_value                                                                                 |
 +-------------------------+------------------------------------------------------------------------------------------+
 | Richard_Branson         | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Tina_Fey                | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Jon_Stewart             | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Bill_Maher              | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Jeff_Foxworthy          | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Evadne_Price            | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Dominic_Guard           | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Dilsa_Demirbag_Sten     | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | J._Douglas_MacMillan    | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Carol_Bowman            | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Lianella_Carell         | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | G._K._Reddy             | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Liù_Bosisio             | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 | Matilde_Rodríguez_Cabo  | a:1:{s:6:author;a:2:{s:8:property;s:4:P106;s:9:questions;a:1:{s:7:Q482980;s:6:author;}}} |
 +-------------------------+------------------------------------------------------------------------------------------+
 14 rows in set (0.42 sec)

 Pages are getting updated when edited (null edit works, but not
 action=purge). According to estimations made with WikiData Query[2], the
 number of potentially affected pages is approximately 33,000. If really
 needed, we could whip up a script to null-edit these pages from server side
 in a controlled manner, but I would like to have more data on performance
 and memory consumption first.
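
 For anyone decoding these rows outside of MediaWiki, a minimal sketch in PHP.
 The value is the Richard_Branson row above with the string quote marks
 restored (page_props stores standard PHP-serialized data; the quotes appear
 to have been stripped in the mail rendering):

 <?php
 $ppValue = 'a:1:{s:6:"author";a:2:{s:8:"property";s:4:"P106";'
     . 's:9:"questions";a:1:{s:7:"Q482980";s:6:"author";}}}';
 $campaigns = unserialize( $ppValue );
 foreach ( $campaigns as $name => $info ) {
     // e.g. campaign "author": property P106 with candidate answer Q482980
     echo "$name: {$info['property']} => "
         . implode( ', ', array_keys( $info['questions'] ) ) . "\n";
 }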

 == Monitoring ==
 * Graphite: MediaWiki - WikiGrok
 * Exceptions from WikiData: type:mobile in Logstash.

 == Firefighting ==
 Most of the potentially performance-scary/error-causing code can be
 disabled by commenting out $wgWikiGrokSlowCampaigns in
 wmf-config/mobile.php. If shit really hits the fan, the whole extension can be
 disabled through the usual means, with $wmgUseWikiGrok.

 == Next steps ==
 I'm working on DB storage for questions[3], which will allow us to avoid
 abusing page_props and enable features such as find me pages that could use
 this type of fix and find me a random page to fix.


 
 [0] https://www.mediawiki.org/wiki/Extension:WikiGrok
 [1] https://gerrit.wikimedia.org/r/#/c/170453/
 [2] http://wdq.wmflabs.org/
 [3] https://gerrit.wikimedia.org/r/170263

 --
 Best regards,
 Max Semenik ([[User:MaxSem]])
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Proposal: slight change to the XML dump format

2014-10-27 Thread Gerard Meijssen
Hoi,
You may want to wait until the dumps are fixed. Magnus fixed the last-but-one
dump by hand. The following dump is still broken. Wait until we KNOW
the dumps are ok.
Thanks,
 GerardM

On 27 October 2014 21:58, Ariel T. Glenn agl...@wikimedia.org wrote:

 Thank you Google for hiding the start of this thread in my spam folder
 _

 I'm going to have to change my import tools for the new format, but
 that's the way it goes; it's a reasonable change.  Have you checked with
 folks on the xml data dumps list to see who might be affected?

 Ariel


 Στις 23-10-2014, ημέρα Πεμ, και ώρα 09:52 -0500, ο/η Aaron Halfaker
 έγραψε:
  I spend a lot of time processing the XML dumps that this will affect.  I
  just wanted to chime in to say that this change makes sense to me and it
  won't affect my work.
 
  -Aaron
 
  On Thu, Oct 23, 2014 at 9:06 AM, Daniel Kinzler dan...@brightbyte.de
  wrote:
 
   tl;dr:
  
   In the xml dumps, I want to change
    <text> <sha1> <model> <format>
    to
    <model> <format> <text> <sha1>
  
   However, this is a breaking change to our XML schema.
   See https://bugzilla.wikimedia.org/show_bug.cgi?id=72417
  
  
   Background:
  
   While trying to fix bug 72361, I ran into an issue with our current XML
   dump format:
  
    The <model> and <format> tags are placed *after* the <text> tag.
   This means that we don't know how to handle the text when we process
 XML
   events
   in a stream - we'd have to buffer the text, wait until we know model
 and
   format,
   and then process it. A pain.
  
   The current order has no deeper meaning - it is, indeed, my own fault:
 i
   didn't
   think this through when adding these tags. I propose to change the
 order
   of the
   tags now, to make stream processing easier.
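
    For illustration, a minimal sketch of such a streaming consumer in PHP,
    assuming the proposed order (XMLReader reading a hypothetical
    pages-articles.xml; handleText() is just a placeholder for whatever the
    consumer actually does with the text):

    <?php
    $reader = new XMLReader();
    $reader->open( 'pages-articles.xml' );
    $model = $format = null;
    while ( $reader->read() ) {
        if ( $reader->nodeType !== XMLReader::ELEMENT ) {
            continue;
        }
        if ( $reader->name === 'model' ) {
            $model = $reader->readString();
        } elseif ( $reader->name === 'format' ) {
            $format = $reader->readString();
        } elseif ( $reader->name === 'text' ) {
            // With <model> and <format> emitted first, no buffering is
            // needed: both are already known when the text is reached.
            handleText( $model, $format, $reader->readString() );
        }
    }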
  
   That would technically be a breaking change to the dump format,
   incompatible
   with https://www.mediawiki.org/xml/export-0.8.xsd and
 export-0.9.xsd. I
   doubt
   however that any consumers rely on the current placement of model and
   format, as it is extremely inconvenient (compare bug 72361), but you
   never know.
  
   I propose to release a new XSD version 0.10 with the order changed, and
   mention
   it in the release notes. Should be fine.
  
   Any objections?
  
   -- daniel
  
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Fwd: Phabricator

2014-10-19 Thread Gerard Meijssen
Hoi,
I understand that Phabricator is the place to be. My Phabricator user account
thinks I still use an old WMF profile. Can someone please change it to this
email address for me?

As it is I am stuck. I cannot change it.
Thanks,
GerardM
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: Phabricator

2014-10-19 Thread Gerard Meijssen
Hoi,
This is the text I get. Obviously I cannot do this because I do not have
access to that account any more:

You must verify your email address to login. You should have a new email
message from Phabricator with verification instructions in your inbox
(gmeijs...@wikimedia.org).

If you did not receive an email, you can click the button below to try
sending another one.

Thanks,

 GerardM

On 19 October 2014 11:19, Bartosz Dziewoński matma@gmail.com wrote:

 Have you looked at https://phabricator.wikimedia.org/settings/panel/email/
 ? You can add additional email addresses to your account there, and change
 which one is primary.

 --
 Bartosz Dziewoński

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: Phabricator

2014-10-19 Thread Gerard Meijssen
Hoi,
My SUL profile has gerard.meijs...@gmail.com, my bugzilla is based on
gerard.meijs...@gmail.com and I want to KEEP my own bugs.

When I click on Phabricator in Bugzilla it goes haywire because it
expects gmeijs...@wikimedia.org.
Thanks,
GerardM

On 20 October 2014 01:05, Andre Klapper aklap...@wikimedia.org wrote:

 This discussion might better suited for
 https://www.mediawiki.org/wiki/Phabricator/Help than wikitech-l@...


 Hi,

 On Sun, 2014-10-19 at 13:09 +0200, Gerard Meijssen wrote:
  Obviously I cannot do this because I do not have
  access to that account any more 

 Not sure if I understand the problem correctly as I don't know what an
 old WMF profile is in this context.

 Does this refer to claiming your Bugzilla and RT accounts in
 Phabricator, as described in

 https://www.mediawiki.org/wiki/Phabricator/Help#Claiming_your_previous_Bugzilla_and_RT_accounts
 ?
 If yes: Assuming that you still know the password for your Bugzilla
 account, have you considered changing your email address in Bugzilla to
 a valid email address via
 https://bugzilla.wikimedia.org/userprefs.cgi?tab=account ?

 Cheers,
 andre
 --
 Andre Klapper | Wikimedia Bugwrangler
 http://blogs.gnome.org/aklapper/


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] I'm leaving the Wikimedia Foundation

2014-09-13 Thread Gerard Meijssen
Hoi,
Thanks Sumana.. Have a great life and have fun living it. I hope to see you
around :)
GerardM

On 12 September 2014 18:09, Sumana Harihareswara suma...@wikimedia.org
wrote:

 I write this email with regret to let you know that I've decided to leave
 the Wikimedia Foundation after nearly four years working here, and that my
 last day will be 30 September.

 I go into my reasoning and plans in this personal blog post:
 http://www.harihareswara.net/sumana/2014/09/12/0

 I'm grateful for what I've learned here and will take these lessons
 wherever I go. And I've made many friends here; thank you. I'll remain
 [[User:Sumanah]] in any volunteer work I do onwiki. Quim Gil will be the
 person to contact for any loose threads I leave behind; still, if you have
 questions for me, the next two weeks would be a good time to ask them.

 best,
 Sumana Harihareswara
 ​was Volunteer Development Coordinator, then Engineering Community Manager,
 now Senior Technical Writer
 Wikimedia Foundation
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Superprotect user right, Comming to a wiki near you

2014-08-12 Thread Gerard Meijssen
Hoi,
When I watch Jimmy, it is a lot like Wikipedia. There is a lot I like but I
do not like everything. This time was no different. However LIKE Wikipedia
there is so much that I like that I will not abandon it.

The notion that the community is free to choose whatever it wants negates the
technical point that certain innovations are intended to NOT be backwards
compatible. Choices are made all the time that are NOT backwards
compatible; often they do not really affect user land, and then there is
supposed to be no problem. It is just a different public that suffers the
consequences. At Wikimania the Wikidatafication of Commons was mentioned
often and in many contexts. This needs to happen when we want to have
multilingual search and the other goodies that are the point of it all. In my
understanding it is a game changer and it will change things more than what
is being discussed at the moment.

I can see the Media Viewer as a precursor of these changes.

NB several of the proposals re Wikidatafication are imho a bit daft.
However, the need for it is such that I prefer starting off with something
half-baked to having nothing at all.

We disagree on the amount of attention that is given. In my view,
we do not invest in tools that affect readers that much and it only started
fairly recently. It is the editors and even more so the admins / power
users that have the most attention available to them. They often roll their
own tools and consequently are not assured of continued support for their
tools. These more sophisticated users are often mistaken for the community.
It certainly is the most vocal subgroup of our eco-system but it is not
them we aim to serve.

I am not into second guessing what the WMF could or should do. I have
opinions and they typically are about how and where I think we could do
better. So my two pennies worth is that:

   - Reasonator like functionality is our future for much of the
   information we have.
   - Wikipedia can have its sources but Wikidata, certainly for now, cannot
   be relied on to have sources. Confidence is to be had by comparing the
   information it holds with other sources (including individual Wikipedias)
   - When our data is not used, we might be better off not having it.

Thanks,
 GerardM





On 12 August 2014 10:43, Strainu strain...@gmail.com wrote:

 Hi Gerard,

 Some answers (in a random order).

 2014-08-11 12:20 GMT+03:00 Gerard Meijssen gerard.meijs...@gmail.com:
  You know our projects, you know our licenses. If you, the communitydo
 not
  like what you have, you can fork. At Wikimania forking and leaving the
  community was very much discussed. Watch Jimbo's presentation for
 instance,
  he may be aghast that I quote him here but in his state of the Wiki he
 made
  it abundantly clear that it is your option to stay or go.

 I gave up watching Jimbo's keynotes a few years ago, as I would
 invariably get pissed off. So, should we understand that the vast
 amounts of money and resources spent on editor retention are a waste
 of our money? I sincerely hope this is a heat-of-the-moment argument,
 just like the one about closing de.wp.

  Hoi,
  Code review should be a strictly technical process surely. However the
  community CANNOT decide on everything.

 Agreed. How about letting the WMF decide for anonymous users and the
 community decide for logged-in users? Presumably, the logged-in users
 have access to a large panel of options and can make up their own mind
 if they disagree with the consensus. Of course, discussions should not
 disappear because of such a separation, but even become more active
 and hopefully less aggressive.


  When you are in those conversations you realise that many
  complications are considered; it is not easy nor obvious.
  NB there is not one community, there are many with often completely
  diverging opinions. Technically it is not possible to always keep
 backward
  compatibility / functionality. We are not backward we need to stay
  contemporary.

 As a software engineer in a publicly traded company, I understand the
 reasoning behind more than 90% of the decisions made by the
 Engineering staff - and this worries me terribly, because they *don't*
 work for a company. Their objectives and approaches should be
 different.

 There are three main wiki-use-cases that should receive similar levels
 of attention:
 * reading
 * basic editing
 * advanced editing

 The first two receive a lot of love, but the third one not so much,
 it's even hindered by initiatives designed for the first two. I'm not
 saying that we should keep backwards compatibility forever, but since
 the WMF wants to deploy stuff early in order to get feedback, it
 should begin by offering it as a beta (they do that now), then, when
 reaching a decent level of stability, deploy it for anonymous users
 and opt-in users and only when it reaches feature-parity with the
 feature being replaced should it be pushed for everybody (keeping an
 opt-out

Re: [Wikitech-l] Superprotect user right, Comming to a wiki near you

2014-08-11 Thread Gerard Meijssen
Hoi,
Code review should be a strictly technical process surely. However the
community CANNOT decide on everything. The WMF has one codebase for
MediaWiki and it explicitly defines with long lead times the things it is
working on. It does invite the community to be involved in the process.
However, this is where and when the technical future is decided. Several of
the decisions do not have an option for a community to decide they want it
or not when they are confronted with the realised software/future.

You know our projects, you know our licenses. If you, the community, do not
like what you have, you can fork. At Wikimania forking and leaving the
community was very much discussed. Watch Jimbo's presentation for instance,
he may be aghast that I quote him here but in his state of the Wiki he made
it abundantly clear that it is your option to stay or go.
Thanks,
   GerardM


On 11 August 2014 08:55, Strainu strain...@gmail.com wrote:

 2014-08-10 17:00 GMT+03:00 Michał Łazowik mlazo...@me.com:
  Wiadomość napisana przez Nicolas Vervelle nverve...@gmail.com w dniu
 10 sie 2014, o godz. 15:45:
 
  I hope it's not an other step from WMF to prevent the application of
  community decisions when they not agree with it. I fear that they will
 use
  this to bypass community decisions. For example like forcing again VE on
  everyone on enwki: last year, sysop were able to apply community
 decision
  against Erik wishes only because they had access to site wide js or CSS.
 
  I'd like to believe that code-reviewing would mean improving code
 quality, security
  and performance (applies to javascript).

 I would like to believe that too. More likely though, without a
 serious comittment from the Foundation (which I heard nothing about),
 that would mean waiting for months, possibly years for reviews on
 small wikis.

 Also, code review should be a strictly technical process and should
 not be used to overrule the community's decisions.

 Strainu

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Superprotect user right, Comming to a wiki near you

2014-08-11 Thread Gerard Meijssen
Hoi,
This is simply not true. At Wikimania there were several sessions where the
topic was what will the technical underpinnings be of our software. It was
also discussed to some extent what kind of effect things may have on a
community. When you are in those conversations you realise that many
complications are considered; it is not easy nor obvious.
Thanks,
GerardM

NB there is not one community, there are many with often completely
diverging opinions. Technically it is not possible to always keep backward
compatibility / functionality. We are not backward we need to stay
contemporary.


On 11 August 2014 10:36, svetlana svetl...@fastmail.com.au wrote:

 On Mon, 11 Aug 2014, at 19:20, Gerard Meijssen wrote:
  Hoi,
  Code review should be a strictly technical process surely. However the
  community CANNOT decide on everything. The WMF has one codebase for
  MediaWiki and it explicitly defines with long lead times the things it is
  working on. It does invite the community to be involved in the process.

 It does not. It first designs a software and implements it, and then
 /tweaks/ it. Community should be asked what it'd like much, much earlier.
 Community participation in product design is currently impossible.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] (no subject)

2014-07-10 Thread Gerard Meijssen
Hoi,
I use GMAIL and I see it plenty fine.
Thanks,
GerardM


On 10 July 2014 11:51, Bartosz Dziewoński matma@gmail.com wrote:

 You should resend this email  from a different address and with a subject,
 GMail  thinks it's spam, so almost no one will ever see it.

 -- Matma Rex


 2014-07-10 8:47 GMT+02:00 Thomas Mulhall thomasmulhall...@yahoo.com:

   Hi, we are upgrading jquery cookie from an early alpha version of 1.1 to
   1.2. Please start upgrading your code to be compatible with jquery cookie
   1.2. There is just one deprecation to notice: $.cookie('foo', null) is now
   deprecated; replace it with $.removeCookie('foo') for deleting a cookie. We
   are slowly upgrading to version 1.4.1, but one step at a time, because it
   is a major change and removes a lot of things.
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] (no subject)

2014-07-10 Thread Gerard Meijssen
Hoi,
no filters for me..
Thanks,
 GerardM


On 10 July 2014 13:39, Tyler Romeo tylerro...@gmail.com wrote:

 On Thu, Jul 10, 2014 at 5:59 AM, Gerard Meijssen 
 gerard.meijs...@gmail.com
 wrote:

  I use GMAIL and I see it plenty fine.


 I also use Gmail, and it says the only reason it wasn't sent to spam was
 because I have a filter sending all wikitech emails to my inbox.

 *-- *
 *Tyler Romeo*
 Stevens Institute of Technology, Class of 2016
 Major in Computer Science
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Finding images

2014-06-18 Thread Gerard Meijssen
Hoi,
As long as our categories are English, they are useless for all of those
who do not speak English. Even so, as long as the current technology is
used for those categories it is a trial to find images at all. Many people
have given up.
Thanks,
GerardM


On 18 June 2014 09:12, Kristian Kankainen krist...@eki.ee wrote:

 Hello!

 I think, if one is clever enough, some categorization could be automated
 already.

 Searching for pictures based on meta-data is called Concept Based Image
 Retrieval, searching based on the machine vision recognized content of the
 image is called Content Based Image Retrieval.

 What I understood of Lars' request, is an automated way of finding the
 superfluous concepts or meta-data for pictures based on their content. Of
 course recognizing an image's content is very hard (and subjective), but I
 think it would be possible for many of these superfluous categories, such
 as winter landscape, summer beach and perhaps also red flowers and
 bicycle.

 There exist today many open source Content Based Image Retrieval
 systems, that I understand basically works in the way that you give them a
 picture, and they find you the matching pictures accompanied with a
 score. Now suppose we show a picture with known content (pictures from
 Commons with good meta-data), then we could to a degree of trust find
 pictures with overlapping categories.
 I am not sure whether this kind of automated reverse meta-data labelling
 should be done for only one category per time, or if some kind of category
 bundles work better. Probably adjectives and items should be compounded
 (eg red flowers).

 Relevant articles and links from Wikipedia:
 # https://en.wikipedia.org/wiki/Image_retrieval
 # https://en.wikipedia.org/wiki/Content-based_image_retrieval
 # https://en.wikipedia.org/wiki/List_of_CBIR_engines#CBIR_
 research_projects.2Fdemos.2Fopen_source_projects

 Best wishes
 Kristian Kankainen

 18.06.2014 09:14, Pine W kirjutas:

  Machine vision is definitely getting better with time. We have
 computer-driven airplanes, computer-driven cars, and computer-driven
 spacecraft. The computers need us less and less as hardware and software
 improve. I think it may be less than a decade before machine vision is
 good
 enough to categorize most objects in photographs.

 Pine
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Finding images

2014-06-18 Thread Gerard Meijssen
Hoi,
The perfect Wikidata heaven solution will be as imperfect as the perfect
Wikipedia heaven solution. Let's stay down to earth and go for something
that works most of the time and does not take forever to realise.
Thanks,
GerardM


On 18 June 2014 10:40, Federico Leva (Nemo) nemow...@gmail.com wrote:

 Kristian, it's not impossible to find a mentee for such an idea. Please
 edit: https://www.mediawiki.org/wiki/Mentorship_programs/
 Possible_projects#Category_suggestions .

 While we wait for The Perfect Wikidata Heaven Solution (TM), it's worth
 experimenting with.

 Nemo


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Finding images

2014-06-18 Thread Gerard Meijssen
Hoi,
My understanding of English is adequate. I find it however really hard to
find the pictures I seek. I have given up using Commons for illustrations
on my blog for instance.
Thanks,
GerardM


On 18 June 2014 12:25, Kristian Kankainen krist...@eki.ee wrote:

 Could not the categories' language links be useful here? Otherwise
 BabelNet[1] has set up different ways to connect concepts in different
 languages into a semantic network. They call it a multilingual encyclopedic
 dictionary and compile it by combining data from the Wikipedia(s) and
 WordNet. It's quite clever but still easy.
 This is still English-centric -- as in having English in the centre of a
 hub-and-spoke modelled dictionary -- but it does make it _translatable_,
 which I think is enough for this feature.

 Kristian Kankainen

 [1] http://babelnet.org/

 18.06.2014 11:52, Gerard Meijssen kirjutas:

  Hoi,
 As long as our categories are English, they are useless for all of those
 who do not speak English. Even so, as long as the current technology is
 used for those categories it is a trial to find images at all. Many people
 have given up.
 Thanks,
  GerardM


 On 18 June 2014 09:12, Kristian Kankainen krist...@eki.ee wrote:

  Hello!

 I think, if one is clever enough, some categorization could be automated
 allready.

 Searching for pictures based on meta-data is called Concept Based Image
 Retrieval, searching based on the machine vision recognized content of
 the
 image is called Content Based Image Retrieval.

 What I understood of Lars' request, is an automated way of finding the
 superfluous concepts or meta-data for pictures based on their content.
 Of
 course recognizing an images content is very hard (and subjective), but I
 think it would be possible for many of these superfluous categories,
 such
 as winter landscape, summer beach and perhaps also red flowers and
 bicycle.

 There exist today many open source Content Based Image Retrieval
 systems, that I understand basically works in the way that you give them
 a
 picture, and they find you the matching pictures accompanied with a
 score. Now suppose we show a picture with known content (pictures from
 Commons with good meta-data), then we could to a degree of trust find
 pictures with overlapping categories.
 I am not sure whether this kind of automated reverse meta-data labelling
 should be done for only one category per time, or if some kind of
 category
 bundles work better. Probably adjectives and items should be compounded
 (eg red flowers).

 Relevant articles and links from Wikipedia:
 # https://en.wikipedia.org/wiki/Image_retrieval
 # https://en.wikipedia.org/wiki/Content-based_image_retrieval
 # https://en.wikipedia.org/wiki/List_of_CBIR_engines#CBIR_
 research_projects.2Fdemos.2Fopen_source_projects

 Best wishes
 Kristian Kankainen

 18.06.2014 09:14, Pine W kirjutas:

   Machine vision is definitely getting better with time. We have

 computer-driven airplanes, computer-driven cars, and computer-driven
 spacecraft. The computers need us less and less as hardware and software
 improve. I think it may be less than a decade before machine vision is
 good
 enough to categorize most objects in photographs.

 Pine
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


  ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

  ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Finding images

2014-06-17 Thread Gerard Meijssen
Hoi,
Now, that ONLY indicates that stock agencies have a similar problem to
Commons; it does not help finding images or indicate a path we could take
to improve things.

When images gain tags as part of the Wikidatification of multimedia files we
at least have a way to add multilingual support, and that does improve on
what we have today.
Thanks,
 GerardM


On 18 June 2014 03:46, Tim Starling tstarl...@wikimedia.org wrote:

 On 18/06/14 11:13, Lars Aronsson wrote:
  Why is it still, now in 2014, so hard to find images?
  We have categories and descriptions, but we also know
  they don't describe all that we want to find in an
  image. If I need an image with a bicycle and some red
  flowers, I can only go to the category:bicycles and
  hope that I'm lucky when browsing through the first
  700 images there. Most likely, the category will be
  subdivided by country or in some other useless way
  that will make my search harder.
 
  Where is science? Google was created in 1998, based
  on its Pagerank algorithm for web pages filled with
  words and links. That was 14 years ago. But what
  algorithms are there for finding images?

 How do the commercial stock agencies do it? They have a much more
 similar problem to Commons than Google does.

 -- Tim Starling


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-06 Thread Gerard Meijssen
Hoi,
While some may think it perfectly ok to be contrary and argue vehemently to
keep old hat technology operational for them and everyone around them, I
wonder if they consider cost and consequences.
* Cost to maintain duplicate and increasingly feature incompatible
functionality
* Cost to the users who are appalled by the archaic software they have to
suffer
* Cost of the increased difficulty of recruiting new contributors.

When I read what is said, it gives me a really bad impression of what
community means. It is a priori not positive at all when you argue for a
blanket switch  to disable new features.

I find it impossible to argue to our donors that the additional money spent
on keeping ancient software alive is well spent.
Thanks,
  Gerard


On 6 June 2014 01:24, Juergen Fenn schneeschme...@googlemail.com wrote:

 2014-06-06 0:16 GMT+02:00 Danny Horn dh...@wikimedia.org:
  The Flow team is going to work in a few weeks on automatically archiving
  talk pages, so that we can enable Flow on pages where there are already
  existing conversations. Basically, this means moving the old discussions
 on
  an archive page, and leaving a link for See archived talk page visible
 on
  the new Flow board.
 
  That means there'll be a minute where a currently active discussion would
  get interrupted, and have to be restarted on the new Flow board. That
 will
  be a pain, but it would only be a one-time inconvenience during that
  transition moment.

 Interesting point. Thanks for keeping us up to date. You might like to
 know, though, that on German Wikipedia most discussions about Flow
 seem to focus on how to turn it off or how to keep it out of the
 project altogether. Switching to Flow would require a community
 consensus anyway. So could you please consider a global switch for
 communities that would rather like to disable these new features
 completely.

 Regards,
 Jürgen.

  PS. We have never enabled the LiquidThreads extension either. And the
 new Beta link up right did not stay there for long.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Accessing current template information on wiki commons description page

2014-06-03 Thread Gerard Meijssen
Hoi,
The good news is that work on the Wikidatafication of multimedia files
has started. You just provided an excellent example of why it is so needed.

One drawback is that once it is done, your current work will need to be
revisited.
Thanks,
 GerardM


On 3 June 2014 07:59, james harvey jamespharve...@gmail.com wrote:

 Given a Wikimedia Commons description page URL - such as:

 https://commons.wikimedia.org/wiki/File:Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg

 I would like to be able to programmatically retrieve the information in the
 Summary header.  (Values for Artist, Title, Date, Medium,
 Dimensions, Current location, etc.)

 I believe all this information is in Template:Artwork.  I can't figure
 out how to get the wikitext/json-looking template data.

 If I use the API and call:

 https://commons.wikimedia.org/w/api.php?action=queryformat=xmltitles=File:Van%20Gogh%20-%20Starry%20Night%20-%20Google%20Art%20Project.jpgiilimit=maxiiprop=timestamp|user|comment|url|size|mimeprop=imageinfo|revisionsrvgeneratexml=rvprop=ids|timestamp|user|comment|content

 Then I don't get the information I'm looking for.  This shows the most
 recent revision, and its changes.  Unless the most recent revision changed
 this data, it doesn't show up.

 To see all the information I'm looking for, it seems I'd have to specify
 rvlimit=max and go through all the past revisions to figure out which is
 most current.  For example, if I do so and I look at revid 79665032, that
 includes: {{Artwork | Artist = {{Creator:Vincent van Gogh}} | . . . | Year
 = 1889 | Technique = {{Oil on canvas}} | . . .

 Isn't there a way to get the current version in whatever format you'd call
 that - the wikitext/json looking format?

 In my API call, I can specify rvexpandtemplates which even with only the
 most recent revision gives me the information I need, but it's largely in
 HTML tables/divs/etc format rather than wikitext/json/xml/etc.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Output from Zurich Hackathon - yet another maps extension!

2014-05-16 Thread Gerard Meijssen
Hoi,
For your information there are several things Wikidata can already do.

   - When for instance a district is associated with a shape, multiple
   shapes could be known and dated by Wikidata.
   - It could know of the existence of maps and when a map is defined in a
   way that allows for queries, typically the four corners of a map allow for
   the calculation of a radius
   - we can query maps to find out if a specific coordinate and
   consequently item is in that map.
   - Finally, there are several examples of the results of queries shown on
   a map. Reasonator has a button that shows a map for every item that is in a
   15km range of the item involved. Colossus has built a tool that shows classes
   on a map based on Wikidata info.

Thanks,
  GerardM

[1]
http://tools.wmflabs.org/wikidata-todo/around.html?lat=47.3786lon=8.54
[2] http://tools.wmflabs.org/wp-world/wikidata/superclasses.php?lang=en


On 16 May 2014 00:21, Andrew Green agr...@wikimedia.org wrote:

 Folks, this looks really fantastic, way to go!!

 I'd really love to contribute to this, as time allows, BTW.


  * Data (one or more sets of data, can be geojson, topojson, tsv, csv,
 layers from OSM [3], OHM [4], etc.)

 snip /

  * Datasets in WikiData using an alternative data model

 I'm especially interested in what could be done with Wikidata queries, and
 hooking those into a Wiki's metadata. I think this has tons of potential
 for analyzing contributions to Wikipedia in a collaborative way. A bit a la
 Wikimetrics but with many more options for pulling in and hooking up data
 of various sorts, on-wiki.

 It would also be great to hook this into activity feeds for sets of users
 or articles, for example, so you could go back and forth between a stream
 of edits and aggregate data about them, or maybe even overlay a graph on a
 stream of edits along a timeline.

 Finally, it would be nice to be able to define data transformations and
 visualizations in a modular way. For example, if one user has an
 interesting data set or Wikidata query that pulls in information on events
 related to a certain topic, and another user has defined a nice way of
 visualizing events on a timeline, a third might be able to use the first
 user's data with the second user's visualization definition, or even bring
 in data on a different set of events, and display both sets in a single
 visualization, etc., etc. Mmm, just a thought...


  It's a word that means to illuminate: Limn [5].

 3) It's a word that is difficult to translate.

 How about Munge? As in, it munges data so you can view it in different
 ways? That's easy to translate! Rhymes with grunge...

 Cheers,
 Andrew


 On 15/05/14 15:59, Dan Andreescu wrote:

 By the way, the Vega work is deployed to this wiki:

 http://analytics-wiki.wmflabs.org/mediawiki/Main_Page#Visualization

 Jon/DJ/Aude, if you want I can add a more generic proxy name for that wiki
 and put your stuff on there too?  It's in the analytics project in labs
 but
 I'm happy to give anyone rights to it.


 On Thu, May 15, 2014 at 1:35 PM, Dan Andreescu dandree...@wikimedia.org
 wrote:

  I like Visual:, any +1s? +2s?

 The only downside might be a slight conflict with VisualEditor


 On Thu, May 15, 2014 at 11:20 AM, dan-nl dan.entous.wikime...@gmail.com
 wrote:

  PictureIt:
 Envision:
 Imagine:

 On May 15, 2014, at 17:06 , Jon Robson jdlrob...@gmail.com wrote:

  Visual: ?
 On 14 May 2014 10:44, Derk-Jan Hartman d.j.hartman+wmf...@gmail.com
 
 wrote:

  PS, i'm building an instance that is running this extension.

 On Wed, May 14, 2014 at 12:34 AM, Jon Robson jrob...@wikimedia.org
 wrote:

 During the Zurich hackathon, DJ Hartman, Aude and I knocked up a
 generic maps prototype extension [1]. We have noticed that many maps
 like extensions keep popping up and believed it was time we
 standardised on one that all these extensions could use so we share
 data better.

 We took a look at all the existing use cases and tried to imagine
 what
 such an extension would look like that wouldn't be too tied into a
 specific use case.

 The extension we came up with was a map extension that introduces a
 Map namespace where data for the map is stored in raw GeoJSON and can
 be edited via a JavaScript map editor interface. It also allows the
 inclusion of maps in wiki articles via a map template.

 Dan Andreescu also created a similar visualisation namespace which
 may
 want to be folded into this as a map could be seen as a
 visualisation.
 I invite Dan to comment on this with further details :-)!

 I'd be interested in people's thoughts around this extension. In
 particular I'd be interested in the answer to the question For my
 usecase A what would the WikiMaps extension have to support for me to
 use it.

 Thanks for your involvement in this discussion. Let's finally get a
 maps extension up on a wikimedia box!
 Jon

 [1] https://github.com/jdlrobson/WikiMaps



 ___
 Wikitech-l 

Re: [Wikitech-l] VE in 1.23? (was: Mozilla's Wiki Working Group meeting Tues 1600 UTC)

2014-05-05 Thread Gerard Meijssen
Hoi,
This does pique my interest: what makes en.wp so special?
Thanks,
 GerardM


On 6 May 2014 00:11, David Gerard dger...@gmail.com wrote:

 On 5 May 2014 23:08, James Forrester jforres...@wikimedia.org wrote:
  On 5 May 2014 15:05, David Gerard dger...@gmail.com wrote:

  So ... how is 1.23 and Visual Editor?
  * Has anyone sat down and written out how to add VE magic to a 1.23
  tarball install?
  I do not believe so, no.


 If someone could do that, I hereby promise to beta-test their instructions!

 (Personally, I think VE is pretty much ready for all users except
 English Wikipedia.)


 - d.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Text-to-speech extension?

2014-04-29 Thread Gerard Meijssen
Hoi,
Are screen readers supported? And do they understand what on a page is cruft,
the kind of cruft that you do not want to get read out
to you?
Thanks,
 GerardM


On 29 April 2014 08:03, Benjamin Lees emufarm...@gmail.com wrote:

 On Mon, Apr 28, 2014 at 8:49 PM, Ilias Koumoundouros 
 ilias.k...@freemail.gr
  wrote:

  So I was wondering whether there is a text-to-speech extension for
  Mediawiki
 
 
 What advantages would this offer over a screen reader?
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] need help for downloadin Urdu data from Wikipedia

2014-04-24 Thread Gerard Meijssen
Hoi,
While everybody can start with sources, it is very much duplicated and
wasted effort. The suggestion to use Kiwix is excellent. Please do hack on
things where it makes a difference. Why bother starting from scratch when the
results are available in a nicely formatted way?
Thanks,
  GerardM


On 24 April 2014 09:16, sb_na...@yahoo.in sb_na...@yahoo.in wrote:

 Plz use the kiwix software to obtain urdu data of urdu wikipedia.

 Regards,
 Shuaib

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] API attribute ID for querying wikipedia pages

2014-04-24 Thread Gerard Meijssen
Hoi,
I totally agree that you should be able to do this. However, would it not
make more sense to get structured information from Wikidata?
Thanks,
GerardM


On 24 April 2014 14:24, Daan Kuijsten daankuijs...@gmail.com wrote:


 On 23-Apr-14 21:29, wikitech-l-requ...@lists.wikimedia.org wrote:

 Re: API attribute ID for querying wikipedia pages


 @Matma Rex: This is way too general; I think it would be a lot better if
 this were more detailed. For example, when I want to fetch a table with
 all currencies on https://en.wikipedia.org/wiki/
 List_of_circulating_currencies, I would make an API call like this:
 https://en.wikipedia.org/w/api.php?action=parsepage=
 List%20of%20circulating%20currenciesprop=sectionsformat=jsonfm. This
 returns 5 sections with numbers which I can use as reference points, but
 I would rather have a number for the table in the section. A section can
 have multiple tables.

 Querying specific (structured) data from Wikipedia is still very difficult
 in my opinion. My suggestion is that every paragraph, image, link and table
 get a unique identifiable number. This way Wikipedia gets more machine
 readable.


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] VisualEditor template editing interface

2014-04-22 Thread Gerard Meijssen
Hoi,
Is there a way to link such templates easily to Wikidata?
Thanks,
   GerardM
Op 22 apr. 2014 17:51 schreef Trevor Parscal tpars...@wikimedia.org:

 We are introducing recommended fields which will be automatically added
 when the template is added. We already support required fields which are
 automatically added, but we are adding recommended fields so that required
 can be reserved for actually required fields. In time, we expect required
 will be enforced as well, but are still researching the impact of that.

 - Trevor


 On Tue, Apr 22, 2014 at 8:31 AM, Yury Katkov katkov.ju...@gmail.com
 wrote:

  Hi everyone!
 
  I'm using VisualEditor and TemplateData to insert template calls in
  the wiki page. My users find it prerrty hard to understand the Add a
  template Dialog: you should first find a template, then add all its
  fields to the call and only after that place some value in each of the
  fields. Is it possible to automatically add all the fields to the
  template call once the template is selected?
 
  Cheers,
  -
  Yury Katkov
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Free fonts and Windows users

2014-04-09 Thread Gerard Meijssen
Hoi,
Given that you do have statistics, how did all the hoo-ha around the ULS
performance issue and now all this hit the use of the functionality for
dyslexic people?

We know that it helps but how do we help people with dyslexia? They are
only 7 to 10% of a population.. I have not done the arithmetic but would
that be more than 7 to 10% of all our readers? (what is the breakdown in
fonts for our total public)
Thanks,
  GerardM


On 10 April 2014 02:45, Brian Wolff bawo...@gmail.com wrote:

  And has any testing been done with screen readers and dyslexia readers to
 

 I would be extremely surprised if this interacted negatively with screen
 readers. Screen readers normally don't even look at the font choice.

 --bawolff
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] page title

2014-04-05 Thread Gerard Meijssen
Hoi
Did you try Wikidata for this ? It works really well...

http://tools.wmflabs.org/reasonator/?find=John+Doe shows you the result for
a search for Sonoma.. This is not an exact match but the point is that
Wikidata CAN have multiple instances of the same name. With sufficient
attention to distinguishing statements on each of the items shown, the
description helps with disambiguation in practically any language.
Thanks,
 GerardM


On 5 April 2014 11:36, Mattia Rial m.r...@hotmail.it wrote:

 I think it would be useful an alternative way to label pages with the same
 title, without using brackets, with smart disambiguation pages.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reducing the number of GeoData coordinates stored

2014-04-02 Thread Gerard Meijssen
Hoi Max,
Is it possible to compare your index with the items in Wikidata? There are
several things that I would like to do
* add all coordinates to Wikidata items
* report on all coordinates where what you know differs from Wikidata
A next obvious question is how can we keep the two data sets in sync..
Thanks,
 GerardM


On 2 April 2014 11:39, Max Semenik maxsem.w...@gmail.com wrote:

 Hi, I'd like a sanity check on my plans to reduce the GeoData index. I
 plan to:

 1) Not store coordinates from pages outside of content namespaces:
 main and file.
 2) Not store secondary coordinates that don't specify the coordinate
 name. There is currently a bunch of secondary coordinates that make no
 sense because there's no way to tell them apart or tell what they are
 for.

 I hope that these measures will make our database of geographical
 coordinates more useful on average, but please tell me if there's a
 fatal flaw in my plans:)

 --
 Best regards,
   Max Semenik ([[User:MaxSem]])


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Sharing mathematical and music notations across wikis

2014-03-21 Thread Gerard Meijssen
Hoi
I do not understand what you expect of Wikidata...
Thanks
   GerardM
Op 20 mrt. 2014 19:25 schreef Eugene Zelenko eugene.zele...@gmail.com:

 Hi!

 I think it would be a good idea to introduce support for files in TeX and
 ABC/Lilypond (Score extension) formats, so such files could be hosted
 on Commons.

 This will simplify maintenance of formulas and music across projects
 as well as allow to refer to mathematical and music notations from
 Wikidata.

 Eugene.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Webfonts

2014-03-14 Thread Gerard Meijssen
Hoi,
You do not understand what the existing implementation of Webfonts is
about. It is about not serving tofu. You know, those pesky boxes that
represent a character. When tofu is served, no characters can be seen and
consequently no information can be read.

Please ask any community if this is what they want.
Thanks,
  GerardM


On 14 March 2014 14:36, praveenp me.prav...@gmail.com wrote:


 On Thursday 13 March 2014 08:38 PM, Niklas Laxström wrote:

 There have recently been questions whether WMF is able to serve
 webfonts. Some people think that because of the issues that led to
 disabling webfonts by default in Universal Language Selector (ULS),
 WMF is not ready to consider webfonts for typography.

 I don't think that way. ULS is not a good comparison point because of
 the following.

 1) Universal Language selector is trying to solve a much harder issue
 than what webfonts are usually used for. It is trying to avoid tofu
 (missing fonts) which brings a whole list of issues which are not
 present or are much smaller otherwise:
 * large fonts for complex scripts,
 * detecting which fonts are missing,
 * many fonts per page,
 * the systems with greatest need of fonts often have bad renderers.

 2) WMF has a lot of experience working with web fonts by now. We know
 how to handle different formats, how to optimally compress fonts and
 how to check the results in different systems and browsers. In some
 areas we are even ahead of Google, like non-latin fonts.


 I wonder why people want to serve webfonts by default, even though most
 Wikipedia users don't need it. Webfonts is a nice option, and it must be an
 option, so that if any user or language needs it, they can choose it.
 Overruling a user's / community's settings without asking their opinion is
 against any freedom WMF represents.

 There are various other 'real' issues which are affecting contribution
 and rendering of Wikimedia wikis in non-latin languages.


  Thus, I think that delivering a relative small fonts for simple
 scripts like latin and cyrillic is something that is possible *if* we
 are willing to accept that it will take some bandwidth and that page
 load experience can be affected* if the font is not cached or present
 locally.

-Niklas

 * The unwanted effects of using webfonts are getting smaller and
 smaller with modern browsers.


 And requirement of webfonts is also getting smaller and smaller with
 modern OSs. Even today's mobile OSs have better language support than that
 of old desktop OSs.

 Praveen

  ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Multimedia team architecture update

2014-03-08 Thread Gerard Meijssen
Hoi,
Your assumption is that the metadata from Wikidata will be essentially the
same. It is not.

Because the metadata will be essentially different, I ask for attention. At
this moment in time all the metadata is essentially English. This will no
longer be the case with Wikidata. The data that is provided can be in any
language. In future there will no longer be a guarantee that there will be
information in English. Also, with Wikidata, the one suggestion that has
the most promise for helping FIND media files is that they will be tagged.
This will probably make categories less relevant.

It is quite funny in my opinion. You assume that business will be as usual
and consequently show no interest in how Wikidata can be a game changer. At
this time Commons is the repository where everyone can contribute. People
have a hard time finding anything. How do you find a picture from Batavia
from 1930 / 1940 ? How do you search for this in Dutch ? The Wikidata
treatment of Commons makes it possible to find them.

As metadata like GPS info will be associated with a subject, a photo with GPS
information may be the source of providing GPS information to subjects. As
yet another picture of the Borobodur arrives at Commons, the GPS info may
suggest that the label in Zulu is (however Borobodur is written in Zulu);
this enables the suggestion to add the label for Borobodur in Zulu.

When people are looking at pictures, we can ask them to add the missing
labels in their language.

Wikidata will be a game changer for the usability of Commons. It enables
FINDING media files. At this time I do not even bother; it takes too much
effort. IMHO all this is exactly what a plan about an architecture update
should be about.
Thanks,
   GerardM


On 9 March 2014 05:49, Gergo Tisza gti...@wikimedia.org wrote:

 On Fri, Mar 7, 2014 at 3:27 AM, Gerard Meijssen
 gerard.meijs...@gmail.comwrote:

  When Commons gets the Wikidata treatment, almost everything that has to
 do
  with meta data will gain wikidata statements on the Wikidata item that
  reflects a media file (sound photo movie no matter). When items refer to
  Creators or Institutions they will refer to Wikidata proper.
 

 MultimediaViewer is concerned with displaying images in fullscreen and
 showing associated metadata; the first is not affected by Wikidata at all,
 the second only inasmuch as metadata coming from Wikidata will be
 essentially different from metadata coming from other sources. As far as I
 can see, it won't.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Usability testing

2014-03-07 Thread Gerard Meijssen
Hoi,
Steven do I understand correctly that there is no user testing except in
English ?
Thanks,
 GerardM


On 7 March 2014 00:47, Steven Walling steven.wall...@gmail.com wrote:

 From: David Gerard dger...@gmail.com
 Date: Thu, Mar 6, 2014 at 9:44 PM
 Subject: Re: [Wikitech-l] Should MediaWiki CSS prefer non-free fonts?
 To: Wikimedia developers wikitech-l@lists.wikimedia.org

 (Veering off topic: So what does WMF use for a usability lab, anyway?)


 ...not sure what Kaldari did. In this case, he may have simply sat down
 with the UX designers and done a test in person.

 We do not have a usability testing lab on-site in San Francisco, and
 typically prefer to do remote usability tests. One option is to do this manually,
 by sending out a survey[1] and then running a Google Hangout which we
 record for later. This is good since it is guided by the person who wrote
 the test script, so they can adapt to what the user is doing/failing to do.
 It takes a lot more leg work though.

 More often, we write a testing script and use usertesting.com, which is
 more automated remote testing and is $35/test (this is really cheap since
 the going US rate for an in-person test is something like a $50 Amazon gift
 card). The service uses people from all over the English-speaking world who
 have a variety of levels of technical expertise, and the tests are recorded
 for viewing after they're completed.[2]

 The UX team is actually in the process of hiring a UX researcher, so expect
 to hear more about this kind of qualitative research soon.

 Steven

 1. We recently did this kind of recruitment and testing for article drafts
 work. https://www.mediawiki.org/wiki/Draft_namespace/Usability_testing and
 the /Results subpage
 2. This kind of testing is something we used during the account creation
 redesign

 https://www.mediawiki.org/wiki/Account_creation_user_experience/User_testing



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Multimedia team architecture update

2014-03-07 Thread Gerard Meijssen
Hoi,

On the Wikidata roadmap it says that Commons will be targeted for inclusion
in the second half of 2014. This will have a big impact on Commons.
Consequently it will have a big impact on the things that you are
discussing. Chances are that much of what you come up with now will be
 obsolete in a few months' time or, even worse, make the development of the
inclusion of Wikidata into Commons even harder.

I find it odd that Wikidata is not mentioned at all in this overview.
Thanks,
  GerardM


On 7 March 2014 04:06, Gergo Tisza gti...@wikimedia.org wrote:

 Hi all,

 the multimedia team [1] had a chat about some architectural issues with
 MultimediaViewer [2] today, and Robla has pointed out that we should
 publish such discussions on wikitech-l to make sure we do no reinvent to
 many wheels, so here goes. Comments pointing out all the obvious solutions
 we have missed are very welcome.

 == Use of OOjs-UI ==

 Mark experimentally implemented a panel showing share/embed links [3][4] in
 OOjs-UI [5]; we discussed some concerns about that. We want to avoid having
 to roll our own UI toolkit, and also don't want to introduce yet another
 third-party toolkit as the list of libraries loaded by some Wikipedia
 extension or other is already too long, and the look-and-feel already too
 fractured, so we would be happy to free-ride on another team's efforts :)
 On the other hand, OOjs-UI is still under development, does not have tests
 and has very little documentation, and the browser support is less than we
 would prefer. We could contribute back in those areas, but it would slow us
 down significantly.

 In the end we felt that's not something we should decide on our own as it
 involves redefining the goals of the team somewhat (it was a developer-only
 meeting), so we left it open. (The next MultimediaViewer features which
 would depend heavily on a UI toolkit are a few months away, so it is not an
 urgent decision for us.)

 == The state of unit tests ==

 MultimediaViewer used to have a messy codebase; we felt at the time that it
 is better to have ugly tests than no tests, so we ended up with some large
 and convoluted tests which are hard to maintain. Since then we did a lot of
 refactoring but kept most of the tests, so now we have some tests which are
 harder to understand and more effort to maintain than the code they are
 testing. Also, we fixed failing tests after the refactorings, but did not
 check the working ones, so we cannot be sure they are still testing what they
 are supposed to test.

 We discussed these issues, and decided that writing the tests was still a
 good decision at the time, but once we are done with the major code
 refactorings, we should take some time to refactor the tests as well. Many
 of our current tests test the implementation of a class; we should replace
 them with ones that test the specification.
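
 (A toy illustration of the difference, using QUnit and a made-up helper, so
 the names below are not from the actual codebase: a spec-style test only
 asserts on the documented input/output contract, so refactoring the helper's
 internals cannot break it.)

 // Hypothetical helper under test.
 function formatDimensions( width, height ) {
     return width + ' x ' + height + ' px';
 }

 QUnit.test( 'formatDimensions describes an image size', function ( assert ) {
     // Spec-style: check the visible result, not how it was produced.
     assert.strictEqual( formatDimensions( 640, 480 ), '640 x 480 px' );
 } );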

 == Plugin architecture ==

 We had plans to create some sort of plugin system so that gadgets can
 extend the functionality of MultimediaViewer [6]; we discussed whether that
 should be an open model where it is possible to alter the behavior of any
 component (think Symfony2 or Firefox) and plugins are not limited in their
 functionality, or a closed model where there are a limited number of
 junction points where gadgets can influence behavior (think MediaWiki hook
 system, just much more limited).

 The open model seems more in line with Wikimedia philosophy, and might
 actually be easier to implement (most of it is just good architecture like
 services or dependency injection which would make sense even if we did not
 want plugins); on the other hand it would mean a lot of gadgets break every
 time we change things, and some possibly do even if we don't. Also, many
 the community seems to have much lower tolerance for breakage in
 WMF-maintained tools breaking than in community-maintained tools, and most
 people probably wouldn't make the distinction between MultimediaViewer
 breaking because it is buggy and it breaking because a gadget interacting
 with it is buggy, so giving plugins enough freedom to break it might be
 inviting conflict. Some sort of hook system (with try-catch blocks, strict
 validation etc) would be much more stable, and it would probably require
 less technical expertise to use, but it could prevent many potential uses,
 and forces us to make more assumptions about what kind of plugins people
 would write.

 Decision: go with the closed model; reach out for potential plugin writers
 and collect requirements; do not guess, only add plugin functionality where
 it is actually requested by someone. In general try not to spend too much
 effort on it, having a useful plugin system by the time MultimediaViewer is
 deployed publicly is probably too ambitious a goal.
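
 (For the sake of discussion, a minimal sketch of what such a closed hook
 system could look like; the hook names and functions here are invented for
 illustration and are not an actual MultimediaViewer API. The point is the
 fixed list of junction points and the try-catch around each handler, so a
 faulty gadget cannot take the viewer down with it.)

 var hooks = { 'metadata-loaded': [], 'image-opened': [] }; // fixed junction points

 function addHook( name, handler ) {
     if ( hooks[ name ] ) {
         hooks[ name ].push( handler );
     }
 }

 function runHook( name, data ) {
     ( hooks[ name ] || [] ).forEach( function ( handler ) {
         try {
             handler( data );
         } catch ( e ) {
             // A broken gadget only loses its own hook call.
         }
     } );
 }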


 [1] https://www.mediawiki.org/wiki/Multimedia
 [2] https://www.mediawiki.org/wiki/Extension:MultimediaViewer
 [3]
 https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/147
 [4]
 

Re: [Wikitech-l] Multimedia team architecture update

2014-03-07 Thread Gerard Meijssen
Hoi,

When Commons gets the Wikidata treatment, almost everything that has to do
with metadata will gain Wikidata statements on the Wikidata item that
reflects a media file (sound, photo or movie, it does not matter). When items refer to
Creators or Institutions they will refer to Wikidata proper.

This will replace much if not most of how information is stored about media
files.

Pretty much everything will be impacted, and that is what I expect to
be reflected in considerations now, because the alternative is that much of
it will need to be revisited on a massive scale.
Thanks,
  GerardM


On 7 March 2014 12:19, Andre Klapper aklap...@wikimedia.org wrote:

 On Fri, 2014-03-07 at 11:50 +0100, Gerard Meijssen wrote:
  On the Wikidata roadmap it says that Commons will be targeted for
 inclusion
  in the second half of 2014. This will have a big impact on Commons.
  Consequently it will have a big impact on the things that you are
  discussing. Chances are that much of what you come up with now will be
  obsolete in a few months time or even worse make the development of the
  inclusion of Wikidata into Commons even harder.
 
  I find it odd that Wikidata is not mentioned at all in this overview.

 Please elaborate where / in which specific areas you would have expected
 to see Wikidata being mentioned in Gergo's overview, as I cannot
 interpret the consequently in your statement yet.

 andre
 --
 Andre Klapper | Wikimedia Bugwrangler
 http://blogs.gnome.org/aklapper/


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Should MediaWiki CSS prefer non-free fonts?

2014-03-04 Thread Gerard Meijssen
Hoi,
Given that you only researched Latin fonts, did you consider the coverage
of these fonts for other languages? Not all fonts are created equal; they
do not necessarily cover all the characters that are needed for specific
languages.

Consequently, you cannot apply your work to other languages without
considering this as well.
Thanks,
GerardM


On 3 March 2014 23:33, Ryan Kaldari rkald...@wikimedia.org wrote:

 On Mon, Mar 3, 2014 at 2:19 PM, Brian Wolff bawo...@gmail.com wrote:

  On Mar 3, 2014 4:58 PM, Ryan Kaldari rkald...@wikimedia.org wrote:
  
   Yes, we only looked at Latin fonts. We are working on an FAQ, however,
  that
   explains how wikis that use non-Latin scripts can specify their own
 font
   stack using MediaWiki:Vector.css.
  
   Ryan Kaldari
  
 
  I dont think users should be responsible for that sort of thing. Could we
  do per-language css and only do this on en/other known good languages if
 we
  know that its going to be a bad choice for some languages?
 

 There are only a couple of languages in which we know this stack will be a
 bad choice (so far, Navajo and Vietnamese). In most languages that use
 non-Latin scripts, the font-family declaration will have little or no
 effect, but the readability will be improved by increasing the leading and
 font-size. Specifically the existing small leading is often a problem for
 Indic scripts and the existing small font-size is often a problem for
 logographic scripts.

 Ryan Kaldari
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Adding new property value in WIKIDATA

2014-03-02 Thread Gerard Meijssen
Hoi,
The process is cumbersome; you have to request the creation of a new
property in Wikidata itself. It is anybody's guess how long it will take
for a positive response. Even a negative response can take a long time to
arrive.
Thanks,
 GerardM


On 2 March 2014 10:15, Anjali Sharma sharma.anjali3...@gmail.com wrote:

 How to add a new property in Wikidata ?
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] The place of language support in MediaWiki

2014-02-17 Thread Gerard Meijssen
Hoi,

There has been a lot of controversy about language support. Some of the
arguments used indicate a lack of understanding of what it is that language
support does. In my opinion language support is a primary requirement of MediaWiki.
MediaWiki supports languages really well.

In a blog post [1] I have written what language support is about and, I
have indicated where Wikimedia engineers and developers are involved.

What I hope to achieve is some clarity on this subject. When you find
anything to add or have comments I really want to know.

One final consideration, English is not even 50% [2] of our page views.
Thanks,
 GerardM

[1]
http://ultimategerardm.blogspot.nl/2014/02/the-place-of-language-support-in.html
[2] http://stats.wikimedia.org/EN/TablesPageViewsMonthlyCombined.htm
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Design] Should MediaWiki CSS prefer non-free fonts?

2014-02-16 Thread Gerard Meijssen
Hoi,
You say we have failed miserably. Many fonts are mentioned and they are
all for the Latin script, yet you fail to mention
the Open Dyslexic font.

I know from personal experience that Open Dyslexic makes a difference in
being able to read Wikipedia [1]. We know that many people who want to
write in their own language need web fonts and input methods to do this in
the internet cafes where they write their articles.

This proves the point that webfonts does what it is there for; provide an
ability where there is none.

This whole hoo-ha of providing font support for the Latin script is stupid
unless a font does NOT support the characters needed to show a particular
language, and YES, most fonts are incomplete when they are to support all of
the Latin script.

Working towards a more beautiful viewing experience is a secondary
objective. Primary is that our readers and editors can read and edit.

ULS is a huge success in doing what it was intended to do. I am afraid that
we have lost sight of what our primary objective is about.
Thanks,
  GerardM


[1]
http://ultimategerardm.blogspot.nl/2013/12/the-best-sinterklaas-gift-ever.html


On 16 February 2014 09:13, Steven Walling swall...@wikimedia.org wrote:

 On Sat, Feb 15, 2014 at 11:52 PM, Denis Jacquerye moy...@gmail.com
 wrote:

  Maybe I haven't looked in the right place, but why aren’t webfonts being
  considered?
 
  Webfonts would mean the same fonts can be delivered everywhere, relying
 on
  installed font only as a last resort.
  There are more options than just the 4 fonts mentioned (DejaVu Serif,
  Nimbus Roman No9 L, Linux Libertine, Liberation Sans): PT Sans/PT Serif,
  Droid Sans/Droid Serif and likes (Open Sans, Noto), the other Liberation
  fonts and likes (Arimo, Tinos), Source Sans, Roboto, Ubuntu, Clear Sans,
 if
  you just want hinted fonts and household names.
 
  I’ll also point out that Georgia is a great font originally designed for
  small size, and Helvetica Neue/Helvetica/Arial was originally designed
 for
  display. When it comes to language coverage both are lacking but that
  cannot be fixed easily.
 

 To add on to what Jared said...

 On webfonts: it's not just that it would take more research. We have
 already tried webfonts and failed miserably so far.
 UniversalLanguageSelector is an example of how even the most
 well-intentioned efforts in this area can face serious setbacks. Keep in
 mind also that this typography work is largely being done with volunteer or
 side project time from myself, the developers, and most of the designers.
 We are simply not prepared to implement and test a webfonts system to work
 at Wikipedia scale.

 There are many gorgeous, well-localized free fonts out there... but few
 that meet our design goals are delivered universally in popular mobile and
 desktop operating systems. We can't get a consistent and more readable
 experience without delivering those as webfonts, and webfonts are not
 practically an option open to us right now. Maybe in the future we will get
 (as Jared says) a foundry to donate a custom free font for us, or maybe
 we'll just use a gorgeous free font out there now, like Open Baskerville or
 Open Sans.

 For now, however, we get the following result from the Typography Refresh
 beta feature:

 1. the vast majority of our 500 million or more users get a more
readable experience
2. we unify the typography across mobile and desktop devices, which is a
good thing for both Wikimedia and third party users of
 Vector/MobileFrontEnd
3. individual users and individual wikis can still change their CSS as
needed and desired
4. we don't jeopardize Vector and MediaWiki's status as FOSS, by not
distributing nor creating a dependency on any proprietary software
*whatsoever*. Thank you, CSS font-family property and fallbacks.

 That all sounds like a pretty good way to maintain freedom while improving
 readability and consistency to me.

 --
 Steven Walling,
 Product Manager
 https://wikimediafoundation.org/
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Design] Should MediaWiki CSS prefer non-free fonts?

2014-02-16 Thread Gerard Meijssen
Hoi,
Loading the Autonym stack was a solution to a much worse problem. When it
is still a problem, it can easily be disabled because having the Autonym
font is not essential. It is there to make things look good.

Having the OpenDyslexic font is essential.

Having the fonts for Hindi, Divehi, Tamil, Amharic is essential.

OpenDyslexic is easily the most used WebFont. It has the potential to serve
7% of a population.

When you indicate that the feelings are still high, you have to appreciate
that no recent change led to the disabling of primary functionality.
There may have been performance issues, but they were there before. The
argument was not made that in order to save our infrastructure ULS had to
be disabled. The argument that was made was that we want to improve the
performance of our site.

I do agree that this is important. It is not as important as providing the
ability to read and edit. I do agree that delivering web fonts is not
trivial. However, the non-technical arguments have been trivialised.
Thanks,
   GerardM




On 16 February 2014 10:48, Steven Walling steven.wall...@gmail.com wrote:

 On Sun, Feb 16, 2014 at 1:16 AM, David Gerard dger...@gmail.com wrote:

  On 16 February 2014 08:54, Gerard Meijssen gerard.meijs...@gmail.com
  wrote:
 
   Working towards a more beautiful viewing experience is a secondary
   objective. Primary is that our readers and editors can read and edit.
   ULS is a huge success in doing what it was intended to do. I am afraid
  that
   we have lost sight of what our primary objective is about.
 
 
  Indeed. What precisely was the problem with ULS?


 From https://www.mediawiki.org/wiki/Universal_Language_Selector:
 Universal
 Language Selector has been disabled on 21-01-2014 to work out some
 performance issues that had affected the Wikimedia sites. To my
 understanding part of the major performance issues here related to issues
 like loading the Autonym font via webfonts.

 I probably should not have brought up ULS because feelings are still raw
 about it and I'm not interested in rehashing its problems, but my point is
 that it's an example of how delivering webfonts is not a trivial thing for
 us. No one has offered to spend time on a highly performant webfonts system
 that can deliver better typography reliably to all Wikimedia sites, and
 we're certainly not going to officially task a team to do so when there's a
 reasonable alternative that thousands of users are trying out right now in
 beta mode.


  What consideration
  did the designers give to non-Latin?


 The beta feature has involved lots of testing in non-Latin scripts. It's
 not perfect yet but we certainly haven't ignored scripts that represent so
 many users. (Remember we're not talking about something actually that new.
 A very similar font stack has been in use for 100% of mobile users for more
 than a year.)

 Steven

 P.S. Sorry for answering from a different account. My work address is not
 subscribed to Wikitech.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Design] Should MediaWiki CSS prefer non-free fonts?

2014-02-16 Thread Gerard Meijssen
Hoi,
That may seem like a reasonable argument; however, the functionality that
ULS provides is related. What is the point of providing input methods when
chances are that you cannot read what you are about to write? What is the
point when you cannot select the language you want to use this font or input
method for?

Before ULS, in the bad old times, there was a need for both the font and
the input method. Really, we are much better off with the ULS.
Thanks,
  Gerard

PS honest mistake I take it.


On 16 February 2014 11:35, K. Peachey p858sn...@gmail.com wrote:

 On 16 February 2014 18:54, Gerard Meijssen gerard.meijs...@gmail.com
 wrote:

  ULS is a huge success in doing what it was intended to do. I am afraid
 that
  we have lost sight of what our primary objective is about.
  Thanks,
GerardM


 TBH we probably lost most of that when everything was rolled into one
 gigantic extensions, instead of separate tools that specialised in what
 they were designed for.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Dynamic search results as category pages

2014-02-01 Thread Gerard Meijssen
Hoi,

With the autolist [1] and Widar you can make claims for the entries in a
category. So you can first claim occupation philosopher and then make
other claims about such philosophers ... Mind you, claiming the Greek
nationality for people who spoke Greek is problematic when these people
were living before the creation of Greece. Possible claims are that they
spoke Greek ...
Thanks,
 GerardM

[1] http://tools.wmflabs.org/wikidata-todo/autolist.html#



On 24 January 2014 10:53, Yuri Astrakhan yastrak...@wikimedia.org wrote:

 Hi, I am thinking of implementing a

 #CATQUERY query

 magic keyword for the category pages.

 When this keyword is present, the category page would execute a query
 against the search backend instead of normal category behavior and show
 result as if those pages were actually marked with this category.

 For example, this would allow Greek Philosophers category page to be
 quickly redefined as
 a cross-section of greeks  philosophers categories:

 #CATQUERY incategory:Greek incategory:Philosopher

 Obviously the community will be able to define much more elaborate queries,
 including the ordering (will be supported by the new search backend)
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Dynamic search results as category pages

2014-01-31 Thread Gerard Meijssen
Hoi,
In Reasonator we now support calendars [1]. It shows you things like
people who were born or died on that date and events under way on that date.

Play with it and notice that longer periods are also supported. Some longer
periods may take some time, but we have a waiting bar for you. It
supports any of the languages that we support. It uses Wikidata information,
so if your favourite person or event is not mentioned, just add the details
in Wikidata.
Thanks,
   GerardM

PS three cheers for Magnus


[1]


On 30 January 2014 19:50, Chad innocentkil...@gmail.com wrote:

 On Sun, Jan 26, 2014 at 1:10 AM, Lydia Pintscher 
 lydia.pintsc...@wikimedia.de wrote:

  On Fri, Jan 24, 2014 at 10:53 AM, Yuri Astrakhan
  yastrak...@wikimedia.org wrote:
   Hi, I am thinking of implementing a
  
   #CATQUERY query
  
   magic keyword for the category pages.
  
   When this keyword is present, the category page would execute a query
   against the search backend instead of normal category behavior and show
   result as if those pages were actually marked with this category.
  
   For example, this would allow Greek Philosophers category page to be
   quickly redefined as
   a cross-section of greeks  philosophers categories:
  
   #CATQUERY incategory:Greek incategory:Philosopher
  
   Obviously the community will be able to define much more elaborate
  queries,
   including the ordering (will be supported by the new search backend)
 
  In the future this will be possible using Wikidata and queries. My
  dream is that categories will to a very large extend go away then and
  be replaced with better Wikidata-based tools. I'd _really_ like us not
  to introduce more incentives to use categories like this one. Also
  it'd be quite a source for confusion to have both.
 
 
 Indeed. I think it would be very shortsighted to introduce any sort of new
 syntax here for this right now as proposed.

 There's lots of exciting query opportunities coming with Wikidata as Lydia
 says (which will very likely end up using Elasticsearch in some manner).

 On Sun, Jan 26, 2014 at 10:21 AM, Brian Wolff bawo...@gmail.com wrote:
 
  Category intersection proposals seem to appear on this list about once
  every 2 years as far as i remember. Usually they dont succede because
  nobody ends up writing any efficient code to implement it.
 

 Yep, we've been here before. Because category intersection isn't easy with
 the schema we've got. However, it's incredibly trivial to do in
 Elasticsearch.
 So I imagine if we invest more effort in helping Wikidata expand its syntax
 and uses we'll likely end up with what Yuri's wanting--if maybe in a
 different
 form.

 I'll also note we've already solved category intersection in Cirrus ;-)


 https://en.wikipedia.org/w/index.php?search=incategory%3A%221941+births%22+incategory%3A%221995+deaths%22&title=Special%3ASearch
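
 (The same intersection is available to scripts through the search API; a
 small sketch assuming the mediawiki.api module is loaded:)

 new mw.Api().get( {
     action: 'query',
     list: 'search',
     srsearch: 'incategory:"1941 births" incategory:"1995 deaths"'
 } ).done( function ( data ) {
     data.query.search.forEach( function ( page ) {
         console.log( page.title );
     } );
 } );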


 -Chad
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] new PDF Renderer - mw-ocg-bundler issue

2014-01-27 Thread Gerard Meijssen
Hoi,
Just a question, the WMF has invested a lot of effort in producing fonts
for many of the languages it supports. These fonts are better than the
fonts used in packages like Ubuntu.

Is there a problem in using the WMF supported font ? It certainly makes the
inconsistency in Ubuntu moot.
Thanks,
 GerardM


On 28 January 2014 00:16, Marco Fleckinger marco.fleckin...@wikipedia.atwrote:



 C. Scott Ananian canan...@wikimedia.org wrote:
 I don't happen to know the Fedora packages, but I'd be glad to add
 them to the README when you get a set which works.
 
 may you also add it to the article:

 https://mediawiki.org/wiki/PDF_rendering/Installation/en

 Unfortunately, the hard parts are identifying all the font packages,
 as package names for non-latin fonts seem to be wildly inconsistent
 even between different Ubuntu releases. :(

 Hm interesting. Concerning TeX everything went quite well for me on Debian,
 although Matthew shared Ubuntu's package names. Is this relevant to the
 article as well?

 Marco
 --
 Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail
 gesendet.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Dynamic search results as category pages

2014-01-26 Thread Gerard Meijssen
Hoi,
Did you consider the use of Wikidata instead of SMW? It should not only
indicate all the articles that have the appropriate statements but also the
articles written in other languages.
Thanks,
 GerardM


On 26 January 2014 03:25, Yuri Astrakhan yastrak...@wikimedia.org wrote:

 Of course - I am actually modeling this feature on Semantic MW. It's just
 that I doubt WMF will enable SMW on the cluster any time soon :)


 On Sat, Jan 25, 2014 at 4:13 PM, Jamie Thingelstad ja...@thingelstad.com
 wrote:

  You could do something exactly like this very, very easily using Semantic
  MediaWiki and Concepts.
  Jamie Thingelstad
  ja...@thingelstad.com
  mobile: 612-810-3699
  find me on AIM Twitter Facebook LinkedIn
 
  On Jan 24, 2014, at 9:55 AM, Brian Wolff bawo...@gmail.com wrote:
 
   On Jan 24, 2014 1:54 AM, Yuri Astrakhan yastrak...@wikimedia.org
  wrote:
  
   Hi, I am thinking of implementing a
  
   #CATQUERY query
  
   magic keyword for the category pages.
  
   When this keyword is present, the category page would execute a query
   against the search backend instead of normal category behavior and
 show
   result as if those pages were actually marked with this category.
  
   For example, this would allow Greek Philosophers category page to be
   quickly redefined as
   a cross-section of greeks  philosophers categories:
  
   #CATQUERY incategory:Greek incategory:Philosopher
  
   Obviously the community will be able to define much more elaborate
   queries,
   including the ordering (will be supported by the new search backend)
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  
   I like the idea in principle, but think the syntax could use
  bikeshedding ;)
  
   If we use this as category pages, im a little worried that people could
  get
   confused and try to add [[category:Greek philosophers]] to a page, and
   expect it to work. We would need good error handling in that situation
  
   including the ordering (will be supported by the new search backend)
  
   Cool. I didn't realize search would support this. That's a pretty big deal
   since people expect their categories alphabetized.
  
   Another cool project would be to expand intersection/Dyanamic Page List
   (Wikimedia) to be able to use search as a different backend (however,
  that
   extension would need quite a bit of refactoring to get there)
  
   -bawolff
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Status of the new PDF Renderer

2014-01-25 Thread Gerard Meijssen
Hoi Liangent,

Does this [1] answer your question? It is a page they use for testing.
Thanks,
Gerard

[1] http://zh.wikipedia.org/wiki/納粹德國海軍


On 25 January 2014 08:56, Liangent liang...@gmail.com wrote:

 I didn't look at the new renderer carefully, but I guess it's a
 Parsoid-based one. Hope that the language conversion syntax issue in PDF
 output can be resolved together with Parsoid in the future, which blocks
 the deployment of PDF output on zhwiki currently. See
 https://bugzilla.wikimedia.org/show_bug.cgi?id=34919 .

 -Liangent


 On Fri, Jan 24, 2014 at 2:38 AM, Matthew Walker mwal...@wikimedia.org
 wrote:

  Marco,
 
  Is it also possible to set this up behind a firewall?
 
  Yes; with the caveat that your wiki must be running Parsoid. It is also
  theoretically possible to still use Print on Demand services behind a
  firewall as we can POST a zip bundle to them -- likely however you'd just
  disable that functionality and I'm not sure our new bundle format is
  entirely compatible with the old bundle format...
 
  If you want to set this up locally; I can help with that if you jump on
 IRC
  #mediawiki-pdfhack on freenode. I'm mwalker.
 
  ~Matt Walker
  Wikimedia Foundation
  Fundraising Technology Team
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikimedia-l] Reasonator use in Wikipedias

2014-01-22 Thread Gerard Meijssen
Hoi,

The mail I sent is meant to be a warning in advance. If you are interested
in the Reasonator, it is in continuous development and information is
provided on an almost daily basis. When you have read it, you may
understand the potential it has. It will help you understand why it can
have a place as a stand in for an article in a Wikipedia and also why it
can beat the quality of information of most stubs.

There are many ways to skin a cat. The most obvious one is to add a
{{Reasonator}} template as a place holder in a Wikipedia. Another is to
capture a not found or a red link and insert Reasonator info. What I am
trying to do is to give a sense of direction. I am not indicating how it
will be done for sure.

When the English Wikipedia community makes a decision, it is what the
English Wikipedia community thinks best for itself. No problem in that. It
would only become a problem when it is inferred to be a decision for every
Wikipedia community.

The Media Viewer is very similar to the situation at hand with Wikidata
and Reasonator. Wikidata data can be used on every Wikipedia and to some
extent this is done on many if not most Wikipedias (including en.wp). Like
with media, it can be confusing that the information is actually not on
that local project.
necessarily interested in the policies that are dreamt up locally. The
alternative is NOT having central storage of images or NOT having central
data storage. Both are not really an option.

I like the fact that you come up with some suggestions; however, your
proposal does not consider disambiguation. At this stage we are improving
the information that is provided by Reasonator; the latest iteration has
de-cluttered complicated pages like the one for Shakespeare a lot while
adding to the information that is made available.

Quality information is provided by the Reasonator, the biggest problem I
see is that we do not have info-boxes of high quality available when an
article is being written.
Thanks,
   GerardM


On 21 January 2014 20:07, Ryan Kaldari rkald...@wikimedia.org wrote:

 Can you explain how such a {{Reasonator}} template would actually work. You
 say that it would be a stand-in until the article was actually written, but
 how would it know when the article is actually written? Is there a way to
 access the target article's state via Lua?

 From a community perspective, linking to external sites from body content
 is normally frowned upon (on en.wiki at least), even if the link is to a
 sister project. There are two main reasons for this:
 1. It discourages the creation of new articles via redlinks
 2. It can be confusing for readers to be sent to other sites while surfing
 Wikipedia content. (This is one of reasons why the WMF Multimedia team has
 been developing the Media Viewer.)

 My suggestion would be to leave the redlinks intact, but to provide a
 pop-up when hovering over the redlinks (similar to Navigation pop-ups (
 https://en.wikipedia.org/wiki/Wikipedia:Tools/Navigation_popups)). This
 pop-up could provide a small set of core data (via an ajax request) and
 also a link to the full Reasonator page. I would probably implement this as
 a gadget first and do a few design iterations based on user-feedback before
 proposing it as something for readers.
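
 (A bare-bones sketch of that gadget idea, just to make it concrete; it is
 illustrative only. It assumes red links carry the usual new class and a
 ?title= edit URL, needs the mediawiki.util module, and queries
 wbsearchentities on Wikidata; a real gadget would render a proper pop-up
 with a Reasonator link instead of a plain tooltip.)

 $( 'a.new' ).on( 'mouseenter', function () {
     var link = this,
         title = mw.util.getParamValue( 'title', link.href );
     if ( !title ) {
         return;
     }
     // JSONP request to the Wikidata API; callback=? makes jQuery use JSONP.
     $.getJSON( '//www.wikidata.org/w/api.php?callback=?', {
         action: 'wbsearchentities',
         search: title.replace( /_/g, ' ' ),
         language: mw.config.get( 'wgUserLanguage' ),
         format: 'json'
     } ).done( function ( data ) {
         if ( data.search && data.search.length ) {
             // Use the description as a plain tooltip for this sketch.
             link.title = data.search[ 0 ].description || data.search[ 0 ].label;
         }
     } );
 } );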

 Ryan Kaldari


 On Tue, Jan 21, 2014 at 10:02 AM, Magnus Manske 
 magnusman...@googlemail.com
  wrote:

  On a technical note, Reasonator is pure JavaScript, so should be easily
  portable, even to a Wikipedia:Reasonator.js page (or several pages, with
  support JS).
 
  git here:
  https://bitbucket.org/magnusmanske/reasonator
 
 
  On Tue, Jan 21, 2014 at 5:13 PM, Ryan Lane rlan...@gmail.com wrote:
 
   On Tue, Jan 21, 2014 at 7:17 AM, Gerard Meijssen
   gerard.meijs...@gmail.comwrote:
  
Hoi,
   
At this moment Wikipedia red links provide no information
 whatsoever.
This is not cool.
   
In Wikidata we often have labels for the missing (=red link)
 articles.
  We
can and do provide information from Wikidata in a reasonable way that
  is
informative in the Reasonator. We also provide additional search
information on many Wikipedias.
   
In the Reasonator we have now implemented red lines [1]. They
  indicate
when a label does not exist in the primary language that is in use.
   
What we are considering is creating a template {{Reasonator}} that
 will
present information based on what is available in Wikidata. Such a
   template
would be a stand in until an article is actually written. What we
 would
provide is information that is presented in the same way as we
 provide
  it
as this moment in time [2]
   
This may open up a box of worms; Reasonator is NOT using any caching.
   There
may be lots of other reasons why you might think this proposal is
 evil.
   All
the evil that is technical has some merit but, you have to consider
  that
the other side

[Wikitech-l] Reasonator use in Wikipedias

2014-01-21 Thread Gerard Meijssen
Hoi,

At this moment Wikipedia red links provide no information whatsoever.
This is not cool.

In Wikidata we often have labels for the missing (=red link) articles. We
can and do provide information from Wikidata in a reasonable way that is
informative in the Reasonator. We also provide additional search
information on many Wikipedias.

In the Reasonator we have now implemented red lines [1]. They indicate
when a label does not exist in the primary language that is in use.

What we are considering is creating a template {{Reasonator}} that will
present information based on what is available in Wikidata. Such a template
would be a stand in until an article is actually written. What we would
provide is information that is presented in the same way as we provide it
as this moment in time [2]

This may open up a box of worms; Reasonator is NOT using any caching. There
may be lots of other reasons why you might think this proposal is evil. All
the evil that is technical has some merit but, you have to consider that
the other side of the equation is that we are not sharing in the sum of
all knowledge even when we have much of the missing requested information
available to us.

One saving (technical) grace: Reasonator loads round about as quickly as
Wikidata does.

As this is advance warning, I hope that you can help with the issues that
will come about. I hope that you will consider the impact this will have on
our traffic and measure to what extent it grows our data.

The Reasonator pages will not show up prettily on mobile phones; neither does
Wikidata, by the way. It does not consider Wikipedia Zero. There may be more
issues that may require attention. But again, it beats not serving the
information that we have to those that are requesting it.
Thanks,
  GerardM


[1]
http://ultimategerardm.blogspot.nl/2014/01/reasonator-is-red-lining-your-data.html
[2] http://tools.wmflabs.org/reasonator/test/?lang=ocq=35610
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Status of the new PDF Renderer

2014-01-18 Thread Gerard Meijssen
Hoi,
I have a few questions

   - do you support other scripts used by languages like Malayalam (ml),
   Persian (fa), Chinese (zh) and Russian (ru)?
   - when you do, do you have examples for these languages?
   - are the messages not localised, or are they also not internationalised?
   - are support for other scripts and proper internationalisation and
   localisation blockers for deployment?

Thanks,
  GerardM


On 18 January 2014 03:42, Matthew Walker mwal...@wikimedia.org wrote:

 All,

 We've just finished our second sprint on the new PDF renderer. A
 significant chunk of renderer development time this cycle was on non latin
 script support, as well as puppetization and packaging for deployment. We
 have a work in progress pipeline up and running in labs which I encourage
 everyone to go try and break. You can use the following featured articles
 just to see what our current output is:
 * http://ocg-collection-alpha.wmflabs.org/index.php/Alexis_Bachelot
 *
 http://ocg-collection-alpha.wmflabs.org/index.php/Atlantis:_The_Lost_Empire

 Some other articles imported on that test wiki:
 * http://ur1.ca/gg0bw

 Please note that some of these will fail due to known issues noted below.

 You can render any page in the new renderer by clicking the sidebar link
 Download as WMF PDF; if you Download as PDF you'll be using the old
 renderer (useful for comparison.) Additionally, you can create full books
 via Special:Book -- our renderer is RDF to Latex (PDF) and the old
 renderer is e-book (PDF). You can also try out the RDF to Text (TXT)
 renderer, but that's not on the critical path. As of right now we do not
 have a bugzilla project entry so reply to this email, or email me directly
 -- we'll need one of: the name of the page, the name of the collection, or
 the collection_id parameter from the URL to debug.

 There are some code bits that we know are still missing that we will have
 to address in the coming weeks or in another sprint.
 * Attribution for images and text. The APIs are done, but we still need
 to massage that information into the document.
 * Message translation -- right now all internal messages are in English
 which is not so helpful to non English speakers.
 * Things using the cite tag and the Cite extension are not currently
 supported (meaning you won't get nice references.)
 * Tables may not render at all, or may break the renderer.
 * Caching needs to be greatly improved.

 Looking longer term into deployment on wiki, my plans right now are to get
 this into beta labs for general testing and connect test.wikipedia.org up
 to our QA hardware for load testing. The major blocker there is acceptance
 of the Node.JS 0.10, and TexLive 2012 packages into reprap, our internal
 aptitude package source. This is not quite as easy as it sounds, we already
 use TexLive 2009 in production for the Math extension and we must apply
 thorough tests to ensure we do not introduce any regressions when we update
 to the 2012 package. I'm not sure what actual dates for those migrations /
 testing will be because it greatly depends on when Ops has time. In the
 meantime, our existing PDF cluster based on mwlib will continue to serve
 our offline needs. Once our solution is deployed and tested, mwlib
 (pdf[1-3]) will be retired here at the WMF and print on demand services
 will be provided directly by PediaPress servers.

 For the technically curious; we're approximately following the parsoid
 deployment model -- using trebuchet to push out a source repository
 (services/ocg-collection) that has the configuration and node dependencies
 built on tin along with git submodules containing the actual service code.

 It may not look like it on the surface, but we've come a long way and it
 wouldn't have been possible without the (probably exasperated) help from
 Jeff Green, Faidon, and Ori. Also big thanks to Brad and Max for their
 work, and Gabriel for some head thunking. C. Scott and I are not quite off
 the hook yet, as indicated by the list above, but hopefully soon enough
 we'll be enjoying the cake and cookies from another new product launch.
 (And yes, even if you're remote if I promised you cookies as bribes I'll
 ship them to you :p)

 ~Matt Walker
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Dynamic unit conversion

2014-01-17 Thread Gerard Meijssen
Hoi,
The best place for such an effort would be Wikidata. There are many units
that could do with some TLC. Particularly with the arrival of values in
Wikidata it is expedient to do this.

One other area where attention would be relevant is dates. Did you know
that some calendars (that are still in use) have cycles of sixty years?
Thanks,
 Gerard


On 17 January 2014 09:59, Petr Bena benap...@gmail.com wrote:

 there could be even some cool javascript based toolbar that would
 allow people to switch the unit, or see unit hint when they roll over
 the text

 On Fri, Jan 17, 2014 at 9:58 AM, Petr Bena benap...@gmail.com wrote:
  .Hi,
 
  There are many articles on wikipedia that contain different units.
  Some use cm, that are common in europe, other use inches that are more
  widely used in US, same with other unit types.
 
  I think it would be cool if an extension was created which would allow
  everyone to specify what units they prefer, and the values in articles
  would be converted automatically based on preference.
 
  For example you would say the object has width of {{unit|cm=20}} and
  people who prefer cm would see 20 cm in article text, but people who
  prefer inches would see 7.87 inch. This could be even based on
  geolocation for IP users
 
  What do you think?
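
  (A quick sketch of how the reader-facing side of this could work as a
  gadget, with invented markup; it assumes values are emitted with a data
  attribute, e.g. <span class="convertible" data-cm="20">20 cm</span>, and
  that the preferred unit would come from a user preference or geolocation.)

  var preferInches = true; // stand-in for the real preference lookup
  $( 'span.convertible[data-cm]' ).each( function () {
      var cm = parseFloat( this.getAttribute( 'data-cm' ) );
      if ( preferInches && !isNaN( cm ) ) {
          $( this ).text( ( cm / 2.54 ).toFixed( 2 ) + ' in' );
      }
  } );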

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] A Multimedia Vision for 2016

2014-01-10 Thread Gerard Meijssen
Hoi,
Fabrice, I very much love the two stories described in the vision. It
describes not only a functionality that is technical, it also describes how
our community may interact. That is great.

What I missed are the consequences of the planned integration of Commons
with Wikidata. I blogged about it [1] and I suggest three more stories that
could be told because they are enabled by this integration. What I do not
fully understand is how the community aspects will integrate in an
environment that will be more multi lingual and multi cultural as a
consequence.

I have confidence that the three stories that I suggest will be realised by
2016. Not only that, I am pretty sure that as a consequence the amount of
traffic that our servers will have to handle will grow enormously to the
extend that I am convinced that our current capacity will not be able to
cope. Then again, they are the luxury problems that make us appreciate how
much room we still have for growth.
Thanks,
 GerardM


[1]
http://ultimategerardm.blogspot.nl/2014/01/wikimedia-multimedia-featuresvision-2016.html


On 10 January 2014 01:39, Fabrice Florin fflo...@wikimedia.org wrote:

 Happy new year, everyone!

 Many thanks to all of you who contributed to our multimedia programs last
 year! Now that we have a new multimedia team at WMF, we look forward to
 making some good progress together this year.

 To kick off the new year, here is a proposed multimedia vision for 2016,
 which was prepared by our multimedia and design teams, with guidance from
 community members:

 http://blog.wikimedia.org/2014/01/09/multimedia-vision-2016/

 This possible scenario is intended for discussion purposes, to help us
 visualize how we could improve our user experience over the next three
 years. We hope that it will spark useful community feedback on some of the
 goals we are considering.

 After you’ve viewed the video, we would be grateful if you could let us
 know what you think in this discussion:


 https://commons.wikimedia.org/wiki/Commons_talk:Multimedia_Features/Vision_2016

 We are looking for feedback from all users who benefit from Commons, even
 if their work takes place on other sites. This vision explores ways to
 integrate Wikimedia Commons more closely with Wikipedia and other MediaWiki
 projects, to help users contribute more easily to our free media repository
 -- wherever they are.

 In coming weeks, we will start more focused discussions on some key
 features outlined in this presentation. If you would like to join those
 conversations and keep up with our work, we invite you to subscribe to our
 multimedia mailing list:

 https://lists.wikimedia.org/mailman/listinfo/multimedia

 We look forward to more great collaborations in the new year!

 All the best,


 Fabrice
 on behalf of the Multimedia team

 ___

 Fabrice Florin
 Product Manager, Multimedia
 Wikimedia Foundation

 Wikipedia Profile:
 http://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)

 Multimedia Project Hub:
 https://www.mediawiki.org/wiki/Multimedia
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] FWD: [Bug 58236] New: No longer allow gadgets to be turned on by default for all users on Wikimedia sites

2013-12-10 Thread Gerard Meijssen
Hoi,
A blanket ban like this is an extremely bad idea. The notion that you
should think twice before you make something a default option does have
merit.

Gadgets and JavaScript provide functionality that is often
experimental. Without the benefit of experiments we will get into a rut; we
will have moved away from the credo of be bold.
Thanks,
  GerardM


On 9 December 2013 22:02, K. Peachey p858sn...@gmail.com wrote:

 For wider discussion

 ---
 From: bugzilla-daemon at wikimedia.org
 Subject: [Bug 58236] New: No longer allow gadgets to be turned on by
 default for all users on Wikimedia
 sites
 http://news.gmane.org/find-root.php?message_id=%3cbug%2d58236%2d3%40https.bugzilla.wikimedia.org%2f%3e
 
 Newsgroups: gmane.org.wikimedia.mediawiki.bugs
 http://news.gmane.org/gmane.org.wikimedia.mediawiki.bugs
 Date: 2013-12-09 20:41:41 GMT (19 minutes ago)

 https://bugzilla.wikimedia.org/show_bug.cgi?id=58236

Web browser: ---
 Bug ID: 58236
Summary: No longer allow gadgets to be turned on by default for
 all users on Wikimedia sites
Product: Wikimedia
Version: wmf-deployment
   Hardware: All
 OS: All
 Status: NEW
   Severity: normal
   Priority: Unprioritized
  Component: Site requests
   Assignee: wikibugs-l at lists.wikimedia.org
   Reporter: jared.zimmerman at wikimedia.org
 CC: benapetr at gmail.com,
 bugzilla+org.wikimedia at tuxmachine.com,
 dereckson at espace-win.org, greg at wikimedia.org
 ,
 tomasz at twkozlowski.net, wikimedia.bugs at
 snowolf.eu
 Classification: Unclassified
Mobile Platform: ---

 Gadgets being turned on for all site users (including readers) can cause a
 confusing user experience, especially when there is some disagreement and
 defaults change without notice (readers are rarely if ever part of these
 discussions).

 Move to model where gadgets are per user rather than part of a default
 experience for users

 --
 You are receiving this mail because:
 You are the assignee for the bug.
 You are on the CC list for the bug.
 ___
 Wikibugs-l mailing list
 Wikibugs-l at
 lists.wikimedia.orghttps://lists.wikimedia.org/mailman/listinfo/wikibugs-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikidata-l] Italian Wikipedia complements search results with Wikidata based functionality

2013-12-06 Thread Gerard Meijssen
Hoi,
I received this mail ... It answers your question ..
Thanks,
Gerard

Hoi,

Thanks to some help by Ori Livneh, there are some new installation
instructions that I've added for Wdsearch:
https://en.wikipedia.org/wiki/MediaWiki:Wdsearch.js

Would you be able to go around to the various wikis that have enabled
this and get them to update the code? I've lost track of which wikis
are using it.

Thanks,
-- Legoktm



On 6 December 2013 09:52, Cristian Consonni kikkocrist...@gmail.com wrote:

 Il 05/dic/2013 07:32 Ori Livneh o...@wikimedia.org ha scritto:
  Please don't load the script on every page. It only targets the search
  result page. This means a redundant request on every page that isn't the
  search page. You can change it to:
 
  if ( mw.config.get( 'wgCanonicalSpecialPageName' ) === 'Search' ) {
      importScriptURI( '//en.wikipedia.org/w/index.php?title=MediaWiki:Wdsearch.js&action=raw&ctype=text/javascript' );
  }

 It is used also in ns0 pages that do not exist yet.
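
 (If that is the concern, Ori's check could be widened to also cover views of
 pages that do not exist yet; a rough sketch, relying on wgArticleId being 0
 for missing pages:)

 var isSearch = mw.config.get( 'wgCanonicalSpecialPageName' ) === 'Search',
     isMissingPage = mw.config.get( 'wgNamespaceNumber' ) === 0 &&
         mw.config.get( 'wgArticleId' ) === 0;
 if ( isSearch || isMissingPage ) {
     importScriptURI( '//en.wikipedia.org/w/index.php?title=MediaWiki:Wdsearch.js&action=raw&ctype=text/javascript' );
 }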

 C
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Please use sitenotice when a new version of software is deployed

2013-12-05 Thread Gerard Meijssen
Hoi,
Let us be honest indeed. The others are not heard when they scream and
shout in their little corner of the world..
Thanks,
 Gerard


On 5 December 2013 23:41, Happy Melon happy.melon.w...@gmail.com wrote:

 On 5 December 2013 23:36, Bartosz Dziewoński matma@gmail.com wrote:

  On Thu, 05 Dec 2013 23:00:04 +0100, Dan Garry dga...@wikimedia.org
  wrote:
 
   How about getting this stuff included in the Signpost? I think that's a
  good medium for it.
 
 
  Signpost is English-specific, Wikipedia-specific and
  English-Wikipedia-specific.
 


 But let's be honest; the disproportionate majority of user discontent is
 enwiki-specific as well...  :-p

 --HM
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Labs-l] (Update) HTTP on tools.wmflabs.org temporarily disabled

2013-12-04 Thread Gerard Meijssen
Hoi,
Thank you for resolving this :)
http://ultimategerardm.blogspot.nl/2013/12/the-tech-barnstar.html
Thanks,
 GerardM


On 4 December 2013 22:03, Marc A. Pelletier m...@uberbox.org wrote:

 More news:

 It turns out the DDOS was inadvertent and caused by an ill-considered
 addition to several projects rather than malice: a script hosted on tool
 labs intended to tie Wikidata to search results was made to load on
 every page view rather than just upon searching, causing every hit to a
 page on the projects to cause the user's client to connect to tool labs.

 Obviously, tool labs does not scale well to that many requests.  :-)

 It will probably take several hours for things to return to normal as we
 hunt down and remove those loads and cached versions expire from
 endusers' caches.

 -- Marc


 ___
 Labs-l mailing list
 lab...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/labs-l

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikidata-l] Italian Wikipedia complements search results with Wikidata based functionality

2013-12-03 Thread Gerard Meijssen
Hoi,

The logo looks fine to me... I already blogged about it [1]. We are also
looking for a logo for the Concept Cloud .. Lydia indicated that there
are more t-shirts that can be shipped for another great design :)

Thanks,
 GerardM

[1]
http://ultimategerardm.blogspot.nl/2013/12/will-this-be-reasonator-logo.html


On 3 December 2013 13:46, Federico Leva (Nemo) nemow...@gmail.com wrote:

 I've not been following the trademark policy draft discussion, but note
 that Labs is explicitly not part of Wikimedia sites according to the
 definition of this (formerly unknown) term on the privacy policy draft: 
 https://meta.wikimedia.org/wiki/Privacy_policy#A_Little_Background:
 «excluding, however, sites and services listed in the What This Privacy
 Policy Doesn't Cover section below [...] tools and sites, such as
 Wikimedia Labs».

 Nemo


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Italian Wikipedia complements search results with Wikidata based functionality

2013-12-02 Thread Gerard Meijssen
Hoi,

The Italian Wikipedia is the first project where people who use search will
find results added from Wikidata. As you may know, Wikidata has more
items with a label in a given language than a Wikipedia has articles. With the
Wikidata based functionality people will gain several functionalities that
are new to them

   - a link to Commons categories for a subject
   - a link to Wikipedia articles in other languages
   - a link to the Wikidata item
   - visualisation care of the Reasonator

When there are multiple items found in the search request, disambiguation
will be provided based on the statements available on the items. Obviously
as more labels are available in a language for statements, the experience
will improve.

This is a really exciting new development and I want to thank Magnus and
Nemo for making it possible. I hope and expect that many Wikipedias will
follow the example of the Italian Wikipedia. Particularly the smaller
Wikipedias have much to gain from this new functionality.
Thanks,
  GerardM
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikidata-l] Italian Wikipedia complements search results with Wikidata based functionality

2013-12-02 Thread Gerard Meijssen
Hoi,
There are two ways of enabling this Wikidata search functionality; Wiki
wide and personally. It is done by adding this one line to common.js..


importScriptURI("//en.wikipedia.org/w/index.php?title=MediaWiki:Wdsearch.js&action=raw&ctype=text/javascript");

You add it either to mediawiki:common.js or to USER/common.js.
Thanks,
  GerardM


On 3 December 2013 08:28, rupert THURNER rupert.thur...@gmail.com wrote:

 Hi gerard, that sounds really exciting! Is it necessary to change a setting
 to see this behaviour? If not i d appreciate if you could give an example
 where one could see this best.

 Rupert
 Am 02.12.2013 18:50 schrieb Gerard Meijssen gerard.meijs...@gmail.com:

  Hoi,
 
  The Italian Wikipedia is the first project where people who use search
  will find results added from Wikidata. As you may know, Wikidata knows
 has
  more items with a label in a language than a Wikipedia has articles. With
  the Wikidata based functionality people will gain several functionalities
  that are new to them
 
 - a link to Commons categories for a subject
 - a link to Wikipedia articles in other languages
 - a link to the Wikidata item
 - visualisation care of the Reasonator
 
  When there are multiple items found in the search request, disambiguation
  will be provided based on the statements available on the items.
 Obviously
  as more labels are available in a language for statements, the experience
  will improve.
 
  This is a really exciting new development and I want to thank Magnus and
  Nemo for making it possible. I hope and expect that many Wikipedias will
  follow the example of the Italian Wikipedia. Particularly the smaller
  Wikipedias have much to gain from this new functionality.
  Thanks,
GerardM
 
  ___
  Wikidata-l mailing list
  wikidat...@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikidata-l
 
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] OpenID deployment delayed until sometime in 2014

2013-11-27 Thread Gerard Meijssen
Hoi,

According to the website of myopenid [1], they are closing down February 1.
My hope was for the Wikimedia Foundation to come to the rescue and provide
a non commercial alternative to what Google and Facebook offer.

I am extremely disappointed that we are not. I hope that whatever is done
instead has at least a similar impact, rather than leaving it all to the commercial
boys and girls. Privacy should not be left to commerce and national secret
services.
Thanks,
  GerardM


On 27 November 2013 03:27, Rob Lanphier ro...@wikimedia.org wrote:

 Hi everyone,

 If you were following our planning process this past spring/summer, you
 probably heard that we had planned to deploy both OAuth and OpenID by the
 end of 2013.

 The good news is that we were able to complete our OAuth deployment (see
 Dan Garry's blog post on the subject[1]).  The bad news is that we weren't
 able to get OpenID complete enough for deployment, we've decided to put
 OpenID on hold to make room for some of our other work.  Thomas Gries has
 done some great work on this, and has been very responsive to our input
 (we're a finicky bunch!)  We fully anticipate being able to deploy this
 sometime in 2014, but we don't yet have an exact plan for making this
 happen.

 We've communicated this through other channels, but hadn't broadcasted it
 here, so we're a bit overdue for an update.

 The Auth Systems page now has the latest information on what we were able
 to do, and will have more on our future plans as we make them:  
 https://www.mediawiki.org/wiki/Auth_systems

 Rob

 [1] OAuth now available on Wikimedia wikis Dan Garry: 
 https://blog.wikimedia.org/2013/11/22/oauth-on-wikimedia-wikis/
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Operations buy in on Architecture of mwlib Replacement

2013-11-17 Thread Gerard Meijssen
Hoi,
The current PDF support is broken. It does not support all the languages we
support. I do not see that this is an explicit requirement.
Thanks,
 GerardM


On 14 November 2013 00:02, Matthew Walker mwal...@wikimedia.org wrote:

 Hey,

 For the new renderer backend for the Collections Extension we've come up
 with a tentative architecture that we would like operations buy in on. The
 living document is here [1]. It's worth saying explicitly that whatever
 setup we use must be able to handle the greater than 150k requests a day we
 serve using the old setup.

 Basically we're looking at having
 * 'render servers' run node.js
 * doing job management in Redis
 * rendering content using PhantomJS and/or Latex
 * storing rendered files locally on the render servers (and streaming the
 rendered results through MediaWiki -- this is how it's done now as well).
 * having a garbage collector run routinely on the render servers to cleanup
 old stale content

 Post comments to the talk page please :)

 [1 ]https://www.mediawiki.org/wiki/PDF_rendering/Architecture

 ~Matt Walker
 Wikimedia Foundation
 Fundraising Technology Team
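
 As a rough sketch of the proposed job flow (not the actual Collections code;
 the queue name and job fields are invented for this example), a node.js
 render server could claim work from Redis along these lines:

     // Sketch: enqueue and claim render jobs through a Redis list.
     var redis = require( 'redis' );
     var client = redis.createClient();

     // A frontend request would push a job description...
     client.lpush( 'render-jobs', JSON.stringify( { collectionId: 'abc123', format: 'pdf' } ) );

     // ...and each render server blocks until a job is available, then renders it.
     function workLoop() {
         client.brpop( 'render-jobs', 0, function ( err, reply ) {
             if ( err ) { return setTimeout( workLoop, 1000 ); }
             var job = JSON.parse( reply[ 1 ] );
             // Render with PhantomJS or LaTeX, store the file locally on this
             // server, and let the periodic garbage collector remove it later.
             console.log( 'rendering', job.collectionId, 'as', job.format );
             workLoop();
         } );
     }
     workLoop();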
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Operations buy in on Architecture of mwlib Replacement

2013-11-17 Thread Gerard Meijssen
The objective is to get buy-in and comments on what is proposed.

If it were as simple as adding a few fonts and other dependencies, would the
previous iteration of the software have remained broken?

Sadly,  it is a wee bit more complicated.
Thanks,
GerardM
Op 17 nov. 2013 15:30 schreef Jeremy Baron jer...@tuxmachine.com:

 On Nov 17, 2013 9:17 AM, Gerard Meijssen gerard.meijs...@gmail.com
 wrote:
  The current PDF support is broken. It does not support all the languages
 we
  support. I do not see that this an explicit requirement.

 That may effect dependencies (fonts/libs/etc.) but otherwise I think is
 irrelevant to this thread.

 Please comment about this onwiki.

 -Jeremy
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] input requested re: metrics for hackathons

2013-10-23 Thread Gerard Meijssen
Hoi,

Please understand that some of the most important aspects do not get captured
in numbers. The social aspects in particular are very important. The fact
that people meet is often enough to reach new levels of understanding and
inspiration. This is the reason why people travel all over the world... to
inspire, and they do.
Thanks,
   GerardM


On 22 October 2013 21:42, Jonathan Morgan jmor...@wikimedia.org wrote:

 Hi all,

 OrenBachman has asked several questions about how to evaluate the impact of
 Hackathon events on the Evaluation Portal*[1]* and I'm hoping some folks
 from this list can share their advice and perspectives.

 Please read and respond there if you have input. Several outstanding
 questions are:

 1. do you know of any available publicly stats on how many first time
 MediaWiki hackers make use of git/gerrit?

 2. can you share any good strategies for getting as many attendees as
 possible to set up git/gerrit userids prior to the event?

 3. can you provide any examples of reports from previous Hackathons that do
 a good job of describing various impact/participation metrics?

 4. Safe space policy - How have breaches to the safe space policy been
 handled in recent events? How have they been reported?

 I'm going to follow up elsewhere about the legal issues OrenBachman raises,
 but feel free to share your experiences with those as well.

 Thanks for your assistance,
 Jonathan

 *[1]*

 https://meta.wikimedia.org/wiki/Programs:Evaluation_portal/Parlor/Questions#What_are_the_established_Goals_and_Metrics_used_to_evaluate_Hackathons

 --
 Jonathan T. Morgan
 Learning Strategist
 Wikimedia Foundation
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Code-in: Lua templates

2013-10-20 Thread Gerard Meijssen
Hoi,

What would be awesome is if attention were given to the Wikidata-based
templates used on the Occitan Wikipedia. It is the one example where data
from Wikidata is used to provide information. It could do with a lot of
attention to get rid of the many script errors, and with attention to
make these templates usable in other Wikipedias.
Thanks,
 GerardM

https://oc.wikipedia.org/wiki/Bernat_de_Ventadorn
http://ultimategerardm.blogspot.nl/2013/08/the-occitan-wikipedia-and-wikidata.html


On 20 October 2013 23:53, Quim Gil q...@wikimedia.org wrote:

 On 10/20/2013 11:11 AM, Mark Holmquist wrote:

 A little warning: where a task can be split into multiple modules, try
 to make the modules reusable by other modules easily.


 We can't expect newcomer students to know that, therefore the trick is
 that someone knowing better modules available around document this in the
 specific task of a template.

 The typical description of a Lua template task could be organized like
 this:

 1. Common short intro: MediaWiki templates

 2. Common short intro: Wikitext vs Lua templates

 3. Specific short intro: Template:XYZ

 4. Optional specific instructions e.g. include Module:123

 5. Common instructions for deployment and testing.


 --
 Quim Gil
 Technical Contributor Coordinator @ Wikimedia Foundation
 http://www.mediawiki.org/wiki/User:Qgil

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] 2013 Datacenter RFP - open for submissions

2013-10-19 Thread Gerard Meijssen
Hoi,
Are some of our partners in Wikipedia Zero not caching already ?
Thanks,
  GerardM


On 19 October 2013 12:59, Leslie Carr lc...@wikimedia.org wrote:

 On Sat, Oct 19, 2013 at 1:17 PM, Maarten Dammers maar...@mdammers.nl
 wrote:
  Hi Ken,
 
  Op 18-10-2013 22:05, Ken Snider schreef:
 
  The Wikimedia Foundation's Technical Operations team is seeking
 proposals
  on the provisioning of a new data-centre facility.
 
  After working through the specifics internally, we now have a public RFP
  posted[1] and ready for proposals. We invite any organization meeting
 the
  requirements outlined to submit a proposal for review.
 
  You have stated some technical requirements, but not the availability you
  would like to have. You probably want to include that you're looking for
 a
  tier-4 data center
  (https://en.wikipedia.org/wiki/Tier_4_data_center#Data_center_tiers). Is
  this one going to replace the Florida data center? Where are you keeping
  documentation these days? The information on wikitech seems to be very
  incomplete and outdated.

 Wikitech is our best source of documentation

 
  Something related: While travelling in China I noticed the bad
 performance
  of our sites. Would it be a good idea to investigate this (lack of)
  performance and maybe consider a caching site somewhere in Asia? The
 latency
  should be much better than getting is all the way from the USA.
  http://www.glif.is/publications/maps/GLIF_5-11_World_4k.jpg (from
  http://www.glif.is/publications/maps/) gives a good idea of
 connectivity by
  the way.

 (Employee hat) - we are starting to move some of the Asian traffic to
 a new caching center on the US west coast, which has helped latency.
 (Personal hat) - I would love to get an Asian caching center (Hong
 Kong or Tokyo would be my top 2 choices), but IMHO the biggest barrier
 to this the time and availability of people resources.

 
  Maarten
 
 
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 --
 Leslie Carr
 Wikimedia Foundation
 AS 14907, 43821
 http://as14907.peeringdb.com/

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] 2013 Datacenter RFP - open for submissions

2013-10-19 Thread Gerard Meijssen
Hoi,

I am not so much interested in what they use. What I would like to suggest
is that we have partners in both Africa and Asia. They are likely to have
the expertise to run a caching centre on our behalf. They provide us a
service in bringing Wikipedia at no cost to their customers. When we pay
them to run a non-discriminatory caching service, it would improve our
service and maybe even reduce the cost of transatlantic traffic.
Thanks,
  GerardM


On 19 October 2013 13:25, Leslie Carr lc...@wikimedia.org wrote:

 On Sat, Oct 19, 2013 at 2:18 PM, Gerard Meijssen
 gerard.meijs...@gmail.com wrote:
  Hoi,
  Are some of our partners in Wikipedia Zero not caching already ?
  Thanks,
GerardM
 
 
 Are you asking if they have varnish caches (they do not) or if they
 are using some web caching on their environment (which is possible,
 using transparent proxies , though I do not know of providers who use
 them... however my mobile web ecosystem knowledge is limited)

  On 19 October 2013 12:59, Leslie Carr lc...@wikimedia.org wrote:
 
  On Sat, Oct 19, 2013 at 1:17 PM, Maarten Dammers maar...@mdammers.nl
  wrote:
   Hi Ken,
  
   Op 18-10-2013 22:05, Ken Snider schreef:
  
   The Wikimedia Foundation's Technical Operations team is seeking
  proposals
   on the provisioning of a new data-centre facility.
  
   After working through the specifics internally, we now have a public
 RFP
   posted[1] and ready for proposals. We invite any organization meeting
  the
   requirements outlined to submit a proposal for review.
  
   You have stated some technical requirements, but not the availability
 you
   would like to have. You probably want to include that you're looking
 for
  a
   tier-4 data center
   (https://en.wikipedia.org/wiki/Tier_4_data_center#Data_center_tiers).
 Is
   this one going to replace the Florida data center? Where are you
 keeping
   documentation these days? The information on wikitech seems to be very
   incomplete and outdated.
 
  Wikitech is our best source of documentation
 
  
   Something related: While travelling in China I noticed the bad
  performance
   of our sites. Would it be a good idea to investigate this (lack of)
   performance and maybe consider a caching site somewhere in Asia? The
  latency
   should be much better than getting is all the way from the USA.
   http://www.glif.is/publications/maps/GLIF_5-11_World_4k.jpg (from
   http://www.glif.is/publications/maps/) gives a good idea of
  connectivity by
   the way.
 
  (Employee hat) - we are starting to move some of the Asian traffic to
  a new caching center on the US west coast, which has helped latency.
  (Personal hat) - I would love to get an Asian caching center (Hong
  Kong or Tokyo would be my top 2 choices), but IMHO the biggest barrier
  to this the time and availability of people resources.
 
  
   Maarten
  
  
  
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 
 
  --
  Leslie Carr
  Wikimedia Foundation
  AS 14907, 43821
  http://as14907.peeringdb.com/
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 --
 Leslie Carr
 Wikimedia Foundation
 AS 14907, 43821
 http://as14907.peeringdb.com/

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] 2013 Datacenter RFP - open for submissions

2013-10-19 Thread Gerard Meijssen
Before it is sent to a mobile phone, the data has to be retrieved in the
classic way. Do you really think that the companies who run mobile networks
do not have the expertise to keep a bunch of caching servers in the air ?
When you do, do you think that all these mobile operators do not have that
skill? Do you think they do not have capacity at the key locations of the
Internet infrastructure?
Thanks,
 GerardM


On 19 October 2013 13:44, Leslie Carr lc...@wikimedia.org wrote:

 On Sat, Oct 19, 2013 at 2:39 PM, Gerard Meijssen
 gerard.meijs...@gmail.com wrote:
  Hoi,
 
  I am not so much interested in what they use. What I would like to
 suggest
  is that we have partners in both Africa and Asia. They are likely to have
  the expertise to run a caching centre on our behalf. They provide us a
  service in bringing Wikipedia at no cost to their customers. When we pay
  them to run a non-discriminatory caching service, it would increase our
  service and maybe even the cost of transatlantic traffic.
  Thanks,
GerardM
 

 Running a mobile network is completely different operation from running a
 CDN.


 --
 Leslie Carr
 Wikimedia Foundation
 AS 14907, 43821
 http://as14907.peeringdb.com/

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tarballs of all 2004-2012 Commons files now available at archive.org

2013-10-16 Thread Gerard Meijssen
Hoi,

Files deleted AFTER the tarballs were created will always
be part of the tarballs. If you create logic that works on these archives,
it is likely that it will also work on the live data at Commons...

MY QUESTION... Yes, it is good to have a backup somewhere. However, what is
the point of working on old data when the new data is available?

Thanks,
  Gerard


On 16 October 2013 06:02, Strainu strain...@gmail.com wrote:

 Hi Frederico,

 This is great news! I have two questions though:

 1. What happens to files deleted after your crawler retrieved them? I
 suppose they will still be available in the archives.
 2. Is the archive team willing to host 3rd party, specialized
 downloads, such as all the pictures from WLM (or all the pictures with
 monuments from a certain country?)

 Thanks,
 Strainu

 2013/10/13 Federico Leva (Nemo) nemow...@gmail.com:
  WikiTeam has just finished archiving all Wikimedia Commons files up to
 2012
  (and some more) on the Internet Archive:
  https://archive.org/details/wikimediacommons
  So far it's about 24 TB of archives and there are also a hundred torrents
  you can help seed, ranging from few hundred MB to over a TB, most around
 400
  GB.
  Everything is documented at
  
 https://meta.wikimedia.org/wiki/Mirroring_Wikimedia_project_XML_dumps#Media_tarballs
 
  and if you want here are some ideas to help WikiTeam with coding:
  https://code.google.com/p/wikiteam/issues/list.
 
  Nemo
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Getting birthday from the sidebar of a wiki page and showing in the home page

2013-10-03 Thread Gerard Meijssen
Hoi,
There are many ways of doing this. However, whichever way you choose, it
takes a big effort to make this happen. In the long term, using Wikidata is
likely to be the least work.
Thanks,
  Gerard
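
A hedged sketch of what that could look like, assuming the public
wbgetentities API and property P569 (date of birth); Q42, Douglas Adams, is
just a stand-in item:

    // Fetch a person's date of birth (P569) from Wikidata via JSONP.
    $.getJSON(
        '//www.wikidata.org/w/api.php?action=wbgetentities&ids=Q42&props=claims&format=json&callback=?',
        function ( data ) {
            var claims = data.entities.Q42.claims;
            var dob = claims.P569 && claims.P569[ 0 ].mainsnak.datavalue.value.time;
            console.log( 'Date of birth:', dob ); // e.g. "+1952-03-11T00:00:00Z"
        }
    );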


On 3 October 2013 15:02, Shrinivasan T tshriniva...@gmail.com wrote:

 The side bar of any famous person in Tamil wikipedia has the birthday of
 the person.

 Example:
 https://ta.wikipedia.org/s/im6
 https://en.wikipedia.org/wiki/Nagesh

 Is there any way to notify this in the front/home page of the Tamil
 wikipedia ?

 Just like a Today in history  etc.

 Thanks.

 --
 Regards,
 T.Shrinivasan


 My Life with GNU/Linux : http://goinggnu.wordpress.com
 Free E-Magazine on Free Open Source Software in Tamil : http://kaniyam.com

 Get CollabNet Subversion Edge : http://www.collab.net/svnedge
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Categories without interwiki

2013-09-22 Thread Gerard Meijssen
Hoi,
What is the problem with people NOT considering interwiki links to
categories as being relevant.. They are not the only ones.

In a similar way I have asked several times what the point is of interwiki
links for disambiguation pages. The only answer I got was along the lines
of "because we can". Similarly I do not understand the point of interwiki
links to categories. What is achieved by it ?

So technically you may do or have done a good job. When people appreciate
it, good for you. But I fail to see the point.

Thanks,
  GerardM


On 22 September 2013 21:54, Amir Ladsgroup ladsgr...@gmail.com wrote:

 Hello,
 Persian Wikipedia is one of the largest wikis based on number of categories
 but It's not very common that people consider adding interwiki of
 categories (they think interwiki is just for articles) so we have tons of
 tons (before writing my engine that was 30K out of 170K) categories without
 any interwikis which is really bad. I wrote some codes to make it better
 but It wasn't enough So I wrote an engine that gets two database: 1-list of
 categories without interwiki 2-list of categories with interwiki to a
 certain language (e.g. English) with the target interwiki and after that my
 bot analyzes and guess what is the correct interwiki of category based on
 patterns of naming them in the second database
 and bot reports. After running this code on fa.wp there was a very huge
 report [1] and we started to sort things out (merging duplicates [2],
 deleting extra ones, adding the correct iw) and now it's less than 25K
 categories without interwikis (and It's becoming less and less) we did the
 same on templates namespace [3] and we interwikified more than 10K
 templates after that.

 And because this engine doesn't use any language-related analyses It can be
 ran in any language and get interwiki from any language (we planned to run
 this on Persian Wikipedia again but this time we use Dutch and German
 languages as repo of interwiki)

 So here is my question: Is there similar situation in your wiki? Do you
 want to run this code in your wiki too? Do you have any suggestion?
 [1]: https://fa.wikipedia.org/w/index.php?title=کاربر:Ladsgroup/
 رده‌ها&oldid=10959457
 [2]: One of the benefits of running this engine is we can find duplicates
 [3]:

 https://fa.wikipedia.org/wiki/%DA%A9%D8%A7%D8%B1%D8%A8%D8%B1:Ladsgroup/%D8%A7%D9%84%DA%AF%D9%88%D9%87%D8%A7%DB%8C
 Best
 ---
 Amir
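
 A toy sketch of the general idea described above, not Amir's actual engine
 (the category names and the year-based pattern are invented purely for
 illustration):

     // Guess interwikis for year-based categories by generalising known pairs
     // into a pattern with a year placeholder.
     function toPattern( name ) {
         return name.replace( /\d{4}/, '$YEAR' );
     }

     // Invented training data: local category name -> known English interwiki.
     var known = {
         'Categoria:Films of 1990': 'Category:1990 films',
         'Categoria:Films of 1991': 'Category:1991 films'
     };

     // Learn pattern-to-pattern mappings from categories that already have an interwiki.
     var patternMap = {};
     Object.keys( known ).forEach( function ( local ) {
         patternMap[ toPattern( local ) ] = toPattern( known[ local ] );
     } );

     // Guess the interwiki for a category that has none yet.
     function guess( local ) {
         var year = ( local.match( /\d{4}/ ) || [] )[ 0 ];
         var target = patternMap[ toPattern( local ) ];
         return target && year ? target.replace( '$YEAR', year ) : null;
     }

     console.log( guess( 'Categoria:Films of 1993' ) ); // "Category:1993 films"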
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Categories without interwiki

2013-09-22 Thread Gerard Meijssen
Hoi,
<grin> Your answer is like "because they make similar categories like on
en.wp"; it does not answer the question whether that makes sense, and it does
not answer the question why this should in turn be replicated in Wikidata.
</grin>
Thanks,
 GerardM


On 22 September 2013 22:41, Amir Ladsgroup ladsgr...@gmail.com wrote:

 That was a very good question!
 In Persian Wikipedia, there is another engine that adds categories in
 articles based on usage of them in English Wikipedia and in order to
 avoid mistakes that engine uses a pretty damn complicated algorithm
 that I have no clue what's that but It's working properly in Persian
 Wikipedia so We need interwiki of categories in order to fill them by
 bots

 Best

 On 9/22/13, Gerard Meijssen gerard.meijs...@gmail.com wrote:
  Hoi,
  What is the problem with people NOT considering interwiki links to
  categories as being relevant.. They are not the only ones.
 
  In a similar way I have asked several times what the point is of
 interwiki
  links for disambiguation pages.. The only answer I got was along the
 lines
  of because we can. Similarly I do not understand the point of interwiki
  links to categories. What is achieved by it ?
 
  So technically you may do or have done a good job. When people appreciate
  it, good for you. But I fail to see the point.
 
  Thanks,
GerardM
 
 
  On 22 September 2013 21:54, Amir Ladsgroup ladsgr...@gmail.com wrote:
 
  Hello,
  Persian Wikipedia is one of the largest wikis based on number of
  categories
  but It's not very common that people consider adding interwiki of
  categories (they think interwiki is just for articles) so we have tons
 of
  tons (before writing my engine that was 30K out of 170K) categories
  without
  any interwikis which is really bad. I wrote some codes to make it better
  but It wasn't enough So I wrote an engine that gets two database: 1-list
  of
  categories without interwiki 2-list of categories with interwiki to a
  certain language (e.g. English) with the target interwiki and after that
  my
  bot analyzes and guess what is the correct interwiki of category based
  on
  patterns of naming them in the second database
  and bot reports. After running this code on fa.wp there was a very huge
  report [1] and we started to sort things out (merging duplicates [2],
  deleting extra ones, adding the correct iw) and now it's less than 25K
  categories without interwikis (and It's becoming less and less) we did
 the
  same on templates namespace [3] and we interwikified more than 10K
  templates after that.
 
  And because this engine doesn't use any language-related analyses It can
  be
  ran in any language and get interwiki from any language (we planned to
 run
  this on Persian Wikipedia again but this time we use Dutch and German
  languages as repo of interwiki)
 
  So here is my question: Is there similar situation in your wiki? Do you
  want to run this code in your wiki too? Do you have any suggestion?
  [1]: https://fa.wikipedia.org/w/index.php?title=کاربر:Ladsgroup/
  رده‌هاoldid=10959457
  [2]: One of the benefits of running this engine is we can find
 duplicates
  [3]:
 
 
 https://fa.wikipedia.org/wiki/%DA%A9%D8%A7%D8%B1%D8%A8%D8%B1:Ladsgroup/%D8%A7%D9%84%DA%AF%D9%88%D9%87%D8%A7%DB%8C
  Best
  ---
  Amir
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l


 --
 Amir

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Categories without interwiki

2013-09-22 Thread Gerard Meijssen
Hoi,
Again this does not answer the question why it makes sense to have
interwiki links for categories in Wikidata.
Thanks,
 GerardM


On 22 September 2013 23:56, Brian Wolff bawo...@gmail.com wrote:

 In the past I was asked do certain statistics based on categories for
 wikinews. Having interwikis was rather key for that.

 -bawolff
 On 2013-09-22 5:26 PM, Gerard Meijssen gerard.meijs...@gmail.com
 wrote:
 
  Hoi,
  What is the problem with people NOT considering interwiki links to
  categories as being relevant.. They are not the only ones.
 
  In a similar way I have asked several times what the point is of
 interwiki
  links for disambiguation pages.. The only answer I got was along the
 lines
  of because we can. Similarly I do not understand the point of interwiki
  links to categories. What is achieved by it ?
 
  So technically you may do or have done a good job. When people appreciate
  it, good for you. But I fail to see the point.
 
  Thanks,
GerardM
 
 
  On 22 September 2013 21:54, Amir Ladsgroup ladsgr...@gmail.com wrote:
 
   Hello,
   Persian Wikipedia is one of the largest wikis based on number of
 categories
   but It's not very common that people consider adding interwiki of
   categories (they think interwiki is just for articles) so we have tons
 of
   tons (before writing my engine that was 30K out of 170K) categories
 without
   any interwikis which is really bad. I wrote some codes to make it
 better
   but It wasn't enough So I wrote an engine that gets two database:
 1-list of
   categories without interwiki 2-list of categories with interwiki to a
   certain language (e.g. English) with the target interwiki and after
 that my
   bot analyzes and guess what is the correct interwiki of category
 based on
   patterns of naming them in the second database
   and bot reports. After running this code on fa.wp there was a very huge
   report [1] and we started to sort things out (merging duplicates [2],
   deleting extra ones, adding the correct iw) and now it's less than 25K
   categories without interwikis (and It's becoming less and less) we did
 the
   same on templates namespace [3] and we interwikified more than 10K
   templates after that.
  
   And because this engine doesn't use any language-related analyses It
 can be
   ran in any language and get interwiki from any language (we planned to
 run
   this on Persian Wikipedia again but this time we use Dutch and German
   languages as repo of interwiki)
  
   So here is my question: Is there similar situation in your wiki? Do you
   want to run this code in your wiki too? Do you have any suggestion?
   [1]: https://fa.wikipedia.org/w/index.php?title=کاربر:Ladsgroup/
   رده‌هاoldid=10959457
   [2]: One of the benefits of running this engine is we can find
 duplicates
   [3]:
  
  

 https://fa.wikipedia.org/wiki/%DA%A9%D8%A7%D8%B1%D8%A8%D8%B1:Ladsgroup/%D8%A7%D9%84%DA%AF%D9%88%D9%87%D8%A7%DB%8C
   Best
   ---
   Amir
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [WikimediaMobile] [Analytics] Mobile stats

2013-09-10 Thread Gerard Meijssen
Hoi,
Is the Wikipedia-Zero traffic information part of the mobile statistics or
is it something completely separate thing?
Thanks,
 GerardM


On 10 September 2013 03:26, Adam Baso ab...@wikimedia.org wrote:

 Wikipedia Zero traffic (IP address and MCC/MNC matching as expected) shows
 in one day of requests (zero.tsv.log-20130907) roughly 7-9% of page
 responses having a Content-Type response of text/vnd.wap.wml, presuming
 field #11 (or index 10 if you're indexing from 0) in zero.tsv.log-date is
 the Content-Type. Do I understand correctly that field as Content-Type?

 Thanks.
 -Adam
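
 As a rough sketch of that kind of check, assuming as above that the
 tab-separated field #11 carries the Content-Type (the file name is just the
 sampled log mentioned):

     // Tally Content-Type values from a sampled zero.tsv.log file.
     var fs = require( 'fs' );
     var counts = {};
     fs.readFileSync( 'zero.tsv.log-20130907', 'utf8' ).split( '\n' ).forEach( function ( line ) {
         if ( !line ) { return; }
         var contentType = line.split( '\t' )[ 10 ]; // index 10 == field #11
         counts[ contentType ] = ( counts[ contentType ] || 0 ) + 1;
     } );
     console.log( counts ); // e.g. { 'text/html': ..., 'text/vnd.wap.wml': ... }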


 On Thu, Sep 5, 2013 at 9:27 AM, Arthur Richards aricha...@wikimedia.org
 wrote:

  Would adding the accept header to the x-analytics header be worthwhile
 for
  this?
  On Sep 5, 2013 4:16 AM, Erik Zachte ezac...@wikimedia.org wrote:
 
  For a breakdown per country, the higher the sampling rate the better, as
  the data will become reliable even for smaller countries with a not so
  great adoption rate of Wikipedia.
 
  -Original Message-
  From: analytics-boun...@lists.wikimedia.org [mailto:
  analytics-boun...@lists.wikimedia.org] On Behalf Of Max Semenik
  Sent: Thursday, September 05, 2013 12:28 PM
  To: Diederik van Liere
  Cc: A mailing list for the Analytics Team at WMF and everybody who has
 an
  interest in Wikipedia and analytics.; mobile-l; Wikimedia developers
  Subject: Re: [Analytics] [WikimediaMobile] Mobile stats
 
  On 05.09.2013, 4:04 Diederik wrote:
 
   Heya,
   I would suggest to at least run it for a 7 day period so you capture
   at least the weekly time-trends, increasing the sample size should
   also be recommendable. We can help setup a udp-filter for this purpose
   as long as the data can be extracted from the user-agent string.
 
  Unfortunately, accept is no less important here.
  So, to enumerate our requirements as a result of this thread:
  * Sampling rate the same as wikistats (1/1000).
  * No less than a week worth of data.
  * User-agent:
  * Accept:
  * Country from GeoIP to determine the share of developing countries.
  * Wiki to determine if some wikis are more dependant on WAP than other
ones.
 
  Anything else?
 
  --
  Best regards,
Max Semenik ([[User:MaxSem]])
 
 
  ___
  Analytics mailing list
  analyt...@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/analytics
 
 
  ___
  Mobile-l mailing list
  mobil...@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/mobile-l
 
 
  ___
  Mobile-l mailing list
  mobil...@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/mobile-l
 
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] A metadata API module for commons

2013-09-01 Thread Gerard Meijssen
Hoi,

Wikidata is able to support a subset of properties needed for infoboxes.
The technology is however implemented on several Wikipedias. Recently it
became available for use on Wikivoyage.

The support for interwiki links is well established on both Wikivoyage and
Wikipedia.

Probably much of the data that would need to be stored cannot be
imported yet because the required property types are not supported yet.
Thanks,
  GerardM


On 1 September 2013 05:32, MZMcBride z...@mzmcbride.com wrote:

 James Forrester wrote:
 However, how much more work would it be to insert it directly into
 Wikidata right now?

 I think a parallel question might be: is Wikidata, as a social or
 technical project, able and ready to accept such data? I haven't been
 following Wikidata's progress too much, but I thought the focus was
 currently infoboxes, following interwiki links. And even those two
 projects (interwikis + infoboxes) were focused on only Wikipedias.

 MZMcBride



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Javascript to find other articles having the same external link

2013-08-26 Thread Gerard Meijssen
Hoi,
With sources stored in Wikidata, finding where else the same source is used
is implicit. This is a hack, admittedly a nice hack.
Thanks,
GerardM


On 26 August 2013 09:27, Lars Aronsson l...@aronsson.se wrote:

 By putting the following code (written by User:EnDumEn) in
 user:YOURNAME/common.jsany external link gets an extra little
 symbol (⎆), which leads to a link search for that link. This
 is very useful for finding other articles that cite the same
 source.


 // For each external link, append a small ⎆ link pointing to Special:Linksearch
 // for that URL, so you can find other pages citing the same source.
 jQuery( "a.external" ).after( function () {
     return jQuery( "<a>⎆</a>" )
         .attr( "href", "//sv.wikipedia.org/wiki/Special:Linksearch/" + this.href )
         .before( " " );
 } );



 --
   Lars Aronsson (l...@aronsson.se)
   Aronsson Datateknik - http://aronsson.se



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

  1   2   3   >