Re: [Wikitech-l] git.wikimedia.org (Gitblit) going away on June 29th, redirected to Phabricator

2016-06-21 Thread Platonides

On 21/06/16 23:16, Greg Grossmeier wrote:

== tl;dr ==
On June 29th git.wikimedia.org (running Gitblit) will redirect all
requests to Phabricator. The vast majority of requests will be correctly
redirected.

== What is happening? ==
In an effort to reduce the maintenance burden of redundant services we
will be removing git.wikimedia.org. The software that has been serving
git.wikimedia.org, Gitblit, has given our Operations team many headaches
over the years[0] and now that we have all repositories hosted in
Phabricator[1] there is no reason to keep Gitblit around. Phabricator's
Diffusion (the name of the code browser) provides the needed
functionality that Gitblit served (mostly viewing/browsing repositories,
something which Gerrit does not do).



I hear this with dismay. When I wanted to view the repository online in
the past, I always ended up heading to git.wikimedia.org, since I was
unable to *find* the repository at Phabricator.

At most, Phabricator showed a somewhat related Diffusion commit.

Gitblit may not be the most suitable software in terms of technical
stability, but Diffusion is far from having an acceptable UI, I'm afraid.

☹

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Short license blocks

2015-10-27 Thread Platonides

On 27/10/15 19:54, Antoine Musso wrote:

I think we standardized the MediaWiki core files at one point to
include the recommended GPL headers.  The commit history should have
such trace.


We did. Copyright headers were added for files which lacked them, much to
my dismay. Actual descriptions of what the file did would have been
preferred.

Not that they are a big problem, though.

Regarding "keeping the big header is important", I don't think anyone 
barely into CS  on this century can not know what the GPL is (and not 
figure out in 5 minutes).


An excerpt like this would be perfectly fine imho:
«This MediaWiki file is licensed under the terms of the GPL 2 or later, as
published at http://www.gnu.org/licenses/gpl2. See the COPYING file for
details.»



It is much more likely that they don't understand the GPL itself (just
like many people do not understand that the Wikipedia license *has
requirements*), such as reading "GPL2" as "you may copy and reuse
it while ignoring all of the license requirements".


Just my 2 cents.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] thumb generation

2015-09-14 Thread Platonides

On 15/09/15 01:34, wp mirror wrote:

Idea.  I am thinking of piping the *pages-articles.xml.bz2 dump file
through an AWK script to write all unique [[File:*]] tags into a file. This
can be done quickly. The question then is: Given a file with all the media
tags, how can I generate all the thumbs. What mediawiki function shall I
call? Can this be done using the web API? Any other ideas?

Sincerely Yours,
Kent


You know it will fail for all kinds of images included through templates
(particularly infoboxes), right?
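
For what it's worth, the web API sidesteps that problem: asking for the
files *used* on a page (generator=images) also lists files pulled in by
templates, and requesting a thumbnail URL at a given width makes the
server render that thumb. A rough, untested sketch in Python (the wiki,
page title and width are just examples, and it doesn't handle API
continuation):

import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def thumb_urls(page_title, width=220):
    # Ask for every file used on page_title (including files added by
    # templates such as infoboxes) and for a thumbnail URL at `width`.
    params = {
        "action": "query",
        "format": "json",
        "generator": "images",
        "gimlimit": "max",
        "titles": page_title,
        "prop": "imageinfo",
        "iiprop": "url",
        "iiurlwidth": str(width),
    }
    url = API + "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"User-Agent": "thumb-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    for page in data.get("query", {}).get("pages", {}).values():
        for info in page.get("imageinfo", []):
            yield info.get("thumburl")

for thumb in thumb_urls("Coffee"):
    print(thumb)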



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Idea: Cryptographically signed wiki pages.

2015-09-14 Thread Platonides

On 13/09/15 18:20, Purodha Blissenbach wrote:

The idea is that third parties can publish texts, such as theis
statutes, via a open or public wiki, and readers can be sure to read,
download, sign, and mail the originals. Another use would be to have
pledges and petitions signed by many people. Etc. It is not about
WMF-run Wikis.

Purodha



You can already use PGP-armored wikitext if you want to (you may want
to parse it locally, ensure that it doesn't transclude unsigned templates,
etc., but the option is there).
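
As a minimal illustration of the reader's side (assuming GnuPG is
installed and the signer's public key has already been imported; the
file name is made up):

import subprocess

def verify_clearsigned(path):
    # gpg exits with status 0 only if the clearsigned document carries a
    # good signature from a known key.
    result = subprocess.run(["gpg", "--verify", path],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stderr

ok, details = verify_clearsigned("statutes.wikitext.asc")
print("good signature" if ok else "BAD signature")
print(details)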


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Phabricator monthly statistics - 2015-08

2015-09-02 Thread Platonides

On 02/09/15 00:12, Quim Gil wrote:

Wikimedia Phabricator will be soon one year old!

On Tue, Sep 1, 2015 at 2:00 AM,  wrote:



Number of accounts created in (2015-08): 288



Kind of surprised about the fact that we keep having almost ten new
Phabricator users every day, I have created a graph at
https://www.mediawiki.org/wiki/Community_metrics#New_accounts_in_Phabricator

Pretty impressive. There are about 3640 valid users today.


Those accounts could be created for many reasons:
* New developers
* Users wishing to open a MediaWiki bug
* Wikimedians wishing to open a "classic" wikitech bug
* Wikimedians filing tasks for the new Community Tech team
* Even Wikimedia employees who never used Phabricator before

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What happened to our user agent requirements?

2015-09-01 Thread Platonides

Brad Jorsch (Anomie) wrote:

I wonder if it got lost in the move from Squid to Varnish, or something
along those lines.


That's likely, given that it was enforced by Squid.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Improving CAPTCHA friendliness for humans, and increasing CAPTCHA difficulty for bots

2015-08-25 Thread Platonides

Jamison Lofthouse wrote:

The subject sounds exactly like the reCAPTCHA
https://www.google.com/recaptcha/intro/index.html  tagline. Not sure how
beneficial the project would be but I have seen it used. Maybe worth
looking into.
Thanks,
Negative24


I should note that the latest reCAPTCHA¹ is actually *less* friendly.
 ¹ the “please select all images of foobars” version.


I don't think it should be considered the “best solution” (for wikis
where it's suitable to install), nor should we repeat its errors.


A small list:

1) It still makes assumptions about the user, based on:
 1a) language: the user must understand what a foobar is before they
can select them², and those aren't always words in common use, precisely.


 1b) cultural: will the user easily discover all the photos expected to
be coffee if that's not a common beverage in their country?


 ² no, the sample image³ is not enough to discern what they want. At
least not with the expected ease.


 ³ confusing UI, by the way, since the naive assumption would be that you
also had to tick it (which is disabled).




2) confusing images: sometimes it's not clear what is depicted in the
photograph, not even for a human.




3) wrong images: sometimes there are images that are not really foobars
(suppose they are the similar barfoos), and thus *shouldn't* be marked
as such. But according to reCAPTCHA they are. (And your grudging
selection of the barfoo in order to pass the captcha probably means that
Google is performing wrong training, reinforcing its idea that it is
indeed a foobar.)





In terms of difficulty for humans I would score them as:
images reCAPTCHA > original reCAPTCHA > door numbers reCAPTCHA >
nocaptcha reCAPTCHA


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Must the logging table be in chronological order?

2015-04-07 Thread Platonides

On 07/04/15 18:54, Daniel Barrett wrote:

Will anything bad happen if entries in the MediaWiki logging table are not 
inserted in chronological order?

Due to a bug, our logging table has incomplete data. I'd like to insert the 
missing data using a script.
However, the log_id column is auto-increment. This means that when the table is 
ordered by log_id,
the data will not be in chronological order by log_timestamp.

Is that bad in any way?
Or are all applications (like Special:Log) expected to order by log_timestamp 
rather than log_id?

Thanks,
Dan


It's perfectly fine. They should be sorting by log_timestamp.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] E-mail login to wiki - needs feedback

2015-02-22 Thread Platonides

On 19/02/15 16:15, MZMcBride wrote:

It's not a matter of choosing a single, simple user name, per se, it's
choosing a user name on Wikimedia wikis, on Twitter, on Facebook, on
Gmail, on GitHub, and on a million other sites on the Web. Yes, users
should choose memorable user names and secure passwords on each site and
never forget them, but that isn't the world we live in. We dramatically
reduce our barrier to entry by allowing login via e-mail address as users
can typically remember their own e-mail address. Do you disagree?

MediaWiki not only currently disallows login via e-mail address, login is
case-sensitive (e.g., MZ and Mz can be different users). In your
experience, is MediaWiki's current authentication architecture following
common or best practices? I personally think there's a lot of work needed.

MZMcBride


Emails are case-sensitive as well. platonides@gmail is different from
Platonides@gmail and different from PLATONIDES@gmail (for everybody but
Gmail). (cf. T76169, T75818, T85137)



PS: Some people indeed can't remember their own email address.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] E-mail login to wiki - needs feedback

2015-02-22 Thread Platonides

On 20/02/15 00:58, phoebe ayers wrote:

Hi all,

I'm the one who started that bug-now-task a while back, and for
context, it was based directly on user feedback. What MzM says above
is right. I was working with a casual (but quite good) editor who said
to me “well, I'd edit that Wikipedia page, but I don't edit very often
and I can never remember what my login is, since my usual login was
taken. But if I could enter my email address, it would be a lot easier
and I'd be more likely to just do it.”


It looks like it would be enough to provide a “send the forgotten
username to this email address” feature.

Which is bug 13015 [1], fixed in 2011 [2] and AFAIK never enabled.

As it provides a list of usernames, there's no “too many usernames,
which one do I use to log in?” issue.



1- https://phabricator.wikimedia.org/T15015
2- 
http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/CentralAuth/CentralAuth.php?view=log&pathrev=86482



As an aside, I wonder if login-by-email may lead to lower-quality
usernames, which are an important part of your identity in the community.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GPL upgrading to version 3

2015-02-09 Thread Platonides

On 09/02/15 20:37, Tyler Romeo wrote:

This entire conversation is a bit disappointing, mainly because I am a
supporter of the free software movement, and like to believe that users
should have a right to see the source code of software they use.
Obviously not everybody feels this way and not everybody is going to
support the free software movement, but I can assure you I personally
have no plans on contributing to any WMF project that is Apache licensed,
but at the very least MediaWiki core is still GPLv2, even if it makes
things a bit more difficult.

Also, I have no idea how the MPL works, but I can assure you that
licensing under the “GPLv2 or any later version” cannot possibly imply it
is available under both the v2 and v3. The different GPL versions have
conflicting terms.

You cannot possibly use the terms of the v2 and v3 simultaneously. It is
legally impossible. What it means is that you can use the software under
the terms of the v2 *or* the v3. And, as I mentioned, since Apache is
only compatible with v3, as long as using the software under the v2 is an
option, you cannot combine code that is under Apache.

It is *available*. You can use, at your choice, either of them (or any
later version not yet released). Though your options may be reduced if
you combine the work with a different one that is not compatible with
both of them.


Also note we have traditionally held the position of not considering MW
extensions derivative works (and thus allowing them to be licensed under
eg. MIT), which would be arguable.


I wouldn't even be surprised if (supposing we had an AGPL MediaWiki) a
troll came requesting the full LocalSettings.php contents to be
published, DB password included.



I also vote for maintaining the current GPLv2+ license.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Our CAPTCHA is very unfriendly

2014-12-11 Thread Platonides

Max Semenik wrote:

I'm pretty sure most users technical enough to use IRC are able to solve
captchas well.


That the backend is IRC-based doesn't mean they would use an IRC frontend.
We routinely point to web IRC, and plenty of noobs have proven able to
get there
(sometimes even thinking we are $Company since, after all, who else
could you be talking to after following a Contact link on a [[$Company]]
page at wikipedia.org?)



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Our CAPTCHA is very unfriendly

2014-11-09 Thread Platonides

On 09/11/14 06:21, Pine W wrote:

Discussing an option with the community to test replacing registration
CAPTCHAs with an email requirement makes sense to me. I would support a
small, carefully designed test. If someone is motivated to create a
Wikimedia account and they don't want to register an email, they can be
given the option to have someone help them to set up an account via IRC,
Facebook, or other communications methods.


It would be nice to have a link that leads the user to a
#wikipedia-accountcreation channel for getting help creating an account.
There are few ways for an inexperienced user to get help if they fail
at the create account form. It is orthogonal, however, to whether we use
a captcha or not.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Our CAPTCHA is very unfriendly

2014-11-09 Thread Platonides

On 07/11/14 02:52, Jon Harald Søby wrote:

The main concern is obviously that it is really hard to read, but there are
also some other issues, namely that all the fields in the user registration
form (except for the username) are wiped if you enter the CAPTCHA
incorrectly. So when you make a mistake, not only do you have to re-type a
whole new CAPTCHA (where you may make another mistake), you also have to
re-type the password twice *and*  your e-mail address. This takes a long
time, especially if you're not a fast typer (which was the case for the
first group), or if you are on a tablet or phone (which was the case for
some in the second group).


Only the password fields are cleared (in addition to the captcha). It is
debatable whether clearing them is the right thing or not; there must be
some papers talking about that. But I think we could go with keeping them
filled with the user's password.


Another idea I like is to place the captcha on a different page (as
a second step), where we could offer several options (captchas, puzzles,
IRC chat, email…) to confirm the user, and then gather their success rates.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Our CAPTCHA is very unfriendly

2014-11-09 Thread Platonides

On 09/11/14 17:19, Marc A. Pelletier wrote:

On 11/09/2014 10:20 AM, Brian Wolff wrote:

Does anyone have any attack scenario that is remotely plausible which
requiring a verified email would prevent?


Spambots (of which there are multitude, and that hammer any mediawiki
site constantly) have gotten pretty good at bypassing captchas but have
yet to respond properly to email loops (and that's a more complicated
obstacle than first appears; throwaway accounts are cheap but any
process that requires a delay - however small - means that spambot must
now maintain state and interact rather than fire-and-forget).


We have so far talked about spambots, but what about *vandals*?

We have a whole class of users interested in damaging/manipulating our 
projects. Some of them just want to create problems, while others have 
an agenda (eg. SEO). A number of them know how to program (even though 
they would probably not create a neural network to OCR our captcha!)


Removing the captcha also lowers the bar for an account-creation bot;
writing one becomes very easy.


Given that a hundred dormant Wikipedia accounts are valuable, will
$wgAccountCreationThrottle be enough to deter them? Is changing the IP
every six accounts hard enough?


(Actually, you would also need to avoid raising sysop suspicions with the
names you generate, but given the weird names people are already using...)



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] !ask

2014-01-24 Thread Platonides

On 20/01/14 08:26, Petr Bena wrote:

There is one more feature available on wm-bot.

First of all, that thing is smart enough to recognize who is irc
newbie and who is not. It is possible to direct this only to people
who are known to the bot (their cloak is trusted) so that it doesn't
bite the newbies, but rather slap the experienced who keep bad
habits. The bot is able to recognize what is a question if they can
ask and automatically tell the person to ask the question instead of
asking if they can ask, for example:


+1
That was precisely what I was going to propose: that the bot should 
recognise the original question and answer it itself.
I don't think there's anything wrong with having a bot invite you to
answer. And it will be much nicer than having a human tell the bot to
explain to you that you can ask :)


Frankly, the reason to do !ask | foobar is basically to avoid looking up
the appropriate template.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] $wgDBmysql5

2013-12-31 Thread Platonides

On 30/12/13 23:21, Tyler Romeo wrote:

As the subject implies, there is a variable named $wgDBmysql5. I am
inclined to believe that the purpose of this variable is to use features
only available in MySQL 5.0 or later. The specific implementation affects
the encoding in the database.

https://www.mediawiki.org/wiki/Manual:$wgDBmysql5

Right now MediaWiki installation requirements say you need MySQL 5.0.2 or
later. So my question is: is this configuration variable still necessary,
or should it be deprecated?


I think it is still necessary, for schemas that were created without it.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Facebook Open Academy

2013-11-24 Thread Platonides

On 23/11/13 02:03, Marc A. Pelletier wrote:

Interestingly enough, I /do/ have a project I'd be happy to put forward
that's quite in scope if we'd like to touch on something more
system-level than our usual UX fare:  there is a serious hole in
reasonably self-contained distributed cron-like schedulers in the open
source world, and I can see a number of valuable uses for one in Tool
Labs (and probably some spots in prod!).

I've had that idea at the back of my mind for some time, and I was
planning on starting it myself but for the lack of actual free time.
This is a very well delineated project of sufficiently modest scope that
a small team of students may well be able to tackle succesfully in that
kind of timeframe, and it's sufficiently generally useful that it makes
for a worthwhile open source project.

Thoughts?  Is it worth suggesting and putting forward?

-- Marc


Looks very good.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] PHP 5.4 (we wish)

2013-06-21 Thread Platonides

On 21/06/13 12:32, Paul Selitskas wrote:

We should have stayed on PHP4 if this is a trouble. Perfomance may be a
problem (which is not much in 5.4 iirc), syntax flaws may be a problem (and
this one fixes one of the flaws). Yes, it will take us some time to upgrade
MW to 5.4 or 5.5, but deprecations are detected by any contemporary PHP
IDE, and it is a matter of minutes to fix them.


I have been running 5.4 for a long time without problems.
The only relevant difference I can think of is in mb_check_encoding(),
which is restricted to a newer (smaller) Unicode range and which then
causes a difference in StringUtils::isUtf8.

We can change our php implementation and StringUtilsTest to match that,
but then those 5 tests will fail on PHP 5.3.

See bug 43679 - https://bugzilla.wikimedia.org/43679


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] PHP 5.4 (we wish)

2013-06-21 Thread Platonides

Le 21/06/13 21:03, Antoine Musso a écrit:

If you want a playground, we could get both backported packages on the
beta cluster (labs project: deployment-prep).  That might help catch
some potential issues.

I have no idea how we could get different versions in production and
labs, but I am sure someone know :-D


Just the same way you could be using Debian stable plus a few packages
from Debian testing. It's dpkg.




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Just noticed GitBlit

2013-06-06 Thread Platonides

On 06/06/13 07:21, Daniel Friesen wrote:

Side topic, anyone want to voice their bikeshed opinions on their
favorite the different ways of disambiguating a / inside urls for
various types of web UIs to repositories:

- Rejecting slash in repository names /.../mediawiki-core/... (ie:
GitHub :/)
- Urlencoding the slash /.../mediawiki%2Fcore/...
- Appending a .git to the end of the name /.../mediawiki/core.git/...
- Wrapping it in syntax/.../{mediawiki/core}/...
- Escaping it /.../mediawiki\/core/...


Accept the longest substring.
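
That is, match the URL path against the list of known repository names
and pick the longest one that fits. A quick sketch of what I mean (the
repository names are just examples):

def split_repo_path(url_path, known_repos):
    # Return (repo, rest) using the longest known repository name that is
    # a prefix of the path; repository names may themselves contain "/".
    candidates = [r for r in known_repos
                  if url_path == r or url_path.startswith(r + "/")]
    if not candidates:
        return None
    repo = max(candidates, key=len)
    rest = url_path[len(repo):].lstrip("/")
    return repo, rest

repos = {"mediawiki", "mediawiki/core", "mediawiki/extensions/Cite"}
print(split_repo_path("mediawiki/core/blob/master/index.php", repos))
# ('mediawiki/core', 'blob/master/index.php')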


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Welcome to Ken Snider, Wikimedia Operations

2013-06-06 Thread Platonides

On 06/06/13 18:17, Erik Moeller wrote:

I want to again take this opportunity to thank CT Woo for his tireless
operations leadership since December 2010. I’d also like to thank
everyone who’s participated in the Director of TechOps search process.


Since Dec 2010. Time flies!

Welcome Ken!


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Separation of Concerns

2013-06-05 Thread Platonides

On 05/06/13 15:42, Brad Jorsch wrote:

There's nothing wrong with having a large list of fine-grained rights to
grant as long as you format them properly for the user.


In other words, implement another rights-grouping system just as
complicated and less clear than the approach currently proposed.


You seem to prefer a new set of user groups. But that doesn't allow
restricting a grant to as few permissions as possible. And I'm
not only considering general-purpose apps, but also bots, whose
credentials (token) may not be kept in the safest place.
It should be possible to restrict a program to just reading deleted
revisions, instead of granting a generic “act as a sysop” scope that is
also able to read blocks/abusefilters or restore them. If a program only
imports Flickr images, it doesn't need reupload or reupload-own.
Hey, even restricting a token to editing one specific page would be
useful for many bots (OK, we don't need to support _that much_).


Also, having a “foo” scope different from the “foo” right just creates confusion.



By the way, did you notice that the Granularity of Permissions table can
be the same in both cases, and the only difference is whether the apps
should ask for the scope (shown as-is to the user, with the wiki
converting it to rights) or for the user rights (with the wiki presenting
them as scopes to the user)?
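
Something along these lines is all I mean (a toy sketch; the scope names
are made up for illustration, and the mapping would live in the wiki's
configuration rather than in code):

SCOPE_TO_RIGHTS = {
    "edit-pages":       {"read", "edit"},
    "upload-new-files": {"read", "upload"},
    "view-deleted":     {"read", "deletedhistory", "deletedtext"},
}

def rights_for(scopes):
    # Expand the scopes granted to a consumer into the rights the API
    # actually checks on each request.
    rights = set()
    for scope in scopes:
        rights |= SCOPE_TO_RIGHTS.get(scope, set())
    return rights

print(rights_for(["upload-new-files"]))  # {'read', 'upload'}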



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tomorrow (June 6th): The start of one-week deploy cycle!

2013-06-05 Thread Platonides

On 05/06/13 20:07, Greg Grossmeier wrote:

Hi all!

Tomorrow we start the one-week deploy cycle here at WMF!

You can see the general overview of the cycle here:
https://wikitech.wikimedia.org/wiki/Deployments/One_week

And the specific roadmap/plan for MediaWiki (with version numbers/dates)
here:
https://www.mediawiki.org/wiki/MediaWiki_1.22/Roadmap#Schedule_for_the_deployments

Let me know if you have any questions!

Greg


Nice!



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Separation of Concerns

2013-06-04 Thread Platonides

On 05/06/13 01:17, Tyler Romeo wrote:

By saying you can only use OAuth if you're open source, it's the same as
saying if you're closed source you must use insecure authentication
methods. Because just saying OAuth must be open source isn't going to stop
closed source developers.


Yes, of course. It makes no sense. I changed it to a _should_ in the 
wiki page.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Separation of Concerns

2013-06-04 Thread Platonides

On 05/06/13 02:37, Tyler Romeo wrote:

On Tue, Jun 4, 2013 at 8:35 PM, Chris Steipp cste...@wikimedia.org wrote:


We initially were going to use your patch and limit based on module,
 but there were a few places where that seemed too coarse. But then if
we just used user rights, then to edit a page the user needed to grant
8 (iirc) permissions.



Maybe I'm missing something, but how does editing a page require 8
permissions. Shouldn't you just need edit?


You also need read, but that could be an implied permission whenever any
of the others is granted.


Chris, I would use the real permissions in the API. For the user
interface, they can be summarised by the user groups (as defined in the
wiki), with an advanced option if you want the details.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Brian Wolff's summer gig, with Wikimedia!

2013-06-03 Thread Platonides
Rob Lanphier wrote:
 Hi everyone,
 
 Many of you already know Brian Wolff, who has been a steady
 contributor to MediaWiki in the past several years (User:Bawolff),
 having gotten a start during Google Summer of Code 2010[1].
 
 Brian is back for another summer working with us, working generally to
 improve our multimedia contribution and review pipeline.  In addition
 to his normal GMail address, he's also available at
 bawo...@wikimedia.org, and is on Freenode as bawolff.
 
 Welcome Brian! (again! \o/)
 
 Rob

I guess this simply means that WMF has contracted Bawolff. If it weren't
for the email bit, I would have thought of a GSoC-like program.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Replacement for tagging in Gerrit

2013-05-07 Thread Platonides
On 06/05/13 18:12, Guillaume Paumier wrote:
 Hi,
 
 On Sun, Mar 10, 2013 at 2:11 AM, Rob Lanphier ro...@wikimedia.org wrote:

 Short version: This mail is fishing for feedback on proposed work on
 Gerrit-Bugzilla integration to replace code review tags.
 
 I was wondering: has a decision been made regarding this? I'm resuming
 work on (notably) identifying/marking noteworthy changes, and I'm
 interested to know if the tagging system is something that we could
 possibly take advantage of for this (and if so, what a rough timeline
 would be :).
 
 --
 Guillaume Paumier

IMHO we should go the git notes route, and at the same time push that
format upstream, so that when it does get integrated into Gerrit, it
saves the data in the same way (they are also trying to store as
many things as possible in git, so it's just a matter of agreeing on the
notes format).


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GSoC mentors: selection process

2013-05-07 Thread Platonides
 On 05/06/2013 01:12 PM, Siebrand Mazeland (WMF) wrote:
 I would like to provide some feedback, too: The whole process of GSoC
 was very confusing to me. Students communicated on melange,
 mediawiki.org http://mediawiki.org, and mailing lists. Some also
 emailed me and others privately. This scattered communication made me
 feel I was not able to properly inform myself of the feedback cycles a
 proposal went through. Melange not having any capabilities to show
 differences between versions of proposals, does not help - that's
 unfortunately not something we can directly influence. I hope that the
 number of communication platforms for GSoC communication and
 documentation can be reduced in the next iteration, to make the process
 easier to follow to those that are supposed to comment on, evaluate and
 rank the proposals.

You may be interested in this script
 http://people.freedesktop.org/~cbosdo/melange-mails-to-git

It converts melange emails to git, so that should be able to give you a
history of the proposal (I haven't tried it). If you miss something on
it, try dropping a line to Cedric.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Extensions History in enwiki

2013-05-04 Thread Platonides
On 04/05/13 20:57, Krinkle wrote:
 PS: The public mediawiki-config.git only dates back to 24 Feb 2012. Before 
 that date the information was in a non-public svn repository inside the 
 wmf-production cluster.

Rather than using mediawiki-config, you may have more luck using
 http://web.archive.org/web/*/http://en.wikipedia.org/wiki/Special:Version



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GSoC Project Idea

2013-04-29 Thread Platonides
On 26/04/13 22:23, Kiran Mathew Koshy wrote:
 Hi guys,
 
 I have an own idea  for my GSoC project that I'd like to share with you.
 Its not a perfect one, so please forgive any mistakes.
 
 The project is related to the existing GSoC project *Incremental Data dumps
 * , but is in no way a replacement for it.
 
 
 *Offline Wikipedia*
 
 For a long time, a lot of offline solutions for Wikipedia have sprung up on
 the internet. All of these have been unofficial solutions, and  have
 limitations. A major problem is the* increasing size of  the data dumps*,
 and the problem of *updating the local content. *
 
 Consider the situation in a place where internet is costly/
 unavailable.(For the purpose of discussion, lets consider a school in a 3rd
 world country.) Internet speeds are extremely slow, and accessing Wikipedia
 directly from the web is out of the question.
 Such a school would greatly benefit from an instance of Wikipedia on  a
 local server. Now up to here, the school can use any of the freely
 available offline Wikipedia solutions to make a local instance. The problem
 arises when the database in the local instance becomes obsolete. The client
 is then required to download an entire new dump(approx. 10 GB in size) and
 load it into the database.
 Another problem that arises is that most 3rd part programs *do not allow
 network access*, and a new instance of the database is required(approx. 40
 GB) on each installation.For instance, in a school with around 50 desktops,
 each desktop would require a 40 GB  database. Plus, *updating* them becomes
 even more difficult.

Well, some programs allow network access, and even if not, the school
should download once, and distribute from there to the desktops, not
downloading once per installation. But I agree having a copy on each
computer could be problematic.


 So here's my *idea*:
 Modify the existing MediaWiki software and to add a few PHP/Python scripts
 which will automatically update the database and will run in the
 background.(Details on how the update is done is described later).
 Initially, the MediaWiki(modified) will take an XML dump/ SQL dump (SQL
 dump preferred) as input and will create the local instance of Wikipedia.
 Later on, the updates will be added to the database automatically by the
 script.

Actually, you only need to add some scripts, not to modify mediawiki :)

 The installation process is extremely easy, it just requires a server
 package like XAMPP and the MediaWiki bundle.



 Process of updating:
 
 There will be two methods of updating the server. Both will be implemented
 into the MediaWiki bundle. Method 2 requires the functionality of
 incremental data dumps, so it can be completed only after the functionality
 is available. Perhaps I can collaborate with the student selected for
 incremental data dumps.
 
 Method 1: (online update) A list of all pages are made and published by
 Wikipedia. This can be in an XML format. The only information  in the XML
 file will be the page IDs and the last-touched date. This file will be
 downloaded by the MediaWiki bundle, and the page IDs will be compared with
 the pages of the existing local database.

This is available in page.sql.gz


 case 1: A new page ID in XML file: denotes a new page added.
 case 2: A page which is present in the local database is not among the page
 IDs- denotes a deleted page.
 case 3: A page in the local database has a different 'last touched'
  compared to the one in the local database- denotes an edited page.
(here you would compare the revision id)


 In each case, the change is made in the local database and if the new page
 data is required, the data is obtained using MediaWiki API.
 These offline instances of Wikipedia will be only used in cases where the
 internet speeds are very low, so they *won't cause much load on the servers*
 .
 
 method 2: (offline update): (Requires the functionality of the existing
 project Incremental data dumps):
In this case, the incremental data dumps are downloaded by the
 user(admin) and fed to the MediaWiki installation the same way the original
 dump is fed(as a normal file), and the corresponding changes are made by
 the bundle. Since I'm not aware of the XML format used in incremental
 updates, I cannot describe it now.
 
 Advantages : An offline solution can be provided for regions where internet
 access is a scarce resource. this would greatly benefit developing nations
 , and would help in making the world's information more free and openly
 available to everyone.
 
 All comments are welcome !

Some work on improving the import scripts would be welcome, although I
wonder if what you propose would be big enough for GSoC.
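
Just to illustrate the comparison step of method 1 (a toy sketch: it
assumes you already have "page id → latest revision id" maps for both the
local wiki and the dump, e.g. built from page.sql.gz):

def diff_page_lists(local_pages, remote_pages):
    # Both arguments map page_id -> latest revision id (comparing revision
    # ids rather than "last touched", as suggested above).
    new_ids = remote_pages.keys() - local_pages.keys()        # pages to fetch
    deleted_ids = local_pages.keys() - remote_pages.keys()    # pages to drop
    changed_ids = {pid for pid in local_pages.keys() & remote_pages.keys()
                   if local_pages[pid] != remote_pages[pid]}  # pages to re-fetch
    return new_ids, deleted_ids, changed_ids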

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Heads up: small new feature in ConfirmEdit

2013-04-19 Thread Platonides
On 19/04/13 00:21, Steven Walling wrote:
 Hi all,
 
 This is a heads up that we've added a small new feature which hopefully
 will make things less painful for users across the projects: the ability to
 refresh the CAPTCHA you're presented without refreshing the entire page. It
 should work everywhere ConfirmEdit can throw the image CAPTCHA at someone:
 account creation, login, the edit form, etc. (It won't modify the simple
 math CAPTCHA, and so on.)
 
 The original enhancement request for this (
 https://bugzilla.wikimedia.org/show_bug.cgi?id=14230) goes back to 2008. A
 patch was submitted back in January by lalei:
 https://gerrit.wikimedia.org/r/#/c/44376/
 
 If you want to test this out yourself before it's deployed, you can use
 http://toro.wmflabs.org/wiki/Main_Page

Great!
Although, is “Refresh” the best term for the UI?


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Project Idea for GSoC 2013 - Bayesian Spam Filter

2013-04-15 Thread Platonides
On 14/04/13 15:41, anubhav agarwal wrote:
 I don't think we could take the rollback into account for automated
 learning. It is not necessarily true that an edit that was rolled back
 was rolled back because it was spam.

Getting the right data to train from is hard, since the wiki is so flexible.
The good point of rollback is that a) it's easy to detect, b) it's
restricted (a random user can't use it) and c) on some wikis policy
restricts its use to “clearly bad edits”.

So you _should_ be training with unwanted edits. But there will be
false positives.
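
To give an idea of what the Bayesian part could look like, a toy naive
Bayes token model (purely illustrative, not the proposed extension's
design):

import math
from collections import Counter

class ToySpamModel:
    # Train on edits labelled spam/ham (e.g. rollbacked vs. surviving
    # edits) and score new edit text by summed log odds.
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        tokens = text.lower().split()
        self.counts[label].update(tokens)
        self.totals[label] += len(tokens)

    def spam_log_odds(self, text):
        score = 0.0
        for tok in text.lower().split():
            p_spam = (self.counts["spam"][tok] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][tok] + 1) / (self.totals["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score  # > 0 leans spam, < 0 leans ham

model = ToySpamModel()
model.train("buy cheap pills online", "spam")
model.train("fixed a typo in the history section", "ham")
print(model.spam_log_odds("cheap pills online"))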



 Though a “Train as spam” checkbox is a good idea. I was thinking about a
 “report spam” button along with the edit button in the top-right corner
 of a section.

However, that only tells you that somewhere in the page there is spam,
not what the spam is (the last revision? an edit from 2 months ago?), nor
does it encourage fixing it.


 I was thinking of creating a Job Queue for big websites like Wikipedia;
 each edit will go in a queue which will be processed offline and then later
 rolled back to the original content if it triggers the alarm.

I'm not a big fan of this. You will have edit conflicts to handle, and
it looks messy to have reverts made by an extension. I recommend you work
on the Bayesian detection of spam, and leave for later the potential
refactoring to make it work through the job queue.

I think I could look in the archives of deleted pages from the WM-ES
wiki for spam data for you.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikivoyage and Universal Language Selector

2013-04-15 Thread Platonides
On 15/04/13 21:40, ro...@rogerchrisman.com wrote:
 Hi,
 
 What is the best way to install, modify and configure the Universal
 Language Selector extension so that I can see how it might work with
 Wikivoyage?
 
 I created an account at
 https://wikitech.wikimedia.org/wiki/User:Rogerhc but it is not clear
 to me what my next step should be towards above stated objective.
 Could someone with clue please nudge me in the right direction?
 
 Thanks!
 
 Roger
 http://meta.wikimedia.org/wiki/User:Rogerhc

Request shell access.
Then you should either join a project like deployment-prep or replicate
the Wikivoyage config in a new one.

See https://wikitech.wikimedia.org/wiki/Help:Getting_Started


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Project Idea for GSoC 2013 - Bayesian Spam Filter

2013-04-12 Thread Platonides
On 09/04/13 18:20, Quim Gil wrote:
 Hi Anubhav,
 
 I have done a first reality check with Chris Steipp, who oversees the
 area of security and also spam prevention. Your idea is interesting and
 it seems to be feasible. This is a very good first step!
 
 It would require adding a hook to MediaWiki core, but this could be a
 small, acceptable change.
I agree. Adding a hook is no problem.

 The rest could be developed as an extension of
 the ConfirmEdit extension.

I'm not sure about adding it to ConfirmEdit. I would develop it as an
independent extension, which could then hook into ConfirmEdit or
AbuseFilter.

Anubhav wrote:
 Tasks
 
 Create a tool for wiki users to report spam. A simple way to
 train the Bayesian DB. This should be accessible for any user
 with the permissions to undo or rollback those changes or to
 delete the new page/file. Understanding the metadata (IP, links,
 user) I can extract from the data (perhaps harnessing other
 services like blacklists).

I think it would be more interesting if it could be trained
automatically, perhaps by automatically learning rollbacks as wrong.
Maybe there could be a “train as spam” checkbox when doing a revert,
but I would avoid anything complex like “go to Special:TrainSpam and
enter the revision number to mark as spam”.

Good luck!


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] User signature time wrapped in a span

2013-04-04 Thread Platonides
Yes, you would have to change it in Parser.php; that point would be the
appropriate one. However, given the large number of already-posted
timestamps (and that some people may not want the spans in the wiki
source), why not simply use a regex to replace the dates in the page?

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Cannot run the maintenance script

2013-03-26 Thread Platonides
On 25/03/13 23:19, Rahul Maliakkal wrote:
 I installed SMW extension in my local wiki yesterday and now when i visit a
 page in my local wiki i get this message A database query syntax error has
 occurred. This may indicate a bug in the software. The last attempted
 database query was:
 
 (SQL query hidden)
 
 from within function ShortUrlUtils::encodeTitle. Database returned
 error 1146:
 Table 'my_wiki.w1_shorturls' doesn't exist (127.0.0.1)
 
 Along with the page being displayed untidily.
 
 So i tried to fix the problem ,as suggested by people i tried to run php
 update.php
 Then i got the following error message
 
 A copy of your installation's LocalSettings.php
 must exist and be readable in the source directory.
 Use --conf to specify it.
 
 I have my LocalSettings.php in the same place where my default index.php is
 located,earlier i had made some logo changes to my wiki and they were
 succesfully reflected in my wiki,so the localhost has access to the
 LocalSettings.php
 
 I am working on Ubuntu and have mediawiki 1.20 installed
 
 Please Help!!Its Urgent
 
 Thanks In Advance

That's very odd. Perhaps you are running the script as a different user
which doesn't have read access? Is your file printed if, from the folder
where you run php update.php, you run cat ../LocalSettings.php?


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Moving a GitHub Pull Request to Gerrit Changeset manually

2013-03-26 Thread Platonides
This should work:

WIKIMEDIA_REPOS=/path/where/you/have/your/clones
REPO=$1 # qa/browsertests
PULL=$2 # https://github.com/brainwane/qa-browsertests.git

TEMP=`mktemp --tmpdir -d pull-request.XXX`
git clone --reference=$WIKIMEDIA_REPOS/$REPO  $PULL $TEMP
cd $TEMP

if [ ! -f .gitreview ]; then
cat > .gitreview <<EOF
[gerrit]
host=gerrit.wikimedia.org
port=29418
project=$REPO.git
defaultbranch=master
defaultrebase=0
EOF
fi

git-review -R

rm -rf $TEMP


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Who is responsible for communicating changes in MediaWiki to WMF sites?

2013-03-26 Thread Platonides
On 25/03/13 23:35, Greg Grossmeier wrote:
 Thanks for the link, but the reason I brought it up is because my first
 week here I saw a removal of a function without an explicit @deprecated
 warning.
 
 :-)
 
 Greg

Is it possible that it was a recently-introduced function that hadn't
been published on any release yet?


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Who is responsible for communicating changes in MediaWiki to WMF sites?

2013-03-25 Thread Platonides
On 25/03/13 18:39, Greg Grossmeier wrote:
  * Deprecations - SELF-TODO: We don't have any guarantee, that I can see,
that we deprecate for X releases before we remove

Not exactly a guarantee, but the general rule we use is to keep
deprecated for a couple releases before removing.
It's briefly explained at http://www.mediawiki.org/wiki/Deprecation


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC How/whether MediaWiki could use ZendOptimizuerPlus -- ZendOptimizerPlus, opcode cache, PHP 5.4, APC, memcache ???

2013-03-22 Thread Platonides
Trying to clarify:

APC can do two things:
1) Keep the compiled PHP opcodes, so PHP execution is faster.
2) Allow the application to store values in the web server's memory (kept
across requests).

ZendOptimizer only does 1.

MediaWiki only needs to be changed for 2, since 1 is done automatically
by any PHP opcode cache.

You can't use 2 once you have several servers. An alternative for 2 is
to use memcached or another cache that can be accessed from
multiple servers.

The «APC is a must-have for larger MediaWikis» advice is due to 1. In
fact, Wikimedia is not using APC for 2, but memcached.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] CAPTCHA

2013-03-22 Thread Platonides
On 21/03/13 08:05, Federico Leva (Nemo) wrote:
 Restrictive wikis for captchas are only a handful (plus pt.wiki which is
 in permanent emergency mode).
 https://meta.wikimedia.org/wiki/Newly_registered_user
 For them you could request confirmed flag at
 https://meta.wikimedia.org/wiki/SRP
 Personally I found it easier to do the required 10, 50 or whatever edits
 on a userpage. 5 min at most and you're done.
 
 Nemo

Their problem is likely that their accounts are new, not that those
wikis additionally require a minimum number of edits (only a handful of
wikis have that).


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Who is responsible for communicating changes in MediaWiki to WMF sites?

2013-03-21 Thread Platonides
Is sending an email to wikitech-ambassadors enough for unblocking it?

Although such an email should contain a timeframe expectation, which probably
only WMF can give.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reminder about the best way to link to bugs in commits

2013-03-20 Thread Platonides
On 20/03/13 11:59, Niklas Laxström wrote:
 1) Why is Bug:43778 different from bug:43778 when searching?
 
 2) Can we do the same for all things in the footer? I tried it but
 bug seems to be a special case and nothing else works.

The recognized footer keys are defined in the Gerrit config.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] open positions at WMF

2013-03-19 Thread Platonides
On 19/03/13 00:54, Sumana Harihareswara wrote:
 Thomas, thank you for writing to us and mentioning that you're available
 for work!  I'd love for you to take a look at the Wikimedia Foundation's
 job openings.  For most of them, telecommuting is fine.  

That's a bit misleading, as only a couple of them offer a remote
position (out of 19)


 Oh, and I noticed that you have some OTRS expertise -- could you maybe
 check out https://bugzilla.wikimedia.org/show_bug.cgi?id=22622 and let
 us know if you have some free time to volunteer your help? :-)

Is it really something where volunteers can help? I thought it wasn't
possible (private mail concerns blocking volunteer action).


BTW, why is WMF looking for a WordPress Developer? I think that if we
outgrew the current blog, the way to go would be to mediawikize it, not
to make something new still based in WP.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Default policy for third-party cookies in Firefox

2013-03-19 Thread Platonides
On 19/03/13 14:38, Seb35 wrote:
 Hello,
 
 According to [1] and [2], Firefox 22 (release June 25, 2013) will change
 the default third-party cookie policy: a third-party cookie will be
 authorized only if there is already a cookie set on the third-party
 website.
 
 This would break most of the automatic login on sister projects on
 Wikimedia websites, since the page just after the log in will no more
 set cookies of sister projects, and you will have to manually log in to
 each domain (of level wikipedia.org, not of level de.wikipedia.org) -- I
 tested with Firefox 16.
 
 What could be done to mitigate this effect? (...)
 
 [1] http://webpolicy.org/2013/02/22/the-new-firefox-cookie-policy/
 [2]
 https://developer.mozilla.org/en-US/docs/Site_Compatibility_for_Firefox_22
 
 ~ Seb35

Copying Jonathan Mayer.
Background information: When you log into eg. en.wikipedia.org, you get
cookies logging you into not only *.wikipedia.org but also
*.wiktionary.org, *.wikibooks.org,
commons.wikimedia.org, etc.

Obviously, that uses third-party cookies.

Firefox 22 will block our single login (unless you are already logged in
on the other project, in which case you would already
have cookies there).
If it can't be corrected, we will have to publicise this fact quite
well, as I expect many complaints of “unified login doesn't work”.


Jonathan, do you have any suggestion?

An idea to fix it would be to take advantage of the new certificate
which includes all projects, by having Firefox detect that the
‘third-party site’ belongs to the same entity, since they share the HTTPS
certificate (we would need to enable HTTPS for all logins, but that was
planned anyway).

Regards

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Default policy for third-party cookies in Firefox

2013-03-19 Thread Platonides
On 19/03/13 17:41, Chris Steipp wrote:
 On Tue, Mar 19, 2013 at 8:57 AM, Brion Vibber br...@pobox.com wrote:
 On Tue, Mar 19, 2013 at 7:52 AM, Platonides platoni...@gmail.com wrote:
 An idea to fix it would be to take advantage of the new certificate
 which includes all projects, by having firefox detect that the
 ‘third-party site’ belong to the same entity, since they share the https
 certificate (we would need to enable https to all logins, but that was
 planned, anyway).

 I'm pretty sure Firefox won't detect this condition; the security
 model is based on domains, not SSL certificates.
 
 I hadn't heard of this technique to get around the issue, but if there
 is an exception for it, we're already doing this in our certs, so it
 would already be fixed.

It was an idea I *made up* that Firefox *could* implement to detect that
the two domains are owned by the same entity, and so relax the «ignore
third-party cookies» rules.
It scales quite well to other types of login cookies (eg. flickr.com and
yahoo.com) but doesn't open a hole for advertising companies (eg.
example.com and google-analytics.com).



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Default policy for third-party cookies in Firefox

2013-03-19 Thread Platonides
On 19/03/13 19:21, Jon Robson wrote:
 Chris: On the latest iPhone cookies were not accepted from iframes
 from sites that were not visited. You had to physically visit the site
 by following a link or typing the url into the address bar first. We
 are currently investigating whether meta refresh etc can help here -
 although that's not ideal. For our projects that would result in over
 13 redirects - a horrible user experience!!
 
 Correct me if I'm wrong but the 2 problems that CentralAuth solves are
 1) Takes away the inconvenience of having to login across multiple sites
Yes.

Typical use case: you logged in to Wikipedia, but then go to Wikimedia
Commons to upload a photo → no need to log in again (this is also
problematic for newbies, as it's counterintuitive).


 2) Allows communication between wiki sites via CORS that require 
 authentication.
We aren't using CORS right now.


 I'm guessing openid / oauth will solve #1 ?
Not really. That could solve the one password for all sites problem,
but as that's done at server level, that would still work. It won't fix
the you are logged in, then you opened another page [from a different
project] and you aren't.



 An idea I've banded around to solve #2 would be to allow wikis to
 access other projects via the api.
 
 e.g.
 http://en.wikipedia.org/w/api.php?action=query&titles=Photo&project=commons
 would allow a developer to access the page Photos on
 commons.wikimedia.org rather than having to resort to a CORS request
 (ie. it would route the query to the database for commons rather than
 wikipedia)
 
 For api requests that require credentials it would send the
 credentials of the current project (in this case wikipedia).
 
 Is that something that is feasible?

We had that in query.php and moved away from it. Feasible, but not going
to happen.


 (FWIW I actually dislike that CentralAuth currently logs me into
 various projects that I never use such as Wikiversity...)

But perhaps you do use meta.wikimedia and Wikipedia.

Although some preference for which sites you want to be logged in to
could help to control the proliferation of sites there.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Detect running from maintenance script in a parser function extension

2013-03-12 Thread Platonides
On 12/03/13 18:47, Toni Hermoso Pulido wrote:
 Hello,
 
 I'm checking whether I can detect that a process is run from a
 maintenance script in a parser function extension.
 
 Which would be the best way / more recommendable to detect it?
 
 Thanks!

Why do you want to do it? It is probably a bad idea.




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Replacement for tagging in Gerrit

2013-03-12 Thread Platonides
Doing the tags in Gerrit is the right thing.
Who can change them is not a problem; just make a change-tags log. (BTW,
you would have the same problem in Bugzilla with people removing that
superimportant tag.)


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Nightly shallow clones of mediawiki/core

2013-03-11 Thread Platonides
On 11/03/13 00:17, Yuri Astrakhan wrote:
 Answered Inline. Also, I apologize as I think my email was slightly
 off-topic to Ori's question.
 
 On Sun, Mar 10, 2013 at 6:57 PM, Matthew Flaschen
 mflasc...@wikimedia.org wrote:
 
 PHPMyAdmin also has major security issues.  It isn't allowed on
  Wikimedia Labs and probably shouldn't be used here.  Why does SQLite
 need to be installed exactly?

 
 Matthew, we are talking about a developer's virtual machine that has no
 network connection to anything except the developer's machine itself, and
 used purely for development. MySql could be accessed through the network if
 I have the local tools, but if we are talking about the lowest barrier of
 entry, the novice could follow a few steps to get VM and edit code with
 anything including notepad, while the VM would have most of the tools to
 examine and experiment with MediaWiki. BTW, samba would also be a massive
 security hole - I set mine up to share / with no password root access (no
 network, no issues)

While popular, I don't think PHPMyAdmin is the best tool for managing
the tables, either.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Category sorting in random order

2013-03-11 Thread Platonides
On 12/03/13 00:05, Paul Selitskas wrote:
 Git review, of course. The log is here: http://pastebin.com/iC4N1am0

Everyone can send patches to all repositories. If you get an error doing
that, it's not that you are denied; it's some configuration problem.

In this case, it looks like you didn't have an SSH agent running. Did you
have ssh-agent or Pageant running, with your SSH key loaded?


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Announcing the Wikimedia technical search tool

2013-03-10 Thread Platonides
On 10/03/13 15:50, Waldir Pimenta wrote:
 The motivation for the tool came from a post by Niklas [1], specifically
 the section Coping with the proliferation of tools within your community.
 In the comments section, Nemo announced his initiative to create a custom
 google search to fit at least some of the requirements presented in that
 section, and I've offered to help him tweak it further. The URL list is
 still incomplete and can be customized by editing the page
 http://www.mediawiki.org/wiki/Wikimedia_technical_search (syncing with the
 actual engine still will have to happen by hand, but should be quick).

I'm not convinced about [[en:MediaWiki_talk:*]] and
[[en:Template_talk:*]]; they can bring quite a bit of noise (similarly
for [[en:Wikipedia:Village_pump_(technical)]]). I see how interesting
discussions could be happening there, though.



 Besides feedback on whether the engine works as you'd expect, I would like
 to start some discussion about the ability for Google's bots to crawl some
 of the resources that are currently included in the URL filters, but return
 no results. For example, the IRC logs at bots.wmflabs.org/~wm-bot/logs/.
 Some workarounds are used (e.g. using github for code search since gitweb
 isn't crawlable) but that isn't possible for all resources. What can we do
 to improve the situation?
Do we really want Google to index them?



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Nightly shallow clones of mediawiki/core

2013-03-10 Thread Platonides
On 10/03/13 22:39, Chad wrote:
 Hi,
 
 I've been thinking about this for the last week or so because it's becoming
 incredibly clear to me that core isn't scaling. It's already taking up over
 4GB on the Gerrit box, and this is the primary reason core operations are
 slow.

4GB??
My not especially well-packed clone takes 191M (and that includes all the
change references), plus another 74 MB for the checkout.


 Core would have to be read-only for about an hour or two.
Having core read-only occasionally should not be a problem with Jenkins
collaboration. Instead of merging, make it say “this change has been
approved and is waiting for the merge epoch”, and have it actually
merge everything when it reopens.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Bug 1542 - Log spam blacklist hits

2013-03-09 Thread Platonides
Chris Steipp wrote:
 csteipp. Feel free to ping me whenever.

And platonides :)



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mediawiki's access points and mw-config

2013-03-09 Thread Platonides
On 09/03/13 15:47, Waldir Pimenta wrote:
 So mw-config can't be deleted after all? Or you mean the installer at
 includes/installer?
 Is you mean the former, then how about run-installer instead of my
 previous proposal of first-run?
 Any of these would be clearer than mw-config, imo.
 
 --Waldir

You can delete it, but then you can't use it to upgrade the wiki (or
rather, you would need to copy it again from the new tree).


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Seemingly proprietary Javascript

2013-03-07 Thread Platonides
On 06/03/13 16:28, Jay Ashworth wrote:
 To “convey” a work means any kind of propagation that enables other
 parties to make or receive copies. Mere interaction with a user
 through a computer network, with no transfer of a copy, is not
 conveying.

 As javascript is executed in the client, it probably is.
 
 Perhaps.  But HTML is also executed in the client, and some legal
 decisions have gone each way on whether the mere viewing of a page 
 constitutes copying in violation of copyright (the trend is towards
 no, thankfully. :-)
 
 Cheers,
 -- jra

Interesting. Although HTML is presentational, while JS is executable.

I wouldn't consider most of our JavaScript significant (even though
we have plenty of usages considered non-trivial by [1]), since it is
highly based on MediaWiki classes and ids. However, we also have some
big JavaScript programs (WikiEditor, VisualEditor...)

@Alexander: I would consider something like
 <script
 src="//bits.wikimedia.org/www.mediawiki.org/load.php?debug=false&amp;lang=en&amp;modules=jquery%2Cmediawiki%2CSpinner%7Cjquery.triggerQueueCallback%2CloadingSpinner%2CmwEmbedUtil%7Cmw.MwEmbedSupport&amp;only=scripts&amp;skin=vector&amp;version=20130304T183632Z"
 license="//bits.wikimedia.org/www.mediawiki.org/load.php?debug=false&amp;lang=en&amp;modules=jquery%2Cmediawiki%2CSpinner%7Cjquery.triggerQueueCallback%2CloadingSpinner%2CmwEmbedUtil%7Cmw.MwEmbedSupport&amp;only=scripts&amp;skin=vector&amp;version=20130304T183632Z&amp;mode=license"></script>

with license attribute pointing to a JavaScript License Web Labels page
for that script (yes, that would have to go up to whatwg).

Another, easier, option would be that LibreJS detected the debug=false
in the url and changed it to debug=true, expecting to find the license
information there.
It's also a natural change for people intending to reuse such
javascript, even if they were unaware of such convention.

@Chad: We use free licenses since we care about the freedom of our code
to be reused, but if the license is not appropriate to what we really
intend, or, even worse, places such a burden that even we aren't
properly presenting it, that's something well worth discussing.
Up to the point where we could end up relicensing the code to better
reflect our intention, as was done from GFDL to CC-BY-SA with
Wikipedia content.


1- http://www.gnu.org/philosophy/javascript-trap.html


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Bug 1542 - Log spam blacklist hits

2013-03-07 Thread Platonides
On 07/03/13 21:03, anubhav agarwal wrote:
 Hey Chris
 
 I was exploring the SpamBlacklist extension. I have some doubts; hope you could
 clear them.
 
 Is there any place I can get documentation of
 Class SpamBlacklist in the file SpamBlacklist_body.php. ?
 
 In function filter what does the following variables represent ?
 
 $title
Title object (includes/Title.php). This is the page where the edit is being saved.

 $text
Text being saved in the page/section

 $section
Name of the section or ''

 $editpage
EditPage object if EditFilterMerged was called, null otherwise

 $out

A ParserOutput object (actually, this variable name was a bad choice; it
looks like an OutputPage); see includes/parser/ParserOutput.php


 I have understood the following things from the code, please correct me if
 I am wrong. It extracts the edited text, and parse it to find the links.

Actually, it uses the fact that the parser will have processed the
links, so in most cases it just obtains that information.


 It then replaces the links which match the whitelist regex, 
This doesn't make sense as you explain it. It builds a list of links,
and replaces whitelisted ones with '', i.e. removes whitelisted links
from the list.

 and then checks if there are some links that match the blacklist regex.
Yes

 If the check is greater you return the content matched. 

Right, $check will be non-zero if the links matched the blacklist.
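In outline, the flow described above is roughly this (a simplified sketch,
not the actual SpamBlacklist code; $whitelistRegex and $blacklistRegex stand
in for the regexes built from the whitelist/blacklist pages):

 // Links the parser already collected, one per line.
 $links = implode( "\n", array_keys( $out->getExternalLinks() ) );
 // Drop whitelisted links, then test what remains against the blacklist.
 $links = preg_replace( $whitelistRegex, '', $links );
 $check = preg_match( $blacklistRegex, $links, $matches ); // non-zero on a hit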

 it already enters in the debuglog if it finds a match

Yes, but that is a private log.
Bug 1542 talks about making that accessible in the wiki.


 I guess the bug aims at creating a sql table.
 I was thinking of the following fields to log.
 Title, Text, User, URLs, IP. I don't understand why you denied it.

Because we don't like to publish the IPs *in the wiki*.

I think the approach should be to log matches using the AbuseFilter
extension if that one is loaded.
I concur that it seems too hard to begin with.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Seemingly proprietary Javascript

2013-03-06 Thread Platonides
On 05/03/13 21:55, Matthew Flaschen wrote:
 If it does turn out we legally *need* more license
 preservation/disclosure, we should add more license preservation.
 
 Getting a special get out of jail free card for WMF only is not
 acceptable.  Our sites run free software, software that anyone can also
 run under the same (free) licenses.
 
 It may also not be realistic (many authors probably would not
 cooperate).  But it's something we shouldn't even ask for.
 
 Matt Flaschen

I just checked and there are 73 authors of the resources of MediaWiki
core. More than I expected, but not unworkable. We could relicense our
css and javascript as MIT, MPL, GPL-with-explicit-exception...

Regarding GPL requirements, it seems clear that minified JavaScript is
“object code” [1], which we can convey per section 6d [2]; that is
already possible if you know how the RL works, although we should
probably provide those “clear directions”. Most problematic is that you
should also obey sections 4 and 5 (although I see a bit of a
contradiction there: how are you supposed to “keep intact all notices”
when most notices are present in comments, designed to be stripped when
compiled?)

But are we conveying it?
 To “convey” a work means any kind of propagation that enables other 
 parties to make or receive copies. Mere interaction with a user
through a computer network, with no transfer of a copy, is not conveying.

As javascript is executed in the client, it probably is.


1- «The “source code” for a work means the preferred form of the work
for making modifications to it. “Object code” means any non-source form
of a work.» - Section 1

2- «Convey the object code by offering access from a designated place
(gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. If the place to copy the object code is a network
server, the Corresponding Source may be on a different server (operated
by you or a third party) that supports equivalent copying facilities,
provided you maintain clear directions next to the object code saying
where to find the Corresponding Source. (...)»


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Seemingly proprietary Javascript

2013-03-06 Thread Platonides
On 06/03/13 13:24, Platonides wrote:
 I just checked and there are 73 authors of the resources of MediaWiki
 core. More than I expected, but not unworkable. We could relicense our
 css and javascript as MIT, MPL, GPL-with-explicit-exception...

I was going to provide the full list:

$ git log --format=format:%an --no-merges resources/ | sort -u
Aaron Schulz
Alexandre Emsenhuber
Alex Monk
Amir E. Aharoni
Andrew Garrett
Antoine Musso
Aryeh Gregor
aude
awjrichards
Brad Jorsch
Brandon Harris
Brian Wolff
Brion Vibber
Bryan Tong Minh
Catrope
Chad Horohoe
csteipp
Daniel Friesen
Danny B
Derk-Jan Hartman
edokter
Eranroz
Happy-melon
Hashar
helder.wiki
Henning Snater
Hoo man
Ian Baker
Jeremy Postlethwaite
jeroendedauw
Jeroen De Dauw
Joan Creus
John Du Hart
jrobson
Juliusz Gonera
Kaldari
Kevin Israel
Krinkle
Leo Koppelkamm
Liangent
lupo
Marius Hoch
Mark A. Hershberger
Mark Holmquist
Matěj Grabovský
MatmaRex
Matthew Flaschen
Matthias Mullie
Max Semenik
Minh Nguyễn
Neil Kandalgaonkar
Niklas Laxström
Ori Livneh
Pavel Selitskas
Raimond Spekking
Reedy
Roan Kattouw
Robin Pepermans
Rob Lanphier
Rob Moen
Ryan Kaldari
Sam Reed
Santhosh Thottingal
Siebrand
Siebrand Mazeland
Szymon Świerkosz
Thomas Gries
Timo Tijhof
Tim Starling
Trevor Parscal
Tyler Anthony Romeo
umherirrender
vlakoff


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Seemingly proprietary Javascript

2013-03-05 Thread Platonides
On 05/03/13 14:07, Alexander Berntsen wrote:
 On 05/03/13 13:18, Max Semenik wrote:
 If you mean that we have to insert that huge chunk of comments from
  [1] into every page, the answer is no because we'll have to
 include several licenses here, making it ridiculously long.
 Please see the JavaScript Web Labels section of the article[0]. Is this
 a possibility?

http://www.gnu.org/licenses/javascript-labels.html

Yes, it would be. I expect the generated page to be insanely huge, but
if LibreJS loads a page so big that it blocks your browser, it's not our
fault at all :)

I see however that it tries to confirm that the source js matches the
minified version, which may be quite hard.


Furthermore, the ResourceLoader can combine multiple modules in one
request, producing apparently different URLs, so if we had to create all
possible URLs, expect factorial growth.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Seemingly proprietary Javascript

2013-03-05 Thread Platonides
On 05/03/13 21:53, Matthew Flaschen wrote:
 On 03/05/2013 12:29 PM, Luke Welling WMF wrote:
 We should discuss them separately, but this core mediawiki JS is GPL2
 https://github.com/wikimedia/mediawiki-core/tree/master/resources
 
 I am referring to Isarra's comment:
 
 The licensing information is on the page itself, of which the minified
 js winds up a part.
 
 As far as I can tell, that is not true for the *code* license(s) for
 core and extensions.
 
 Matt Flaschen

Did you look at http://en.wikipedia.org/w/COPYING ?



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reminder about the best way to link to bugs in commits

2013-03-02 Thread Platonides
On 01/03/13 23:59, Daniel Friesen wrote:
 On Fri, 01 Mar 2013 14:45:14 -0800, Nischay Nahata wrote
 I also prefer it in the header. The bug report is the best description :)

 Is it not possible for Gerrit to search if its in the header? or make
 it so
 
 +1
 
 Tools should be coded around people. Not the other way around.

+1

Chad wrote:
 No, Gerrit cannot detect these in the header.
It should learn to do it.


Re: Quim Gil:
[[Gerrit/Commit message guidelines]] should be changed, too.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reminder about the best way to link to bugs in commits

2013-03-02 Thread Platonides
On 02/03/13 19:13, Bartosz Dziewoński wrote:
 So you're volunteering to write release notes for my commits? By all
 means, if so.
 
 But I'm afraid this would end with simply no release notes being written.
 
 Who would want to read and deeply understand 2000 commit messages per
 release to note all the bugs being fixed and all the implications they
 might have for end-users? I certainly wouldn't (and wouldn't even want
 to do this for my own commits some three months after I made them).

You would at least need some release-notes marker added by the committer
so that you can skip the ones not worth a release note.




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Purpose of #wikimedia-dev

2013-02-28 Thread Platonides
On 28/02/13 18:48, Brion Vibber wrote:
 On Thu, Feb 28, 2013 at 6:20 AM, Petr Bena benap...@gmail.com wrote:
 
 it's blocked in my office as well, there are many ways to get through
 the firewall... most simple is just to install a bouncer or use irssi
 in a terminal of remote server if port 22 is open...

 
 Note that if just the *port* is firewalled you` may be able to use the web
 interface:
 https://webchat.freenode.net/
 
 -- brion

Note that if just the *port* is firewalled you can connect on another
port: «All freenode servers listen on ports 6665, 6666, 6667, 6697 (SSL
only), 7000 (SSL only), 7070 (SSL only), 8000, 8001 and 8002»
http://freenode.net/irc_servers.shtml


The funny thing is that sometimes you have to evade these no-irc blocks
to get into a channel ‘supported’ by the company.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Labs-l] Nagios is dead, long live icinga!

2013-02-27 Thread Platonides
On 27/02/13 18:47, Tim Landscheidt wrote:
 Petr Bena benap...@gmail.com wrote:
 
 In addition to this I migrated labs nagios to icinga as well, few
 minutes ago - http://nagios.wmflabs.org/icinga/
 
 [...]
 
 Interestingly, Google Chrome claims that this page is in
 French and asks whether it should translate it :-)
 (http://icinga.wikimedia.org/icinga/ doesn't).
 
 Tim

Maybe because it has this at the top:
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="fr" lang="fr">

:)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Purpose of #wikimedia-dev

2013-02-27 Thread Platonides
On 27/02/13 15:23, Tyler Romeo wrote:
 Also, with the exception of asking for technical help, I don't really like
 IRC for developer discussion, and it's not just because I don't go on IRC.

If you are coding/reviewing MW code, I recommend being available on
the IRC channel. That way we could instantly ask you “wtf are you
committing here?”


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] cleaning database of spam

2013-02-26 Thread Platonides
On 26/02/13 11:57, Petr Bena wrote:
 Hi, this is more related to mediawiki rather than wikimedia, but this
 list is being watched a bit more I guess.
 
 Is there any extension that allows permanent removal of deleted pages
 (or eventually selected deleted pages) from database and removal of
 blocked users from database?
 
 Imagine you have a mediawiki wiki that has 20 gb database, where
 19.99gb of database is spam and indefinitely blocked users. I think
 lot of wikis has this problem, making extension to deal with this
 would be useful for many small wikis.
 
 What is exact procedure of properly removing page from database so
 that it doesn't break anything? What needs to be deleted and in which
 order?

maintenance/deleteArchivedRevisions.php permanently removes the content
of deleted pages from the db.

For removing those users, see
http://www.mediawiki.org/wiki/Extension:User_Merge_and_Delete

Also remember that due to the way mysql works, it may not release those
20GB back to the filesystem.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LQT and MediaWiki

2013-02-24 Thread Platonides
On 23/02/13 23:58, Mark A. Hershberger wrote:
 That is, I think it is safe to say LQT will remain usable in its current
 state on any coming MW versions for the foreseeable future.
 
 Right now, though, all I'm looking for is a confirmation that it will
 remain usable.  I imagine one of the first things that we would need to
 do is include it in some testing plans.

It is used by some WMF wikis, so it has to remain usable so as not to
break them.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Caching Discussion: Dealing with old (deleted) wmf branches

2013-02-23 Thread Platonides
Another option is just to keep only old versions of the skins folder (1.4M).

Given that CSS and JS go through the RL, the few images we use and link
from the HTML could probably be squeezed into a single static path, though.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: [Wmfall] Luis Villa joins WMF as Deputy General Counsel

2013-02-21 Thread Platonides
El 20/02/13 00:49, Luis Villa escribió:
 Hah... I think calling me a developer is, at this point, a bit of a
 stretch, but I look forward to working with the tech team :)

¡Welcome, Luis!



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] EasyRDF dependency

2013-02-21 Thread Platonides
On 21/02/13 10:18, Denny Vrandečić wrote:
 After evaluating different options, we want to use for generating
 Wikidata's RDF export the EasyRDF library: http://www.easyrdf.org/
 
 We only need a part of it -- whatever deals with serializers. We do not
 need parsers, anything to do with SPARQL, etc.
 
 In order to minimize reviewing and potential security holes, is there an
 opinion on what is the better approach:
 
 * just use it as a dependency, review it all, and keep it up to date?
 
 * fork the library, cut out what we do not need, and keep up with work
 going on the main branch, backporting it, but reducing the used code size
 thus?
 
 How is this handled with other libraries, like Solarium, as a reference?
 
 Cheers,
 Denny

I would use it as a dependency, avoiding forking our own version from
upstream.
That said, not exposing the files to web requests is probably a good idea.
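For files we control ourselves, the cheap way to do that is the usual
MediaWiki entry-point guard; a library shipped verbatim is better kept
outside the web root or blocked in the web server configuration instead:

 // Standard guard: bail out if the file is requested directly over
 // the web instead of being included by MediaWiki.
 if ( !defined( 'MEDIAWIKI' ) ) {
     echo "This file is part of an extension and is not a valid entry point.\n";
     exit( 1 );
 }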



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mediawiki's access points and mw-config

2013-02-19 Thread Platonides
On 18/02/13 18:53, Waldir Pimenta wrote:
 It somewhat breaks the pattern, considering that all the other access
 points (and their corresponding php5 files) are located in the root. So
 that leaves only overrides.php, which I'm not sure why it was kept in
 mw-config, considering that (quoting Platonides) the installer used to be
 in the config folder, until the rewrite, which *moved the classes* to
 includes/installer (emphasis mine). If the classes were moved to
 includes/installer, why did those of overrides.php's remain?

Read the beginning of overrides.php:
 <?php
 /**
  * MediaWiki installer overrides.
  * Modify this file if you are a packager who needs to modify the behavior of
  * the MediaWiki installer.
  * Altering it is preferred over changing anything in /includes.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Using wiki pages as databases

2013-02-19 Thread Platonides
On 19/02/13 13:56, Tyler Romeo wrote:
 So unfortunately I don't have a clear idea of what the problem is,
 primarily because I don't know anything about the Parser and its inner
 workings, but as far as having all the data in one page, here's something.
 Maybe this is a bad idea, but how about having a PHP-array content type. In
 other words, MyNamespace:MyPage would render the entire data structure, but
 MyNamespace:MyPage/index/test/0 would take $arr['index']['test'][0]. In the
 database, it would be stored as individual sub-pages, and leaf sub-pages
 would render exactly like a normal page would, but non-leaf pages would
 build the array from all child sub-pages and display it to the user. Would
 this solve the problem? Because if so, I've put some thought into it and
 would be willing to maybe draft an extension giving such a capability.

You can already use subpages to store data. Access is then O(1). The
problem is that then you have one page per entry.
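For illustration, reading one such entry is just a title lookup (a rough
sketch using the pre-ContentHandler style API; the page name is made up):

 $title = Title::newFromText( 'MyNamespace:MyPage/index/test/0' );
 $rev = $title ? Revision::newFromTitle( $title ) : null;
 $value = $rev ? $rev->getText() : null; // null when the entry does not exist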


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Welcome Greg Grossmeier, Release Manager

2013-02-19 Thread Platonides
Welcome Greg!



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mediawiki's access points and mw-config

2013-02-18 Thread Platonides
On 15/02/13 17:28, Waldir Pimenta wrote:
 On Fri, Feb 15, 2013 at 11:58 AM, Platonides platoni...@gmail.com wrote:
 
 On 15/02/13 09:16, Waldir Pimenta wrote:
 1) should all access points be on the root directory of the wiki, for
 consistency?

 No. The installer is on its on folder on purpose, so that you can delete
 that folder once you have installed the wiki.

 
 Sorry if I wasn't clear. I meant that mw-config/index.php should be in te
 root, not that the installer files should. Unless I'm misunderstanding you,
 and by the installer you mean the contents of mw-config/ rather than
 /includes/installer (which would prove my point about unclear nomenclature).

Well, every bit of the installer used to be in the config folder, until
the rewrite, which moved the classes to includes/installer.
Now you mention it, you're probably right in that it could be moved to
e.g. /installer.php


 Thanks, that sounds quite useful, but I don't seem to be able to run it
 properly (I get a few php warnings, and 0/0 as output). I placed it in
 the root dir of my local wiki. Am I missing anyting?

Yes, you need to pass it a list of files as parameters.

If you want to run the script on the whole folder you could for instance
run:
 find -name "*.php" -exec php find-entries.php \{\} +


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mediawiki's access points and mw-config

2013-02-15 Thread Platonides
On 15/02/13 09:16, Waldir Pimenta wrote:
 While trying to add some more information to
 https://www.mediawiki.org/wiki/Manual:Code, I came across a slightly
 peculiar issue regarding the entry points for MediaWiki:
 
 Right now, among all the entry points that I know of (those are listed in
 Manual:Code), only mw-config/index.php doesn't sit in the root folder.
 Furthermore, it's related to the installer at includes/installer/, but that
 is not clear at all from the code organization, specifically the directory
 names (and the lack of documentation both in the file and on mediawiki.org).
 
 I have two questions, then:
 1) should all access points be on the root directory of the wiki, for
 consistency?

No. The installer is in its own folder on purpose, so that you can delete
that folder once you have installed the wiki.


 2) should the name mw-config be changed to something that more clearly
 indicates its relationship with the installer?
 
 Note that these aren't merely nitpicking: a consistent structure and
 intuitive names for files and directories play an important role in the
 self-documenting nature of the code, and make the learning curve smoother
 for new developers (e.g. yours truly :-)).

It was originally named “config”. It came from the link that sent you
there: “You need to configure your wiki first”. Then someone had
problems with another program that was installed sitewide on his host
appropriating the /config/ folder, so it was renamed to mw-config.


 Also, I used Tim Starling's suggestion on IRC to make sure the list of
 entry point scripts listed in Manual:Code was complete: git grep -l
 /includes/WebStart.php
 I am not sure that exhausts the list, however, since thumb_handler.php
 doesn't show up on its results. Any pointers regarding potential entry
 points currently omitted from that list are most welcome.

That's probably because it doesn't include WebStart directly (it includes
thumb.php, which is the one including WebStart).

Take a look at tools/code-utils/find-entries.php. I have updated it to
add a few new rules in https://gerrit.wikimedia.org/r/49230

It will give you about 100 files to check, most of them cli scripts.
Although there are a few web-enabled ones, such as
tests/qunit/data/styleTest.css.php

Use -d to see why that file was considered an entry point. As you'll
see, it is very strict (with reason) in what it considers safe.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] When to automerge (Re: Revoking +2 (Re: who can merge into core/master?))

2013-02-15 Thread Platonides
On 15/02/13 01:38, Brad Jorsch wrote:
 I'd propose one more:
 
 * Someone else gives +2, but Jenkins rejects it because it needs a
 rebase that is not quite trivial enough for it to do it automatically.
 For example, something in RELEASE-NOTES-1.21.

Seems a better example.
I'm not convinced that backporting should be automatically merged, though.
Even if the code at REL-old is the same as master (i.e. the backport
doesn't need any code change), approving something for master is
different from agreeing that it should be merged to REL-old (unless
explicitly stated in the previous change). I'm not too firm on that for
changes that obviously should be backported, such as an XSS fix*, but
I would completely oppose automerging a minor feature just because it
was merged into master.
Note that we are not alone in having an opinion about what is worth
backporting, since downstream distros will also question whether our new
release is “just bugfixes” before they agree to accept it as-is.


* Still, we could be making a completely new class in master but just
stripping the vulnerable piece in the old release.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] When to automerge (Re: Revoking +2 (Re: who can merge into core/master?))

2013-02-15 Thread Platonides
On 15/02/13 18:51, Chris Steipp wrote:
 This process is painful (no one like reviewing patches in bugzilla),
 and the wrangling to get the right people to review patches in
 bugzilla is slowing down our security releases. It would be much
 better if we had a way to submit the patches in gerrit, go through the
 normal review process by a trusted group of developers ending in a
 +2's, and then the final merge is just a single click when we release
 the tarballs. But we haven't been able to get gerrit to do that yet
 (although if any java developers want to work on that, I would be very
 excited).

Gerrit drafts?
Although those are not as private as we would like. Another option would
be to email them amongst the +2 reviewers, although willingness to
review through email perhaps won't be greater than in Bugzilla.




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Labs-l] Maria DB

2013-02-14 Thread Platonides
On 14/02/13 18:26, Faidon Liambotis wrote:
 Ubuntu has experimented in the past with the concept of automatically
 generating and shipping symbols for *all* packages, packaged up in a
 ddebs (same format as .deb) and shipped via a different repository
 that isn't mirrored by all of the downstream mirrors.
 
 This was years ago, I'm not sure what has happened since then. I
 remember being discussed in Debian as well, but it was never adopted,
 probably because noone ever implemented it :)

Good question. There are a few bugs and blueprints about it, and they
show as *implemented*

https://blueprints.launchpad.net/ubuntu/+spec/apt-get-debug-symbols
https://bugs.launchpad.net/ubuntu/+bug/14484
https://lists.ubuntu.com/archives/ubuntu-devel-announce/2006-September/000195.html

What happened, then?

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Announcement: Ed Sanders joins Wikimedia as Visual Editor Software Engineer

2013-02-12 Thread Platonides
Welcome Ed!



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Corporate needs are different (RE: How can we help Corporations use MW?)

2013-02-12 Thread Platonides
On 12/02/13 06:26, Brian Wolff wrote:
 For subpages to really fill this use case I think the page title would have
 to show only (or primarily emphasize) the subpage name instead of the full
 page name.

I think it has been brought up in the past; there may be an extension
doing that.
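If no such extension turns up, the visible part could be sketched with a
hook along these lines (hook choice and wiring are illustrative, not a
tested recipe):

 // Show only the last subpage component as the displayed title.
 $wgHooks['BeforePageDisplay'][] = function ( OutputPage $out, Skin $skin ) {
     $title = $out->getTitle();
     if ( $title && $title->isSubpage() ) {
         $out->setPageTitle( $title->getSubpageText() );
     }
     return true;
 };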


 Also it sounds like in such a use case, one would want links to be relative
 to the current path first. If on page a/b/c you would want [[foo]] to link
 to a/b/foo if it exists and link to just foo if that page does not exist.

And where should the red link send you to?
That may be more confusing for some users.

We have ../ links; perhaps also add ./ ?
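For reference, the relative forms that already exist (with
$wgNamespacesWithSubpages enabled; the page names are just examples):

 [[/Child]]     on Help:Foo       links to Help:Foo/Child
 [[../]]        on Help:Foo/Child links back to Help:Foo
 [[../Sibling]] on Help:Foo/Child links to Help:Foo/Sibling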



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Corporate needs are different (RE: How can we help Corporations use MW?)

2013-02-09 Thread Platonides
On 08/02/13 21:51, Lee Worden wrote:
 As an aside, you could almost certainly do this cheaper with
 WorkingWiki.  If you can write a make rule to retrieve the Excel file
 from the network drive and make it into html and image files (and maybe
 a little wikitext to format the page), you're done.
 
 LW

You could do it with openoffice.org/libreoffice, although I agree that
getting all the dependencies right for running it on the server is a bit
tedious. You can also use Excel itself for that (e.g. COM automation), as
suggested by vitalif, supposing you are using a Windows server.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia engineering January 2013 report

2013-02-09 Thread Platonides
On 07/02/13 21:54, Chad wrote:
 On Thu, Feb 7, 2013 at 3:50 PM, Platonides wrote:
 Also worth mentioning, our SVN is now read-only.

 
 This actually happened on Feb 1st :)
 
 -Chad

I did check before sending.
«Marking all of SVN as read-only» was sent Jan 24th.
A follow-up the next day said: «This is now complete».

Did January lose a week and nobody notified me? :)


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia engineering January 2013 report

2013-02-07 Thread Platonides


On 07/02/13 20:57, Guillaume Paumier wrote:
 *Git conversion https://www.mediawiki.org/wiki/Git/Conversion*
 The 
 ExtensionDistributorhttps://www.mediawiki.org/wiki/Extension:ExtensionDistributorwas
 rewritten in early January. While this was primarily done to support
 the data center
 migrationhttp://wikitech.wikimedia.org/view/Eqiad_Migration_Planning,
 this was the first time ExtensionDistributor had received any signification
 attention since the migration to Git. The new version now utilizes the
 Github API to generate extension snapshots. We hope that the new version
 will be more reliable for users. SVN-based extensions are no longer
 supported, but this is not expected to impact many users since these
 extensions are largely unmaintained (all popular and active extensions have
 long since moved to Gerrit). As always, these extensions will remain in SVN
 should anyone still want the code.

Also worth mentioning, our SVN is now read-only.


 *Site performance https://www.mediawiki.org/wiki/Site_performance*
 A patch to allow moving the DB job queue to another cluster is under
 review. An experimental redis-based job queue patch also exists in gerrit.

According to the link, it has already been merged.
I guess that's change 39716; references to the changes should be
preferred over ambiguous text like “A patch”.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: RFC: Introducing two new HTTP headers to track mobile pageviews

2013-02-02 Thread Platonides
I don't like its cryptic nature.

Someone looking at the headers sent to his browser would be very
confused about what the point of «X-MF-Mode: b» is.

Instead something like this would be much more descriptive:
X-Mobile-Mode: stable
X-Mobile-Request: secondary

But that also means sending more bytes over the wire :S


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Audio derivatives, turning on MP3/AAC mobile app feature request.

2013-02-02 Thread Platonides
Your system does not seem to support the OGG audio format. Downloading this
file without OGG support may need a significant amount of space and
bandwidth. Downloading article.wav ... 20%


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Mediawiki-api] Reporting errors from the API

2013-01-25 Thread Platonides
On 25/01/13 16:54, Yuri Astrakhan wrote:
 Localization in v2 - all errors AND warnings are localized in default
 language unless lang= is given, in which case you can get parameter
 array or a non-default language. All standard translation magic
 (plural/gender/etc) will be supported. Warnings will always include a
 warning code.
 http://www.mediawiki.org/wiki/Requests_for_comment/API_Future#Errors_and_Warnings_Localization

There are two very different use cases. If the API client is a
browser/JavaScript, it is sensible to use $wgLanguageCode, but if it's a
bot, we probably want English.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Welcome, Munagala Ramanath (Ram)

2013-01-15 Thread Platonides
 Please join me in welcoming Ram!
 
 Rob

It's always good to get more ram! :)
Welcome!


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] The ultimate bikeshed: typos in commit messages

2013-01-15 Thread Platonides
Well, I would prefer to get a notice that I made a typo rather than have
that embarrassing typo in the commit log forever. That's the point of
using a gating system, right? :)

So yes, I do think they should be corrected. (And I have committed typos
in both commit messages and inside files, just like anyone else.)


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Problem to get an article's content in MW 1.20.2

2013-01-15 Thread Platonides
On 15/01/13 14:44, Andreas Plank wrote:
 Hi,
 
 I'm using MW 1.20.2  and I want to get the content of a page for
 further parsing in a PHP application. The PHP application is triggered
 via a special page (Special:MobileKeyV1) and parses nature guides for
 mobile devices.
 
 I tried to get the content via getArticleID() ...
 $titleObj = Title::newFromText( "Existing page" );
 $articleID = $titleObj->getArticleID();
 Article::newFromID( $articleID )->fetchContent();
 etc.
 ... but it returns $articleID=0 although the page exists. With MW 1.18
 this approach worked fine, but after upgrading to MW 1.20.2 it does not
 any more.


It should be working, and it works for me on 1.20.2.
Can you provide more details on that $title->getArticleID() call which is
not working?
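For reference, a step-by-step variant like this (checking each value
instead of chaining; "Existing page" is just a placeholder title) usually
makes it obvious which call fails:

 $title = Title::newFromText( "Existing page" );
 if ( !$title ) {
     wfDebug( "Title::newFromText returned null\n" );
 } elseif ( $title->getArticleID() === 0 ) {
     // 0 means the title is unknown to the wiki (wrong namespace or DB?)
     wfDebug( "Page does not exist: " . $title->getPrefixedText() . "\n" );
 } else {
     $text = Article::newFromID( $title->getArticleID() )->fetchContent();
 }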


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Html comments into raw wiki code: can they be wrapped into parsed html?

2012-12-29 Thread Platonides
On 29/12/12 22:23, Alex Brollo wrote:
 I'd like to use html comment into raw wiki text, to use them  as effective,
 server-unexpensive data containers that could be read and parsed by a js
 script in view mode. But I see that html comment, written into raw wiki
 text, are stripped away by parsing routines. I can access to raw code of
 current page in view mode by js with a index.php or an api.php call, and I
 do, but this is much more server-expensive IMHO.
 
 Is there any sound reason to strip html comments away? If there is no sound
 reason, could such a stripping be avoided?

They are wikitext comments, defined to be stripped for the final user.

I think there is an extension that allows outputting HTML comments. You can
also use some tag attributes as containers.
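As a sketch of the attribute approach: put the data in an attribute of an
element that survives the sanitizer, e.g.

 <span id="my-data" title="key1=value1;key2=value2">visible content</span>

and read it back client-side from that attribute. Whether a given attribute
and an otherwise empty element make it through the sanitizer and tidy
depends on the wiki's configuration, so treat this as an assumption to
verify.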


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Can we help Tor users make legitimate edits?

2012-12-29 Thread Platonides
On 28/12/12 18:29, Tilman Bayer wrote:
 On Fri, Dec 28, 2012 at 1:26 AM, Sumana Harihareswara wrote:
 I've floated this problem past Tor and privacy people, and here are a
 few ideas:

 1) Just use the existing mechanisms more leniently.  Encourage the
 communities (Wikimedia  Tor) to use
 https://en.wikipedia.org/wiki/Wikipedia:Request_an_account (to get an
 account from behind Tor) and to let more people get IP block exemptions
 even before they've made any edits ( 30 people have gotten exemptions
 on en.wp in 2012).  Add encouraging get an exempt account language to
 the you're blocked because you're using Tor messaging.  Then if
 there's an uptick in vandalism from Tor then they can just tighten up
 again.

This seems the right approach.


 2) Encourage people with closed proxies to re-vitalize
 https://en.wikipedia.org/wiki/Wikipedia:WOCP .  Problem: using closed
 proxies is okay for people with some threat models but not others.


I didn't know about it. This is an interesting concept. It would be
possible to set up some 'public Wikipedia proxies' (e.g. by a European
chapter) and encourage their use.
It would still be possible to checkuser people going through them, but
a 2-tier process would be needed (wiki checkuser + proxy admin), thus
protecting from a “rogue checkuser” (is that the primary concern of good
editors wishing to use proxies?). We could use that setup for gaining
information about usage (e.g. “it was 90% spam”).


 3) Look at Nymble - http://freehaven.net/anonbib/#oakland11-formalizing
 and http://cgi.soic.indiana.edu/~kapadia/nymble/overview.php .  It would
 allow Wikimedia to distance itself from knowing people's identities, but
 still allow admins to revoke permissions if people acted up.  The user
 shows a real identity, gets a token, and exchanges that token over tor
 for an account.  If the user abuses the site, Wikimedia site admins can
 blacklist the user without ever being able to learn who they were or
 what other edits they did.  More: https://cs.uwaterloo.ca/~iang/ Ian
 Golberg's, Nick Hopper's, and Apu Kapadia's groups are all working on
 Nymble or its derivatives.  It's not ready for production yet, I bet,
 but if someone wanted a Big Project

 As Brad and Ariel point out, Nymble in the form described on the linked
 project page does not seem to allow long-term blocks, and cannot deal with
 dynamic IPs. In other words, it would only provide the analogue of
 autoblock functionality for Tor users. The linked paper by Henry and
 Goldberg is more realistic about these limitations, discussing IP addresses
 only as one of several possible unique identifiers (§V). From the
 concluding remarks to that chapter, it seems most likely that they would
 recommend some form of PKI or government ID-based registration for our
 purposes.

Requiring a government ID for connecting through Tor would be even worse
for privacy.

I completely agree that matching with the IP address used to request the
Nymble token is not enough. Maybe if the tokens were instead based on
ISP+zone geolocation, that could be a way. Still, that would miss
linkability for vandals who use, e.g., both their home and work
connections.


 3a) A token authorization system (perhaps a MediaWiki extension) where
 the server blindly signs a token, and then the user can use that token
 to bypass the Tor blocks.  (Tyler mentioned he saw this somewhere in a
 Bugzilla suggestion; I haven't found it.)

Bug 3729 ?


 Thoughts? Are any of you interested in working on this problem?  #tor on
 the OFTC IRC server is full of people who'd be interested in talking
 about this.

This is a social problem. We have the tools to fix it (account creation
+ IP block exemption). If someone asked me for that (in a project where
I can) because they are censored by their government, I would gladly
grant it.
That also means that when they replaced 'Jimbo' with 'penis' 5 minutes
after getting their account, I would notice and kick them out.
In my experience, far more people are trying to use Tor on Wikipedia for
vandalising than for doing constructive edits or due to local censorship.
Although I concede that it's probably the opposite on ‘certain wikis’ I
don't edit.
The problem with global solutions is vandals abusing them.

“If I don't get caught on 10 edits I can edit through Tor” is a candle
for vandals. Note that “I don't get caught” is different from doing a
constructive edit.

An idea would be to force some recaptcha-style work before giving such
tokens, so even though we know they will abuse the system, we are still
using them as an improving force (although the following vandalism could
still be worse than what we gained).


I also wonder if we are not aiming too high, trying to solve the
anonymity and traceability problems of the internet, while we have, for
instance, captchas forced on anons and newbies on a couple of wikis due
to bot vandalism done years ago (bug 41745).


___
Wikitech-l mailing list

Re: [Wikitech-l] Clarification on unit tests requiring CR+2

2012-12-19 Thread Platonides
On 19/12/12 20:48, Krinkle wrote:
 This issue will be definitely solved by isolating tests in dedicated virtual
 machines for each run. We are investigating Vagrant.

A VM seems overkill when it can be solved with standard user permissions
+ chroot (or, even better, a BSD jail).


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Simplified Gerrit ACLs.

2012-12-13 Thread Platonides
I found that I can no longer +1 to operations (in this case
operations/mediawiki-config, c38631).

The only options are 0 and -1.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] gerrit support question: how to show raw file content after the commit in the browser, not as zip download

2012-12-12 Thread Platonides
On 12/12/12 09:09, Ryan Kaldari wrote:
 I found a solution to the problem:
 If a gerrit administrator declares the mimetypes of the files to be safe
 they will be displayed in-browser rather than downloaded as zip files:
 https://gerrit-review.googlesource.com/Documentation/config-gerrit.html#_a_id_mimetype_a_section_mimetype
 
 
 Could someone edit the gerrit.config file to declare php, javascript,
 and css files as 'safe'?
 
 Ryan Kaldari

I don't think we should set the JavaScript mimetype as safe. At least not
when it is served under its own mimetype.
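If only PHP and CSS were to be flagged, the gerrit.config stanzas would
look roughly like this (syntax per the linked config-gerrit documentation;
the exact mimetype names Gerrit assigns to .php and .css files are an
assumption to check):

 [mimetype "text/x-php"]
   safe = true
 [mimetype "text/css"]
   safe = true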


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Bugzilla: Waiting for merge status when patch is in Gerrit?

2012-12-12 Thread Platonides
On 12/12/12 21:23, Rob Lanphier wrote:
 My 2cif we add a new status, it should equate to deployed on the
 cluster, along with judicious use of milestone so that people who are
 just interested in the tarball can infer from our numbering what the
 corresponding release will be.

On which wiki? We could have it deployed on part of the cluster only.

The difference is between “Waiting for merge” and “Fixed”.
Once it is fixed, the landing is automatic; from the commit hash, there
could be a tool which shows you whether it's on wmfXY, in which tarball
release it will appear, whether you can see the fix on test2, to which
wikis it's deployed, and the approximate time for deployment to your
home wiki.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Jenkins and extension parser tests

2012-12-11 Thread Platonides
If you only want to run parser tests from a different file, there's no
need to create a new class.

You simply add in the main file:
 $wgParserTestFiles[] = dirname( __FILE__ ) . "/lstParserTests.txt";
(already in lst.php)

It will be automagically picked up when running the PHPUnit tests (if you
have the extension enabled in LocalSettings, which would be a
precondition for them to work).

You can also run them with tests/parserTests.php

With phpunit:
$ make destructive

Tests: 5125, Assertions: 927178, Failures: 2, Incomplete: 3, Skipped: 5.
Tests: 5125, Assertions: 880313, Failures: 2, Incomplete: 3, Skipped: 5.

Add lst to LocalSettings.

$ make destructive
Tests: 5157, Assertions: 883148, Failures: 2, Incomplete: 3, Skipped: 5.
Tests: 5157, Assertions: 942272, Failures: 2, Incomplete: 3, Skipped: 5.
Tests: 5157, Assertions: 913264, Failures: 2, Incomplete: 3, Skipped: 5.

You can also run “make parser” if you prefer to run fewer tests.
(No, I don't know why the number of assertions would randomly change
between runs with the same config...)


And fyi, they do pass.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Recent test failures for many extensions

2012-12-11 Thread Platonides
I don't think they are really a problem. Core was broken (it shouldn't
contain submodules), thus all tests failed.
As c37975 was an automerge, Jenkins didn't have a chance to stop it.

It would be interesting if Jenkins gave a -2 whenever a change is sent to
the master branch with a parent in a different branch (wmf/1.21wmf6 in this
case).


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MediaWiki Groups are official: start your own!

2012-12-11 Thread Platonides
On 12/12/12 00:44, bawolff wrote:
 I'm actually quite curious to see if there are actually enough MW devs
 in a single city (Other then WMF's home town) to form a group.
 
 To be honest though, I kind of feel that if such groups were going
 to form, they probably would have already. Formality rarely makes
 people come together that wouldn't by themselves.
 
 -bawolff

It's possible that some people will make a group because they are new and
it'd be cool. So we would have one more group listed. Would it last / be
useful? Who knows.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

