I suggest renaming these pages:
1. Cross-site scripting = Cross-site scripting (XSS, XSSI)
https://www.mediawiki.org/wiki/Cross-site_scripting
2. Cross-site request forgery = Cross-site request forgery (CSRF)
https://www.mediawiki.org/wiki/Cross-site_request_forgery
Before doing that I want to be
What actual benefit is there in having the abbreviation in the title?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
On 22.03.2013 07:29, K. Peachey wrote:
What actual benefit is there in having the abbreviation in the title?
Make users aware at first glance at the TOC that XSS is this and
CSRF is that, if they did not yet know this.
You, as an expert, can simply skip the part in parentheses.
I am the
On Thu, 21 Mar 2013 23:35:27 -0700, Thomas Gries m...@tgries.de wrote:
On 22.03.2013 07:29, K. Peachey wrote:
What actual benefit is there in having the abbreviation in the title?
Make users aware at first glance at the TOC that XSS is this and
CSRF is that if they did not yet know
On 22.03.2013 08:26, Daniel Friesen wrote:
You, as an expert, can simply skip the part in parentheses.
Including the abbreviation in the title seems quite a non-standard
thing to do. Parentheses are typically used for disambiguation, not for
adding extra versions of the title.
I fixed it locally on
On 13-03-22 12:41 AM, Thomas Gries wrote:
On 22.03.2013 08:26, Daniel Friesen wrote:
You, as an expert, can simply skip the part in parentheses.
Including the abbreviation in the title seems quite a non-standard
thing to do. Parentheses are typically used for disambiguation, not for
adding extra versions
On 21 March 2013 20:11, Greg Grossmeier g...@wikimedia.org wrote:
Tim rolled back wmf12 after a nasty bug last night:
https://bugzilla.wikimedia.org/show_bug.cgi?id=46397
I assume this included all the extensions as well.
-Niklas
Right now a lot of our links to generated documentation on PHP classes are
manually copied.
One of the reasons for this seems to be how badly Doxygen handles the
anchors to sections on individual methods.
Instead of simply using the method name, it generates an MD5 hash of a
signature that
Trying to clarify:
APC can do two things:
1) Keep the compiled PHP opcodes, so PHP execution is faster.
2) Allow the application to store values in the web server memory (kept
across requests).
ZendOptimizer only does 1.
MediaWiki only needs to be changed for 2, since 1 is done automatically
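As a config sketch of point 2: the relevant knob is the main cache type in LocalSettings.php (the constant names are MediaWiki core's; whether your accelerator actually exposes a user cache is installation-specific):

```php
<?php
// LocalSettings.php fragment (illustrative; adjust to your setup).
// CACHE_ACCEL stores values in the accelerator's shared memory, which
// requires APC's user cache (point 2). An opcode-only cache such as
// ZendOptimizer provides point 1 but no user cache, so fall back:
$wgMainCacheType = CACHE_NONE;   // or CACHE_DB; CACHE_ACCEL needs APC
// Point 1 (opcode caching) needs no MediaWiki configuration at all.
```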
On 21/03/13 08:05, Federico Leva (Nemo) wrote:
Only a handful of wikis are restrictive with captchas (plus pt.wiki, which is
in permanent emergency mode).
https://meta.wikimedia.org/wiki/Newly_registered_user
For them you could request the confirmed flag at
https://meta.wikimedia.org/wiki/SRP
On 22.03.2013 12:16, Platonides wrote:
Trying to clarify:
APC can do two things:
1) Keep the compiled PHP opcodes, so PHP execution is faster.
2) Allow the application to store values in the web server memory (kept
across requests).
ZendOptimizer only does 1.
MediaWiki only needs to
On Fri, Mar 22, 2013 at 9:43 AM, Daniel Friesen
dan...@nadir-seen-fire.com wrote:
Right now a lot of our links to generated documentation on PHP classes are
manually copied.
One of the reasons for this seems to be how badly Doxygen handles the
anchors to sections on individual methods.
On Fri, Mar 22, 2013 at 7:39 AM, Thomas Gries m...@tgries.de wrote:
I have to change CACHE_ACCEL to CACHE_NONE in my LocalSettings.php,
and will still enjoy opcode caching by ZendOptimizerPlus,
but have no memory cache - currently.
Is this correct?
Can the setup be improved, and how?
I've noticed that sometimes Jenkins +1s changes and other times it +2s them
(for Verified, that is). Is there any specific pattern to this? It's not a
problem or anything; I'm just curious. I feel like it was explained back
when the system was changed to +1 and +2 but I forget and can't seem to
Hi Quim,
comments are inline.
On Thu, Mar 21, 2013 at 12:43 AM, Quim Gil q...@wikimedia.org wrote:
(...)
After this filtering, we seem to be left with:
(...)
(If you think your project should also be considered here please speak up!)
Browser test automation[1]?
Most of these projects
Hi,
On Thu, Mar 21, 2013 at 12:43 AM, Quim Gil q...@wikimedia.org wrote:
Many of the ideas listed there are too generic (Write an extension),
improvements of existing features (Improve Extension:CSS)
This may sound naive, but why are improvements of existing features
discarded? My thinking
I have a good number of mobile app ideas to contribute, but do not think
I'll be able to mentor. Should I still put those in? Does putting them
in with your name attached convey 'hey, this guy might be able to
mentor?' to people looking?
--
Yuvi Panda T
http://yuvi.in/blog
On Fri, Mar 22, 2013 at 6:16 PM, Yuvi Panda yuvipa...@gmail.com wrote:
Does putting them
in with your name attached convey 'hey, this guy might be able to
mentor?' to people looking?
It usually does. At least interested students tend to assume as much.
--
sankarshan mukhopadhyay
On Fri, Mar 22, 2013 at 10:30 PM, Željko Filipin zfili...@wikimedia.org wrote:
Most of these projects seem to be extension (and PHP?) centric. Can we
have more diversity?
Browser test automation? Not an extension, not in PHP, but in Ruby[2].
Ops don't want [any more] Ruby on the clusters, so
Hi,
Last November, I started to clean up on the Glossary page on meta, as
an attempt to revive it and expand it to include many technical terms,
notably related to Wikimedia Engineering (see e-mail below).
There were (and are) already many glossaries spread around the wikis:
* one for MediaWiki:
On 2013-03-22 9:37 AM, Guillaume Paumier gpaum...@wikimedia.org wrote:
Hi,
On Thu, Mar 21, 2013 at 12:43 AM, Quim Gil q...@wikimedia.org wrote:
Many of the ideas listed there are too generic (Write an extension),
improvements of existing features (Improve Extension:CSS)
This may sound
On 2013-03-22 9:20 AM, Tyler Romeo tylerro...@gmail.com wrote:
On Fri, Mar 22, 2013 at 7:39 AM, Thomas Gries m...@tgries.de wrote:
I have to change CACHE_ACCEL to CACHE_NONE in my LocalSettings.php,
and will still enjoy opcode caching by ZendOptimizerPlus,
but have no memory cache -
On 2013-03-22 9:24 AM, Tyler Romeo tylerro...@gmail.com wrote:
I've noticed that sometimes Jenkins +1s changes and other times it +2s
them
(for Verified, that is). Is there any specific pattern to this? It's not a
problem or anything; I'm just curious. I feel like it was explained back
when
On Fri, Mar 22, 2013 at 9:42 AM, Brian Wolff bawo...@gmail.com wrote:
I think +1 is a merge and lint check, whereas +2 is tests that actually
execute the code, so it only happens if you're on the trusted list of users.
Jenkins posts which tests it's running in a Gerrit comment.
-bawolff
Ah, OK. That
On Fri, Mar 22, 2013 at 9:38 AM, Brian Wolff bawo...@gmail.com wrote:
Some people have claimed that CACHE_DB might even slow things down compared
to CACHE_NONE when used as the main cache type (CACHE_DB is still better
than CACHE_NONE for slow caches like the parser cache). Anyhow, you should do
Guillaume Paumier, 22/03/2013 14:27:
* Status quo: We keep the current glossaries as they are, even if they
overlap and duplicate work. We'll manage.
Ugly.
* Wikidata: If Wikidata could be used to host terms and definitions
(in various languages), and wikis could pull this data using
On Fri, Mar 22, 2013 at 4:28 AM, Niklas Laxström
niklas.laxst...@gmail.com wrote:
On 21 March 2013 20:11, Greg Grossmeier g...@wikimedia.org wrote:
Tim rolled back wmf12 after a nasty bug last night:
https://bugzilla.wikimedia.org/show_bug.cgi?id=46397
I assume this included all the extension
just one message, just arrived:
http://php.net/archive/2013.php#id2013-03-21-1
PHP 5.5.0 beta1 available 21-Mar-2013
The PHP development team announces the release of the first beta of PHP
5.5.0. This release is the first to include the Zend OPCache. Please
help our efforts to provide a stable
On 2013-03-22 10:45 AM, Tyler Romeo tylerro...@gmail.com wrote:
On Fri, Mar 22, 2013 at 9:38 AM, Brian Wolff bawo...@gmail.com wrote:
Some people have claimed that CACHE_DB might even slow things down
compared to CACHE_NONE when used as the main cache type (CACHE_DB is
still better than CACHE_NONE for slow caches like the parser cache).
On Fri, Mar 22, 2013 at 10:29 AM, Brian Wolff bawo...@gmail.com wrote:
That would be a MediaWiki bug though. Does throttling actually work with
CACHE_DB now? I remember it used to only work with the memcached backend.
Anyways, if that's been fixed, throttling should be changed to use CACHE_ANY
On Fri, 22 Mar 2013 05:05:13 -0700, Waldir Pimenta wal...@email.com
wrote:
I would go further and suggest a way to integrate code comments into
manual
pages at mediawiki.org, so that we could have good documentation in both
code and mw.org, without needing to sync it manually. For example,
On Fri, Mar 22, 2013 at 6:55 AM, K. Peachey p858sn...@gmail.com wrote:
On Fri, Mar 22, 2013 at 10:30 PM, Željko Filipin zfili...@wikimedia.org
wrote:
Most of these projects seem to be extension (and PHP?) centric. Can we
have more diversity?
Browser test automation? Not an extension,
On 22/03/13 10:43, Daniel Friesen wrote:
What does everyone think of making it so that when Jenkins generates
this documentation, it processes the tagfile, splits it up, and converts
it into multiple Lua tables, then uses the API to update a Module: page
on mediawiki.org?
If you can come
Antoine Musso hashar+...@free.fr wrote:
What does everyone think of making it so that when Jenkins generates
this documentation, it processes the tagfile, splits it up, and converts
it into multiple Lua tables, then uses the API to update a Module: page
on mediawiki.org?
If you can come back
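The tagfile-to-Lua step above could be sketched roughly like this. The element names follow Doxygen's tagfile output, but the sample fragment and the Lua table layout are assumptions for illustration, and pushing the result to a Module: page via the API is not shown:

```python
import xml.etree.ElementTree as ET

# Toy Doxygen tagfile fragment (real tagfiles are far larger).
TAGFILE = """
<tagfile>
  <compound kind="class">
    <name>Title</name>
    <filename>classTitle.html</filename>
    <member kind="function">
      <name>newFromText</name>
      <anchorfile>classTitle.html</anchorfile>
      <anchor>a1b2c3d4</anchor>
    </member>
  </compound>
</tagfile>
"""

def tagfile_to_lua(xml_text):
    """Convert class/method anchors into a Lua table literal that could
    be saved to a Module: page on mediawiki.org."""
    root = ET.fromstring(xml_text)
    lines = ["return {"]
    for compound in root.iter("compound"):
        lines.append('  ["%s"] = {' % compound.findtext("name"))
        lines.append('    file = "%s",' % compound.findtext("filename"))
        for member in compound.iter("member"):
            # Link target: page plus the per-method anchor Doxygen emitted.
            lines.append('    ["%s"] = "%s#%s",' % (
                member.findtext("name"),
                member.findtext("anchorfile"),
                member.findtext("anchor")))
        lines.append("  },")
    lines.append("}")
    return "\n".join(lines)

lua = tagfile_to_lua(TAGFILE)
print(lua)
```

A template on-wiki could then resolve `Title.newFromText` to a stable link even though the underlying anchor is an opaque hash.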
On 03/22/2013 07:16 AM, Platonides wrote:
APC can do two things:
1) Keep the compiled PHP opcodes, so PHP execution is faster.
2) Allow the application to store values in the web server memory (kept
across requests).
ZendOptimizer only does 1. [...]
The «APC is a must have for larger
There was a discussion recently about OAuth, and I just saw this blog
post http://insanecoding.blogspot.com/2013/03/oauth-great-way-to-cripple-your-api.html
(posted on Slashdot:
http://tech.slashdot.org/story/13/03/22/1439235/a-truckload-of-oauth-issues-that-would-make-any-author-quit)
with some
Most of those concerns are valid. Daniel Friesen has managed to convince
me that OAuth is absolutely horrible, and that we will probably have to
make our own authentication framework.
--
Tyler Romeo
Stevens Institute of Technology, Class of 2015
Major in Computer Science
Hoi,
MAY I QUOTE YOU ???
Thanks,
GerardM
On 22 March 2013 17:11, Tyler Romeo tylerro...@gmail.com wrote:
Most of those concerns are valid. Daniel Friesen has managed to convince
me that OAuth is absolutely horrible, and that we will probably have to
make our own authentication
On Thu, Mar 21, 2013 at 11:35 PM, Thomas Gries m...@tgries.de wrote:
On 22.03.2013 07:29, K. Peachey wrote:
What actual benefit is there in having the abbreviation in the title?
Make users aware at first glance at the TOC that XSS is this and
CSRF is that if they did not yet know
On 03/21/2013 09:54 PM, Tim Starling wrote:
Also, community managers generally see it as their responsibility to
extract as much work from volunteers as possible
The community managers for MediaWiki (Quim and me) don't think like
this. If you believe we do, please say so. :-)
and will ask
I think the caricature of OAuth there should be taken with a grain of
salt. The author talks about OAuth, but seems to be referring to
OAuth 2 primarily, which is very different from OAuth 1. Also, the
author says that the protocol was designed for authorizing
website-to-website communication, but
On Fri, Mar 22, 2013 at 8:59 AM, Yuri Astrakhan
yastrak...@wikimedia.org wrote:
There was a discussion recently about OAuth, and I just saw this blog
post http://insanecoding.blogspot.com/2013/03/oauth-great-way-to-cripple-your-api.html
(posted on
Hi,
Just a quick reminder that we'll be holding IRC office hours in about
50 minutes. If you have questions about how to use Lua, or issues
you'd like help with, join us in #wikimedia-office on Freenode.
More information about how to connect to IRC is available at
Oh yay, I actually convinced someone.
This post is a little different than mine. A random spattering of
high-level qualms with it. OAuth 2 not being a protocol. Flow issues
(though a little debatable). And some stuff about enterprise that
besides being irrelevant to us sounds like berating
On 03/22/2013 12:48 PM, Chris Steipp wrote:
I think the caricature of OAuth there should be taken with a grain of
salt. The author talks about OAuth, but seems to be referring to
OAuth 2 primarily, which is very different from OAuth 1. Also, the
author says that the protocol was designed for
+ops
On Thu, Mar 21, 2013 at 8:20 AM, Juliusz Gonera jgon...@wikimedia.org wrote:
We've been having a hard time making photo uploads work in
MobileFrontend because of CentralAuth's third party cookies problem (we
upload them from Wikipedia web site to Commons API). Apart from the
newest
Right now, I think many of us profile locally or in VMs, which can be
useful for relative metrics or quickly identifying bottlenecks, but doesn't
really get us the kind of information you're talking about from any sort of
real-world setting, or in any way that would be consistent from engineer to
Juliusz Gonera wrote:
We've been having a hard time making photo uploads work in
MobileFrontend because of CentralAuth's third party cookies problem (we
upload them from Wikipedia web site to Commons API). Apart from the
newest Firefox [1,2], mobile Safari also doesn't accept third party
cookies
On 2013-03-22 5:22 PM, MZMcBride z...@mzmcbride.com wrote:
Juliusz Gonera wrote:
We've been having a hard time making photo uploads work in
MobileFrontend because of CentralAuth's third party cookies problem (we
upload them from Wikipedia web site to Commons API). Apart from the
newest
On Fri, Mar 22, 2013 at 1:22 PM, MZMcBride z...@mzmcbride.com wrote:
commons.wikipedia.org already redirects to commons.wikimedia.org (for
historical reasons, maybe), so that has to be considered. I think what
you're proposing is also kind of confusing and I'm wondering if there
aren't better
Asher Feldman wrote:
I'd like to push for a codified set of minimum performance standards that
new mediawiki features must meet before they can be deployed to larger
wikimedia sites such as English Wikipedia, or be considered complete.
These would look like (numbers pulled out of a hat, not
fyi, we have all these:
DNS:
root@sockpuppet:~/pdns-templates# ls -l | grep commons
lrwxrwxrwx 1 root root 13 Jun 7 2012 wikimediacommons.co.uk ->
wikimedia.com
lrwxrwxrwx 1 root root 13 Jun 7 2012 wikimediacommons.eu -> wikimedia.com
lrwxrwxrwx 1 root root 13 Jun 7 2012
On Wed, Mar 20, 2013 at 8:23 PM, James Heilman jmh...@gmail.com wrote:
Hey All
I have someone helping me add translations done by Translators Without
Borders of key medical articles. An issue that slows the work is that
many languages require a CAPTCHA to save the edits. Is there any way
around
On Thu, Mar 21, 2013 at 6:40 PM, Asher Feldman afeld...@wikimedia.org wrote:
Right now, varying amounts of effort are made to highlight potential
performance bottlenecks in code review, and engineers are encouraged to
profile and optimize their own code. But beyond is the site still up for
been a way to reliably measure performance...
The measure part is important. As it stands I have no way of measuring code
in action (sure I can set up profiling locally, and actually have, but it's
not the same [otoh I barely ever look at the local profiling I did set
up...]). People throw around
Greg Grossmeier wrote on 2013-03-21 11:11:15 -0700:
We're still diagnosing/etc.
Thanks to debugging by Aaron Schulz, with help from Chris Steipp and
testing by Aude, we fixed the issue.
We now have re-deployed 1.21wmf12 to the phase 1 and 2 wikis (see:
Might this be connected with the deployment?
https://www.wikidata.org/wiki/Wikidata:Project_chat#Special:ItemByTitle_changes_.22_.22_with_.22_.22_in_.22site.22
On Sat, Mar 23, 2013 at 12:26 AM, Greg Grossmeier g...@wikimedia.org wrote:
Greg Grossmeier wrote on 2013-03-21 11:11:15
Paul Selitskas wrote on 2013-03-23 00:32:28 +0300:
Might this be connected with the deployment?
https://www.wikidata.org/wiki/Wikidata:Project_chat#Special:ItemByTitle_changes_.22_.22_with_.22_.22_in_.22site.22
I don't believe so. We had some issues with lua/wikidata related to this
People throw around words like graphite, but unless I'm mistaken, us
non-staff folks do not have access to whatever that may be.
Graphite refers to the cluster performance logger available at:
http://graphite.wikimedia.org/
Anyone with a labs account can view it -- which as a committer you do
On 2013-03-22 6:46 PM, Matthew Walker mwal...@wikimedia.org wrote:
People throw around words like graphite, but unless I'm mistaken, us
non-staff folks do not have access to whatever that may be.
Graphite refers to the cluster performance logger available at:
http://graphite.wikimedia.org/
Hello Niklas and all,
Niklas Laxström wrote on 2013-03-21 11:55:24 +0200:
I've seen a couple of instances where changes to MediaWiki are blocked
until someone informs the community.
Just so I know, could you share these? Either on list or in a private
email to me if you don't want
Hello!
This is your weekly preview of higher-risk or general "you should be
aware of" items for the slew of deployments coming in the near term.
== During the week of March 25th ==
* Wikidata Phase 2 is going to test2 on Monday
** Then on Wednesday, if all goes well, it is going to about 10% of
On Fri, Mar 22, 2013 at 5:36 AM, Guillaume Paumier
gpaum...@wikimedia.org wrote:
Hi,
On Thu, Mar 21, 2013 at 12:43 AM, Quim Gil q...@wikimedia.org wrote:
Many of the ideas listed there are too generic (Write an extension),
improvements of existing features (Improve Extension:CSS)
This may
I know Aaron has spent a lot of time on the job queue. But I have
several observations and would like some feedback. The current workers
apparently select jobs from the queue at random. A FIFO method would
make far more sense. We have some jobs that can sit there in the queue
for extended periods
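To illustrate the difference (a toy model only, not MediaWiki's actual JobQueue code): random selection can leave an old job lingering indefinitely, while FIFO bounds how long any job waits.

```python
import random

# Toy job queue: jobs listed oldest-first.
def pop_random(jobs):
    """Remove and return an arbitrary job (the behaviour described above);
    the oldest job may survive an unbounded number of pops."""
    return jobs.pop(random.randrange(len(jobs)))

def pop_fifo(jobs):
    """Remove and return the oldest job first, so wait time is bounded."""
    return jobs.pop(0)

queue = ["job1", "job2", "job3"]
order = [pop_fifo(queue) for _ in range(3)]
print(order)  # FIFO drains in insertion order: ['job1', 'job2', 'job3']
```

With `pop_random`, "job1" has only a 1/n chance of being picked at each step, which matches the extended sit-times described above.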