Re: [Wikitech-l] Can/should my extensions be deleted from the Wikimedia Git repository?

2018-06-07 Thread Ryan Lane
The most likely way for people to see a code of conduct is through the
repository itself, which lets them know they have some way to combat
harassment in the tool they're using to contribute to that repository.
It makes sense to have a CODE_OF_CONDUCT.md in the repos; however, if all
the repos are using the same policy, it's often better to have a minimal
CODE_OF_CONDUCT.md that simply says "This repo is governed by the blah blah
code of conduct, specified here: ". This makes it possible to have a
single boilerplate code of conduct without needing to update every repo
whenever the CoC changes.
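
For example, a stub along these lines (the URL is just a placeholder) is
usually enough:

  # Code of Conduct

  This repository is governed by the project-wide code of conduct,
  which lives at https://example.org/code-of-conduct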

It's a reasonable ask to have the file there, and this discussion feels
like a thinly veiled argument against CoCs as a whole. If you're so against
the md file, or against the CoC as a whole, github and/or gitlab are fine
places to host a repository.

On Thu, Jun 7, 2018 at 5:39 PM, John  wrote:

> Honestly I find forcing documentation into repos to be abrasive, and
> overstepping the bounds of the CoC. I also find the behavior of those
> pushing such an approach to be hostile and overly aggressive. Why do you
> need to force a copy of the CoC into every repo? Why not keep it in a
> central location? What kind of mess would you need to clean up if for some
> reason you needed to adjust the contents of that file? Instead of having
> one location to update, you now have 800+ copies that need to be fixed.
>
> On Thu, Jun 7, 2018 at 8:23 PM, Yaron Koren  wrote:
>
> >  Chris Koerner  wrote:
> > > “Please just assume for the sake of this discussion that (a) I'm
> willing
> > > to abide by the rules of the Code of Conduct, and (b) I don't want the
> > > CODE_OF_CONDUCT.md file in my extensions.”
> > > Ok, hear me out here. What if I told you those two things are
> > > incompatible? That abiding by the community agreements requires the
> file
> > > as an explicit declaration of said agreement. That is to say, if we had
> > > a discussion about amending the CoC to be explicit about this
> expectation
> > > you wouldn’t have issues with including it? Or at least you’d be OK
> with
> > > it?
> >
> > Brian is right that adding a requirement to include this file to the CoC
> > would be an odd move. But, if it did happen, I don't know - I suppose I'd
> > have two choices: either include the files or remove my code. It would be
> an
> > improvement over the current situation in at least one way: we would know
> > that rules are still created in an orderly, consensus-like way, as
> opposed
> > to now, where a small group of developers can apparently make up rules as
> > they go along.
> >
> > -Yaron
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reducing the environmental impact of the Wikimedia movement

2016-05-16 Thread Ryan Lane
On Mon, May 16, 2016 at 12:45 AM, Lukas Mezger 
wrote:

> Yes, we're also looking into reducing the environmental impact of the rest
> of the activities in the Wikimedia movement. And I am very aware that many
> websites consume a lot more energy than Wikipedia does. (Please see
> https://meta.wikimedia.org/wiki/Environmental_impact for more
> information.)
>
> But this doesn't mean we should not try to have the Wikimedia servers run
> on renewable energy. Even some big for-profit companies like Apple and
> Yahoo are already doing this. So, how can we get there as well and what
> would it cost us?
>
>
When you're as large as Apple or Yahoo, it's easy to pressure your
infrastructure providers to run on renewables. Wikimedia has basically no
bargaining power because they spend very little money (because they don't
run a lot of servers). I know Wikimedia feels huge and important, and it's
important in a lot of ways, but when it comes to pressuring datacenter
providers, it may as well not exist.

It's possible that the only available option is to bring up new datacenters
in areas with renewable energy, and those datacenters may not be as
reliable, may not be as well connected from a networking point of view, and
may have poor security, among other issues. I wouldn't expect much movement
towards renewables here until there are some really large companies pushing
for this in the relevant datacenters.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Security patch

2016-04-26 Thread Ryan Lane
On Tue, Apr 26, 2016 at 12:01 PM, Alex Monk  wrote:

> It's not an extension that gets bundled with MediaWiki releases.
>
>
That doesn't mean third parties aren't using it. When I say a release of
the extension, I mean give it a version number, increase the version
number, tag it in git, then tell people "ensure you are using version x or
greater of MobileFrontend".

This is a pretty normal process that Wikimedia does well for other things.
I have a feeling this isn't going through a normal process...

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Security patch

2016-04-26 Thread Ryan Lane
Any chance that Wikimedia Foundation can actually do proper releases of
this extension, rather than sending people a link to a phabricator page
that has a link to a gerrit change buried in the comments?

This seems like a pretty poor way to do a security release to third parties
that may be relying on this.

On Tue, Apr 26, 2016 at 11:44 AM, Jon Robson  wrote:

> A security vulnerability has been discovered in MediaWiki setups which
> use MobileFrontend.
>
> Revisions whose visibility had been altered were showing up in parts
> of the mobile UI.
>
> All projects in the Wikimedia cluster have since been patched, but if
> you use this extension please be sure to apply the fix.
>
> Patch file and issue are documented on
> https://phabricator.wikimedia.org/T133700
>
> Note there is some follow-up work to do which is tracked in:
> https://phabricator.wikimedia.org/T133722
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Feelings

2016-04-01 Thread Ryan Lane
To be totally honest, I think this is a great idea, but it should use
emojis. GitHub added this for PRs, issues, and comments, for instance, and
it's amazingly helpful:

https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments

Slack has the same thing. It's fun, people like it, and it tends to
encourage better interactions.

On Fri, Apr 1, 2016 at 2:12 PM, Pine W  wrote:

> I can't tell if this is intended to be an April 1st joke or if it's
> serious. In any case, I could see this being an interesting option for talk
> pages, particularly where Flow is enabled.
>
> Pine
>
> On Fri, Apr 1, 2016 at 12:24 PM, Legoktm 
> wrote:
>
> > Hi,
> >
> > It's well known that Wikipedia is facing threats from other social
> > networks and losing editors. While many of us spend time trying to make
> > Wikipedia different, we need to be cognizant that what other social
> > networks are doing is working. And if we can't beat them, we need to
> > join them.
> >
> > I've written a patch[1] that introduces a new feature to the Thanks
> > extension called "feelings". When hovering over a "thank" link, five
> > different emoji icons will pop up[2], representing five different
> > feelings: happy, love, surprise, anger, and fear. Editors can pick one
> > of those options instead of just a plain thanks, to indicate how they
> > really feel, which the recipient will see[3].
> >
> > Of course, some might consider this feature to be controversial (I
> > suspect they would respond to my email with "anger" or "fear"), so I've
> > added a feature flag for it. Setting
> >  $wgDontFixEditorRetentionProblem = true;
> > will disable it for your wiki.
> >
> > Please give the patch a try, I've only tested it in MonoBook so far, it
> > might need some extra CSS in Vector.
> >
> > [1] https://gerrit.wikimedia.org/r/280961
> > [2] https://phabricator.wikimedia.org/F3810964
> > [3] https://phabricator.wikimedia.org/F3810963
> >
> > -- Legoktm
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Feelings

2016-04-01 Thread Ryan Lane
/me marks this email with "anger"

On Fri, Apr 1, 2016 at 1:41 PM, Ricordisamoa 
wrote:

> This sounds like something that some random WMF team would actually
> implement in the near future...
>
>
> On 01/04/2016 21:24, Legoktm wrote:
>
>> Hi,
>>
>> It's well known that Wikipedia is facing threats from other social
>> networks and losing editors. While many of us spend time trying to make
>> Wikipedia different, we need to be cognizant that what other social
>> networks are doing is working. And if we can't beat them, we need to
>> join them.
>>
>> I've written a patch[1] that introduces a new feature to the Thanks
>> extension called "feelings". When hovering over a "thank" link, five
>> different emoji icons will pop up[2], representing five different
>> feelings: happy, love, surprise, anger, and fear. Editors can pick one
>> of those options instead of just a plain thanks, to indicate how they
>> really feel, which the recipient will see[3].
>>
>> Of course, some might consider this feature to be controversial (I
>> suspect they would respond to my email with "anger" or "fear"), so I've
>> added a feature flag for it. Setting
>>   $wgDontFixEditorRetentionProblem = true;
>> will disable it for your wiki.
>>
>> Please give the patch a try, I've only tested it in MonoBook so far, it
>> might need some extra CSS in Vector.
>>
>> [1] https://gerrit.wikimedia.org/r/280961
>> [2] https://phabricator.wikimedia.org/F3810964
>> [3] https://phabricator.wikimedia.org/F3810963
>>
>> -- Legoktm
>>
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] bluejeans

2016-03-01 Thread Ryan Lane
The right question here is: is it more important for the Wikimedia Foundation
to use only open source software than it is to focus on work that directly
benefits the movement? There's no reasonable open source option for this
function. The ones that exist are terrible, are less efficient, and have to
have hardware dedicated to them. Either way it's going to cost money to
handle this; the question is, should it also cost engineering time?

Idealism comes at a pretty high cost. The foundation has made a pretty
reasonable choice in the past in that they're willing to use proprietary
software for functions that aren't directly associated with the projects.
The decision is often focused on "if the community wanted to fork the
projects, would this proprietary software we're using be a problem?". In
this case the answer would be no.

On Tue, Mar 1, 2016 at 12:55 PM, Jeremy Baron  wrote:

> Hi,
>
> On Tue, Mar 1, 2016 at 3:36 PM, David Strine 
> wrote:
> > We will be holding this brownbag in 25 minutes. The Bluejeans link has
> > changed:
> >
> >  https://bluejeans.com/396234560
>
> I'm not familiar with bluejeans and maybe have missed a transition
> because I wasn't paying enough attention. is this some kind of
> experiment? have all meetings transitioned to this service?
>
> anyway, my immediate question at the moment is how do you join without
> sharing your microphone and camera?
>
> am I correct thinking that this is an entirely proprietary stack
> that's neither gratis nor libre and has no on-premise (not cloud)
> hosting option? are we paying for this?
>
> -Jeremy
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Windows Single Sign-On Extension

2016-02-09 Thread Ryan Lane
The best option here is:
https://www.mediawiki.org/wiki/Extension:LDAP_Authentication

I'm not sure why you think LDAP is a wart on Windows. Active Directory is
just LDAP with Kerberos.

Anyway, the LDAP Authentication extension has examples of how to do
auto-auth using kerberos. You still need LDAP for things like group
membership, username conversion, and other integrations.
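
Roughly, the LocalSettings.php side of that looks like the sketch below. The
domain, host, and base DN are placeholders, and it assumes the web server has
already done the Kerberos negotiation and populated REMOTE_USER; see the
extension page for the authoritative examples.

require_once "$IP/extensions/LdapAuthentication/LdapAuthentication.php";
$wgAuth = new LdapAuthenticationPlugin();
$wgLDAPDomainNames = array( "EXAMPLE" );
$wgLDAPServerNames = array( "EXAMPLE" => "dc1.example.org" );
$wgLDAPEncryptionType = array( "EXAMPLE" => "tls" );
$wgLDAPBaseDNs = array( "EXAMPLE" => "dc=example,dc=org" );
$wgLDAPSearchAttributes = array( "EXAMPLE" => "sAMAccountName" );
# Auto-auth: trust the user the web server already authenticated, stripping
# the @REALM suffix that Negotiate/Kerberos leaves in REMOTE_USER.
$wgLDAPAutoAuthDomain = "EXAMPLE";
$wgLDAPAutoAuthUsername = preg_replace( '/@.*$/', '', $_SERVER['REMOTE_USER'] );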

- Ryan

On Tue, Feb 9, 2016 at 9:20 AM, François St-Arnaud 
wrote:

> Hello,
>
> To enable Single Sign-On to a MediaWiki hosted on IIS in a Windows Domain,
> the best MediaWiki extension I could find was NTLMActiveDirectory.
> https://www.mediawiki.org/wiki/Extension:NTLMActiveDirectory
>
> However, I had two peeves with this extension:
> 1) Its name; I'm not doing NTLM, but Negotiate and Kerberos; and
> 2) Its use of LDAP; feels too much like a wart on Windows!
>
> See, I'm sitting on an IIS box on a Windows domain with Integrated Windows
> Authentication enabled. By the time the MW extension gets hit, IIS has
> already authenticated the user, so why not just leverage that instead?
>
> I therefore used NTLMActiveDirectory as a starting point, but threw out
> all the LDAP stuff and replaced it with a simple Web call to an IIS-hosted
> handler to get the AD group membership for the already authenticated user.
> Of NTLMActiveDirectory, I kept the AD / MW group mapping configuration
> required for authorization.
>
> Personally, I find this solution much simpler and intuitive for AD
> integration when hosting MW on a Windows/IIS box.
>
> Does this make sense to others in the community?
> Do others feel there was a need for a better AD integration extension?
> Would others in the community benefit from such an extension?
>
> If so, I would be happy to share my work, following instructions found
> here:
> https://www.mediawiki.org/wiki/Writing_an_extension_for_deployment
>
> Regards,
>
> François
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Windows Single Sign-On Extension

2016-02-09 Thread Ryan Lane
If this is what you'll need, you're going to need to write a custom
extension. None of the existing auth extensions do this.

On Tue, Feb 9, 2016 at 2:35 PM, François St-Arnaud <fstarn...@logisphere.ca>
wrote:

> Thanks, I'll take a closer look at your extension.
>
> Well, although I understand that using LDAP against AD is supposed to work
> mostly seamlessly, I've had trouble trying to use it in our client's
> domain, mostly due to GPOs and other security constraints. For one thing,
> LDAP, even TLS-secured, is not authorized for authentication in the domain.
> Also, LDAP starts to feel like a wart -- or an overkill -- when I have to
> require and configure a PHP LDAP client on the Web server and send LDAP
> requests when I know that the web server I'm sitting on, IIS, has already
> authenticated the user via Negotiate/Kerberos and already knows the user's
> AD group membership and other such information.
>
> Hence, I feel that the approach of a simple loopback call from the
> extension back to a .NET ASHX web handler -- which is readily available via
> an API in that environment -- is more elegant. For example, to get the AD
> group membership of the currently logged-in user (some lines removed for
> clarity):
>
> In PHP, using curl:
>
> $curl = curl_init();
> curl_setopt($curl, CURLOPT_URL, 'roles.ashx');
> curl_setopt($curl, CURLOPT_RETURNTRANSFER, true); // return the response body rather than echoing it
> $result = curl_exec($curl);
> $wgAuth->userADGroups = json_decode($result); // decode the JSON array of group names
>
> In C#, in a roles.ashx file deployed with the extension on the IIS server:
>
> public void ProcessRequest (HttpContext context) {
>   context.Response.ContentType = "application/json";
>   context.Response.Write("[");
>   int i = 0;
>   int count = Roles.GetRolesForUser().Length;
>   foreach (var role in Roles.GetRolesForUser())
>   {
> context.Response.Write('"' + role + '"');
> if (++i != count) context.Response.Write(',');
>   }
>   context.Response.Write(']');
>   context.Response.End();
> }
>
> - François
>
> -Original Message-
> From: Wikitech-l [mailto:wikitech-l-boun...@lists.wikimedia.org] On
> Behalf Of Ryan Lane
> Sent: Tuesday, February 09, 2016 14:43
> To: Wikimedia developers <wikitech-l@lists.wikimedia.org>
> Subject: Re: [Wikitech-l] Windows Single Sign-On Extension
>
> The best option here is:
> https://www.mediawiki.org/wiki/Extension:LDAP_Authentication
>
> I'm not sure why you think LDAP is a wart on Windows. Active Directory is
> just LDAP with Kerberos.
>
> Anyway, the LDAP Authentication extension has examples of how to do
> auto-auth using kerberos. You still need LDAP for things like group
> membership, username conversion, and other integrations.
>
> - Ryan
>
> On Tue, Feb 9, 2016 at 9:20 AM, François St-Arnaud <
> fstarn...@logisphere.ca>
> wrote:
>
> > Hello,
> >
> > To enable Single Sign-On to a MediaWiki hosted on IIS in a Windows
> > Domain, the best MediaWiki extension I could find was
> NTLMActiveDirectory.
> > https://www.mediawiki.org/wiki/Extension:NTLMActiveDirectory
> >
> > However, I had two peeves with this extension:
> > 1) Its name; I'm not doing NTLM, but Negotiate and Kerberos; and
> > 2) Its use of LDAP; feels too much like a wart on Windows!
> >
> > See, I'm sitting on an IIS box on a Windows domain with Integrated
> > Windows Authentication enabled. By the time the MW extension gets hit,
> > IIS has already authenticated the user, so why not just leverage that
> instead?
> >
> > I therefore used NTLMActiveDirectory as a starting point, but threw
> > out all the LDAP stuff and replaced it with a simple Web call to an
> > IIS-hosted handler to get the AD group membership for the already
> authenticated user.
> > Of NTLMActiveDirectory, I kept the AD / MW group mapping configuration
> > required for authorization.
> >
> > Personally, I find this solution much simpler and intuitive for AD
> > integration when hosting MW on a Windows/IIS box.
> >
> > Does this make sense to others in the community?
> > Do others feel there was a need for a better AD integration extension?
> > Would others in the community benefit from such an extension?
> >
> > If so, I would be happy to share my work, following instructions found
> > here:
> > https://www.mediawiki.org/wiki/Writing_an_extension_for_deployment
> >
> > Regards,
> >
> > François
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Windows Single Sign-On Extension

2016-02-09 Thread Ryan Lane
Never hurts :)

On Tue, Feb 9, 2016 at 6:06 PM, François St-Arnaud <fstarn...@logisphere.ca>
wrote:

> Right. As mentioned in my first post, I already have created a custom
> extension using this approach and NTLMActiveDirectory as a starting point.
> Now, I wonder if it is worth sharing with the community, if others would
> benefit from an LDAP-less SSO solution for MW hosted on IIS?
>
> -Original Message-
> From: Wikitech-l [mailto:wikitech-l-boun...@lists.wikimedia.org] On
> Behalf Of Ryan Lane
> Sent: Tuesday, February 09, 2016 17:41
> To: Wikimedia developers <wikitech-l@lists.wikimedia.org>
> Subject: Re: [Wikitech-l] Windows Single Sign-On Extension
>
> If this is what you'll need, you're going to need to write a custom
> extension. None of the existing auth extensions do this.
>
> On Tue, Feb 9, 2016 at 2:35 PM, François St-Arnaud <
> fstarn...@logisphere.ca>
> wrote:
>
> > Thanks, I'll take a closer look at your extension.
> >
> > Well, although I understand that using LDAP against AD is supposed to
> > work mostly seamlessly, I've had trouble trying to use it in our
> > client's domain, mostly due to GPOs and other security constraints.
> > For one thing, LDAP, even TLS-secured, is not authorized for
> authentication in the domain.
> > Also, LDAP starts to feel like a wart -- or an overkill -- when I have
> > to require and configure a PHP LDAP client on the Web server and send
> > LDAP requests when I know that the web server I'm sitting on, IIS, has
> > already authenticated the user via Negotiate/Kerberos and already knows
> > the user's AD group membership and other such information.
> >
> > Hence, I feel that the approach of a simple loopback call from the
> > extension back to a .NET ASHX web handler -- which is readily
> > available via an API in that environment -- is more elegant. For
> > example, to get the AD group membership of the currently logged-in
> > user (some lines removed for
> > clarity):
> >
> > In PHP, using curl:
> >
> > $curl = curl_init();
> > curl_setopt($curl, CURLOPT_URL, 'roles.ashx');
> > curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
> > $result = curl_exec($curl);
> > $wgAuth->userADGroups = json_decode($result);
> >
> > In C#, in a roles.ashx file deployed with the extension on the IIS
> server:
> >
> > public void ProcessRequest (HttpContext context) {
> >   context.Response.ContentType = "application/json";
> >   context.Response.Write("[");
> >   int i = 0;
> >   int count = Roles.GetRolesForUser().Length;
> >   foreach (var role in Roles.GetRolesForUser())
> >   {
> > context.Response.Write('"' + role + '"');
> > if (++i != count) context.Response.Write(',');
> >   }
> >   context.Response.Write(']');
> >   context.Response.End();
> > }
> >
> > - François
> >
> > -Original Message-
> > From: Wikitech-l [mailto:wikitech-l-boun...@lists.wikimedia.org] On
> > Behalf Of Ryan Lane
> > Sent: Tuesday, February 09, 2016 14:43
> > To: Wikimedia developers <wikitech-l@lists.wikimedia.org>
> > Subject: Re: [Wikitech-l] Windows Single Sign-On Extension
> >
> > The best option here is:
> > https://www.mediawiki.org/wiki/Extension:LDAP_Authentication
> >
> > I'm not sure why you think LDAP is a wart on Windows. Active Directory
> > is just LDAP with Kerberos.
> >
> > Anyway, the LDAP Authentication extension has examples of how to do
> > auto-auth using kerberos. You still need LDAP for things like group
> > membership, username conversion, and other integrations.
> >
> > - Ryan
> >
> > On Tue, Feb 9, 2016 at 9:20 AM, François St-Arnaud <
> > fstarn...@logisphere.ca>
> > wrote:
> >
> > > Hello,
> > >
> > > To enable Single Sign-On to a MediaWiki hosted on IIS in a Windows
> > > Domain, the best MediaWiki extension I could find was
> > NTLMActiveDirectory.
> > > https://www.mediawiki.org/wiki/Extension:NTLMActiveDirectory
> > >
> > > However, I had two peeves with this extension:
> > > 1) Its name; I'm not doing NTLM, but Negotiate and Kerberos; and
> > > 2) Its use of LDAP; feels too much like a wart on Windows!
> > >
> > > See, I'm sitting on an IIS box on a Windows domain with Integrated
> > > Windows Authentication enabled. By the time the MW extension gets
> > > hit, IIS has already authenticated the user, so why not just
> > > leverage that
> > instead?
> > >
> > > I therefore used NTLMActiveDirectory as a starting point, but threw

Re: [Wikitech-l] Peer-to-peer sharing of the content of Wikipedia through WebRTC

2015-11-30 Thread Ryan Lane
On Sun, Nov 29, 2015 at 8:18 PM, Yeongjin Jang 
wrote:

>
> I recall that I saw a financial statement of WMF that states around $2.3M
> was spent on Internet Hosting. I am not sure whether it includes
> management cost for computing resources
> (server clusters such as eqiad) or not.
>
>
That's the cost for datacenters, hardware, bandwidth, etc..


> Not sure the following simple calculation works:
> 117 TB per day, for 365 days, if $0.05 per GB, then it is around $2.2M.
> Maybe it would be more accurate if I contact analytics team directly.
>
>
That calculation doesn't work because it doesn't take into account peering
agreements, or donated (or heavily discounted) transit contracts. Bandwidth
is one of the cheaper overall costs.

Something your design doesn't take into account for bandwidth costs is that
the world is trending toward mobile, and mobile bandwidth costs are generally
very high. It's likely this p2p approach will be many orders of magnitude
more expensive than the current approach.

A decentralized approach doesn't benefit from economies of scale.
Instead of being able to negotiate transit pricing and eliminate cost
through peering, you're externalizing the cost at the consumer rate, which
is the highest possible rate.

> The other scalability concern would be for obscure articles. I haven't
> really looked at your code, so maybe you cover it - but Wikipedia has over
> 5 million articles (and a lot more when you count non-content pages). The
> group of peers is presumably going to have high churn (since they go away
> when you browse somewhere else). I'd worry that the overhead of keeping
> track of which peer knows what, especially given how fast the peers change,
> would be a lot. I also expect that for lots of articles, only a very small
> number of peers will know them.
>
>

> That's true. Dynamically registering / un-registering the lookup table gives
> high overhead on the servers (in both computation & memory usage).
> Distributed solutions like DHTs exist, but we think there could be a
> trade-off on lookup time between a centralized architecture (managing the
> lookup table on the server) and a fully distributed architecture (DHT).
>
> Our prior "naive" implementation costs roughly this: if each user has
> 5K pages cached (with around 50K images),
> then with 10K concurrent users present it consumes around 35GB
> of memory, and each registration incurs 500K bytes of network traffic.
>
> We thought that was not that useful, so now we are trying to come up with
> a more lightweight implementation. We hope to have a practically meaningful
> micro-benchmark result on the new implementation.
>
>
Just the metadata for articles, images and revisions is going to be
massive. That data itself will need to be distributed too. The network
cost associated with just lookups is going to be quite high for peers.

It seems your project assumes that bandwidth is unlimited and unrated,
which for many parts of the world isn't true.

I don't mean to dissuade you. The idea of a p2p Wikipedia is an interesting
project, and at some point in the future if bandwidth is free and unrated
everywhere this may be a reasonable way to provide a method of access in
case of major disaster of Wikipedia itself. This idea has been brought up
numerous times in the past, though, and in general the potential gains are
never better than the latency, cost, and complexity associated with it.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [RFC/Summit] `npm install mediawiki-express`

2015-11-06 Thread Ryan Lane
On Thu, Nov 5, 2015 at 5:38 PM, C. Scott Ananian 
wrote:

> I view it as partly an effort to counteract the perceived complexity of
> running a forest full of separate services.  It's fine to say they're all
> preinstalled in this VM image, but that's still a lot of complexity to dig
> through: where are all the servers? What ports are they listening on?
> Did one of them crash?  How do I restart it?
>
>
When you run docker-compose, your containers are linked together. If you
have the following containers:

parsoid
mathoid
mediawiki
mysql (hopefully not)
cassandra
redis

You'd talk to redis from mediawiki via: redis://redis:6379 and you'd talk
to parsoid via: http://parsoid and to mathoid via http://mathoid, etc etc.
It handles the networking for you.
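
As a rough sketch of the MediaWiki side of that (the service names double as
hostnames; the ports and exact settings here are assumptions, not a tested
config):

$wgObjectCaches['redis'] = array(
    'class' => 'RedisBagOStuff',
    'servers' => array( 'redis:6379' ), // "redis" resolves to the redis container
);
$wgMainCacheType = 'redis';
$wgVirtualRestConfig['modules']['parsoid'] = array(
    'url' => 'http://parsoid:8000',     // "parsoid" resolves to the parsoid container
    'domain' => 'localhost',
);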

If one of them crashes then docker-compose will tell you. If any of them fail
to start it will also tell you.

I'm not even a huge proponent of docker, but the docker-compose solution
for this is far simpler and far more standard than what you're
proposing, and it doesn't require investing a ton of effort into something
that no other project will ever consider using.

Ride the ocean on the big boat, not the life raft.


> For some users, the VM (or an actual server farm) is indeed the right
> solution.  But this was an attempt to see if I could recapture the
> "everything's here in this one process (and one code tree)" simplicity for
> those for whom that's good enough.
>

There's no server farm here. If you're running linux it's just a set of
processes running in containers on a single node (which could be your
laptop). If you're on OSX or Windows it's a VM, but that can be totally
abstracted away using Vagrant.

If you're launching in the cloud, you could launch directly to joyent or
AWS ECS or very easily stand something up on digital ocean. If you're
really feeling like making things easier for end-users, provide
orchestration code that will automatically provision MW and its dependencies
via docker-compose in a VM in one of these services.

Orchestration + containers is what most people are doing for microservices.
Don't, out of fear of complexity, build something that's complex to maintain
and completely out of the ordinary. Go with the solutions everyone else is
using and wrap tooling around them to make it easier for people.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [RFC/Summit] `npm install mediawiki-express`

2015-11-06 Thread Ryan Lane
On Fri, Nov 6, 2015 at 11:13 AM, C. Scott Ananian 
wrote:

> Let's not let this discussion sidetrack into "shared hosting vs VMs (vs
> docker?)" --- there's another phabricator ticket and summit topic for that
> (
> https://phabricator.wikimedia.org/T87774 and
> https://phabricator.wikimedia.org/T113210.
>
>
I only mentioned this portion of the discussion because I can't think of
any other reason your initial proposal makes sense, since it's essentially
discussing ways to distribute and run a set of microservices. Using docker
requires root, which isn't available on shared hosting. I'm fine ignoring
this topic in this discussion, though.


> I'd prefer to have discussion in *this* particular task/thread concentrate
> on:
>
> * Hey, we can have JavaScript and PHP in the same packaging system.  What
> cool things might that enable?
>
>
* Hey, we can have JavaScript and PHP running together in the same server.
> Perhaps some persistence-related issues with PHP can be made easier?
>
> * Hey, we can actually write *extensions for mediawiki-core* in JavaScript
> (or CoffeeScript, or...) now.  Or run PHP code inside Parsoid.  How could
> we use that?  (Could it grow developer communities?)
>
>
You're not talking about microservices here, so it's at least partially a
different discussion. You're talking about adding multiple languages into a
monolith and that's a path towards insanity. It's way easier to understand
and maintain large numbers of microservices than a polyglot monolith. REST
with well defined APIs between services provides all of the same benefits
while also letting people manage their service independently, even with the
possibility of the service not being tied to MediaWiki or Wikimedia at all.

I'd posit that adding additional languages into the monolith will more
likely have the result of shrinking the developer community because it
requires knowledge of at least two languages to properly do development.

* How are parser extensions (like, say, WikiHiero, but there are lots of
> them) going to be managed in the long term?  There are three separate
> codebases to hook right now.  An extension like  might eventually
> need to hook the image thumbnail service, too.  Do we have a plan?
>
>
This seems like a perfect place for another microservice.


> And the pro/anti-npm and pro/anti-docker and pro/anti-VM discussion can go
> into one of those other tasks.  Thanks.
>
>
You're discussing packaging, distribution and running of services. So, I
don't believe they belong in another task. You're saying that alternatives
to your idea are only relevant when considered on their own, but these
alternatives are basically industry standards for the problem set as this
point and your proposal is something that only MediaWiki (and Wikimedia)
will be doing or maintaining.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [RFC/Summit] `npm install mediawiki-express`

2015-11-05 Thread Ryan Lane
Is this simply to support hosted providers? npm is one of the worst package
managers around. This really seems like a case where thin docker images and
docker-compose really shine. It's easy to handle from the packer side,
it's incredibly simple from the user side, and it doesn't require
reinventing the world to distribute things.

If this is the kind of stuff we're doing to support hosted providers, it
seems it's really time to stop supporting hosted providers. It's $5/month
to have a proper VM on digital ocean. There are even cheaper solutions
around. Hosted providers at this point aren't cheaper. At best they're
slightly easier to use, but MediaWiki is seriously handicapping itself to
support this use-case.

On Thu, Nov 5, 2015 at 1:47 PM, C. Scott Ananian 
wrote:

> Architecturally it may be desirable to factor our codebase into multiple
> independent services with clear APIs, but small wikis would clearly like a
> "single server" installation with all of the services running under one
> roof, as it were. Some options previously proposed have involved VM
> containers that bundle PHP, Node, MediaWiki and all required services into
> a preconfigured full system image. (T87774
> )
>
> This summit topic/RFC proposes an alternative: tightly integrating PHP/HHVM
> with a persistent server process running under node.js. The central service
> bundles together multiple independent services, written in either PHP or
> JavaScript, and coordinates their configurations. Running a
> wiki-with-services can be done on a shared node.js host like Heroku.
>
> This is not intended as a production configuration for large wikis -- in
> those cases having separate server farms for PHP, PHP services, and
> JavaScript services is best: that independence is indeed the reason why
> refactoring into services is desirable. But integrating the services into a
> single process allows for hassle-free configuration and maintenance of
> small wikis.
>
> A proof-of-concept has been built. The node package php-embed
>  embeds PHP 5.6.14 into a node.js
> (>= 2.4.0) process, with bidirectional property and method access between
> PHP and node. The package mediawiki-express
>  uses this to embed
> MediaWiki into an express.js  HTTP server. (Other
> HTTP server frameworks could equally well be used.)  A hook in the `
> LocalSettings.php` allows you to configure the mediawiki instance in
> JavaScript.
>
> A bit of further hacking would allow you to fully configure the MediaWiki
> instance (in either PHP or JavaScript) and to dispatch to Parsoid (running
> in the same process).
>
> *SUMMIT GOALS / FOR DISCUSSION*
>
>
>- Determine whether this technology (or something similar) might be an
>acceptable alternative for small sites which are currently using shared
>hosting.  See T113210  for
>related discussion.
>- Identify and address technical roadblocks to deploying modular
>single-server wikis (see below).
>- Discuss methods for deploying complex wikitext extensions.  For
>example, the WikiHiero
> extension would
>ideally be distributed with (a) PHP code hooking mediawiki core, (b)
>client-side JavaScript extending Visual Editor, and (c) server-side
>JavaScript extending Parsoid.  Can these be distributed as a single
>integrated bundle?
>
>
> *TECHNICAL CHALLENGES*
>
>
>- Certain pieces of our code are hardwired to specific directories
>underneath mediawiki-core code.  This complicates efforts to run
> mediawiki
>from a "clean tree", and to distribute piece of mediawiki separately.
> In
>particular:
>- It would be better if the `vendor` directory could (optionally) live
>   outside the core mediawiki tree, so it could be distributed
> separately from
>   the main codebase, and allow for alternative package structures.
>   - Extensions and skins would benefit from allowing a "path-like" list
>   of directories, rather than a single location underneath the
> core mediawiki
>   tree.  Extensions/skins could be distributed as separate packages,
> with a
>   simple hook to add their locations to the search path.
>   - Tim Starling has suggested that when running in single-server mode,
>some internal APIs (for example, between mediawiki and Parsoid) would be
>better exposed as unix sockets on the filesystem, rather than as
> internet
>domain sockets bound to localhost.  For one, this would be more "secure
> by
>default" and avoid inadvertent exposure of internal service APIs.
>- It would be best to define a standardized mechanism for "services" to
>declare themselves & be connected & configured.  This may mean standard
>routes on a single-server install 

Re: [Wikitech-l] AWS usage

2015-10-08 Thread Ryan Lane
On Thu, Oct 8, 2015 at 9:13 AM, Petr Bena  wrote:

> As to why:
>
> AWS is more flexible and more reliable than wikimedia-labs, other than
> that it's basically the same. If they really need to use AWS it's
> probably because they don't like the restrictions that come with
> wikimedia labs (stuff must be open source, comply with policies, only
> ubuntu or debian, no proprietary software, hard-to-get public IP, long
> waiting time for stuff that you can't do yourself (request new
> project), no IPv6 and many others)
>
>
Assuming you don't use NFS, Labs is likely as reliable or possibly more
reliable than EC2, since effort is put into saving instances when hardware
is having issues (which AWS does not do). Project creation is usually
pretty fast in Labs. When you create a new account in AWS, you have to get
approved for most of the features before you can use them, so it's not
quite as quick as you're making out. Getting public IPs in Labs is
restricted, but it's also the most restricted thing in AWS too. For the
most part it's unnecessary to get/use public IPs. In AWS basically
everything goes in through ELBs and in Labs there's an ELB equivalent.

As for the rest of the restrictions, yep. Those totally make sense and fit
with the intent of Labs.

I think some of the benefits of Labs are being ignored here. You may be
limited to ubuntu/debian, but you also create an instance that's
pre-configured, that you (and any other project member) can immediately SSH
into. DNS is handled for you, load balancers are easy to use, cross-project
access is way easier (no need to manage IAM and VPCs), and networking is
pre-configured (which is actually non-trivial in AWS). Most importantly
it's free for end-users.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LDAP extension ownership

2015-09-21 Thread Ryan Lane
On Mon, Sep 21, 2015 at 8:41 AM, Chris Steipp  wrote:

> On Sep 19, 2015 11:15 AM, "bawolff"  wrote:
> >
> > maintain is an ambiguous word. WMF has some responsibility to all the
> > extensions deployed on the cluster (imo). If Devunt (and any others who
> > were knowledgeable of the Josa extension) disappeared, WMF would
> > default to becoming responsible for the security and critical issues
> > in the extension (However, I wouldn't hold them responsible for
> > feature requests or minor bugs).
> >
> > LDAP is used on wikitech, and some similar services. It would be nice
> > if teams that most directly interact with the extension (I suppose
> > that's labs, maybe security) help with maintenance [Maybe they already
> > do]. I don't necessarily think they have a responsibility to (beyond
> > critical issues, security, etc), but if the teams in question aren't
> > too busy, it is always nice to give back to projects that we use.
>
> Ideally the security team would take on this extension. Unfortunately we
> just don't have the capacity to address anything except significant
> security issues right now.
>
>
For the most part, this is likely all that's needed: security updates and
compatibility with MediaWiki. I know someone is working on an auth
framework update, so I'm sure there'll be some changes necessary for that
too.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] LDAP extension ownership

2015-09-16 Thread Ryan Lane
I haven't been actively maintaining the LDAP extension for MediaWiki for
over two years. There's not really much that needs to change, but some
basic love and care is likely a good idea. The LDAP extension is one of the
most popular MediaWiki extensions, so it wouldn't be great if it was broken
for long periods of time.

If anyone would like to take this extension over, please feel free.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "Try the free Wikipedia app" banners

2015-09-09 Thread Ryan Lane
On Wed, Sep 9, 2015 at 7:08 AM, Adam Wight  wrote:

>
> Another misconception or oversight I want to bring up is that Fundraising
> is the team pioneering the 2/3-page or full-page banners.  We're driving
> readers from the website to completely closed and somewhat evil payments
> platforms.  If there's any relevant or even irrelevant research about
> interstitials, please apply it to our work, cos we're about to have a huge
> impact on the English-speaking community in December.  Any complaints about
> the Finnish mobile experiment and dogpiling on the awesome apps developers
> seem incredibly misplaced while I'm walking around with this "Kick Me" sign
> on my backside.
>
>
I couldn't fully grok what point this was making, but whether it's this
mobile experiment or fundraising doing 2/3 or full-page banners, they're
terrible. This has been a major complaint by numerous people over the past
few years. Please tell me you're not considering interstitials, 2/3 page,
or full page banners for fundraising? This may be the year I actually start
boycotting the Wikimedia fundraiser, if so.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "Try the free Wikipedia app" banners

2015-09-02 Thread Ryan Lane
On Wed, Sep 2, 2015 at 12:13 PM, Oliver Keyes  wrote:

>
> And without any answer to my question about whether this was an actual
> A/B test, and whether you're measuring overall user utility rather
> than 'did they download it', this is also highly subjective and costly
> both in terms of time and emotional resources.
>
> But you're missing...well, two important points. First, as Brandon
> says, these debates /have to happen/. Identifying that something is a
> *right* thing to do, an *ethical* thing to do, cannot happen after
> that thing has been done. And second: costly in terms of time? Costly
> in terms of emotional resources? This thread is costly on both, and it
> is also an inevitable consequence of not having the discussion in
> advance.
>
>
Even ignoring the "is it right and ethical" debate, there's a pretty large
amount of research over the past 6 or so months that shows this is a bad
idea. I don't understand why there's even a need for a debate. People hate
interstitials. I know the reasoning is "well, this isn't an interstitial",
but if it walks and quacks like a duck...

Part of good research is using the results of already existing research.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "Try the free Wikipedia app" banners

2015-09-02 Thread Ryan Lane
On Tue, Sep 1, 2015 at 10:50 PM, Gergo Tisza  wrote:

> On Tue, Sep 1, 2015 at 10:09 PM, Ori Livneh  wrote:
>
> > Just in time!
> > http://techcrunch.com/2015/09/01/death-to-app-install-interstitials/
>
>
> Interstitials are full-page ads where you have to click a link to get to
> the actual content. These are normal banners.
> More importantly, as you can see in the Phabricator task, they are an
> experiment to measure if it is possible to make more people use the app.
> Experiments are good. For one thing, they can turn out negative, in which
> case we will have been spared a  philosophical debate about openness.
>

I don't think anyone would consider Wikimedia's donation banners "normal".
On mobile they take up the entire screen, which makes them as bad as
interstitials. On the desktop they obscure the vast majority of the site,
even on relatively large screens. They are a frequent cause for complaint
on social media when they run.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikipedia iOS app moving to GH

2015-07-22 Thread Ryan Lane
On Wed, Jul 22, 2015 at 2:14 PM, Gergo Tisza gti...@wikimedia.org wrote:

 GitHub is focused on small projects; for a project with lots of patches and
 committers it is problematic in many ways:


Some of the largest open source projects around are on github:

https://github.com/saltstack/salt/pulse/monthly

https://github.com/docker/docker/pulse/monthly


 * poor repository management (fun fact: GitHub does not even log force
 pushes, much less provide any ability to undo them)


You can have force push to master disabled. Also their fork model (which I
dislike for other reasons) means you can limit access to the main fork to a
small set of people and require all pull requests to come from branches of
forks. It's the default model for basically every public github project.


 * noisy commit histories due to poor support of amend-based workflows, and
 also because of poor message generation by the editing interface (Linus wrote
 a famous rant
 https://github.com/torvalds/linux/pull/17#issuecomment-5654674 on that)


You can manage that yourself through rebasing of PRs. That's completely
based on your workflow and what you require of contributors (or how you do
your merges).


 * no way to mark patches which depend on each other


Sure you can. All PRs are also issues and can be referenced by issue
number. If you mention the issue in a comment it adds a reference for you.


 * diff view works poorly for large patches


It's way better than gerrit. May not be better than phabricator, though.


 * CR interface works poorly for large patches (no way to write draft
 comments so you need to do two passes; discussions can be marked as
 obsolete by unrelated code changes in their vicinity)


You can delete and edit your comments. Draft comments in gerrit are an
anti-pattern IMO.

The biggest reasons to avoid github are the possibility of future lock-in
of the community, them possibly doing evil things (like source forge), and
the fact that it's a third party that's collecting information on our
community.

For all intents and purposes github is superior to gerrit and phabricator
in almost every way. It was avoided at Wikimedia in the past because of
privacy and security concerns.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Another reason to consider forcing https

2015-04-13 Thread Ryan Lane
On Sat, Apr 11, 2015 at 7:44 PM, Brian Wolff bawo...@gmail.com wrote:

 On Apr 11, 2015 1:18 PM, Pine W wiki.p...@gmail.com wrote:
 
  https://citizenlab.org/2015/04/chinas-great-cannon/
 
  Pine
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l

 A surprisingly bold move on China's part.

 I'm not sure if what is talked about applies directly to Wikipedia. Seems
 the goal was to try to compel github to remove specific content hostile
 to China's censorship interests, without china itself getting blocked,
 which might happen if DDOS was coming entirely from China IPs (since
 blocking github angers local programmers). To do that they needed to
 intercept connections inbound to servers in China, which doesn't apply to
 us as our servers are mostly in US (and despite various abuses of the NSA
 so often talked about, it is hard to imagine the US would ever consider so
 blatantly misusing other people's computers in a ddos-via-mitm-js attack).
 Of course one never knows if future attacks might target outbound
 connections from China, or if some other group might try to do something
 similar (again hard to imagine, and it seems like there are very few
 entities other than China who could get away with this, but I'm still kind
 of shocked that China did this)

 -

 The most interesting aspect of the report (imo) from the context of
 Wikipedia is, to quote:

 The attack on GitHub specifically targeted these repositories, possibly in
 an attempt to compel GitHub to remove these resources.  GitHub encrypts all
 traffic using TLS, preventing a censor from only blocking access to
 specific GitHub pages.  In the past, China attempted to block Github, but
 the block was lifted within two days, following significant negative
 reaction from local programmers.

 So because github encrypted everything with https (and thus blocking is an
 all or nothing affair), and because it was very popular, China was unwilling
 to block it entirely despite a small portion being objectionable.

 I don't really know what the status of wikipedia in China is, or how
 popular it is, but its conceivable that we could be in a similar position.
 Food for thought.


The only reason we remain unblocked is because we don't force SSL.
Wikipedia is relatively unused in China. If it was blocked, there'd be no
major public outcry.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Starting conversion of LiquidThreads to Flow at mediawiki.org

2015-03-19 Thread Ryan Lane
On Thu, Mar 19, 2015 at 8:18 AM, Risker risker...@gmail.com wrote:


 The dogfooding has been happening for a while on WMF's own office-wiki.  We
 haven't heard any results about that.  Is the system being used more than
 the wikitext system?  (i.e., are there more talk page comments now than
 there were before?)  Have users expressed satisfaction/dissatisfaction with
 the system?  Have they been surveyed?  Do they break down into groups
 (e.g., engineering loves it, grants hates it, etc...)?  I hear some stories
 (including stories that suggest some groups of staff have pretty much
 abandoned talk pages on office-wiki and are now reverting to emails
 instead) but without any documentary evidence or analysis it's unreasonable
 to think that it is either a net positive OR a net negative.


From what I remember officewiki is pretty unused by most of the staff, so
I'd doubt you'd get much usable feedback there.

As someone who's used mediawiki for 10+ years I can say that *anything* is
better than wikitext for discussion. I have a certain bias towards LQT and
such, but that's because I actually want to use discussion pages for
discussion and wikitext is basically the worst user experience in the world
in this regard. You have a bias towards wikitext because it's been the only
option and Wikimedia wikis have weirdly embraced the functionality and
freedom it provides. Having that freedom hampered is surely painful to
powerusers, but the new user experience isn't even comparable; it's so much
better.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Starting conversion of LiquidThreads to Flow at mediawiki.org

2015-03-16 Thread Ryan Lane
On Mon, Mar 16, 2015 at 6:02 PM, Risker risker...@gmail.com wrote:

 How about just converting those threads back to Wikitext, instead?  That
 script already exists, I've seen it used on Mediawiki. Will it mess up the
 pages that have already been converted using that script?

 Bottom line, it makes no sense to replace software that was considered
 barely suitable when it was first developed with new software that can't
 even do what that old, long-neglected software could do...and in several
 cases, there is no intention to ever add the features already available
 using Wikitext.

 As expectations increase for project users to post their
 comments/concerns/ideas/observations on Mediawiki, the use of Flow will
 become a barrier for participation.


As someone who used LQT a lot, I'd say I'd much rather have Flow replace the
pages I maintained using LQT. Maybe you dislike Flow, but it's *way* more
useful than wikitext for discussion. I never want to go back to the days
where I needed to discuss things with wikitext ever again. Wikitext
discussion pages are just the absolute worst.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GPL upgrading to version 3

2015-02-11 Thread Ryan Lane
On Wed, Feb 11, 2015 at 9:55 AM, Tyler Romeo tylerro...@gmail.com wrote:

 On February 11, 2015 at 11:49:15, Bryan Davis (bd...@wikimedia.org) wrote:
 On Tue, Feb 10, 2015 at 8:48 PM, Tyler Romeo tylerro...@gmail.com wrote:
  What is more important: allowing as many people to use our libraries as
 possible, or protecting against our libraries from being used in
 proprietary software.

 For me, allowing as many people to use our libraries as possible.
 For the sake of the discussion, why?

 For me, the advantages outweigh the disadvantages. The advantage of
 getting more companies to use our libraries is that (maybe) they will
 contribute back, similar to what Apple does with LLVM. However, on the
 other side of the same coin, we are allowing the possibility that companies
 will *not* contribute back, and instead keep their improvements to
 themselves (to be clear, I am not implying malicious intent).


Companies don't need to give back with GPL either, even if they make mods.
They only need to do so if they distribute. There are lots of Apache2
projects that have a very large amount of contribution, so maybe this would
happen, but I doubt it. Not having to maintain your own fork is a really
strong motivator for most companies.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GPL upgrading to version 3

2015-02-09 Thread Ryan Lane
On Mon, Feb 9, 2015 at 11:37 AM, Tyler Romeo tylerro...@gmail.com wrote:

 This entire conversation is a bit disappointing, mainly because I am a
 supporter of the free software movement, and like to believe that users
 should have a right to see the source code of software they use. Obviously
 not everybody feels this way and not everybody is going to support the free
 software movement, but I can assure you I personally have no plans to
 contribute to any WMF project that is Apache licensed. At the very
 least MediaWiki core is still GPLv2, even if it makes things a bit more
 difficult.


You're implying that Apache2 licensed software is somehow not part of the
free software movement and that's absurd. Apache2 is technically a freer
license than GPLv(anything). Like GPL3, it also provides patent protection.
In practice it doesn't matter if software is forked and closed if the
canonical source isn't. The org that forks must maintain their fork and all
of their modifications without help. It's onerous and generally
unmaintainable for most orgs, especially if their core business isn't based
on the software, or if the canonical source is fast moving.

It's your choice to not participate in any project for any reason, but try
to understand that some people (such as myself) much prefer to work on
software that's truly free, rather than virally free.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-16 Thread Ryan Lane
On Fri, Jan 16, 2015 at 4:29 AM, David Gerard dger...@gmail.com wrote:

 On 16 January 2015 at 07:38, Chad innocentkil...@gmail.com wrote:

  These days I'm not convinced it's our job to support every possible
  scale of wiki install. There are several simpler and smaller wiki solutions
  for people who can't do more than FTP a few files to a folder.


 In this case the problem is leaving users and their wiki content
 unsupported. Because they won't move while it works, even as it
 becomes a Swiss cheese of security holes. Because their content is
 important to them.

 This is the part of the mission that involves everyone else producing
 the sum of human knowledge. They used our software; if we're
 abandoning them then don't pretend there's a viable alternative for
 them. You know there isn't.


What you're forgetting is that WMF abandoned MediaWiki as an Open Source
project quite a while ago (at least 2 years ago). There's a separate org
that gets a grant from WMF to handle third party use, and it's funded just
well enough to keep the lights on.

Take a look at the current state of MediaWiki on the internet. I'd be
surprised if fewer than 99% of the MediaWiki wikis in existence are out of
date. Most are probably running a version from years ago. The level of
effort required to upgrade MediaWiki and its extensions that don't list
compatibility with core versions is beyond the skill level of most people
who use the software. Even places with a dedicated ops team find MediaWiki
difficult to keep up to date. Hell, I find it difficult and I worked for
WMF on the ops team and have been a MediaWiki dev since 1.3.

I don't think adding a couple more services is going to drastically alter
the current situation.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-16 Thread Ryan Lane
On Fri, Jan 16, 2015 at 8:27 AM, David Gerard dger...@gmail.com wrote:

 On 16 January 2015 at 16:09, Antoine Musso hashar+...@free.fr wrote:

  So what we might end up with:
  - Wikimedia using the SOA MediaWiki with split components maintained by
  staff and Wikimedia volunteer devs.  Code which is of no use for
  the cluster is dropped, which would surely ease maintainability.  We can
  then reduce MediaWiki to a very thin middleware and eventually rewrite
  whatever little code is left.
  - The old PHP-based MediaWiki is forked and given to the community to
  maintain.  WMF is no longer involved in it and the PHP-only project lives
  its own life.  That project could be made to remove a lot of the rather
  complicated code that suits mostly Wikimedia, making MediaWiki simpler
  and easier to adjust for small installations.
  So, is it time to fork both? I think so.



 This is not a great idea because it makes WMF wikis unforkable in
 practical terms. The data is worthless without being able to run an
 instance of the software. This will functionally proprietise all WMF
 wikis, whatever the licence statement.


So far all of the new services are open source. If you want to work on
forking Wikimedia projects, you can even start a Labs project for it.
I don't think this is a concern here.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-16 Thread Ryan Lane
On Fri, Jan 16, 2015 at 10:21 AM, Ryan Schmidt skizz...@gmail.com wrote:


 This sounds like a problem we need to fix, rather than making it worse.
 If most wikis are not up to date then we should work on making it easier
 to keep them up to date, not harder. Any SOA approach is sadly DOA for
 any shared host; even if the user has shell access or other means of
 launching processes (so excluding almost every free host in existence
 here), there is no guarantee that a particular environment such as node.js
 or Python is installed or at the correct version (and similarly no
 guarantee for a host to install them for you). Such a move, while possibly
 ideal for the WMF, would indeed make running on shared hosting nearly if
 not entirely impossible. The net effect is that people will keep their
 existing MW installs at their current version or even install old versions
 of the software so they can look like Wikipedia or whatnot, rife with
 unpatched security vulnerabilities. I cannot fathom that the WMF would be
 ok with leaving third-party users in such a situation.


You'd be wrong. WMF has been ok with it since they dropped efforts for
supporting third parties (and knew that most MW installs were insecure well
before that). It would be great if the problem was solved, but people have
been asking for it for over 10 years now and it hasn't happened. I don't
expect it to happen any time soon. There are only a couple of substantial MW
users and neither of them cares about third-party usage.


 Moving advanced things into services may make sense, but it should not
 come at the expense of making the core worthless for third parties;
 sensible refactors could still be done on core while leaving it as pure
 PHP. Services, if implemented, should be entirely optional (as an example,
 Parsoid is not required to have a base working wiki. I would like to stick
 to that model for any future service as well).


I agree with other people that MW as-is should be forked. Until this
happens it'll never get proper third party support.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-16 Thread Ryan Lane
On Fri, Jan 16, 2015 at 1:05 PM, Brad Jorsch (Anomie) bjor...@wikimedia.org
 wrote:

 On Fri, Jan 16, 2015 at 12:27 PM, Ryan Lane rlan...@gmail.com wrote:

  What you're forgetting is that WMF abandoned MediaWiki as an Open Source
  project quite a while ago (at least 2 years ago).


 {{citation needed}}


There was a WMF engineering meeting where it was announced internally. I
was the only one who spoke against it. I can't give a citation to it
because it was never announced outside of WMF, but soon after that,
third-party support was moved to a grant-funded org, which is the current status
quo.

Don't want to air dirty laundry, but you asked ;).

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-16 Thread Ryan Lane
On Fri, Jan 16, 2015 at 1:20 PM, Erik Moeller e...@wikimedia.org wrote:


 I think the confusion between third party support and an open
 source project is unhelpful. We're obviously an open source project
 with lots of contributors who aren't paid (and many of them are
 motivated by Wikimedia's mission), it's just that the project puts
 primary emphasis on the Wikimedia mission, and only secondary emphasis
 on third party needs. I don't think it's a dirty secret that we moved
 to a model of contracting out support to third parties -- there was
 even an RFP for it. ;-)


There's open source in technicality and open source in spirit. MediaWiki is
obviously the former. It's hard to call MediaWiki open source in spirit
when the code is dumped over the wall by its primary maintainer. Third
parties get security fixes, but little to no emphasis is put into making it
usable for them. Projects that involve making the software usable for third
parties are either rejected or delegated to WMF projects that never get
resourced because they aren't beneficial to WMF (effectively licking all
those cookies). MediaWiki for third parties is effectively abandonware
since no one is maintaining it for that purpose.

All of this is to say: the Wikimedia Foundation shouldn't have to factor
third-party usage into its decisions, because as things stand it doesn't
support that usage. I don't understand why there's a conversation about it.
It's putting constraints on its own architecture model because of a
third-party community it doesn't support.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-16 Thread Ryan Lane
On Fri, Jan 16, 2015 at 1:23 PM, Brad Jorsch (Anomie) bjor...@wikimedia.org
 wrote:

 So this was never publicly announced and has had no visible effect, to the
 point that the latest version of all the code is still publicly available
 under a free license and volunteers are still encouraged to contribute?


At the time there were projects about to start for improving the installer,
making extension management better, and a few other third-party things.
They were dropped to narrow scope. In practice, third-party support was
the only thing dropped from scope that year. So, maybe not a visible effect
because it maintained the status quo, but there was definitely an effect.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Ops] All non-api traffic is now served by HHVM

2014-12-03 Thread Ryan Lane
This is awesome! Great work everyone!

On Wed, Dec 3, 2014 at 9:03 AM, Giuseppe Lavagetto glavage...@wikimedia.org
 wrote:

 Hi all,

 it's been quite a journey since we started working on HHVM, and last
 week (November 25th) HHVM was finally introduced to all users who didn't
 opt-in to the beta feature.

 Starting on monday, we started reinstalling all the 150 remaining
 servers that were running Zend's mod_php, upgrading them from Ubuntu
 precise to Ubuntu trusty in the process. It seemed like an enormous task
 that would require me weeks to complete, even with the improved
 automation we built lately.

 Thanks to the incredible work by Yuvi and Alex, who helped me basically
 around the clock,  today around 16:00 UTC we removed the last of the
 mod_php servers from our application server pool: all the non-API
 traffic is now being served by HHVM.

 This new PHP runtime has already halved our backend latency and page
 save times, and it has also reduced significantly the load on our
 cluster (as I write this email, the average cpu load on the application
 servers is around 16%, while it was easily above 50% in the pre-HHVM era).

 The API traffic is still being partially served by mod_php, but that
 will not be for long!

 Cheers,

 Giuseppe
 --
 Giuseppe Lavagetto
 Wikimedia Foundation - TechOps Team

 ___
 Ops mailing list
 o...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/ops

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] planned move to phabricator

2014-08-24 Thread Ryan Lane
On Sun, Aug 24, 2014 at 8:06 PM, svetlana svetl...@fastmail.com.au wrote:

 Hi all.

 I am reading https://www.mediawiki.org/wiki/Phabricator/Migration and it
 says:

 A review of our project management tools [1] was started, including an
 assessment of our needs and requirements, and a discussion of available
 options. A request for comment [2] was set up to gauge interest in
 simplifying our development toolchain and consolidating our tools (gitblit,
 Gerrit, Jenkins, Bugzilla, RT, Trello, and Mingle) into Phabricator. The
 result of the RFC was in favor of Phabricator.

 The RFC linked is
 https://www.mediawiki.org/wiki/Requests_for_comment/Phabricator . It is
 clearly put wrong. People were not given alternatives. They were told
 let's move to Phabricator and they said OK, let's do that. I am
 frustrated. Decide on X, open a let's do X, get support - this is not how
 decisions are done.

 Would anyone oppose me writing another RFC detailing all the available
 options (including staying with bugzilla and what cons it has) and posting
 an URL to it here?


The RFC was open for quite some time. You missed the comment period, which
is unfortunate, but the decision has already been made and work is already
underway to switch.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] News about stolen Internet credentials; reducing Wikimedia reliance on usernames and passwords

2014-08-07 Thread Ryan Lane
On Thu, Aug 7, 2014 at 6:58 AM, Casey Brown li...@caseybrown.org wrote:

 On Thu, Aug 7, 2014 at 8:10 AM, Risker risker...@gmail.com wrote:
  A lot of the solutions normally bandied about involve things like
  two-factor identification, which has the additional password coming
  through a separate route (e.g., gmail two-factor ID sends a second
  password as a text to a mobile) and means having more expensive
  technology, or using technology like dongles that cannot be sent to
  users in certain countries.

 Actually, most modern internet implementations use the TOTP algorithm, an
 open standard that anyone can use for free.
 https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm
 One of the most common methods, other than through text messages, is
 the Google Authenticator App that anyone can download for free on a
 smart phone. https://en.wikipedia.org/wiki/Google_Authenticator.


Yep. This. It's already being used for high-risk accounts on
wikitech.wikimedia.org. It's not in good enough shape to be used anywhere
else, since if you lose your device you'd lose your account. Supporting
two-factor auth also requires supporting multiple ways to rescue your account
if you lose your device (and didn't write down your scratch tokens, which is
common). Getting this flow to work in a way that actually adds any security
benefit is difficult. See the amount of effort Google has gone through for
this.
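
For illustration, here is a minimal sketch of what the TOTP check amounts to
(Python, RFC 6238); the base32 secret is a made-up example, and this is not
the PHP code the wikitech extension actually runs:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    # Decode the shared secret that both the server and the phone app hold.
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second steps since the epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The phone app and the server derive the same 6-digit code from the shared
# secret, so no static password ever crosses the wire.
print(totp("JBSWY3DPEHPK3PXP"))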

Let's be a little real here, though. There's honestly no good reason to
target these accounts. There's basically no major damage they can do and
there's very little private information accessible to them, so attackers
don't really care enough to attack them.

We should take basic account security seriously, but we shouldn't go
overboard.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] News about stolen Internet credentials; reducing Wikimedia reliance on usernames and passwords

2014-08-07 Thread Ryan Lane
On Thu, Aug 7, 2014 at 11:27 AM, Pine W wiki.p...@gmail.com wrote:

 There are good reasons people would target checkuser accounts, WMF staff
 email accounts, and other accounts that have access to lots of private info
 like functionary email accounts and accounts with access to restricted IRC
 channels.


WMF uses gmail; they should require the use of two-factor authentication
for their employees if they care about that. Restricted IRC
channels also don't have anything to do with Wikimedia wiki account
security (and IRC security is a joke anyway, so if we're really relying on
that to be secure, shame on us).

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] translatewiki.net compromised: server reinstalled

2014-07-10 Thread Ryan Lane
On Thu, Jul 10, 2014 at 10:09 AM, Siebrand Mazeland siebr...@kitano.nl
wrote:

 This is an email to shell account holders on translatewiki.net and to
 wikitech-l, so that you are informed.

 Today at 08:10 UTC Niklas noticed that the translatewiki.net server had
 been compromised. We saw some suspicious files in /tmp and a few processes
 that didn't belong:

 elastic+ 22862  0.0  0.0   2684  2388 ?S04:53   0:00
 /tmp/freeBSD /tmp/freeBSD 1
 elastic+ 31575  0.0  0.0   2684  2388 ?S06:38   0:00
 /tmp/freeBSD /tmp/freeBSD 1
 elastic+ 31580 16.7  0.0  90816   724 ?Ssl  06:38  16:26
 [.Linux_time_y_2]

 We gathered data and looked at our recent traffic statistics. We drew the
 following conclusions:

 - Only the Elasticsearch account had been compromised. The intruder did not
 gain access to other accounts.
 - The attack could be made because the Elasticsearch process was bound to
 all interfaces, instead of only the localhost interface, and dynamic
 scripting was enabled, because it is required by CirrusSearch
 (CVE-2014-3120).
 - A virtual machine was started, and given the traffic that was generated
 (about 1TB in the past 4 days), we think this was a DDoS drone. The process
 reported to an IP address in China.
 - A server reinstall is the right thing to do (better safe than sorry).

 The compromised server was taken off-line around 10:00 UTC today.

 Actions taken:
 - Bind Elasticsearch only to localhost from now on:
 https://gerrit.wikimedia.org/r/#/c/145262/
 - Reinstall the server

 Actions to be taken:
 - Configure a firewall to only allow expected traffic to enter and exit the
 translatewiki.net server so that something like the added virtual machine
 could not have communicated to the outside world.
 - As a precaution, shell account holders should change any secret that they
 have used on the translatewiki.net server in the past 7 days.


Did this server have access to private ssh keys that are used to push/merge
code for upstream repos? If so, will they be rotated as well?

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Login to Wikimedia Phabricator with a GitHub/Google/etc account?

2014-05-16 Thread Ryan Lane
On Thu, May 15, 2014 at 7:36 PM, Steven Walling steven.wall...@gmail.com wrote:

 On Thu, May 15, 2014 at 2:20 PM, Quim Gil q...@wikimedia.org wrote:

  However, Phabricator can support authentication using 3rd party providers
  like GitHub, Google, etc. You can get an idea at
  https://secure.phabricator.com/auth/start/
 

 I think since this is already built and would require no extra work, we
 should definitely support GitHub and Persona as well.


Persona is dead. It's no longer being actively developed by Mozilla.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Login to Wikimedia Phabricator with a GitHub/Google/etc account?

2014-05-15 Thread Ryan Lane
On Thu, May 15, 2014 at 5:20 PM, Quim Gil q...@wikimedia.org wrote:

 This is a casual request for comments about the use of 3rd party
 authentication providers for our future Wikimedia Phabricator instance.

 Wikimedia Phabricator is expected to replace Bugzilla, Gerrit and many
 other tools, each of them having their own registration and user account.
 The plan is to offer Wikimedia SUL (your Wikimedia credentials) as the
 default way to login to Phabricator -- details at
 http://fab.wmflabs.org/T40

 However, Phabricator can support authentication using 3rd party providers
 like GitHub, Google, etc. You can get an idea at
 https://secure.phabricator.com/auth/start/

 There are good reasons to plan for Wikimedia SUL only (consistency with the
 rest of Wikimedia projects), and there are good reasons to plan for other
 providers as well (the easiest path for most first-time contributors).

 What do you think? Should we offer alternatives to Wikimedia login? If so,
 which ones?


Will Labs no longer have the same authentication as the rest of the
tooling? Is this something that will be solved before the switch?

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Affiliation in username

2014-05-09 Thread Ryan Lane
On Wed, May 7, 2014 at 1:22 PM, Jared Zimmerman 
jared.zimmer...@wikimedia.org wrote:

  Affiliations change, and user names are quite difficult to change; this
 sounds like something that would be good for a structured profile, not for
 a user name.


Indeed, or in the user preferences so that it could be accessed natively.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Abandoning -1 code reviews automatically?

2014-04-14 Thread Ryan Lane
On Sun, Apr 13, 2014 at 1:37 PM, Marcin Cieslak sa...@saper.info wrote:

 2) https://gerrit.wikimedia.org/r/#/c/11562/

 My favourite -1 here is needs rebase.


Well, obviously trivial rebases should be done automatically by the system
(which OpenStack's system does), and changes that need a rebase due to
conflicts should be fixed. Reviewer time is generally in short supply, so
it makes sense to have the committer do any conflict resolution.


 Regarding Openstack policies: I'd say we should not follow them.

 I used to be #2 git-review contributor according to launchpad
 until recently. I gave up mainly because of my inability
 to propose some larger change to this relatively simple
 script. For a nice example of this, please see

 https://review.openstack.org/#/c/5720/

 I gave up on contributing to this project some time
 after this; I have no time to play politics to submit
 a set of tiny changes and play the rebase game depending
 on the random order in which they might get reviewed.


This seems like an odd change to use as an example. There seems to be no
politics in play there. All of the reviews were encouraging, but it looked
like there was a release happening during your reviews and it caused a
number of merge conflicts. The change was automatically abandoned, but you
could have restored it and pinged someone on the infra team.


 The next time I find time to improve the "Johnny the casual
 developer" experience with gerrit I will just rewrite
 git-review from scratch. The amount of red tape
 openstack-infra has built around their projects is
 simply not justifiable for such a simple utility
 as git-review. Time will tell if gerrit-based
 projects generally fare better than others.


Maybe, until you start looking at their statistics:


http://stackalytics.com/?release=icehouse&metric=marks&project_type=openstack&module=&company=&user_id=


If you notice, this release cycle they had 1,200 reviewers. One
organization had 20k reviews over the cycle, and the top 5 each had over
10k reviews. Their process scales way better than Wikimedia's, but that's
also due to the way projects are split up and organized.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] wikitech downtime Tuesday, April 1st 9AM PST (16:00UTC)

2014-03-26 Thread Ryan Lane
On Wed, Mar 26, 2014 at 1:40 PM, Petr Bena benap...@gmail.com wrote:

 Will the new wiki be faster? Will it not log me off randomly even if I
 check "remember me"? Will it serve coffee to new users?


Do you still have an issue where it logs you off randomly? That was very
briefly an issue that I caused when I switched keystone to a redis token
backend and incorrectly purged everyone's wiki tokens. I fixed that a few
days later. If you're still having the issue you should open a bug.

Also, wikitech pretty much assumes you'll be logged in, so I don't see it
getting much faster, seeing as MediaWiki is slow as hell for logged-in
users no matter what you do.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-08 Thread Ryan Lane
On Sat, Mar 8, 2014 at 1:38 PM, Tyler Romeo tylerro...@gmail.com wrote:

 On Fri, Mar 7, 2014 at 7:04 PM, Ryan Lane rlan...@gmail.com wrote:

  Yes! This is a _good_ thing. Developers should feel responsible for what
  they build. It shouldn't be operations' job to make sure the site is
  stable for code changes. Things should go more in this direction, in
  fact.
 

 If you want to give me root access to the MediaWiki production cluster,
 then I'll start being responsible for the stability of the site.


You don't work for WMF, so you personally have no responsibility for the
stability of the site. It's the WMF developers who have a responsibility to
ensure code they're pushing out won't break the site. As an aside, root
isn't necessary to maintain MediaWiki on WMF's cluster ;).

 Tell me something, what about the developers of MariaDB? Should they be
 responsible for WMF's stability? If they accidentally release a buggy
 version, are they expected to revert it within hours so that the WMF
 operations team can redeploy? Or will the operations team actually test new
 releases first, and refuse to update until things start working again?


You're confused about the role of operations and development, especially
with respect to organizations that let developers deploy and maintain the
applications they develop.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-08 Thread Ryan Lane
On Sat, Mar 8, 2014 at 2:06 PM, Ryan Lane rlan...@gmail.com wrote:

 On Sat, Mar 8, 2014 at 1:38 PM, Tyler Romeo tylerro...@gmail.com wrote:

 On Fri, Mar 7, 2014 at 7:04 PM, Ryan Lane rlan...@gmail.com wrote:

  Yes! This is a _good_ thing. Developers should feel responsible for what
  they build. It shouldn't be operations' job to make sure the site is
  stable for code changes. Things should go more in this direction, in
  fact.
 

 If you want to give me root access to the MediaWiki production cluster,
 then I'll start being responsible for the stability of the site.


 You don't work for WMF, so you personally have no responsibility for the
 stability of the site. It's the WMF developers who have a responsibility to
 ensure code they're pushing out won't break the site. As an aside, root
 isn't necessary to maintain MediaWiki on WMF's cluster ;).


Of course, I'm really saying that everyone who has access to deploy has
this responsibility.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-08 Thread Ryan Lane
On Sat, Mar 8, 2014 at 6:21 PM, Tyler Romeo tylerro...@gmail.com wrote:

 On Sat, Mar 8, 2014 at 8:38 PM, Marc A. Pelletier m...@uberbox.org
 wrote:

  The answer is: no, obviously not.  And for that reason the MariaDB
  developers are not allowed to simply push their latest code on our
  infrastructure with a simple +2 to code review.
 

 Yes, and my point is that MediaWiki developers shouldn't be able to do that
 either! Receiving a +2 should not be the only line between a patch and
 deployment. Changes should be tested *before* deployment. Nobody is saying
 that developers are not responsible for writing good and working code, but
 there need to be guards against things like this happening.


Wikimedia uses deployment branches. Just because someone +2/merges into
master doesn't mean it immediately shows up on Wikimedia servers. It needs
to go into a deployment branch, then it needs to get deployed by a person.
Also, we use a gating model, so tests are required to pass before something
is merged. I believe there are some tests that are essential, but take too
long to run, so they aren't gating, but the situation isn't as dire as you
claim.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-08 Thread Ryan Lane
On Sat, Mar 8, 2014 at 7:05 PM, Tyler Romeo tylerro...@gmail.com wrote:

 On Sat, Mar 8, 2014 at 9:48 PM, Ryan Lane rlan...@gmail.com wrote:

  Wikimedia uses deployment branches. Just because someone +2/merges into
  master doesn't mean it immediately shows up on Wikimedia servers. It
  needs to go into a deployment branch, then it needs to get deployed by
  a person. Also, we use a gating model, so tests are required to pass
  before something is merged. I believe there are some tests that are
  essential, but take too long to run, so they aren't gating, but the
  situation isn't as dire as you claim.
 

 OK, then how did this change get deployed if it broke tests?


The jenkins report says it passed tests, hence why it was deployed. If
there's other tests that aren't reporting to gerrit or if there's a test
that needs to be added, maybe that's a post-mortem action to track?

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-08 Thread Ryan Lane
On Sat, Mar 8, 2014 at 9:34 PM, MZMcBride z...@mzmcbride.com wrote:

 Chad wrote:
  On Mar 8, 2014 1:42 PM, Brandon Harris bhar...@wikimedia.org wrote:
 https://en.wikipedia.org/wiki/Badger
 
 New rule: calling badger is synonymous with asking for the moderation bit
 for yourself.

 Fine, but first we have to ban the people who top-post and don't trim
 unnecessary parts of their e-mails. :-)

 I personally read the badger link as a politer(?) version of calling
 someone a troll.


It's actually something used on WMF staffer lists and it's supposed to be a
cute way of telling people a thread is getting out of hand. Some people use
it that way, and I feel Brandon was in this case, but it's most often used
to shut down conversations that people find unpleasant or controversial and
its actual effect is usually to enrage those it's used against.

I'd wager it has an overwhelmingly negative effect on communication (and
WMF's internal politics likely prove that) and I'm with Chad in saying
anyone that uses it here should be moderated. This is of course off topic,
so I'll leave this at that.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-07 Thread Ryan Lane
On Fri, Mar 7, 2014 at 2:54 PM, Tyler Romeo tylerro...@gmail.com wrote:

 On Fri, Mar 7, 2014 at 5:39 PM, George Herbert george.herb...@gmail.com
 wrote:

  With all due respect; hell, yes, development comes in second to
  operational stability.

  This is not disrespecting development, which is extremely important by
  any measure.  But we're running a top-10 worldwide website, a key
  worldwide information resource for humanity as a whole.  We cannot
  cripple development to try and maximize stability, but stability has to
  be priority 1.  Any large website's teams will have the same attitude.

  I've had operational outages reach the top of everyone's news
  source/feed/newspaper/broadcast.  This is an exceptionally unpleasant
  experience.
 

 If you really think stability is top priority, then you cannot possibly
 think that the current deployment process is sane.


Developers shouldn't be blocked on deployment or operations. Development is
expensive and things will break either way. It's good to assume things will
break and:

1. Have a simple way to revert
2. Put tests in for common errors
3. Have post-mortems where information is kept for historical purposes and
bugs are created to track action items that come from them


 Right now you are placing the responsibility on the developers to make sure
 the site is stable, because any change they merge might break production
 since it is automatically sent out. If anything that gives the appearance
 that the operations team doesn't care about stability, and would rather
 wait until things break and revert them.


Yes! This is a _good_ thing. Developers should feel responsible for what
they build. It shouldn't be operations' job to make sure the site is
stable for code changes. Things should go more in this direction, in fact.

I'm not totally sure what you mean by it's automatically sent out,
though. Deploys are manual.


 It is the responsibility of the operations team to ensure stability. Having
 to revert something because that's the only way production will be stable
 is not a proper workflow.


It's the responsibility of the operations team to ensure stability at the
infrastructure level, not at the application level. It's sane to expect to
revert things because things will break no matter what. Mean time to
recovery is just as important or more important than mean time between
failure.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Ryan Lane
On Thu, Mar 6, 2014 at 3:54 PM, Tyler Romeo tylerro...@gmail.com wrote:

 I think this entire thing was a big failure in basic software development
 and systems administration. If MobileFrontend is so tightly coupled with
 the desktop login form, that is a problem with MobileFrontend. In addition,
 the fact that a practically random code change was launched into production
 an hour later without so much as a test... That's the kind of thing that
 gets people fired at other companies.


At shitty companies maybe.

Things break. You do a post-mortem and track the things that lead to an
outage and try to make sure it doesn't break again, ideally by adding
automated tests, if possible.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] deploying the most recent MediaWiki code: which branch?

2014-02-20 Thread Ryan Lane
On Thu, Feb 20, 2014 at 1:20 PM, Sumana Harihareswara suma...@wikimedia.org
 wrote:

 Reuben Smith of wikiHow asked:
 We're having a hard time figuring out whether we should be basing our
 wikiHow code off Mediawiki's external releases (such as the latest 1.22.2),
 or off the branches that WMF uses for their internal infrastructure (latest
 looks to be wmf/1.23wmf14).

 Do you have any thoughts of guidance on that? We're leaning towards moving
 to using the WMF internal branches, since we use MySQL as well, but I
 wanted to hear from different people about the drawbacks.


 Greg Grossmeier responded:

 The quick answer is:
 Feel free to base it off of either. There shouldn't be any WMF-specific
 things in those wmfXX branches. If there is, it is a commit called
 something like Commit of various WMF live hacks. That one commit can
 be safely reverted.

 The wmfXX branches are made every week on Thursday morning (Pacific)
 before we deploy. As we get closer to the next release (1.23) the
 MediaWiki Release Managers (our outside contractors, Mark and Markus,
 not myself) will pick a wmfXX to call a Release Candidate.

 Going with a 1.22.x would give you stability at the loss of getting
 fixes faster and it means a bigger upgrade task when 1.23 is out.

 Summary: If you want to keep closer to WMF, pick a wmfXX branch (this
 assumes you'll follow at some pace behind WMF). If you don't want to be
 that bleeding edge, stick with 1.22.x.


Note that unless you're willing to keep up to date with WMF's relatively
fast pace of branching, you're going to miss security updates. No matter
what, if you use git you're going to get security updates slower, since
they are released into the tarballs first, then merged into master, then
branches (is this accurate?). Sometimes the current WMF branch won't even
get the security updates since they are already merged locally onto
Wikimedia's deployment server.

That said, due to the poor state of extension management in MediaWiki, the
only reasonable way to manage MediaWiki is to use the WMF branches since
they handle dependencies for the most popular extensions. I was hoping that
composer would make managing extensions easier, but I've been tracking it
in SMW and it's actually making things more difficult.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Drop support for PHP 5.3

2014-02-20 Thread Ryan Lane
On Thu, Feb 20, 2014 at 3:22 PM, Trevor Parscal tpars...@wikimedia.org wrote:

 Is that the rule then, we have to make MediaWiki work on anything Ubuntu
 still supports?

 Is there a rule?


We should strongly consider ensuring that the latest stable releases of
Ubuntu and probably RHEL (or maybe fedora) can run MediaWiki.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Drop support for PHP 5.3

2014-02-20 Thread Ryan Lane
On Thu, Feb 20, 2014 at 3:58 PM, James Forrester
jforres...@wikimedia.org wrote:

 On 20 February 2014 15:34, Ryan Lane rlan...@gmail.com wrote:

  On Thu, Feb 20, 2014 at 3:22 PM, Trevor Parscal tpars...@wikimedia.org
  wrote:
 
   Is that the rule then, we have to make MediaWiki work on anything
   Ubuntu still supports?
  
   Is there a rule?
  
 
  We should strongly consider ensuring that the latest stable releases of
  Ubuntu and probably RHEL (or maybe fedora) can run MediaWiki.
 

 Is that a "no, only the latest stable releases", or "yes, the latest
 stable releases and also the LTS releases"?


If it isn't an LTS it isn't a stable release.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Faidon Liambotis promoted to Principal Operations Engineer

2014-02-10 Thread Ryan Lane
On Mon, Feb 10, 2014 at 5:48 AM, Mark Bergsma m...@wikimedia.org wrote:

 I'm pleased to announce that Faidon Liambotis has been promoted to
 Principal Operations Engineer.


Definitely deserved and far too late in coming! Congrats Faidon, you do
great work and you're an awesome person to work with.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] wmf getting ready for puppet3, advice please

2014-01-28 Thread Ryan Lane
On Tue, Jan 28, 2014 at 2:41 AM, Ariel T. Glenn ar...@wikimedia.org wrote:

 Hi puppet wranglers,

 We're trying to refactor the WMF puppet manifests to get rid of reliance
 on dynamic scope, since puppet 3 doesn't permit it.  Until now we've
 done what is surely pretty standard puppet 2.x practice: assign
 values to a variable in the node definition and pick it up in the
 class from there dynamically.  Example: we set $openstack_version
 to a specific version depending on the node, and the nova and
 openstack classes do different things depending on that value.

 Without dynamic scoping that's no longer possible, so I'm wondering
 what people are doing in the real world to address this, and
 what best practices are, if any.  Hiera? Some sort of class
 parameterization?  Something else?

 It should go without saying that we'd like not to have to massively
 rewrite all our manifests, if possible.


In puppet3 variables assigned in the node are still global. It's the only
place other than facts (or hiera) that you can assign them and have their
scope propagate. So, this'll continue working. I think the future path is
surely to use hiera rather than assigning variables in role classes (and
making tons of role classes based on realm/cluster/site/etc).
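
To illustrate roughly what a hiera lookup buys you -- sketched here in
Python rather than in Puppet, with made-up file names and an assumed
openstack_version key; real hiera is YAML data files plus lookups from the
manifests:

import yaml  # PyYAML

def hiera_lookup(key, hierarchy):
    # Walk the data files from most to least specific and return the first
    # value found, mimicking hiera's priority lookup.
    for path in hierarchy:
        try:
            with open(path) as handle:
                data = yaml.safe_load(handle) or {}
        except FileNotFoundError:
            continue
        if key in data:
            return data[key]
    return None

# Node-specific data overrides the common defaults (paths are hypothetical).
print(hiera_lookup("openstack_version",
                   ["hieradata/nodes/virt1000.yaml", "hieradata/common.yaml"]))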

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reasonator use in Wikipedias

2014-01-21 Thread Ryan Lane
On Tue, Jan 21, 2014 at 7:17 AM, Gerard Meijssen
gerard.meijs...@gmail.com wrote:

 Hoi,

 At this moment Wikipedia red links provide no information whatsoever.
 This is not cool.

 In Wikidata we often have labels for the missing (=red link) articles. We
 can and do provide information from Wikidata in a reasonable way that is
 informative in the Reasonator. We also provide additional search
 information on many Wikipedias.

 In the Reasonator we have now implemented red lines [1]. They indicate
 when a label does not exist in the primary language that is in use.

 What we are considering is creating a template {{Reasonator}} that will
 present information based on what is available in Wikidata. Such a template
  would be a stand-in until an article is actually written. What we would
  provide is information that is presented in the same way as we provide it
  at this moment in time [2]

 This may open up a box of worms; Reasonator is NOT using any caching. There
 may be lots of other reasons why you might think this proposal is evil. All
  the evil that is technical has some merit, but you have to consider that
 the other side of the equation is that we are not sharing in the sum of
 all knowledge even when we have much of the missing requested information
 available to us.

  One saving (technical) grace: Reasonator loads roughly as quickly as
  Wikidata does.

 As this is advance warning, I hope that you can help with the issues that
 will come about. I hope that you will consider the impact this will have on
  our traffic and measure to what extent it grows our data.

  The Reasonator pages will not show up prettily on mobile phones ... neither
  does Wikidata, by the way. It does not consider Wikipedia Zero. There may be more
 issues that may require attention. But again, it beats not serving the
 information that we have to those that are requesting it.


I have a strong feeling you're going to bring Labs to its knees.

Sending editors to Labs is one thing, but you're proposing sending readers
to Labs, to a service that isn't cached.

If Reasonator is something we want to support for something like this,
maybe we should consider turning it into a production service?

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wiki - Gerrit was Re: FWD: [Bug 58236] New: No longer allow gadgets to be turned on by default for all users on Wikimedia sites

2013-12-11 Thread Ryan Lane
On Wed, Dec 11, 2013 at 3:21 PM, Jeroen De Dauw jeroended...@gmail.com wrote:

 Hey,

 Has there been thought on how GitHub can potentially help here? I'm not
 sure it fits the workflow well, though I can make the following observations:


Unless you're implying that GitHub writes some code for us, I'm going to
assume this is a troll from you and leave it at that.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mailing list etiquette and trolling

2013-12-11 Thread Ryan Lane
On Wed, Dec 11, 2013 at 5:11 PM, Jeroen De Dauw jeroended...@gmail.com wrote:

 In recent months I've come across a few mails on this list that only
 contained accusations of trolling. Those are very much not constructive and
 only serve to antagonize. I know some forums that have an explicit rule
 against this, which results in a ban on second violation. If there is a
 definition of the etiquette for this list somewhere, I suggest having a
 similar rule be added there. Thoughts?


To be fair, you were proposing that we use a proprietary third-party web
site for editing Wikimedia wiki pages, which would violate our privacy
policy and break our principles of openness. How was I not to think you
were trolling? My only alternative was to think you'd simply lost your
mind.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Making MediaWiki extensions installable via Composer

2013-12-08 Thread Ryan Lane
On Mon, Dec 9, 2013 at 12:06 AM, Jeroen De Dauw jeroended...@gmail.com wrote:

 Finding a way to separate MW the library from MW the application may be a
  solution to this conflict. I don't think this would be a trivial
  project, but it doesn't seem impossible either.
 

 That'd be fantastic if it happened, for many other reasons as well. For all
 intents and purposes it is a big castle in the sky though. I expect this to
 not happen in the coming years, unless there is a big shift in opinion on
 what constitutes good software design and architecture in the community.

 The side effect is that it removed the ability to use Composer to
  manage external components used by MW the library which is Tyler's
  proposed use case [0].
 

 If the core community actually gets to a point where potential usage of
 third party libraries via Composer is actually taken seriously, this will
 indeed need to be tackled. I do not think we are quite there yet. For
 instance, if we go down this road, getting a clone of MW will no longer be
 sufficient to have your wiki run, as some libraries will first need to be
 obtained. This forces change to very basic workflow. People who dislike
 Composer will thus scream murder. Hence I tend to regard this as a moot
 point for now. Though let's pretend this has already been taken care of and
 look at solutions to the technical problem.

 One approach that is a lot more feasible than making MediaWiki (partially)
 library-like is to specify the MW dependencies somewhere else. Not in
 composer.json. Then when you install MediaWiki, these dependencies get
 added automatically to composer.json. And when you update it, new ones also
 get added. In a way, this is similar to the generation of LocalSettings.
 This is the approach I'd be investigating further if we actually were at a
 point where a technical solution is needed.


No one has any major objections to composer *support*. People have
objections to composer being required, though, as many of us use git as our
method of installation and deployment and have very valid beliefs that
composer is inferior to git with submodules for a number of reasons.

I think the biggest obstacle so far is the half-assed way things are
currently happening, from all fronts. There's no real support from
Wikimedia in this regard. There's a "composer is the right way to do it,
and you're an idiot for not using it" attitude from some. There's also a
relatively large lack of understanding of how we can use this for both
extensions and MediaWiki all around.

Someone needs to actually take the reins on this, or let's stop wasting
the list's time with it.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] OpenID deployment delayed until sometime in 2014

2013-11-27 Thread Ryan Lane
On Wed, Nov 27, 2013 at 3:51 AM, Gerard Meijssen
gerard.meijs...@gmail.com wrote:

 According to the website of myopenid [1], they are closing down February 1.
 My hope was for the Wikimedia Foundation to come to the rescue and provide
 a non-commercial alternative to what Google and Facebook offer.

 I am extremely disappointed that we are not. I hope that what is done
 instead has at least a similar impact, rather than leaving it all to the commercial
 boys and girls. Privacy should not be left to commerce and national secret
 services.


Our account security honestly makes us a poor choice for an auth provider
and you should have never considered us for this anyway. We don't have
password requirements, we don't offer two-factor auth, we don't offer good
ways to rescue a lost account, and we really don't want to be a target of
attack for the purpose of owning other websites.

Also, if you're really concerned about privacy you should be putting your
support behind Mozilla's BrowserID (Persona).

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Operations buy in on Architecture of mwlib Replacement

2013-11-14 Thread Ryan Lane
On Thu, Nov 14, 2013 at 8:13 AM, C. Scott Ananian canan...@wikimedia.org wrote:

 And I'll add that there's another axis: gwicke (and others?) have been
 arguing for a broader collection of services architecture for mw.  This
 would decouple some of the installability issues.  Even if PDF rendering
 (say) was a huge monster, Jimmy MediaWiki might still be able to simply
 install the core of the system.  Slow progress making PDF rendering more
 friendly wouldn't need to hamper all the Jane MediaWikis who don't need
 that feature.


Definitely, and others.  Apart from decoupling installability issues it also
breaks the application into separately maintainable applications that can
have teams of people working on them separately. The only thing needed to
ensure compatibility with other teams is a stable API, and that's what API
versioning is for. Having multiple services doesn't complicate things much,
unless you're running on a shared host.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [IRC] wm-bot semi outage

2013-10-16 Thread Ryan Lane
On Wed, Oct 16, 2013 at 1:45 AM, Petr Bena benap...@gmail.com wrote:

 not that it would actually work better :P I would be most happy to
 switch to local /dev/vdb storage which never had any problems, but for
 that wm-bot would need to be on its own project


That's not a good idea. Performance would be better (which you probably
don't need), but at the cost of decreased ability to recover. /dev/vdb goes
away with an instance. If a compute node fails and your instance is on it,
your data is gone.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [IRC] wm-bot semi outage

2013-10-15 Thread Ryan Lane
On Tue, Oct 15, 2013 at 9:15 AM, Petr Bena benap...@gmail.com wrote:

 Hi,

 because of this bug: https://bugzilla.wikimedia.org/show_bug.cgi?id=55690

 wm-bot is unable to write logs to file storage. Unfortunately, since
 10 minutes ago the mysql storage broke as well. I have no idea why,
 but I am afraid the only fix that comes to my mind now involves a bot
 restart, which isn't possible because all logs that couldn't be written
 are cached in operating memory (so if I restarted wm-bot we would
 lose 2 - 3 days of logs of all channels that are in RAM now).

 So, if someone wondered what is up with public channel logging, this
 is a reason why there are logs missing, not only on file storage at
 bots.wmflabs.org/~wm-bot/logs but also sql at
 http://tools.wmflabs.org/wm-bot/logs/

 I hope WMF will soon start selling "I love gluster" t-shirts :-) I would
 get one


It's possible to switch to NFS, you know ;).

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Officially supported MediaWiki hosting service?

2013-10-02 Thread Ryan Lane
On Wed, Oct 2, 2013 at 4:50 PM, Jay Ashworth j...@baylink.com wrote:

 - Original Message -
  From: Ryan Lane rlan...@gmail.com

  On Wed, Oct 2, 2013 at 12:48 PM, C. Scott Ananian
  canan...@wikimedia.orgwrote:
 
   One intermediate position might be for WMF to distribute virtual
   machine images

  I'd rather provide cloud-init scripts with instructions on how to use
 them.
  The cloud init script could pull and run a puppet module, or a salt
 module,
  or etc. etc.. Providing images kind of sucks.

 Ok, time for me to throw an oar in the water, as an ops and support guy.

 My perception of Brion's use of "officially supported" was, roughly, that
 whoever is providing support to these hosting customers has a direct,
 *formal* line of communication to the development staff.

 The reason you build images, and version the images, is that it provides a
 clear solid baseline for people providing such support to know (and,
 preferably, be able to put their fingers on) exactly the release you're
 running, so they can give you clear and correct answers -- it's not just
 going to be the Mediawiki release number that's the issue there.

 That's impossible to do reliably if you pull the code down and build it
 sui generis on each customer's machine... which is what I understand
 Ryan to be suggesting.

 It's a little more work to build images, but you're not throwing it
 away; you get it back in reduced support costs.


No. I'm suggesting that we provide cloud-init scripts [1] to let people
seed their virtual instances. It would pull down a puppet, salt, chef,
juju, etc. repository which would then install all the prerequisites, start
all the necessary services, install MediaWiki, and maybe install and
configure a number of extensions. We could skip cloud-init completely for
some of these. I think vagrant has ec2 [2] and openstack [3] providers.
salt stack has salt-cloud [4] (or you could use salty vagrant [5]).
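
To make that concrete, here is a rough sketch of the kind of user-data such
a seed might contain, composed as a Python string for convenience; the
repository URL and paths are hypothetical, not an existing module:

# A minimal #cloud-config document: install git and puppet, then apply a
# (hypothetical) MediaWiki puppet repository on first boot.
user_data = """#cloud-config
package_update: true
packages:
  - git
  - puppet
runcmd:
  - git clone https://example.org/mediawiki-puppet.git /srv/mw-puppet
  - puppet apply --modulepath=/srv/mw-puppet/modules /srv/mw-puppet/manifests/site.pp
"""

# Pass user_data when creating the instance (e.g. via an EC2 or OpenStack
# client); cloud-init runs it on first boot and the node configures itself.
print(user_data)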

Some of these also already have mediawiki modules created and usable. In
fact, juju uses MediaWiki for demo purposes relatively often. WMF has a
usable puppet module. I wrote a salt module for webplatform.org, which
we'll be publishing soon.

The nice part about this is that it's all configuration managed, can be
versioned and could also be used to upgrade MediaWiki and all related
infrastructure.

Images are a pain in the ass to build and maintain (I build and maintain
images), you need to keep them updated because the image will be insecure
otherwise, and they are giant. They are also way less flexible.

- Ryan

[1] http://cloudinit.readthedocs.org/en/latest/
[2] https://github.com/mitchellh/vagrant-aws
[3] https://github.com/cloudbau/vagrant-openstack-plugin
[4] https://github.com/saltstack/salt-cloud
[5] https://github.com/saltstack/salty-vagrant
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File cache + HTTPS question

2013-10-01 Thread Ryan Lane
On Tue, Oct 1, 2013 at 8:21 AM, Jeroen De Dauw jeroended...@gmail.com wrote:

 The file cache is barely maintained and I am not sure whether anyone is
  still relying on it.   We should probably remove that feature entirely
  and instruct people to set up a real frontend cache instead.
 

 This means we say you no can has cache to people using shared hosts. Do
 we really want to do that? If so, the docs ought to be updated, and the
 deprecation announced.


In my opinion we should completely drop support for shared hosts. It's 2013
and virtual servers are cheap and superior in every way. Supporting shared
hosting severely limits what we can reasonably do in the software.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File cache + HTTPS question

2013-10-01 Thread Ryan Lane
On Tue, Oct 1, 2013 at 12:46 PM, Christopher Wilson gwsuper...@gmail.com wrote:

 I'm usually just an observer on these lists, but I'll weigh in as a user
 who runs MediaWiki on a shared host. The host *is* a VPS, but our wiki is
 used by the environmental department of a large international non-profit.
 As such it lives on the enviro server along with some WordPress sites and
 other minor things.

 If we have to give the wiki its own dedicated VPS, it will likely not
 survive, or we will move to another platform. I see some REALLY low costs
 for VPSes being tossed around here, but honestly, we'd probably be looking
 at a minimum of $50/month to give the site its own dedicated VPS. I realize
 that in the grand scheme of things, that's not a huge cost, but at that
 point, management will probably insist on making the site pay for itself
 rather than the current situation of letting it exist on shared resources.


We aren't discussing dropping support for running MediaWiki along with
other applications, but we're discussing dropping support for shared
hosting services, which run hundreds of applications on the same host as
different customers. It's horribly insecure and doesn't allow the user to
install anything at the system level.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File cache + HTTPS question

2013-10-01 Thread Ryan Lane
On Tue, Oct 1, 2013 at 1:43 PM, Tyler Romeo tylerro...@gmail.com wrote:

 Has anybody ever considered the possibility that maybe people don't know
 (or want to know) how to set up a caching proxy? One of the nice things
 about MediaWiki is that it's extraordinarily easy to set up. All you have
 to do is dump a tar.gz file into a directory, run the web installer and
 call it a day. No sysadmin experience required.


This is only true if you want almost no functionality out of MediaWiki and
you want it to be very slow. MediaWiki is incredibly difficult to properly
run and requires at least some minor sysadmin experience to do so. There's
a reason that almost every MediaWiki install in existence is completely out
of date.

When we get to WordPress's ease of use, then we can assume this.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File cache + HTTPS question

2013-10-01 Thread Ryan Lane
On Tue, Oct 1, 2013 at 2:53 PM, Tyler Romeo tylerro...@gmail.com wrote:

 On Tue, Oct 1, 2013 at 2:46 PM, Ryan Lane rlan...@gmail.com wrote:

  This is only true if you want almost no functionality out of MediaWiki
  and you want it to be very slow. MediaWiki is incredibly difficult to
  properly run and requires at least some minor sysadmin experience to do
  so. There's a reason that almost every MediaWiki install in existence is
  completely out of date.
 

 Do you have some specific examples?


Extension management, upgrades, proper backend caching, proper localization
cache, proper job running, running any maintenance script, using a *lot* of
different extensions, etc., etc.
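
To make that list a bit more concrete, the kind of LocalSettings.php tuning
being referred to looks roughly like this (illustrative values only, not
recommendations):

    # Proper backend caching: memcached instead of the default
    # database-backed object cache.
    $wgMainCacheType    = CACHE_MEMCACHED;
    $wgMemCachedServers = array( '127.0.0.1:11211' );

    # Proper localisation cache: keep the l10n cache in local files rather
    # than rebuilding it from the database.
    $wgCacheDirectory = "$IP/cache";

    # Proper job running: don't run jobs on page views; run them from cron,
    # e.g.  */5 * * * *  php /path/to/mediawiki/maintenance/runJobs.php
    $wgJobRunRate = 0;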


 Also, if that's the case then removing file caching would be a step
 backwards.


Pretending that we support the lowest common denominator while not actually
doing it is the worst of all worlds. We should either support shared hosts
excellently, which is very difficult, or we should just stop acting like we
support them.

I'm not saying we should make the software unusable for shared hosts, but
we also shouldn't worry about supporting them for new features or
maintaining often broken features (like file cache) just because they are
useful on shared hosting. It makes the software needlessly complex for a
dying concept.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File cache + HTTPS question

2013-10-01 Thread Ryan Lane
On Tue, Oct 1, 2013 at 3:00 PM, Jeroen De Dauw jeroended...@gmail.comwrote:

 Hey,

 This is only true if you want almost no functionality out of MediaWiki and
  you want it to be very slow.
 

 There are quite a few people running MW without a cache or other magic
 config and find it quite suitable for their needs, which can be quite
 non-trivial.

 MediaWiki is incredibly difficult to properly
  run and requires at least some minor sysadmin experience to do so.
 There's
  a reason that almost every MediaWiki install in existence is completely
 out
  of date.
 

 So because MediaWiki already sucks in some regards, its fine to have it
 suck more in others as well? Is that really the point made here?


I'm actually arguing that we should prioritize fixing the things that suck
for everyone over the things that suck for shared hosts. We should
especially not harm the large infrastructures just so that we can support
the barely usable ones. If we keep supporting shared hosts we likely can't
break portions of MediaWiki into services without a lot of duplication of
code and effort (and bugs!).

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Access from China

2013-09-23 Thread Ryan Lane
Ryu,

As far as we've been told, access to Wikipedia from China is open, apart from
a filter on specific pages, assuming you are using HTTP and not HTTPS.

- Ryan


On Mon, Sep 23, 2013 at 7:36 PM, Ryu Cheol rch...@gmail.com wrote:

 Hello all,
 last week I had been in Nanjing of China for my business. I tried
 to access to Wikipedia, my love, of course. Wow, I found I can access on my
 iPad. It's quite weird. So I tried again on my notebook computer, the
 firewall reset the session to stop me to access Wikipedia. That's exactly
 what I expected.  The great fire wall of China was so vulnerable. Any idea
 why I could access on my iPad?

 Best regards
 Cheol
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Access from China

2013-09-23 Thread Ryan Lane
On Mon, Sep 23, 2013 at 8:07 PM, Ryu Cheol rch...@gmail.com wrote:

 On the iPad, I could not check whether it was http or https. You mean that on
 mobile devices such as the iPad, it does not redirect to https? That feels a
 bit inconsistent.

 In regions where https is not allowed, for example China, could we keep http
 as the default protocol?
 I thought I had totally lost access. Could we change the redirect
 behavior according to the region the access comes from?


Can you email me privately and let me know what your IP address is? We
actually do have something in place to not redirect logged-in users in some
regions (such as China). Maybe the geoip database we are using is incorrect
for your area.
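
The mechanism being described is essentially a GeoIP lookup in front of the
redirect. The following is only a minimal sketch of the idea, assuming the
PECL geoip extension and an invented helper function; it is not the
Wikimedia implementation:

    <?php
    // Illustrative sketch only -- not the Wikimedia implementation.
    // Skip the HTTPS redirect for countries where HTTPS is blocked.
    function shouldRedirectToHttps( $clientIp ) {
        $exemptCountries = array( 'CN', 'IR' ); // illustrative list
        $country = geoip_country_code_by_name( $clientIp );
        if ( $country !== false && in_array( $country, $exemptCountries, true ) ) {
            return false; // keep this client on plain HTTP
        }
        return true;
    }

If the GeoIP database maps an address to the wrong country, a client ends up
on the wrong side of a check like this, which is what Ryan is asking to
verify.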

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [RFC]: Clean URLs- dropping /wiki/ and /w/index.php?title=..

2013-09-16 Thread Ryan Lane
On Mon, Sep 16, 2013 at 3:12 PM, Gabriel Wicke gwi...@wikimedia.org wrote:

 Hi,

 while tinkering with a RESTful content API I was reminded of an old pet
 peeve of mine: The URLs we use in Wikimedia projects are relatively long
 and ugly. I believe that we now have the ability to clean this up if we
 want to.

 It would be nice to

 * drop the /wiki/ prefix
   https://en.wikipedia.org/Foo instead of
   https://en.wikipedia.org/wiki/Foo

 * use simple action urls
   https://en.wikipedia.org/Foo?action=history instead of
    https://en.wikipedia.org/w/index.php?title=Foo&action=history

 The details of this proposal are discussed in the following RFC:

 https://www.mediawiki.org/wiki/Requests_for_comment/Clean_up_URLs

 I'm looking forward to your input!



https://www.mediawiki.org/wiki/Manual:Short_URL#URL_like_-_example.com.2FPage_title


*Warning:* this method may create an unstable URL structure and leave some
page names unusable on your wiki. See Manual:Wiki in site root directory
(https://www.mediawiki.org/wiki/Manual:Wiki_in_site_root_directory).
Please see the article Cool URIs don't change
(http://www.w3.org/Provider/Style/URI) and take a few minutes to
devise a stable URL structure for your web site
before hopping willy-nilly into rewrites into the URL root.
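
For reference, the difference at issue boils down to one setting; a rough
LocalSettings.php sketch of the two layouts (illustrative only, and both also
need matching rewrite rules in the web server config):

    # Conventional Wikimedia-style short URLs, e.g. https://example.org/wiki/Foo
    $wgScriptPath  = '/w';
    $wgArticlePath = '/wiki/$1';

    # "Wiki in site root" layout, e.g. https://example.org/Foo
    # Every article title now shares the root with /w/, favicon.ico,
    # robots.txt and any future entry point, which is what the warning
    # above is about.
    $wgArticlePath = '/$1';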

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [RFC]: Clean URLs- dropping /wiki/ and /w/index.php?title=..

2013-09-16 Thread Ryan Lane
On Mon, Sep 16, 2013 at 4:41 PM, Gabriel Wicke gwi...@wikimedia.org wrote:

 On 09/16/2013 03:25 PM, Ryan Lane wrote:
 
 https://www.mediawiki.org/wiki/Manual:Short_URL#URL_like_-_example.com.2FPage_title
 
 
  *Warning:* this method may create an unstable URL structure and leave
 some
  page names unusable on your wiki. See Manual:Wiki in site root
  directory
 https://www.mediawiki.org/wiki/Manual:Wiki_in_site_root_directory.
   Please see the article Cool URIs don't change
   (http://www.w3.org/Provider/Style/URI) and take a few minutes to
  devise a stable URL structure for your web site
  before hopping willy-nilly into rewrites into the URL root.

 That is a very vague warning. So far I have lower-case 'favicon.ico',
 'robots.txt' and 'w/' as potential conflicts. Do you see any others?


Any of the entry points? Any new entry point? Anything we ever want to put
into the root?

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Labs-l] Wikimedia labs-tools

2013-09-11 Thread Ryan Lane
On Wed, Sep 11, 2013 at 8:45 AM, Magnus Manske
magnusman...@googlemail.comwrote:

 There was a recent mail saying that Labs is not considered production
 stability. Mainly a disagreement about how many 9s in the 99.9% that
 represents.


Indeed. I don't want to get into the debate about this again, but tools is
considered semi-production, which means a smaller set of nines. We're
reasonably staffed and have a well enough designed infrastructure to
support that properly, but not at a full production level. The specific
discussion about levels of support was about a service that should be run
by WMF in production, since it has uptime requirements that we aren't
scoped to handle.

We handle the underlying infrastructure with production-level support, but
we don't have the same level of support for projects inside of the
infrastructure.

I think so far we've done a relatively good job of keeping stability and
the level of stability has been increasing, not decreasing. If we did an
analysis of tools project outages vs toolserver, I'm positive our level of
unscheduled downtime would be far lower than toolserver's.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Labs-l] Wikimedia labs-tools

2013-09-11 Thread Ryan Lane
On Wed, Sep 11, 2013 at 7:50 AM, John phoenixoverr...@gmail.com wrote:

 tools.wmflabs.org is supposed to be the replacement for the toolserver
 which the wmf is basically forcefully shutting down. I started the
 migration several months ago but got fed up with the difficulties and
 stopped. In the last month I have moved most of my tools to labs, and I
 have discovered that there are some serious issues that need addressed.

 The toolserver was a fairly stable environment. I checked my primary host
 I connect to and it has been up for 4 months with continuous operations.


I'm not going to do an analysis on this to disprove you, but there were
periods of time where toolserver was down for over a week (more than once,
at that). You're talking about the uptime of the bastion hosts you connect
to, which doesn't reflect the system as a whole.


 tools however is being treated like the red-headed stepchild. According
 to the people in charge of labs, they don't care about ensuring stability and
 if stuff breaks, oh well, we'll get to it when we can. They say that
 tools is not a production service so we really don't give a , if it
 breaks it breaks, we will fix it when we can, but since it's not production
 it's not a priority.


The only major instability we've had in tools is with the NFS server. We've
been working on it for months. I honestly think Labs is cursed when it
comes to storage, because our other major instability since project
inception was glusterfs.

As mentioned in another email, we do care, but we also need to be clear
about our level of support. Our advertised level of support is
semi-production. If something breaks in the middle of the night we have
people that will wake up and fix it.


 One good example of this is that a tool cannot connect to
 tools.wmflabs.org due to a host configuration issue. This is a known bug,
 we have a way of fixing it, but it's still not implemented


This is due to the way the floating IP addresses work in OpenStack's nova
service. There's workarounds for this issue. For instance, you can connect
to the private DNS or IP address of the webserver, rather than using the
public hostname.

This is definitely something we'd like to fix, but haven't had a chance to
do so yet.

-Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Persian Wikipedia stance on SSL

2013-09-10 Thread Ryan Lane
On Tue, Sep 10, 2013 at 10:58 AM, Amir Ladsgroup ladsgr...@gmail.comwrote:

 I'm not talking about right now. Right now it is okay (there is community
 consensus, because SSL for WMF projects has been available in Iran since
 August 25) to enable SSL as the default like in other countries, and I'm
 asking you to do that, but only if you don't force editors to use SSL by
 hiding the option in preferences.

 I'm talking about the stance of Persian Wikipedia on switching the whole
 traffic of the site and shutting down http:// in the future too. This
 action will drop the number of readers in Iran by 95-99%; you'll lose
 almost all readers. It's a safety vs. accessibility issue, and I'm
 saying this much concern about safety will cause an enormous lack of
 accessibility in Iran.


We have no current plans to ever disable HTTP and force the use of HTTPS. I
know there was another thread about this on wikimedia-l, but there's no
consensus for doing this and at this point isn't something you should worry
about.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] SSL security, part II.

2013-09-06 Thread Ryan Lane
On Fri, Sep 6, 2013 at 1:08 PM, C. Scott Ananian canan...@wikimedia.orgwrote:

 New revelations on NSA capabilities yesterday in the New York Times: see
 https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html for a
 jumping off point.

 The bottom line seems to be:
 1) don't use RC4 (we're already working toward that goal, I believe)


"Someone somewhere commented that the NSA's groundbreaking cryptanalytic
capabilities could include a practical attack on RC4. I don't know one way
or the other, but that's a good speculation."

This is simply not helpful. "Someone somewhere", "good speculation": none
of the articles or released documents say this. This is FUD as of right now.

On Monday I'll be adding the GCM ciphers for TLS 1.2 (I added the change
yesterday: https://gerrit.wikimedia.org/r/#/c/83043/). We already have
1.2 enabled with weaker ciphers. We should keep RC4 around for older
browsers that don't have a proper BEAST fix. There's no actual evidence of
a viable attack.

 2) don't use the Dual_EC_DRBG PRNG (see
 http://crypto.stackexchange.com/questions/10189/who-uses-dual-ec-drbg)

 Can someone take a look at our SSL configuration and see if we have
 Dual_EC_DRBG enabled? (And if so, turn it off and use a better PRNG!


From my (brief) investigation, this was included in the FIPS
implementations for openssl, but not otherwise. We don't use FIPS.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Use of http:// urls in wikimedia wiki emails

2013-09-03 Thread Ryan Lane
If we switch the URLs to HTTPS we'll have issues with users in China and
Iran and we can't really use geoip targeting for this like we are for
log-in.


On Tue, Sep 3, 2013 at 5:40 AM, billinghurst billinghu...@gmail.com wrote:

 Now that we push/force/encourage https:// connections, are we going to do
 anything about the htpp:// addresses in the emails from the wikis that
 alert to changes? I know that there is a bugzilla for this, and thought
 that it would have been handled at the same time as the conversion of the
 login.

 Regards, Billinghurst

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] WMFs stance on non-GPL code

2013-08-28 Thread Ryan Lane
On Sun, Aug 25, 2013 at 5:15 PM, Jeroen De Dauw jeroended...@gmail.comwrote:

 Hey,

 I'm curious what the stance of WMF is on BSD, MIT and MPL licensed code. In
 particular, could such code be deployed on WMF servers?


Was this just grenade lobbing? You still haven't clarified your question,
though a number of folks asked for clarification. Can you let us know why
you're asking this?

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Etherpad Lite labs instance going down in two weeks - backup time

2013-08-26 Thread Ryan Lane
On Sat, Aug 24, 2013 at 1:49 AM, Federico Leva (Nemo) nemow...@gmail.comwrote:

 I already have been archiving my stuff from etherpad on wiki, of course,
 and I've never ever used etherpad.wmflabs.org because I knew everything
 in Labs can die any time, but this doesn't mean that I don't worry for what
 others will lose. Of course I understand it's not the _responsibility_ of
 Labs folks nor ops, but still it would be nice if someone managed to do
 something about it. If I was provided with a list of all existing pads I
 could do something myself; I only found few links from our wikis
  https://toolserver.org/~nemobis/tmp/ethlabs.txt



Some people may have placed sensitive info in the pads, assuming some level
of (misguided) privacy since the pages weren't indexed. We're not planning
on doing dumps or even exposing an index.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] WMFs stance on non-GPL code

2013-08-26 Thread Ryan Lane
On Mon, Aug 26, 2013 at 12:55 PM, Trevor Parscal tpars...@wikimedia.orgwrote:

 VisualEditor is MIT licensed. It was originally GPLv2 by default as per my
 contract with Wikimedia, but early on we got written permission from all
 authors to change it. We did this because we wanted to ensure maximum
 license compatibility for re-use in non-MediaWiki systems.


Aren't our contracts generally written to allow us to use any OSI compliant
license, with a preference to GPL 2?

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Etherpad Lite labs instance going down in two weeks - backup time

2013-08-23 Thread Ryan Lane
We don't consider etherpad archive-worthy. It's always been considered an
ephemeral service and we're not willing to put any effort into saving data
from it. If you care about data that you've personally hosted in it, please
put it somewhere that's meant to be archived.

We don't have backups for the service and never will.


On Fri, Aug 23, 2013 at 1:55 PM, Federico Leva (Nemo) nemow...@gmail.comwrote:

 Why is a DB merge not possible? Will the DB be kept somewhere, e.g. in the
 private data of downloads.wikimedia.org?

 Nemo


  ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HTTPS for logged in users on Wednesday August 21st

2013-08-20 Thread Ryan Lane
On Tue, Aug 20, 2013 at 8:04 PM, MZMcBride z...@mzmcbride.com wrote:

 Erik Moeller wrote:
 In general, though, I'd prefer for WMF to move away from what could be
 characterized as appeasement and towards actively resisting censorship
 and monitoring.

 I agree with you and I imagine most developers would agree with you. But
 the question remains: do most Wikimedians?

 I think for most Wikimedians, but particularly for Wikimedians in areas
 where HTTPS access is restricted, I think there's a general view that
 having insecure access over HTTP trumps requiring HTTPS and cutting off
 access altogether. While we hope that this situation is inapplicable to
 most users, we have to recognize that it's applicable to some percent of
 our users. It'd be good to get numbers about how many users we're talking
 about and try to better understand what the Wikimedia community thinks is
 the best path forward, given the various constraints and consequences.


There aren't a lot of great options for us to actively bypass censorship
methods, and anything we do along those lines will likely result in us being
completely blocked.

Maybe what we're doing is appeasement, but realistically we have no
political power against China. The editors from mainland China had a
discussion with some of us at Wikimania and they said that Wikipedia is
basically unknown in China because Baidupedia is what shows up in the
search results and Wikipedia does not. They actually spend a great deal of
time trying to make Wikipedia known to readers in hopes of strengthening
the editing community. If we were fully blocked again in China, it wouldn't
cause any political fuss.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HTTPS for logged in users on Wednesday August 21st

2013-08-19 Thread Ryan Lane
On Mon, Aug 19, 2013 at 6:03 PM, Risker risker...@gmail.com wrote:

 On 19 August 2013 20:35, Chad innocentkil...@gmail.com wrote:

  On Mon, Aug 19, 2013 at 5:32 PM, Tyler Romeo tylerro...@gmail.com
 wrote:
 
   Quick question: will the patch that was just merged regarding removing
  the
   Stay on HTTPS checkbox be deployed by then? Or will that be a
 separate
   deployment?
  
  
  I'm going to work on getting that merged to all relevant branches
  either tonight or tomorrow, so yes, it will be included.
 
 
 Congrats to everyone for getting this going.  Is there a workaround
 available for people behind the Great Firewall to log into projects in
 languages other than those that are exempted?  If so, what is the best way
 for those individual users to contact Operations or whoever, outside of
 IRC? I'm fairly certain some of those users may not want to have to
 publicize their locations.  I see mention of an email address: could that
 be created before the change please?


Some projects are being left out of the initial rollout. Users that use
those projects as their home wiki will still log-in to HTTP by default and
will get a central auth cookie that will work for other projects as well.

Users who are logged in over HTTPS and feel that it is too slow for their
area or device can disable HTTPS redirection in their preferences to
continue using the site in HTTP mode.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HTTPS for logged in users on Wednesday August 21st

2013-08-19 Thread Ryan Lane
On Mon, Aug 19, 2013 at 6:21 PM, Risker risker...@gmail.com wrote:

 On 19 August 2013 21:09, Ryan Lane rlan...@gmail.com wrote:

  On Mon, Aug 19, 2013 at 6:03 PM, Risker risker...@gmail.com wrote:
 
   On 19 August 2013 20:35, Chad innocentkil...@gmail.com wrote:
  
On Mon, Aug 19, 2013 at 5:32 PM, Tyler Romeo tylerro...@gmail.com
   wrote:
   
 Quick question: will the patch that was just merged regarding
  removing
the
 Stay on HTTPS checkbox be deployed by then? Or will that be a
   separate
 deployment?


I'm going to work on getting that merged to all relevant branches
either tonight or tomorrow, so yes, it will be included.
   
   
   Congrats to everyone for getting this going.  Is there a workaround
   available for people behind the Great Firewall to log into projects in
   languages other than those that are exempted?  If so, what is the best
  way
   for those individual users to contact Operations or whoever, outside of
   IRC? I'm fairly certain some of those users may not want to have to
   publicize their locations.  I see mention of an email address: could
 that
   be created before the change please?
  
  
  Some projects are being left out of the initial rollout. Users that use
  those projects as their home wiki will still log-in to HTTP by default
 and
  will get a central auth cookie that will work for other projects as well.
 
  Users who are logged in over HTTPS and feel that it is too slow for their
  area or device can disable HTTPS redirection in their preferences to
  continue using the site in HTTP mode.
 
  - Ryan
 


 Okay, perhaps I wasn't clear.  What I am referring to are editors from
 China or Iran who regularly log into projects that will be covered with
 HTTPS, as we know that HTTPS is (at least sometimes) blocked in those
 countries.  Remember that you're including Commons, Meta, and all English
 projects - and yes, it is the right thing to do.  But we do have a
 non-negligible number of users (including administrators and stewards) who
 will need to have a way to access these projects.  Do you have a way to
 exempt them?


As I mentioned above: as long as they log in to their home wiki, they will
get a CentralAuth cookie that will keep them logged in on every other
project, including Commons, Meta, etc. If they visit another project as
an anonymous user and try to log in there, they'll be redirected to HTTPS,
which will fail.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia's anti-surveillance plans: site hardening

2013-08-17 Thread Ryan Lane
On Sat, Aug 17, 2013 at 10:13 AM, Tyler Romeo tylerro...@gmail.com wrote:

 On Fri, Aug 16, 2013 at 9:59 PM, C. Scott Ananian canan...@wikimedia.org
 wrote:

  Because the other TLS 1.0 ciphers are *even worse*.
 
 
 https://community.qualys.com/blogs/securitylabs/2013/03/19/rc4-in-tls-is-broken-now-what
 

 ...except they're not (in all major browsers and the latest stable openssl
 and gnutls implementations).

 https://bugzilla.mozilla.org/show_bug.cgi?id=665814


I can't tell if your emails are trolling us or not, but you're being pretty
aggressive. Things take time and you're oversimplifying issues. It's better
to be calm and rational when implementing stuff like this.

I mentioned on wikimedia-l that I'd be enabling GCM ciphers relatively
soon. You even opened a bug after I mentioned it. I didn't get a chance at
Wikimania to do it and I'm currently on vacation. They'll be enabled when I
get back on Monday or Tuesday.

We released a blog post about our plans and are having an ops meeting about
this next week. We'll update https://wikitech.wikimedia.org/wiki/Https
when we've more firmly set our plans.

To this specific email's point, though: RC4 still protects against BEAST for
browsers that will always be vulnerable, and those that aren't will support
TLS 1.2 soon enough (which is the correct solution). Let's not make old
browsers vulnerable just to make newer browsers slightly more secure for a
short period of time.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] trace gitblit, was: Re: Wikimedia's anti-surveillance plans: site hardening

2013-08-17 Thread Ryan Lane
On Sun, Aug 18, 2013 at 5:37 AM, rupert THURNER rupert.thur...@gmail.comwrote:

 On Sat, Aug 17, 2013 at 10:48 PM, bawolff bawolff...@gmail.com wrote:
  yes ken, you are right, let's stick to the issues at hand:
  (1) when will you finally decide to invest the 10 minutes and
  properly trace the gitblit application? you have the commands in the
  ticket:
  https://bugzilla.wikimedia.org/show_bug.cgi?id=51769

  (2) when will you adjust your operating guideline, so it is clear
  to faidon, ariel and others that 10 minutes of tracing an application
  and getting a holistic view is mandatory _before_ restoring the
  service, if it goes down so often, and for days every time. The extra 10
  minutes cannot be noticed if the service is gone for more than a day.
 
 
  What information are you hoping to get from a trace that isn't currently
 known?
 if a web application dies or stops responding this can be (1) caused
 by too many requests for the hardware it runs on. which can be
 influenced from outside the app by robots.txt, cache, etc. and inside
 the app by links e.g using nofollow. but it can be (2) influenced by
 the application itself. a java application uses more or less operating
 system resources depending on how it is written. one might find this
 out by just reading the code. having a trace helps a lot here. a trace
 may reveal locking problems in case of multi threading, string
 operations causing OS calls for every character, creating and garbage
 collecting objects, and 100s of others. it is not necessary to wait
 until it stalls again to get the trace. many things can be seen during
 normal operations as well.

 so i hope to get (2). (1) was handled ok in my opinion.


As you've been told numerous times in the past, we already determined what
this specific issue is. It's generating zip files for a large number of
spider requests. What is the point of tracing that? Are you going to do
optimizations on zip generation? Spiders don't need to index the zips, so
we disallowed it. There's no point in wasting time debugging a problem we
have no plans on solving.
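
The fix being referred to amounts to a robots.txt rule along these lines; the
path is a guess based on gitblit's default zip download URLs, not the actual
Wikimedia file:

    User-agent: *
    Disallow: /zip/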

If you want to reproduce this, set up gitblit, clone a number of repos,
then crawl your mirror. After doing so, generate your own stacktrace. We're
not going to waste our time with this, so drop it.

That said, if we have an issue in the future not related to the zips, maybe
it's worth generating a stacktrace for that.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How's the SSL thing going?

2013-07-31 Thread Ryan Lane
On Wed, Jul 31, 2013 at 1:06 PM, David Gerard dger...@gmail.com wrote:

 Oh - if anyone can authoritatively compose a WMF blog post on the
 state of the move to SSL (the move to logins and what happened there,
 the NSA slide, ongoing issues like browsers in China, etc), that would
 probably be a useful thing :-)


I'll be posting blog posts each step of the way as we move to SSL. We have
plans for SSL for anons by default, but there's no official roadmap for
doing so.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How's the SSL thing going?

2013-07-31 Thread Ryan Lane
On Wed, Jul 31, 2013 at 1:39 PM, Paul Selitskas p.selits...@gmail.comwrote:

 Can we enable full security mode (as an optional feature) geographically
 based on the most concerned governments, if the whole thing isn't going
 fast due to lack of resources?


No. That's in fact much, much harder.

There's nothing stopping you (and anyone else who is concerned about their
privacy) from using HTTPS Everywhere. We already support HTTPS natively
right now.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How's the SSL thing going?

2013-07-31 Thread Ryan Lane
On Wednesday, July 31, 2013, Ryan Lane wrote:

 On Wed, Jul 31, 2013 at 1:06 PM, David Gerard 
 dger...@gmail.comjavascript:_e({}, 'cvml', 'dger...@gmail.com');
  wrote:

 Oh - if anyone can authoritatively compose a WMF blog post on the
 state of the move to SSL (the move to logins and what happened there,
 the NSA slide, ongoing issues like browsers in China, etc), that would
 probably be a useful thing :-)


 I'll be posting blog posts each step of the way as we move to SSL. We have
 plans for SSL for anons by default, but there's no official roadmap for
 doing so.


A follow up: I've started writing a blog post about this and hope to have
something postable by tomorrow.

 - Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How's the SSL thing going?

2013-07-31 Thread Ryan Lane
On Wed, Jul 31, 2013 at 9:28 PM, Anthony wikim...@inbox.org wrote:

 On Wed, Jul 31, 2013 at 5:59 PM, George Herbert george.herb...@gmail.com
 wrote:

  The second is site key security (ensuring the NSA never gets your private
  keys).


 Who theoretically has access to the private keys (and/or the signing key)
 right now?


People who have root at Wikimedia, which is Wikimedia's operations team and
a few of the developers.


  The third is perfect forward security with rapid key rotation.
 

 Does rapid key rotation in any way make a MITM attack less detectable?
 Presumably the NSA would have no problem getting a fraudulent certificate
 signed by DigiCert.


SSL Observatory would likely pick that up if it were done at any large
scale. It's less detectable when done in a targeted way, but in that case
the person being targeted is already pretty screwed, and we wouldn't
likely be the site targeted.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] no TLS 1.1 and 1.2 support

2013-07-29 Thread Ryan Lane
It seems we had the protocols listed explicitly (to disable SSLv2), and
TLS 1.1/1.2 weren't available back when we were using Ubuntu 10.04.
We've been on 12.04 for a while, but the protocol list wasn't updated. I'm
pushing an updated config now. Thanks for letting us know!
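
The sort of change being described is a one-line protocol list update in the
nginx SSL termination config; roughly (not the actual Wikimedia
configuration):

    # Before: protocols listed explicitly to keep SSLv2 out, written when the
    # system OpenSSL had no TLS 1.1/1.2 support.
    #   ssl_protocols SSLv3 TLSv1;

    # After: allow the newer protocol versions as well.
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;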


On Mon, Jul 29, 2013 at 11:43 AM, Greg Grossmeier g...@wikimedia.orgwrote:

 Hi 0x,

 quote name=0x date=2013-07-28 time=23:35:19 +0200
  hi,
  recently I tested several sites that are using https; most of them
  communicate with my Chromium web browser over TLS 1.1, but
  wikipedia/wikimedia is still using TLS 1.0.
  ssllabs (see link below) shows a warning notice that you should
  upgrade to the newer version. I don't think there is an urgent
  security reason for this, but even if it's only preventive, upgrading
  wouldn't be wrong, right?
 
  example:
  https://encrypted.google.com/ TLS 1.1
  https://mega.co.nz/ TLS 1.1
  https://www.ixquick.com/ TLS 1.1
  https://btc-e.com/ TLS 1.1
  https://www.wsws.org/ TLS 1.1
  https://linksunten.indymedia.org/ TLS 1.1
  https://en.wikipedia.org TLS 1.0
  https://commons.wikimedia.org/ TLS 1.0
  https://www.taz.de/ TLS 1.0
  https://duckduckgo.com/ TLS 1.0
 
 
  https://www.ssllabs.com/ssltest/analyze.html?d=https://en.wikipedia.org
 
 
  hopefully at the right mailinglist, greetings 0x0...@anche.no

 In this reply I just included wikitech-l@lists.wikimedia.org, which is
 probably a better place than the Wikidata specific mailing list.

 Best,

 Greg

 --
 | Greg GrossmeierGPG: B2FA 27B1 F7EB D327 6B8E |
 | identi.ca: @gregA18D 1138 8E47 FAC8 1C7D |

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] no TLS 1.1 and 1.2 support

2013-07-29 Thread Ryan Lane
On Mon, Jul 29, 2013 at 11:51 AM, C. Scott Ananian
canan...@wikimedia.orgwrote:

 That ssllabs link also shows that wikimedia has RC4 encryption enabled
 on SSL connections, which offers no real security.  This is apparently
 related to the TLS 1.0 -vs- TLS 1.1/1.2 issue:

 https://community.qualys.com/blogs/securitylabs/2013/03/19/rc4-in-tls-is-broken-now-what
  --scott


Well, you can either be vulnerable to BEAST or to the less practical attack
against RC4. TLS 1.1/1.2 clients should choose the strongest cipher, while
SSL3/TLS1 clients are sent a preferred server list, specifying RC4 first.
See: http://wiki.nginx.org/HttpSslModule#ssl_prefer_server_ciphers.
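
In nginx terms, that behaviour maps roughly onto directives like the
following (an illustrative sketch, not the live configuration). The AES-GCM
suites only exist in TLS 1.2, so newer clients negotiate those, while older
clients fall through to RC4 at the head of the remaining list:

    ssl_prefer_server_ciphers on;
    # TLS 1.2 clients pick the GCM suites; older clients can't, so they get
    # RC4, which sidesteps BEAST on browsers without the 1/n-1 record fix.
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:RC4-SHA:AES128-SHA:HIGH:!aNULL:!eNULL:!MD5;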

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] no TLS 1.1 and 1.2 support

2013-07-29 Thread Ryan Lane
On Mon, Jul 29, 2013 at 2:03 PM, Thomas Gries m...@tgries.de wrote:

 Am 29.07.2013 21:31, schrieb Ryan Lane:
 
  That ssllabs link also shows that wikimedia has RC4 encryption enabled
  on SSL connections, which offers no real security.  This is apparently
  related to the TLS 1.0 -vs- TLS 1.1/1.2 issue:
 
 
 https://community.qualys.com/blogs/securitylabs/2013/03/19/rc4-in-tls-is-broken-now-what
   --scott
 

 Ryan:

 check with https://www.ssllabs.com/ssltest/

 Example for https://en.wikipedia.org
 https://www.ssllabs.com/ssltest/analyze.html?d=en.wikipedia.org


I did. I also offered an explanation as to why we're using RC4. I'm going
to add a proper block cipher to the config for TLS1.2 soon, but the rest
need to use RC4.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] no TLS 1.1 and 1.2 support

2013-07-29 Thread Ryan Lane
On Mon, Jul 29, 2013 at 7:34 PM, C. Scott Ananian canan...@wikimedia.orgwrote:

 Au contraire: I think WMF has a responsibility to ensure the safety and
 security of its editors, who might be working on topics controversial in
 their home regions.


Obviously, which is why our SSL security is currently relatively good.

On that note, I'll be pushing in a change soonish to add GCM cipher support
for TLS 1.2. I don't believe it's possible to have TLS 1.1 use a block
cipher while TLS 1.0 uses RC4 when using a server cipher list, at least not
in nginx or Apache.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Announcement: Mark y Markus leading MediaWiki Release Management

2013-07-26 Thread Ryan Lane
On Fri, Jul 26, 2013 at 8:41 AM, Greg Grossmeier g...@wikimedia.org wrote:

 Full announcement on the blog:
 http://blog.wikimedia.org/2013/07/26/future-third-party-releases-mediawiki/

  Today, the Wikimedia Foundation is pleased to announce that we have
  contracted with two long-time members of the MediaWiki community–Mark
  Hershberger and Markus Glaser–to manage and drive forward third-party
  focused releases of MediaWiki.

  Over a month ago, the Wikimedia Foundation sent out a request for
  proposals (RFP) to help us fill an important and underserved role in
  the development of MediaWiki. Two very solid proposals were produced,
  the community weighed in with questions and comments, and there was an
  open IRC office hour with the authors and interested community
  members. The Wikimedia Foundation is pleased with the outcome of this
  RFP and excited to begin this new chapter in the life of MediaWiki.

 Join me in congratulating Mark and Markus!


Congrats! I'm looking forward to better third party support and you guys
are an excellent choice!

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unable to Create a gerrit account

2013-07-23 Thread Ryan Lane
On Mon, Jul 22, 2013 at 11:13 PM, anubhav agarwal anubhav...@gmail.comwrote:

 I was following this tutorial
 (https://www.mediawiki.org/wiki/Writing_an_extension_for_deployment) for
 deploying an extension on MediaWiki. It asked me to create a
 labs/git/gerrit account.

 When I opened the link
 (https://wikitech.wikimedia.org/wiki/Special:UserLogin/signup),
 I entered the following details for

 username: anubhavagarwal
 email: anubhav...@gmail.com
 instance shell account name: anubhav

 It gave me this error: "There was either an authentication database error
 or you are not allowed to update your external account".
 I guess my email might be registered but I have forgotten the
 username and password.
 Can someone help  ?


Hi. Seems you used to have an svn account with that shell account name.
I've linked your svn account with a new account on wikitech with username
anubhavagarwal and shell account name anubhav and I've sent you a password
reminder.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-22 Thread Ryan Lane
On Mon, Jul 22, 2013 at 6:25 AM, Denny Vrandečić 
denny.vrande...@wikimedia.de wrote:

 I assume Ryan didn't mean to single out the Wikidata development team.
 Other teams have done this as well -- the Translate extension depends on
 ULS, CodeEditor depends on WikiEditor, Semantic Result Formats depends on
 Semantic MediaWiki etc. I assume Ryan just choose Wikibase because it
 exemplifies the symptoms so well. Ryan, please correct me if my assumptions
 are wrong.


Indeed, this is a general problem, but Wikibase does this quite a bit more
than other extensions have in the past. It's been a growing concern of mine
as an admin for quite a while now.


 The remainder of this mail has two parts: first, it tries to explain
 why the Wikidata dev team did things the way we did, and
 second, it asks this list to please resolve the underlying actual issue.

 First:

 The original message by Ryan lists the number of dependencies for Wikibase.
 He lists, e.g. Scribunto as an extension.
 The question is, how to avoid this dependency?
 Should we move Scribunto to core?
 Should we turn Scribunto and Wikibase into one extension?

 The same for Babel and ULS, listed as optional extensions.

 Another extension mentioned is Diff. Diff provides generic functionality
 that can be used by different extensions.
 It seems that in this case the suggestion that Ryan makes is to move it to
 core.
 So shall we move functionality that more than one extension depend on
 generally to core?

 One of the reasons for DataValues, Ask and some others being in their own
 extension is that they already are or are planned very soonish to be reused
 in SMW.
 Since it is suggested that generic functionality should be moved to core,
 would that include functionality shared by Wikibase and SMW?
 Or how else would we manage that shared code?


This is the hardest scenario of the group. It's likely best that they are
in separate extensions. This is still a complication, but is more an issue
with MediaWiki's lack of extension management than it is with the
modularity of the extensions themselves.


 Another reason for having separate components like the WikibaseModel or
 Diff is that they are not MediaWiki extensions, but pure PHP libraries.
 Any PHP script can reuse them. Since the WikibaseModel is not trivial, this
 should help with the writing of bots and scripts dealing with Wikidata
 data.
 How should we handle such components?
 Should they be moved to Wikibase and we require every script to depend on
 the whole of Wikibase, and thus MediaWiki?


For pure PHP libraries, they could be distributed like pure PHP libraries
usually are. They can be packaged for multiple distros and be available via
apt/yum/composer (or pear). Having them as MediaWiki extensions is somewhat
awkward.
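
As an illustration of that distribution model, a stand-alone PHP library
really only needs a composer.json so that consumers can pull it in directly
instead of cloning it as a MediaWiki extension. The package and namespace
names below are hypothetical:

    {
        "name": "example/diff",
        "description": "Hypothetical stand-alone diff library, usable with or without MediaWiki",
        "license": "GPL-2.0",
        "require": {
            "php": ">=5.3.0"
        },
        "autoload": {
            "psr-0": { "Diff\\": "src/" }
        }
    }

A consumer would then add "example/diff" to its own composer.json (or run
composer require example/diff) and include vendor/autoload.php.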


 If you add everything needed by Wikibase into a single extension, how do
 you ensure that no unnecessary dependencies creep in?
 Is there a code-analyzer that can run as part of Jenkins that check that
 the architecture is not being violated, and that parts of the code to not
 introduce dependencies on other parts where they should not?
 Separating such components allows us to check this part of the architecture
 during CI, which is indeed extremely helpful.


How does splitting extensions apart make it easier to check this during CI?
Is there some automated way to do this when they are split, but not when
they are together?



 I would indeed be very much interested in better solutions for these
 questions than we currently have.

 As Ryan said in his thread-opening email, "For legitimate library-like
 extensions I have no constructive alternative, but there must be some sane
 alternative to this."
 A lot of the issues would be resolved if we had this constructive
 alternative. The solution will likely also help to deal with the other
 dependencies.
 I hope it is understandable that I do not consider the time of the Wikidata
 development well spent to replace our architecture with something else,
 before we have agreed on what this something else should be.


Yes, that's surely a good position to take :).


 Second:

 I would be interested in answers to the above questions.
 But maybe we really should concentrate on getting the actual question
 resolved, the one that has been discussed on this list several times without
 consensus and that would allow us to answer the above questions
 trivially:
 how should we deal with modularity, extensions, components, etc. in
 MediaWiki?
 I hope the answer is not throw everything in core or into monolithical
 extensions which do not have dependencies among each other, but let's see
 what the discussion will bring. Once we have this answer, we can implement
 the results. Until then I am not sure whether I find it productive to
 single out the Wikidata team for the way we are doing things.


My concern was increasing complexity for admins with no effort towards
decreasing it. Composer may indeed be an answer to some of this.

Re: [Wikitech-l] Remove 'visualeditor-enable' from $wgHiddenPrefs

2013-07-22 Thread Ryan Lane
On Mon, Jul 22, 2013 at 7:17 PM, Tyler Romeo tylerro...@gmail.com wrote:

 On Mon, Jul 22, 2013 at 9:35 PM, James Forrester
 jforres...@wikimedia.orgwrote:
 
   It would imply that this is a preference that Wikimedia thinks is
  appropriate. This would be a lie. For a similar example, see the removal
 of
  the disable JavaScript option from Firefox 23.
 

 You still haven't explained why this preference is inappropriate.


This is slightly off topic, but removing that preference from Firefox is a
great idea. It's only used properly by power users, who can do the same in
about:config, via NoScript, or with an extension. That preference is almost
always set incorrectly by users who don't know what they are doing, and it
leads to a broken browser experience.

Maybe there's a comparison to be made, but there's not really a simple way
to disable VE in MediaWiki other than by having a preference.

Assuming a proper implementation of edit/edit source I'm not sure what the
big deal is, but I'm not a hardcore editor so I'm likely just not seeing it.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-21 Thread Ryan Lane
On Sun, Jul 21, 2013 at 9:41 AM, Jeroen De Dauw jeroended...@gmail.comwrote:

 What started this thread was Ryan having problems with installing Wikibase.
 And I can see why this would not be all that smooth. The components you
 need is probably not the biggest hassle. After all, you just need to do 4
 git clones instead of 3. Then you also need to do a bunch of config, and
 figure out if you want to use additional functionality.

 For instance, you need Scribunto if you want to have Lua support on the
 client. As a new user of the software, how will you know whether you need this
 or not? Some very good documentation can help, but we don't have this. Putting
 the Lua stuff into its own extension would make this a lot more clear, and
 would not bother users with the requirement of understanding what this is
 while they don't even want to use Lua. (This model of creating extensions
 on top of your main extension is something done by SMW and is working very
 well. Users generally do not get confused by it at all.)


Actually, funny enough, I think SMW is one of the more difficult extension
sets to use as an end-user. It was the first extension where I saw your
Validator code used as a dependency extension. Validator was rejected from
being included in core, but now basically every extension you maintain uses
it. I didn't bother to say anything about this when it was limited to SMW
and your extensions, but now Wikimedia-deployed extensions are starting to
do the same.


 Doing 4 git clones and having some basic understanding of the dependencies
 is something quite reasonable to expect from developers. As everyone here,
 including me, is saying, this is not acceptable for users. The problem in
 case of Wikibase is that it is the only process we currently have, as no
 one has bothered to do a proper release. You know, one of these things with
 a version number, release notes, tarballs, etc. If we want this software to
 be usable by non-devs, this needs to happen. And if it does, it is also
 going to be useful to devs. So I'm actually quite disappointed we did not
 have a release yet, and am regularly shouting to people about it.


The actual method of getting the extensions isn't that hard. Even if an
end-user can't do a git clone, they can have git.wm.o give them a tar or
zip. It's specifically the dependencies that are hard. When I attempt to
upgrade MediaWiki I currently have to write down all of the extensions, and
ensure all of them are compatible with MediaWiki. With some subsets, I need
to ensure they are compatible with each other (like SMW, SF, SRF). Now I'm
going to need to do that and track the compatibility between extensions and
dependency extensions. I'm actually going to have to write an upgrade
matrix to upgrade, and that's not OK.


 So Ryan, I agree that currently this is not in good state. I however
 disagree there is this dichotomy between developers and users, where we can
 only have it work nicely for one. Attacking the developer process is thus
 not a good option, as that will just end up hurting everyone, including the
 end users.


I didn't say there's a dichotomy. I believe that we can have both, but this
approach isn't it.

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-20 Thread Ryan Lane
On Sat, Jul 20, 2013 at 4:34 AM, Jeroen De Dauw jeroended...@gmail.comwrote:

 Hey,

  you're adding in a whole new set of incompatibilities.

 How so?


Extensions that use any of these extension libraries now depend both on the
version of MediaWiki and on the version of the library extension. That's a
new set of dependencies that can become incompatible.


  You're really not thinking of this from the perspective of the person
 using the software.

 Oh, glad to know you understand what I am thinking, since clearly I do not.


Maintaining MediaWiki installs is already relatively difficult, due to a
lack of extension management. Many extensions are also poorly maintained,
and just getting an extension that works with your version of MediaWiki
reliably is hard. Upgrades are incredibly hard due to this. Adding in
another set of incompatibilities is going to make this much harder.

This is making something easier for developers at the expense of admins.


  In case of the components
   created for Wikidata, we have been supporting Composer for a while now,
   which is a great fit to our needs.
  
  
  We in this situation is Wikidata and not the developer community. In
  fact, there were a number of threads about composer with no consensus
 and a
  number of objections.
 

 Given my first sentence, I would think it is indeed abundantly clear this
 is about the components created for Wikidata and not the whole community.
 Thanks for making it even more obvious.


Many of these components are generic enough to be used well beyond Wikidata.


  So, what you're saying is that the Wikidata team has
  made a decision on behalf of the community?
 

 What I am saying is that I have played around with something that might be
 of use to the community in general. Where did I imply I, or the WD team,
 had made a decision for the whole community here? I don't see it, but since
 you know better than me what I am thinking, please explain.


My concern is adoption of these extension libraries. Other extensions will
use these libraries, which is the point of them... If the only way to
sanely install them is via Composer and a decent number of extensions come
to rely on them, Composer becomes a de-facto requirement of MediaWiki,
without consensus.


 Something that's being sidestepped here is that extensions are being used
  as a means to avoid getting things reviewed for core. Quite a few of
 these
  extensions should just be core functionality or they shouldn't exist.
 

 This is preposterous, and I find the accusation you made outrageous. I'd
 love to have some constructive discussion here, though this is very
 difficult if you come at it from the angle you are.


Sorry, I phrased that poorly. Here's my concern: if extension libraries are
generic enough that they could be considered core (Diff is a great
example), then other extensions will likely use them like core functionality.
Wikibase already does. These extensions won't get the same level of review
as they would if they were core functionality.

Is there any compelling reason that they are extensions, rather than being
added to core?

- Ryan
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
