Re: [Wikitech-l] Process change: New contributors getting editbugs on Bugzilla

2014-05-30 Thread Jasper Deng
If I recall correctly, it used to be the default, but it was removed after
some Bugzilla vandalism in 2011.


On Thu, May 29, 2014 at 10:40 PM, Legoktm legoktm.wikipe...@gmail.com
wrote:

 On 5/29/14, 11:57 AM, Mark Holmquist wrote:

  Solution: We've made every editbugs user able to add editbugs to an
 account. I've documented the process here: https://www.mediawiki.org/
 wiki/Bugzilla#Why_can.27t_I_claim_a_bug_or_mark_it_resolved.3F


 Does this only apply to every user who has editbugs right now, or will it
 also apply to those we give editbugs to in the future?


  Thanks to Chad for the quick resolution on this, hopefully this will be a
 positive change overall.


 Thank you for finally getting this done! :)

 -- Legoktm


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] GeoData Extension

2014-05-30 Thread Matthias Hochgatterer
I’m currently looking into the GeoData Extension to make location based 
Wikipedia queries.
There are still some open questions - would be nice if sb could provide 
guidance.

- The release status of the extension is still experimental. Is it safe to use 
it in production (mobile app)? Are there some hard limit how often I can query 
the API? Just thinking when the app gets popular…

- Is there a way to increase the search radius? E.g. When showing a continent 
(Europe) on a map, I would like to display articles for all countries (sth like 
`gsmindim` would be useful in this case too). I couldn’t find a way to do this 
other than making multiple queries for different coordinates which does not 
scale very well.

Thanks
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unclear Meaning of $baseRevId in WikiPage::doEditContent

2014-05-30 Thread Daniel Kinzler
Am 29.05.2014 21:07, schrieb Aaron Schulz:
 Yes it was for auto-reviewing new revisions. New revisions are seen as a
 combination of (base revision, changes). 

But EditPage in core sets $baseRevId to false. The info isn't there for the
standard case. In fact, the ONLY thing in core that sets it to anything but
false is commitRollback() , and that sets it to a value that to me doesn't make
much sense to me - the revision we revert to, instead of either the revision we
revert *from* (base/physical parent), or at least the *parent* of the revision
we revert to (logical parent).

Also, if you want (base revision, changes), you would use $oldid in
doEditContent, not $baseRevId. Perhaps it's just WRONG to pass $baseRevId to the
hooks called by doEditCOntent, and it should have been $oldid all along? $oldid
is what you need if you want to diff against the previous revision - so
presumably, that's NOT what $baseRevId is.

 If baseRevId is always set to the revision the user started from it would
 cause problems for that extension for the cases where it was previously
 false.

false means don't check, I suppose - or there is no base, but that could
be identified by the EDIT_NEW flag.

I'm not proposing to change the cases where baseRevId is false. They can stay as
they are. I'm proposing to set baseRevId to the revision the user started with,
OR false, so we can detect conflicts safely  sanely.

 It would indeed be useful to have a casRevId value that was the current
 revision at the time of editing just for CAS style conflict detection.

Indeed - but changing the method signature would be painful, and the existing
$baseRevId parameter does not seem to be used at all - or at least, it's used in
such an inconsistent way as to be useless, of not misleading and harmful.

For now, I propose to just have commitRollback call doEditContent with
$baseRevId = false, like the rest of core does. Since core itself doesn't use
this value anywhere, and sets it to false everywhere, that seems consistent. We
could then just clarify the documentation. This way, Wikibase could use the
$baseRevId value for conflict detection - actually, core could, and should, do
just that in doEditContent; this wouldn't do anything in core until the
$baseRevId is supplied at least by EditPage.

Of course, we need to check FlaggedRevs and other extensions, but seeing how
this argument is essentially unused, I can't imagine how this change could break
anything for extensions.

-- daniel



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Process change: New contributors getting editbugs on Bugzilla

2014-05-30 Thread Andre Klapper
On Thu, 2014-05-29 at 11:57 -0700, Mark Holmquist wrote:
 We've made every editbugs user able to add editbugs to an account.

Thank you Mark (and Chad) for going ahead!

I'm crossing fingers that advantages will outweigh the potential
problems which made me indecisive about how to solve this problem.

Time will tell, but right now I'm just happy there is progress.

Thanks!
andre
-- 
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unclear Meaning of $baseRevId in WikiPage::doEditContent

2014-05-30 Thread Brad Jorsch (Anomie)
On Fri, May 30, 2014 at 4:06 AM, Daniel Kinzler dan...@brightbyte.de
wrote:

 Am 29.05.2014 21:07, schrieb Aaron Schulz:
  Yes it was for auto-reviewing new revisions. New revisions are seen as a
  combination of (base revision, changes).

 But EditPage in core sets $baseRevId to false. The info isn't there for the
 standard case. In fact, the ONLY thing in core that sets it to anything
 but false is commitRollback() , and that sets it to a value that to me
 doesn't make much sense to me - the revision we revert to, instead of
 either the revision we revert *from* (base/physical parent), or at least
 the *parent* of the revision we revert to (logical parent).


I think you need to look again into how FlaggedRevs uses it, without the
preconceptions you're bringing in from the way you first interpreted the
name of the variable. The current behavior makes perfect sense for that
specific use case. Neither of your proposals would work for FlaggedRevs.

As for the EditPage code path, note that it has already done edit conflict
resolution so base revision = current revision of the page. Which is
probably the intended meaning of false.


 Of course, we need to check FlaggedRevs and other extensions, but seeing
 how this argument is essentially unused, I can't imagine how this change
 could break anything for extensions.


Except FlaggedRevs.


-- 
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GeoData Extension

2014-05-30 Thread Magnus Manske
Why not use Wikidata instead? 15 million items vs. 4.5 million articles
(en.wp) means three times the number of things with coordinates. Simple,
quick query for radius around X available:

http://wdq.wmflabs.org/api_documentation.html

(check AROUND)

Cheers,
Magnus


On Fri, May 30, 2014 at 8:26 AM, Matthias Hochgatterer 
matthias.hochgatte...@gmail.com wrote:

 I’m currently looking into the GeoData Extension to make location based
 Wikipedia queries.
 There are still some open questions - would be nice if sb could provide
 guidance.

 - The release status of the extension is still experimental. Is it safe to
 use it in production (mobile app)? Are there some hard limit how often I
 can query the API? Just thinking when the app gets popular…

 - Is there a way to increase the search radius? E.g. When showing a
 continent (Europe) on a map, I would like to display articles for all
 countries (sth like `gsmindim` would be useful in this case too). I
 couldn’t find a way to do this other than making multiple queries for
 different coordinates which does not scale very well.

 Thanks
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
undefined
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Process change: New contributors getting editbugs on Bugzilla

2014-05-30 Thread Bartosz Dziewoński

On Fri, 30 May 2014 07:40:33 +0200, Legoktm legoktm.wikipe...@gmail.com wrote:


Does this only apply to every user who has editbugs right now, or will it also 
apply to those we give editbugs to in the future?


As far as I know, yes, it will apply to users who will get editbugs later. Chad 
apparently made the ysers in the 'editbugs' group be able to add users to the 
'editbugs' group.

--
Matma Rex

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Status of the new PDF Renderer

2014-05-30 Thread Chris McMahon
On Thu, May 29, 2014 at 6:06 PM, Matthew Walker mwal...@wikimedia.org
wrote:

 I should have also noted -- there is something strange going on with the
 frontend to Special:Collection. You have to manually refresh to see status
 updates...


Reported 10 days ago in test envs:
https://bugzilla.wikimedia.org/show_bug.cgi?id=65562



 ~Matt Walker
 Wikimedia Foundation
 Fundraising Technology Team


 On Thu, May 29, 2014 at 5:56 PM, Matthew Walker mwal...@wikimedia.org
 wrote:

  I'm happy to report that after a LONG time fighting with deployment the
  test instance is available in beta labs (en.wikipedia.beta.wmflabs.org
  and all others) via the WMF PDF option in Special:Collection and on the
  side panel.
 
  It is very rough still in terms of reliable rendering (it doesn't like to
  clean up after itself) -- but now that I have deployment sorted and it
  stably running that's my next task. Play away :D
 
  ~Matt Walker
  Wikimedia Foundation
  Fundraising Technology Team
 
 
  On Thu, May 29, 2014 at 7:02 AM, Andre Klapper aklap...@wikimedia.org
  wrote:
 
  Hi,
 
  On Mon, 2014-05-19 at 11:57 -0700, C. Scott Ananian wrote:
   That's a good question!  I'm in SFO this week, so it's probably worth
   setting aside a day to resync and figure out what the next steps for
   the new PDF renderer are.
 
  Any news (or a public test instance available)?
 
  As I wrote, I'd be interested in having a bugday on testing the new PDF
  renderer by going through / retesting
 
 
 https://bugzilla.wikimedia.org/buglist.cgi?resolution=---component=Collection
 
  Thanks,
  andre
  --
  Andre Klapper | Wikimedia Bugwrangler
  http://blogs.gnome.org/aklapper/
 
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GeoData Extension

2014-05-30 Thread Tomasz Finc
Max (CC'd) can help you on this one.

--tomasz

On Fri, May 30, 2014 at 12:26 AM, Matthias Hochgatterer
matthias.hochgatte...@gmail.com wrote:
 I’m currently looking into the GeoData Extension to make location based 
 Wikipedia queries.
 There are still some open questions - would be nice if sb could provide 
 guidance.

 - The release status of the extension is still experimental. Is it safe to 
 use it in production (mobile app)? Are there some hard limit how often I can 
 query the API? Just thinking when the app gets popular…

 - Is there a way to increase the search radius? E.g. When showing a continent 
 (Europe) on a map, I would like to display articles for all countries (sth 
 like `gsmindim` would be useful in this case too). I couldn’t find a way to 
 do this other than making multiple queries for different coordinates which 
 does not scale very well.

 Thanks
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Friday RfC discussion: extension registration

2014-05-30 Thread Sumana Harihareswara
On 05/25/2014 09:09 AM, Sumana Harihareswara wrote:
 This Friday we're discussing Kunal Mehta's Extension registration RfC.
 (This is a change from Wednesday, the usual day.)
 
 https://www.mediawiki.org/wiki/Architecture_meetings/RFC_review_2014-05-30
 
 https://www.mediawiki.org/wiki/Requests_for_comment/Extension_registration
 
 It'll be 1900 UTC
 http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140530T1900
 this Friday 30 May, in #wikimedia-office.
 
 8pm in London
 3pm in Washington, DC
 noon in San Francisco
 
 Sorry for the change in time. The next week's chat, about Pau Giner's
 CSS grid proposal, will probably on Monday June 2, at a time better for
 Asia  Australia.

This is in about 30 minutes in IRC.

-- 
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Fwd: [Wikimedia-l] Mobile Operator IP Drift Tracking and Remediation

2014-05-30 Thread Adam Baso
FYI

-- Forwarded message --
From: Adam Baso ab...@wikimedia.org
Date: Fri, May 30, 2014 at 2:04 PM
Subject: Re: [Wikimedia-l] Mobile Operator IP Drift Tracking and Remediation
To: Wikimedia Mailing List wikimedi...@lists.wikimedia.org


Okay, the code is in place in the alphas of both the Android and iOS apps,
and the server-side 2% sampling (extra header in HTTPS request sent once
per cellular app session) is working.

https://git.wikimedia.org/commitdiff/apps%2Fandroid%2Fwikipedia.git/8b4a0c3b170d6bf1a8f8141d93dfc60416ae4e2b

https://git.wikimedia.org/commitdiff/apps%2Fios%2Fwikipedia.git/59cde497921bc6d2c28e3967c24f0316dfedf3ce

https://git.wikimedia.org/commitdiff/mediawiki%2Fextensions%2FZeroRatedMobileAccess.git/df3da0b3fa564ae27d33cd1b82f81df12a5ed287

Changes to event logging in the iOS alpha app (internal only at the moment,
although repo can be cloned and run in the Xcode simulator) are coming
pretty soon, and once those are in, we'll make one last tweak there to have
the app not add the extra MCC/MNC header on that single request per
cellular connection when logging is turned off in the iOS alpha app. That
part is done in the Android app already.

-Adam




On Fri, May 2, 2014 at 1:16 PM, Adam Baso ab...@wikimedia.org wrote:

 Federico asked if sampling might make sense here. I think it will work, so
 I've updated the patchset.

 From a patchset comment I provided:

 It's possible we may have situations where operators have not lots of
 users on them accessing Wiki(m|p)edia properties, so we do run some risk of
 actually missing IPs, even if exit IPs are concentrators of typically large
 sets of users. That said, let's try a 2% sample ratio; and if we find out
 it's insufficient, then we'll sample more, if it's oversampling, then we
 can adjust the other way, too. New patchset arriving shortly.

 (I've since submitted the updated code for review.)

 -Adam



 On Thu, May 1, 2014 at 7:52 PM, Adam Baso ab...@wikimedia.org wrote:

 After examining this, it looks like EventLogging is more suited to the
 logging task than debug logging and the trappings of needing to alter debug
 logging in the core MediaWiki software.

 EventLogging logs at the resolution of a second (instead of a day), but
 has inbuilt support for record removal after 90 days.

 Please do let us know in case of further questions. Here's the logging
 schema for those with an interest:

 https://meta.wikimedia.org/wiki/Schema:MobileOperatorCode

 Here's the relevant server code:

 https://gerrit.wikimedia.org/r/#/c/130991/

 -Adam




 On Wed, Apr 16, 2014 at 2:20 PM, Adam Baso ab...@wikimedia.org wrote:

 Great idea!

 Anyone on the list know if there's a way to make the debug log
 facilities do the MMDD timestamp instead of the longer one?

 If not, I suppose we could work to update the core MediaWiki code. [1]

 -Adam

 1. For those with PHP skills or equivalent, I'm referring to
 https://git.wikimedia.org/blob/mediawiki%2Fcore.git/a26687e81532def3faba64612ce79b701a13949e/includes%2FGlobalFunctions.php#L1042.
 Scroll to the bottom of the function definition to see the datetimestamp
 approach.


 On Wed, Apr 16, 2014 at 12:47 PM, Andrew Gray andrew.g...@dunelm.org.uk
  wrote:

 Hi Adam,

 One thought: you don't really need the date/time data at any detailed
 resolution, do you? If what you're wanting it for is to track major
 changes (last month it all switched to this IP) and to purge old
 data (delete anything older than 10 March), you could simply log day
 rather than datetime.

 enwiki / 127.0.0.1 / 123.45 / 2014-04-16:1245.45

 enwiki / 127.0.0.1 / 123.45 / 2014-04-16

 - the latter gives you the data you need while making it a lot harder
 to do any kind of close user-identification.

 Andrew.
 On 16 Apr 2014 19:17, Adam Baso ab...@wikimedia.org wrote:

  Inline.
 
  Thanks for starting this thread.
  
   Sorry if I've overlooked this, but who/what will have access to this
  data?
   Only members of the mobile team? Local project CheckUsers? Wikimedia
   Foundation-approved researchers? Wikimedia shell users? AbuseFilter
   filters?
  
 
  It's a good question. The thought is to put it in the customary
 wfDebugLog
  location (with, for example, filename mccmnc.log) on fluorine.
 
  It just occurred to me that the wiki name (e.g., enwiki), but not
 the
  full URL, gets logged additionally as part of the wfDebugLog call; to
 make
  the implicit explicit, wfDebugLog adds a datetime stamp as well, and
 that's
  useful for purging old records. I'll forward this email to mobile-l
 and
  wikitech-l to underscore this.
 
 
   And this may be a silly question, but is there a reasonable means of
   approximating how identifying these two data points alone are? That
 is,
   Using a mobile country code and exit IP address, is it possible to
   identify a particular editor or reader? Or perhaps rephrased, is
 this
  data
   considered anonymized?
  
 
  Not a silly question. My approximation is these tuples (datetime, 

[Wikitech-l] Call for Wikimedia Hackathon(s) 2014-2015

2014-05-30 Thread Quim Gil
(CCing wikimedia-l as well, please send any replies to wikitech-l only)

The Wikimedia technical community wants to have another hackathon next year
in Europe. Who will organize it?

Interested parties, check https://www.mediawiki.org/wiki/Hackathons

We would like to confirm a host by Wikimania, latest.

The same call goes for India and other locations with a good concentration
of Wikimedia contributors and software developers. Come on, step in. We
want to increase our geographical diversity of technical contributors.




-- 
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] 404 errors

2014-05-30 Thread ENWP Pine

Ori, thanks for following up.

I think I saw somewhere that there is a list of postmortems for tech ops 
disruptions
that includes reports like this one. Do you know where the list is? I tried a 
web search
and couldn't find a copy of this report outside of this email list.

I personally find this report interesting and concise, and I am interested in
understanding more about the tech ops infrastructure. Reports like this one
are useful in building that understanding. If there's an overview of tech ops
somewhere I'd be interested in reading that too. The information on English
Wikipedia about WMF's server configuration appears to be outdated.

Thanks,

Pine


 Date: Thu, 29 May 2014 22:38:10 -0700
 From: Ori Livneh o...@wikimedia.org
 To: Wikimedia developers wikitech-l@lists.wikimedia.org
 Subject: Re: [Wikitech-l] 404 errors
 Message-ID:
   cahxk4byya8ae0evgaufwscrjztaqh+sjtw6ccj14mb8o-te...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8
 
 On Thu, May 29, 2014 at 1:34 PM, ENWP Pine deyntest...@hotmail.com wrote:
 
  Hi, I'm getting some 404 errors consistently when trying to load some
  English Wikipedia articles. Other pages load ok. Did something break?
 
 
 TL;DR: A package update went badly.
 
 Nitty-gritty postmortem:
 
 At 20:25 (all times UTC), change Ie5a860eb9[0] (Remove
 wikimedia-task-appserver from app servers) was merged. There were two
 things wrong with it:
 
 1) The appserver package was configured to delete the mwdeploy and apache
 users upon removal. The apache user was not deleted because it was logged
 in, but the mwdeploy user was. The mwdeploy account was declared in Puppet,
 but there was a gap between the removal of the package and the next Puppet
 run during which the account would not be present.
 
 2) The package included the symlinks /etc/apache2/wmf and
 /usr/local/apache/common, which were not Puppetized. These symlinks were
 unlinked when the package was removed.
 
 Apache was configured to load configuration files from /etc/apache2/wmf,
 and these include the files that declare the DocumentRoot and Directory
 directives for our sites. As a result, users were served with 404s. At
 20:40 Faidon Liambotis re-installed wikimedia-task-appserver on all
 Apaches. Since 404s are cached in Varnish, it took another five minutes for
 the rate of 4xx responses to return to normal (20:45).[1]
 
 [0]: https://gerrit.wikimedia.org/r/#/c/136151/
 [1]:
 https://graphite.wikimedia.org/render/?title=HTTP%204xx%20responses%2C%202014-05-29from=20:00_20140529until=21:00_20140529target=reqstats.4xxhideLegend=true
 
  
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unclear Meaning of $baseRevId in WikiPage::doEditContent

2014-05-30 Thread Aaron Schulz
FlaggedRevs uses the NewRevisionFromEditComplete hook. Grepping for that, I
see reasonable values set in the callers at a quick glance. This cover
various null edit scenarios too. The $baseRevId in WikiPage is just one of
the cases of that value passed to the hook, and is fine there (being mostly
false). false indeed means not determined and that behavior is needed
for the hook values. The values given in that hook variable make sense and
are more or less consistent.

As I said before, if the NewRevisionFromEditComplete hook is given the same
base revision ID values for all cases, then I don't care to much what
happens to the $baseRevId value semantics in doEditContent(). As long as
everything is changed to keep that part consistent then it won't effect
anything. However, just naively change the $baseRevId values for the
non-false cases will break the extension using it.

As as side note, FlaggedRevs doesn't just end up using $oldid. It only uses
that as the last resort after picking other values in difference scenarios
it detects.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Unclear-Meaning-of-baseRevId-in-WikiPage-doEditContent-tp5028661p5029028.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] 404 errors

2014-05-30 Thread Alex Monk
On 30 May 2014 23:30, ENWP Pine deyntest...@hotmail.com wrote:

 I think I saw somewhere that there is a list of postmortems for tech ops
 disruptions
 that includes reports like this one. Do you know where the list is? I
 tried a web search
 and couldn't find a copy of this report outside of this email list.

 I personally find this report interesting and concise, and I am interested
 in
 understanding more about the tech ops infrastructure. Reports like this one
 are useful in building that understanding. If there's an overview of tech
 ops
 somewhere I'd be interested in reading that too. The information on English
 Wikipedia about WMF's server configuration appears to be outdated.


I believe you are looking for
https://wikitech.wikimedia.org/wiki/Incident_documentation#Incident_reports
 ?

Alex
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] 404 errors

2014-05-30 Thread Greg Grossmeier
quote name=ENWP Pine date=2014-05-30 time=15:30:59 -0700
 I think I saw somewhere that there is a list of postmortems for tech ops 
 disruptions
 that includes reports like this one. Do you know where the list is? I tried a 
 web search
 and couldn't find a copy of this report outside of this email list.

https://wikitech.wikimedia.org/wiki/Incident_documentation

With this specific one at:
https://wikitech.wikimedia.org/wiki/Incident_documentation/20140529-appservers

-- 
| Greg GrossmeierGPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @gregA18D 1138 8E47 FAC8 1C7D |

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Using Composer to manage libraries for mediawiki/core on Jenkins and Foundation cluster

2014-05-30 Thread Bryan Davis
On Thu, May 29, 2014 at 11:27 AM, Bryan Davis bd...@wikimedia.org wrote:
 My logging changes [0][1][2][3] are getting closer to being mergeable
 (the first has already been merged). Tony Thomas' Swift Mailer change
 [4] is also progressing. Both sets of changes introduce the concept of
 specifying external library dependencies, both required and suggested,
 to mediawiki/core.git via composer.json. Composer can be used by
 people directly consuming the git repository to install and manage
 these dependencies. I gave a example set of usage instructions in the
 commit message for my patch that introduced the dependency on PSR-3
 [0]. In the production cluster, on Jenkins job runners and in the
 tarball releases we will want a different solution.

 My idea of how to deal with this is to create a new gerrit repository
 (mediawiki/core/vendor.git?) that contains a composer.json file
 similar to the one I had in patch set 7 of my first logging patch [5].
 This composer.json file would be used to tell Composer the exact
 versions of libraries to download. Someone would manually run Composer
 in a checkout of this repository and then commit the downloaded
 content, composer.lock file and generated autoloader.php to the
 repository for review. We would then be able to branch and use this
 repository as git submodule in the wmf/1.2XwmfY branches that are
 deployed to production and ensure that it is checked out along with
 mw-core on the Jenkins nodes. By placing this submodule at $IP/vendor
 in mw-core we would be mimicking the configuration that direct users
 of Composer will experience. WebStart.php already includes
 $IP/vendor/autoload.php when present so integration with the rest of
 wm-core should follow from that.

The proposed repository has been created [0] and has an initial set of
proposed additions pending review [1].

There is still some ongoing internal discussion about the best way to
verify that included libraries are needed and that security patches
are watched for and applied from upstream. Chris Steipp is awesome,
but it would be quite an additional burden to hang these thousands of
new lines of code around his neck as yet another burden to bear. One
current theory is that need should be determined by the RFC process
and security support would need to be provided by a sponsor of the
library.


[0]: https://gerrit.wikimedia.org/r/#/admin/projects/mediawiki/core/vendor
[1]: 
https://gerrit.wikimedia.org/r/#/projects/mediawiki/core/vendor,dashboards/default

Bryan
-- 
Bryan Davis  Wikimedia Foundationbd...@wikimedia.org
[[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
irc: bd808v:415.839.6885 x6855

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Jettisoning our history?

2014-05-30 Thread Chad
All,

When we end up moving MW core to Phabricator I'd like us to jettison our
history. The
repo is large and clunky and not conducive to development. It's only going
to grow in
size unless we do something to cut back on the junk we're carrying around.

This is my ideal Phabby world:

mediawiki (no /core, that was always redundant)
mediawiki/i18n (as submodule)
mediawiki/historical (full history, previous + all mediawiki going forward)

If we jettison all our history we can get the repo size down to a 30-35MB
which is
very nice. Doing it on Gerrit isn't worthwhile because it'd basically break
everything.
We're gonna be breaking things with the move to Phab...it's then or never
if we're
going to do this.

Being able to stitch with the old history would be nice and I think might
be doable with
git-replace. If not, I still think it's worth discussing for developer and
deployer productivity.

Thoughts?

-Chad
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Jettisoning our history?

2014-05-30 Thread C. Scott Ananian
Please, no.  I regularly use git blame and git annotate on core to
figure out why certain features are the way they are.
  --scott

-- 
(http://cscott.net)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Jettisoning our history?

2014-05-30 Thread Daniel Friesen
On 2014-05-30, 7:25 PM, Chad wrote:
 All,

 When we end up moving MW core to Phabricator I'd like us to jettison our
 history. The
 repo is large and clunky and not conducive to development. It's only going
 to grow in
 size unless we do something to cut back on the junk we're carrying around.

 This is my ideal Phabby world:

 mediawiki (no /core, that was always redundant)
 mediawiki/i18n (as submodule)
 mediawiki/historical (full history, previous + all mediawiki going forward)

 If we jettison all our history we can get the repo size down to a 30-35MB
 which is
 very nice. Doing it on Gerrit isn't worthwhile because it'd basically break
 everything.
 We're gonna be breaking things with the move to Phab...it's then or never
 if we're
 going to do this.

 Being able to stitch with the old history would be nice and I think might
 be doable with
 git-replace. If not, I still think it's worth discussing for developer and
 deployer productivity.

 Thoughts?

 -Chad
Eliminating localization updates from repos is always nice, I hate it
when they fill up a repo's history. However using a submodule doesn't
fix that it just replaces i18n file commits with a submodule update commit.
Personally I've always wanted to switch to JSON messages (^_^ yay we
already did that), drop messages for all language besides the canonical
texts (en and qqq), then integrate the automatic fetching of messages
for other languages into MediaWiki (tarballs releases can be bundled
with a snapshot of the data for intranets, etc...; ExtensionDistributor
can do the same; and thanks to things like localization caches we won't
even need to require filesystem write to do this). Especially for
extensions, the i18n commits for our extensions completely drown out the
code contributions.

However I don't really like the thought of dropping the history. We're
using git, switching to phabricator shouldn't actually break anything
(except custom things like `git review`). git {clone|fetch|pull} won't
work from the old url anymore, but all people have to do is `git remote
set-url {new url}` or `git remote add {new remote} {new url}` and voila,
they pick up right where they left off, this time with Phabricator
backing git instead of Gerrit.

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Jettisoning our history?

2014-05-30 Thread Chad
On Fri, May 30, 2014 at 7:34 PM, C. Scott Ananian canan...@wikimedia.org
wrote:

 Please, no.  I regularly use git blame and git annotate on core to
 figure out why certain features are the way they are.
   --scott


git-blame should respect git-replace'd objects and would enable you
to add the full-history version as a second remote and see the full
history.

Again, this is all in theory.

-Chad
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Jettisoning our history?

2014-05-30 Thread Chad
On Fri, May 30, 2014 at 7:50 PM, Daniel Friesen dan...@nadir-seen-fire.com
wrote:

 Eliminating localization updates from repos is always nice, I hate it
  when they fill up a repo's history. However using a submodule doesn't
 fix that it just replaces i18n file commits with a submodule update commit.


I guess we see different problems them. I don't care about the
commits themselves, just the amount of data they contain :)

Submodule updates are always going to be lighter-weight.


 Personally I've always wanted to switch to JSON messages (^_^ yay we
 already did that), drop messages for all language besides the canonical
 texts (en and qqq), then integrate the automatic fetching of messages
 for other languages into MediaWiki (tarballs releases can be bundled
 with a snapshot of the data for intranets, etc...; ExtensionDistributor
 can do the same; and thanks to things like localization caches we won't
 even need to require filesystem write to do this). Especially for
 extensions, the i18n commits for our extensions completely drown out the
 code contributions.


This would also be ok to me.


 However I don't really like the thought of dropping the history. We're
 using git, switching to phabricator shouldn't actually break anything
 (except custom things like `git review`). git {clone|fetch|pull} won't
 work from the old url anymore, but all people have to do is `git remote
 set-url {new url}` or `git remote add {new remote} {new url}` and voila,
 they pick up right where they left off, this time with Phabricator
 backing git instead of Gerrit.


I know we can carry the history (and we should, for referencing), I'm
wondering if we *should* keep the history on the repos that the average
developer uses for writing patches and deploying.

(The deployment thing is just a nice benefit. I probably will do this in
deployment regardless of what we do on the canonical repo)

I've yet to find a git repo out there that's as large as ours that doesn't
ship
large blobs around (which we don't). Some of this is due to the nasty blobs
in our history. Some of this is due to the ever-increasing number of i18n
commit blobs.

-Chad
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l