Re: [Wikitech-l] ArchCom Radar, 2017-06-07

2017-06-08 Thread Victoria Coleman
We should probably start calling this the TechCom Radar. Are there other places 
that we need to do “rebranding” once we are ready?

Best,

Victoria 
> On Jun 8, 2017, at 6:45 AM, Daniel Kinzler  
> wrote:
> 
> Hi all!
> 
> Here are the minutes from this week's ArchCom meeting. You can also find the
> minutes at .
> 
> See also the ArchCom status page at
>  and the RFC 
> board
> .
> 
> 
> Here are the minutes, for your convenience:
> 
> * New RFC: “Move most of MediaWiki within a /core folder”
> https://phabricator.wikimedia.org/T167038 and “Make MediaWiki Core a 
> Dependency
> of MediaWiki” https://phabricator.wikimedia.org/T166956. The initial reaction
> from ArchCom after very brief review was: “sounds scary, why do we need 
> this?”.
> 
> * Kevin Smith to take on a more active role in RFC discussions and other
> ArchCom-related processes.
> 
> * Discussion is ongoing on the charter proposal
> https://www.mediawiki.org/wiki/Architecture_committee/Charter
> 
> * The RFC for adding an index on rc_this_oldid is entering the last week of 
> the
> last call period https://phabricator.wikimedia.org/T139012
> 
> * RFC discussion next week: “Reading List”
> https://phabricator.wikimedia.org/T164990. A brief discussion by ArchCom
> revealed that this is distinct from the idea of multiple watchlists since it’s
> cross-wiki, and is less complex than cross-wiki watchlist(s) because no
> notifications or feeds are needed. As always, the discussion will take place 
> in
> the IRC channel #wikimedia-office on Wednesday 21:00 UTC (2pm PDT, 23:00 
> CEST).
> 
> -- 
> Daniel Kinzler
> Principal Platform Engineer
> 
> Wikimedia Deutschland
> Gesellschaft zur Förderung Freien Wissens e.V.
> 


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What does a build process look like for a MediaWiki extension repository?

2017-06-08 Thread David Barratt
Symfony is going to recommend the use of `make` starting with
version 4, so it might be something worth exploring:
http://fabien.potencier.org/symfony4-best-practices.html#makefile

(I have no opinion on the matter)
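
Whatever tool ends up driving it, the build step itself can stay small. As a
purely illustrative sketch (the file and package names are made up), this is
the kind of Node script a `make` target or npm script could invoke to copy an
npm-installed library into an extension's resources/ folder:

// build.js -- hypothetical post-install step for an extension.
// Copies a library fetched from npm into resources/lib/, the sort of
// artifact that today tends to be committed to git by hand.
const fs = require('fs');
const path = require('path');

const src = path.join('node_modules', 'some-lib', 'dist', 'some-lib.js');
const dest = path.join('resources', 'lib', 'some-lib.js');

fs.mkdirSync(path.dirname(dest), { recursive: true });
fs.copyFileSync(src, dest);
console.log('Copied ' + src + ' -> ' + dest);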

On Wed, Jun 7, 2017 at 5:48 PM, Bryan Davis  wrote:

> On Wed, Jun 7, 2017 at 2:29 PM, Brion Vibber 
> wrote:
> > On Wed, Jun 7, 2017 at 10:18 AM, Joaquin Oltra Hernandez <
> > jhernan...@wikimedia.org> wrote:
> >
> >> *Context*
> >>
> >> We'd like to have a build script/process for an extension so that we can
> >> run commands to install dependencies and perform optimizations
> >> on the extension sources. For example, on front-end sources.
> >>
> >> Some examples could be:
> >>
> >>- Installing libraries from bower or npm and bundling them into the
> >>resources folder
> >>- Applying post-processing steps to CSS with something like PostCSS
> >>- Optimizing images
> >>
> >> We are aware of other projects that have build processes for building
> >> deployables, but not extensions.
> >> Such projects have different ways of dealing with this. A common way is
> >> to have a repository called /deploy, into which you pull from
> >>  and run the build scripts; that is the repository that gets
> >> deployed.
> >>
> >> *Current system*
> >>
> >> The current way we usually do this (if we do) is to run those build
> >> scripts/jobs on the developers' machines and commit the results into the
> >> git repository on master.
> >>
> >> With this system, if you don't enforce anything in CI, then build
> processes
> >> may be skipped (human error).
> >>
> >> If you enforce it (by running the process and comparing with what has
> been
> >> committed in CI) then patches merged to master that touch the same files
> >> will produce merge conflicts with existing open patches, forcing a
> >> rebase+rebuild on open patches every time one is merged on master.
> >>
> >> *Questions*
> >>
> >> Can we have a shared configuration/convention/system for having a build
> >> step on MediaWiki extensions?
> >>
> >>- So that a build process is run
> >>   - on CI jobs that require production assets like the selenium jobs
> >>   - on the deployment job that deploys the extension to the beta
> >>   cluster and to production
> >>
> >> What would it look like? Are any extensions doing a pre-deployment build
> >> step?
> >
> >
> > For JS dependencies, image optimizations etc the state of the art still
> > seems to be to have a local one-off script and commit the build artifacts
> > into the repo. (For instance TimedMediaHandler fetches some JS libs via
> npm
> > and copies/patches them into the resources/ dir.)
> >
> > For PHP deps we've got composer dependency installation for extensions,
> so
> > it seems like there's an opportunity to do other build steps in this
> > stage...
> >
> > Not sure offhand if that can be snuck into composer directly or if we'd
> > need to replace the "run composer" step with "run this script, which runs
> > composer and also does other build steps".
>
> When I first joined the Foundation and started working with MediaWiki
> on a daily basis I wondered about the lack of a build process. At past
> jobs I had built PHP application environments that had a "run from
> version control" mode for local development, but always included a
> build step for packaging and deployment that did the sort of things
> that Joaquin is talking about. When I was in the Java world Ant and
> then later Maven2 were the tools of choice for this work. Later in a
> PHP shop I selected Phing as the build tool and even committed some
> enhancements upstream to make it work nicer with the type of projects
> I was managing.
>
> I helped get Composer use into MediaWiki core, and that added a
> post-deploy build step for MediaWiki, but one that is pretty limited in
> what it can do easily. Composer is mostly a tool for installing PHP
> library dependencies. Most of the attempts I have seen to make it do
> things beyond that are clunky uses of the tool. I can certainly still
> see the possible benefit of having a full-fledged build step for core,
> skins, and extensions. It is something that should be thought about a
> bit before diving right into an implementation, though. One thing to
> consider is whether it would be best to have a packaging step that
> produces a tarball or similar artifact that can be dropped into a
> runtime environment for MediaWiki, or whether it would be better to
> have a unified post-deploy build step that operates across MediaWiki
> core and the entire collection of optional extensions and skins
> deployed to create a particular wiki.
>
> The Foundation's production deployment use case will always be an
> anomaly. It should be considered, but really in my opinion only to
> ensure that nothing absolutely requires external network access in the
> final build. For Composer this turned out to be as easy as maintaining
> a 

Re: [Wikitech-l] Setting up multiple Parsoid servers behind load balancer

2017-06-08 Thread C. Scott Ananian
https://www.mediawiki.org/wiki/Parsoid/Setup/RESTBase

Cassandra is optional; for a small deployment the SQLite backend is
probably sufficient.  Cassandra is the "distributed DB" part, so if you
used Cassandra you could set up multiple RESTBase clients.

RESTBase is for performance, not availability.  It moves parsing to
"after the page is saved" instead of "when the user hits edit", which makes
the editing interface feel much snappier.

None of MediaWiki, Parsoid, or RESTBase stores any state other than in its
DB (MySQL for MediaWiki, Cassandra/SQLite for RESTBase).  We use LVS for
load balancing (https://wikitech.wikimedia.org/wiki/LVS).  All the
MediaWikis point at LVS, which chooses an appropriate RESTBase.  All the
RESTBases point at LVS, which chooses an appropriate Parsoid.  All the
Parsoids point at LVS, which will give them an arbitrary MediaWiki for
fetching wikitext, etc.
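
In that kind of setup each service's config just points at the load
balancer's address instead of at a specific peer. As a purely illustrative
sketch (host name and port are made up, reusing the setMwApi call from the
snippets in this thread), the Parsoid side would look something like:

// Hypothetical localsettings.js fragment: Parsoid reaches "the wiki"
// through the load balancer, not through any one app server directly.
parsoidConfig.setMwApi( {
    uri: 'http://lb.example.internal:8081/demo/api.php',  // LB address (made up)
    domain: 'demo',
    prefix: 'demo'
} );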
  --scott

On Thu, Jun 8, 2017 at 10:10 AM, James Montalvo 
wrote:

> I've read through the documentation I think you're talking about. It's kind
> of hard to determine where to start since the docs are spread out between
> multiple VE, Parsoid and RESTBase pages. Installing RESTBase is, as you
> say, straightforward (git clone, npm install, basically). Configuring is
> not clear to me, and without clear docs it's the kind of thing that takes
> hours of trial and error. Also some parts mention I need Cassandra but it's
> not clear if that's a hard requirement.
>
> If I want a highly available setup with multiple app servers and multiple
> Parsoid servers, would I install RESTBase alongside each Parsoid? How does
> communication between the multiple app and RB/Parsoid servers get
> configured? I feel like I'll be back in the same load balancing situation.
>
> --James
>
> On Jun 8, 2017 7:43 AM, "C. Scott Ananian"  wrote:
>
> RESTBase actually adds a lot of immediate performance, since it lets VE
> load the editable representation directly from cache, instead of requiring
> the editor to wait for Parsoid to parse the page before it can be edited.
> I documented the RESTBase install; it shouldn't actually be any more
> difficult than Parsoid.  They both use the same service runner framework
> now.
>
> At any rate: in your configurations you have URL and HTTPProxy set to the
> exact same string.  This is almost certainly not right.  I believe if you
> just omit the proxy lines entirely from the configuration you'll find
> things work as you expect.
>  --scott
>
> On Wed, Jun 7, 2017 at 11:30 PM, James Montalvo 
> wrote:
>
> > Setting up RESTBase is very involved. I'd really prefer not to add that
> > complexity at this time. Also I'm not sure at my scale RESTBase would
> > provide much performance benefit (though I don't know much about it so
> > that's just a hunch). The parsoid and VE configs have fields for proxy
> (as
> > shown in my snippets), so it seems like running them this way is
> intended.
> > Am I wrong?
> >
> > Thanks,
> > James
> >
> > On Jun 7, 2017 8:12 PM, "C. Scott Ananian" 
> wrote:
> >
> > > I think in general the first thing you should do for performance is set
> > up
> > > restbase in front of parsoid? Caching the parsoid results will be
> faster
> > > than running multiple parsoids in parallel.  That would also match the
> > wmf
> > > configuration more closely, which would probably help us help you.  I
> > wrote
> > > up instructions for configuring restbase on the VE and Parsoid wiki
> > pages.
> > > As it turns out I updated these today to use VRS configuration. Let me
> > know
> > > if you run into trouble, perhaps some further minor updates are
> > necessary.
> > >   --scott
> > >
> > > On Jun 7, 2017 6:26 PM, "James Montalvo" 
> > wrote:
> > >
> > > > I'm trying to set up two Parsoid servers to play nicely with two
> > MediaWiki
> > > > application servers and am having some issues. I have no problem
> > getting
> > > > things working with Parsoid on a single app server, or multiple
> Parsoid
> > > > servers being used by a single app server, but ran into issues when I
> > > > increased to multiple app servers. To try to get this working I
> > > > started making the app and Parsoid servers communicate through my load
> > balancer.
> > > So
> > > > an overview of my config is:
> > > >
> > > > Load balancer = 192.168.56.63
> > > >
> > > > App1 = 192.168.56.80
> > > > App2 = 192.168.56.60
> > > >
> > > > Parsoid1 = 192.168.56.80
> > > > Parsoid2 = 192.168.56.60
> > > >
> > > > Note, App1 and Parsoid1 are the same server, and App2 and Parsoid2
> are
> > > the
> > > > same server. I can only spin up so many VMs on my laptop.
> > > >
> > > > The load balancer (HAProxy) is configured as follows:
> > > > * 80 forwards to 443
> > > > * 443 forwards to App1 and App2 port 8080
> > > > * 8081 forwards to App1 and App2 port 8080 (this will be a private
> > > network
> > > > 

Re: [Wikitech-l] Setting up multiple Parsoid servers behind load balancer

2017-06-08 Thread James Montalvo
I've read through the documentation I think you're talking about. It's kind
of hard to determine where to start since the docs are spread out between
multiple VE, Parsoid and RESTBase pages. Installing RESTBase is, as you
say, straightforward (git clone, npm install, basically). Configuring is
not clear to me, and without clear docs it's the kind of thing that takes
hours of trial and error. Also some parts mention I need Cassandra but it's
not clear if that's a hard requirement.

If I want a highly available setup with multiple app servers and multiple
Parsoid servers, would I install RESTBase alongside each Parsoid? How does
communication between the multiple app and RB/Parsoid servers get
configured? I feel like I'll be back in the same load balancing situation.

--James

On Jun 8, 2017 7:43 AM, "C. Scott Ananian"  wrote:

RESTBase actually adds a lot of immediate performance, since it lets VE
load the editable representation directly from cache, instead of requiring
the editor to wait for Parsoid to parse the page before it can be edited.
I documented the RESTBase install; it shouldn't actually be any more
difficult than Parsoid.  They both use the same service runner framework
now.

At any rate: in your configurations you have URL and HTTPProxy set to the
exact same string.  This is almost certainly not right.  I believe if you
just omit the proxy lines entirely from the configuration you'll find
things work as you expect.
 --scott

On Wed, Jun 7, 2017 at 11:30 PM, James Montalvo 
wrote:

> Setting up RESTBase is very involved. I'd really prefer not to add that
> complexity at this time. Also I'm not sure at my scale RESTBase would
> provide much performance benefit (though I don't know much about it so
> that's just a hunch). The parsoid and VE configs have fields for proxy (as
> shown in my snippets), so it seems like running them this way is intended.
> Am I wrong?
>
> Thanks,
> James
>
> On Jun 7, 2017 8:12 PM, "C. Scott Ananian"  wrote:
>
> > I think in general the first thing you should do for performance is set
> up
> > restbase in front of parsoid? Caching the parsoid results will be faster
> > than running multiple parsoids in parallel.  That would also match the
> wmf
> > configuration more closely, which would probably help us help you.  I
> wrote
> > up instructions for configuring restbase on the VE and Parsoid wiki
> pages.
> > As it turns out I updated these today to use VRS configuration. Let me
> know
> > if you run into trouble, perhaps some further minor updates are
> necessary.
> >   --scott
> >
> > On Jun 7, 2017 6:26 PM, "James Montalvo" 
> wrote:
> >
> > > I'm trying to set up two Parsoid servers to play nicely with two
> MediaWiki
> > > application servers and am having some issues. I have no problem
> getting
> > > things working with Parsoid on a single app server, or multiple
Parsoid
> > > servers being used by a single app server, but ran into issues when I
> > > increased to multiple app servers. To try to get this working I
> > > started making the app and Parsoid servers communicate through my load
> balancer.
> > So
> > > an overview of my config is:
> > >
> > > Load balancer = 192.168.56.63
> > >
> > > App1 = 192.168.56.80
> > > App2 = 192.168.56.60
> > >
> > > Parsoid1 = 192.168.56.80
> > > Parsoid2 = 192.168.56.60
> > >
> > > Note, App1 and Parsoid1 are the same server, and App2 and Parsoid2 are
> > the
> > > same server. I can only spin up so many VMs on my laptop.
> > >
> > > The load balancer (HAProxy) is configured as follows:
> > > * 80 forwards to 443
> > > * 443 forwards to App1 and App2 port 8080
> > > * 8081 forwards to App1 and App2 port 8080 (this will be a private
> > network
> > > connection later)
> > > * 8001 forwards to Parsoid1 and Parsoid2 port 8000 (also will be
> private)
> > >
> > > On App1/Parsoid1 I can run `curl 192.168.56.63:8001` and get the
> > > appropriate response from Parsoid. I can run `curl 192.168.56.63:8081`
> > and
> > > get the appropriate response from MediaWiki. The same is true for both
> on
> > > App2/Parsoid2. So the servers can get the info they need from the
> > services.
> > >
> > > Currently I'm getting the error "Error loading data from server: 500:
> > > docserver-http: HTTP 500. Would you like to retry?" when attempting to
> > use
> > > Visual Editor. I've tried various different settings and have not
> always
> > > gotten that specific error, but am getting it with the settings I
> > currently
> > > have in localsettings.js and LocalSettings.php (shown below in this
> > email).
> > > Removing the proxy config lines from these settings gave slightly
> better
> > > results. I did not get the 500 error; instead it would sometimes work
> > > after a very long time. It also may have been throwing errors in the
> > > parsoid log (with debug on). I have those logs saved if they help. I'm
> > > hoping someone 

Re: [Wikitech-l] ArchCom Radar, 2017-06-07

2017-06-08 Thread Toby Negrin
Hi Daniel -- I always read these and find them very useful but realized I
had never let you know. So thank you for sending them out :)

-Toby

On Thu, Jun 8, 2017 at 6:45 AM, Daniel Kinzler 
wrote:

> Hi all!
>
> Here are the minutes from this week's ArchCom meeting. You can also find
> the
> minutes at .
>
> See also the ArchCom status page at
>  and the
> RFC board
> .
>
>
> Here are the minutes, for your convenience:
>
> * New RFC: “Move most of MediaWiki within a /core folder”
> https://phabricator.wikimedia.org/T167038 and “Make MediaWiki Core a
> Dependency
> of MediaWiki” https://phabricator.wikimedia.org/T166956. The initial
> reaction
> from ArchCom after very brief review was: “sounds scary, why do we need
> this?”.
>
> * Kevin Smith to take on a more active role in RFC discussions and other
> ArchCom-related processes.
>
> * Discussion is ongoing on the charter proposal
> https://www.mediawiki.org/wiki/Architecture_committee/Charter
>
> * The RFC for adding an index on rc_this_oldid is entering the last week
> of the
> last call period https://phabricator.wikimedia.org/T139012
>
> * RFC discussion next week: “Reading List”
> https://phabricator.wikimedia.org/T164990. A brief discussion by ArchCom
> revealed that this is distinct from the idea of multiple watchlists since
> it’s
> cross-wiki, and is less complex than cross-wiki watchlist(s) because no
> notifications or feeds are needed. As always, the discussion will take
> place in
> the IRC channel #wikimedia-office on Wednesday 21:00 UTC (2pm PDT, 23:00
> CEST).
>
> --
> Daniel Kinzler
> Principal Platform Engineer
>
> Wikimedia Deutschland
> Gesellschaft zur Förderung Freien Wissens e.V.
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] ArchCom Radar, 2017-06-07

2017-06-08 Thread Daniel Kinzler
Hi all!

Here are the minutes from this week's ArchCom meeting. You can also find the
minutes at .

See also the ArchCom status page at
 and the RFC board
.


Here are the minutes, for your convenience:

* New RFC: “Move most of MediaWiki within a /core folder”
https://phabricator.wikimedia.org/T167038 and “Make MediaWiki Core a Dependency
of MediaWiki” https://phabricator.wikimedia.org/T166956. The initial reaction
from ArchCom after very brief review was: “sounds scary, why do we need this?”.

* Kevin Smith to take on a more active role in RFC discussions and other
ArchCom-related processes.

* Discussion is ongoing on the charter proposal
https://www.mediawiki.org/wiki/Architecture_committee/Charter

* The RFC for adding an index on rc_this_oldid is entering the last week of the
last call period https://phabricator.wikimedia.org/T139012

* RFC discussion next week: “Reading List”
https://phabricator.wikimedia.org/T164990. A brief discussion by ArchCom
revealed that this is distinct from the idea of multiple watchlists since it’s
cross-wiki, and is less complex than cross-wiki watchlist(s) because no
notifications or feeds are needed. As always, the discussion will take place in
the IRC channel #wikimedia-office on Wednesday 21:00 UTC (2pm PDT, 23:00 CEST).

-- 
Daniel Kinzler
Principal Platform Engineer

Wikimedia Deutschland
Gesellschaft zur Förderung Freien Wissens e.V.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Setting up multiple Parsoid servers behind load balancer

2017-06-08 Thread C. Scott Ananian
RESTBase actually adds a lot of immediate performance, since it lets VE
load the editable representation directly from cache, instead of requiring
the editor to wait for Parsoid to parse the page before it can be edited.
I documented the RESTBase install; it shouldn't actually be any more
difficult than Parsoid.  They both use the same service runner framework
now.

At any rate: in your configurations you have URL and HTTPProxy set to the
exact same string.  This is almost certainly not right.  I believe if you
just omit the proxy lines entirely from the configuration you'll find
things work as you expect.
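
Concretely, a minimal sketch of what I mean, based on the snippet you posted
(whether the URI should point at an app server or at the load balancer is a
separate question):

// Hypothetical corrected localsettings.js fragment: keep the MW API URI
// and drop the proxy entry instead of pointing it back at the same host.
parsoidConfig.setMwApi( {
    uri: 'http://192.168.56.80:8081/demo/api.php',
    domain: 'demo',
    prefix: 'demo'
    // no "proxy" key: set one only if requests really go via a forward proxy
} );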
 --scott

On Wed, Jun 7, 2017 at 11:30 PM, James Montalvo 
wrote:

> Setting up RESTBase is very involved. I'd really prefer not to add that
> complexity at this time. Also I'm not sure at my scale RESTBase would
> provide much performance benefit (though I don't know much about it so
> that's just a hunch). The parsoid and VE configs have fields for proxy (as
> shown in my snippets), so it seems like running them this way is intended.
> Am I wrong?
>
> Thanks,
> James
>
> On Jun 7, 2017 8:12 PM, "C. Scott Ananian"  wrote:
>
> > I think in general the first thing you should do for performance is set
> up
> > restbase in front of parsoid? Caching the parsoid results will be faster
> > than running multiple parsoids in parallel.  That would also match the
> wmf
> > configuration more closely, which would probably help us help you.  I
> wrote
> > up instructions for configuring restbase on the VE and Parsoid wiki
> pages.
> > As it turns out I updated these today to use VRS configuration. Let me
> know
> > if you run into trouble, perhaps some further minor updates are
> necessary.
> >   --scott
> >
> > On Jun 7, 2017 6:26 PM, "James Montalvo" 
> wrote:
> >
> > > I'm trying to set up two Parsoid servers to play nicely with two
> MediaWiki
> > > application servers and am having some issues. I have no problem
> getting
> > > things working with Parsoid on a single app server, or multiple Parsoid
> > > servers being used by a single app server, but ran into issues when I
> > > increased to multiple app servers. To try to get this working I
> > > started making the app and Parsoid servers communicate through my load
> balancer.
> > So
> > > an overview of my config is:
> > >
> > > Load balancer = 192.168.56.63
> > >
> > > App1 = 192.168.56.80
> > > App2 = 192.168.56.60
> > >
> > > Parsoid1 = 192.168.56.80
> > > Parsoid2 = 192.168.56.60
> > >
> > > Note, App1 and Parsoid1 are the same server, and App2 and Parsoid2 are
> > the
> > > same server. I can only spin up so many VMs on my laptop.
> > >
> > > The load balancer (HAProxy) is configured as follows:
> > > * 80 forwards to 443
> > > * 443 forwards to App1 and App2 port 8080
> > > * 8081 forwards to App1 and App2 port 8080 (this will be a private
> > network
> > > connection later)
> > > * 8001 forwards to Parsoid1 and Parsoid2 port 8000 (also will be
> private)
> > >
> > > On App1/Parsoid1 I can run `curl 192.168.56.63:8001` and get the
> > > appropriate response from Parsoid. I can run `curl 192.168.56.63:8081`
> > and
> > > get the appropriate response from MediaWiki. The same is true for both
> on
> > > App2/Parsoid2. So the servers can get the info they need from the
> > services.
> > >
> > > Currently I'm getting the error "Error loading data from server: 500:
> > > docserver-http: HTTP 500. Would you like to retry?" when attempting to
> > use
> > > Visual Editor. I've tried various different settings and have not
> always
> > > gotten that specific error, but am getting it with the settings I
> > currently
> > > have in localsettings.js and LocalSettings.php (shown below in this
> > email).
> > > Removing the proxy config lines from these settings gave slightly
> better
> > > results. I did not get the 500 error; instead it would sometimes work
> > > after a very long time. It also may have been throwing errors in the
> > > parsoid log (with debug on). I have those logs saved if they help. I'm
> > > hoping someone can just point out some misconfiguration, though.
> > >
> > > Here are snippets of my config files:
> > >
> > > On App1/Parsoid1, relevant localsettings.js:
> > >
> > > parsoidConfig.setMwApi({
> > >
> > > uri: 'http://192.168.56.80:8081/demo/api.php',
> > > proxy: { uri: 'http://192.168.56.80:8081/' },
> > > domain: 'demo',
> > > prefix: 'demo'
> > > } );
> > >
> > > parsoidConfig.serverInterface = '192.168.56.80';
> > >
> > >
> > > On App2/Parsoid2, relevant localsettings.js:
> > >
> > > parsoidConfig.setMwApi({
> > >
> > > uri: 'http://192.168.56.80:8081/demo/api.php',
> > > proxy: { uri: 'http://192.168.56.80:8081/' },
> > >
> > > domain: 'demo',
> > > prefix: 'demo'
> > >
> > > } );
> > >
> > > parsoidConfig.serverInterface = '192.168.56.60';
> > >
> > >
> > > On App1/Parsoid1, relevant LocalSettings.php:
> > >
> > > 

Re: [Wikitech-l] Changes to Product and Technology departments at the Foundation

2017-06-08 Thread Derk-Jan Hartman
I'm really looking forward to these changes. I have (like many) long been
unsatisfied with the previous 2015 reorg. I was also a firm believer in
keeping it in place over the last year, to make sure people got time to
recover from the craziness. During the WM conf in Berlin, however, I had many
hallway conversations where I relayed my opinion that, in order to improve
the performance of the tech teams, it was time to start fixing the broken
structure.

These seem like steps in the right direction, and I hope that with these
changes the teams can be more efficient and adapt where needed. I'm also
glad to hear that so many employees were consulted beforehand. It is an
unusual organisation, and taking in that diverse input is highly critical.

Thank you


On Wed, Jun 7, 2017 at 11:15 PM, Toby Negrin  wrote:

> Hi everybody,
>
> We have made some changes to our Product and Technology departments which
> we are excited to tell you about. When Wes Moran, former Vice President of
> Product, left the Wikimedia Foundation in May, we took the opportunity to
> review the organization and operating principles that were guiding Product
> and Technology. Our objectives were to improve our engagement with the
> community during product development, develop a more audience-based
> approach to building products, and create as efficient a pipeline as
> possible between an idea and its deployment. We also wanted an approach
> that would better prepare our engineering teams to plan around the upcoming
> movement strategic direction. We have finished this process and have some
> results to share with you.
>
> Product is now known as Audiences, and other changes in that department
>
> In order to more intentionally commit to a focus on the needs of users, we
> are making changes to the names of teams and the department (and will be
> using these names throughout the rest of this update):
>
> - The Product department will be renamed the Audiences department;
> - The Editing team will now be called the Contributors team;
> - The Reading team will be renamed the Readers team.
>
> You might be asking: what does “audience” mean in this context? We define
> it as a specific group of people who will use the products we build. For
> example, “readers” is one audience. “Contributors” is another. Designing
> products around who will be utilizing them most, rather than what we would
> like those products to do, is a best practice in product development. We
> want our organizational structure to support that approach.
>
> We are making five notable changes to the Audiences department structure.
>
> The first is that we are migrating folks working on search and discovery
> from the stand-alone Discovery team into the Readers team and Technology
> department, respectively. Specifically, the team working on our search
> backend infrastructure will move to Technology, where they will report to
> Victoria. The team working on maps, the search experience, and the project
> entry portals (such as Wikipedia.org) will join the Readers team. This
> realignment will allow us to build more integrated experiences and
> knowledge-sharing for the end user.
>
> The second is that the Fundraising Tech team will also move to the
> Technology department. This move recognizes that their core work is
> primarily platform development and integration, and brings them into closer
> cooperation with their peers in critical functions including MediaWiki
> Platform, Security, Analytics, and Operations.
>
> The Team Practices group (TPG) will also be undergoing some changes.
> Currently, TPG supports both specific teams in Product, as well as
> supporting broader organizational development. Going forward, those TPG
> members directly supporting feature teams will be embedded in their
> respective teams in the Audiences or Technology departments. The TPG
> members who were primarily focused on organizational health and development
> will move to the Talent & Culture department, where they will report to
> Anna Stillwell.
>
> These three changes lead to the fourth, which is the move from four
> “audience” verticals in the department (Reading, Editing, Discovery, and
> Fundraising Tech, plus Team Practices) to three: Readers, Contributors, and
> Community Tech. This structure is meant to streamline our focus on the
> people we serve with our feature and product development, increase team
> accountability and ownership over their work, allow Community Tech to
> maintain its unique, effective, and multi-audiences workflow, and better
> integrate support directly where teams need it most.
>
> One final change: in the past we have had a design director. We recognize
> that design is critical to creating exceptional experiences as a
> contributor or a reader, so we’re bringing that role back. The director for
> design will report to the interim Vice President of Product. The Design
> Research function, currently under the