[Wikitech-l] [offtopic] interesting read on Android app architecture

2016-02-26 Thread Brian Gerstle
And general architecture:

http://devblog.songkick.com/2016/02/25/ingredients-for-a-healthy-android-codebase/

Post links to a presentation as well. Their approach follows some best
practices I also strongly agree with, and they're using frameworks I would
probably look to use (Dagger & RxJava).



-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] CI and cross repository dependencies

2015-11-19 Thread Brian Gerstle
Nice work!  Is this at all related to upstream/downstream Jenkins jobs?

On Thu, Nov 19, 2015 at 12:38 PM, Sam Smith  wrote:

> This is great! A huge thanks to everyone involved.
>
> -Sam
>
> On Thu, Nov 19, 2015 at 5:26 PM, Amir E. Aharoni <
> amir.ahar...@mail.huji.ac.il> wrote:
>
> > Wonderful, thank you, this should be very useful.
> >
> >
> > --
> > Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
> > http://aharoni.wordpress.com
> > ‪“We're living in pieces,
> > I want to live in peace.” – T. Moore‬
> >
> > 2015-11-19 15:19 GMT+02:00 Antoine Musso :
> >
> > > Hello,
> > >
> > > We often have the case of a change to an extension depending on a
> > > pending patch to MediaWiki core.  I upgraded our CI scheduler, Zuul, a
> > > couple of weeks ago and it now supports marking dependencies even
> > > in different repositories.
> > >
> > >
> > > Why does it matter?  To make sure the dependency is fulfilled one
> > > usually either:
> > >
> > > * CR-2 the patch until the dependent change is merged
> > > * write a test that exercises the required patch in MediaWiki.
> > >
> > > With the first solution (which lacks a test), once both are merged,
> > > nothing prevents one from cherry-picking a patch without its dependent
> > > patch, for example for MediaWiki minor releases or Wikimedia deployment
> > > branches.
> > >
> > > When a test covers the dependency, it will fail until the dependent one
> > > is merged, which is rather annoying.
> > >
> > >
> > > Zuul now recognizes the header 'Depends-On' in git messages, similar to
> > > 'Change-Id' and 'Bug'.  'Depends-On' takes as parameter a change-id and
> > > multiple ones can be added.
> > >
> > > When a patch is proposed in Gerrit, Zuul looks for Gerrit changes
> > > matching the 'Depends-On' and verifies whether any are still open. In
> > > such a case, it will craft references for the open patches so all the
> > > dependencies can be tested as if they got merged.
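Concretely, a commit message using the new header might look like the
following sketch (the bug number and change-ids below are made-up
placeholders, not real changes):

```
Add feature X to the extension

Bug: T12345
Change-Id: I1234567890abcdef1234567890abcdef12345678
Depends-On: Iabcdef1234567890abcdef1234567890abcdef12
Depends-On: I0fedcba9876543210fedcba9876543210fedcba9
```

As described above, each `Depends-On:` line names the Change-Id of another
Gerrit change, and more than one may be listed.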
> > >
> > >
> > > Real world example
> > > --
> > >
> > > The ContentTranslation extension is tested with the Wikidata one and was
> > > not passing the test.  Wikidata created a patch and we did not want to
> > > merge it until we confirmed the ContentTranslation one was passing
> > > properly.
> > >
> > > The Wikidata patch is https://gerrit.wikimedia.org/r/#/c/252227/
> > > Change-Id: I0312c23628d706deb507b5534b868480945b6163
> > >
> > > On ContentTranslation we indicated the dependency:
> > > https://gerrit.wikimedia.org/r/#/c/252172/1..2//COMMIT_MSG
> > > + Depends-On: I0312c23628d706deb507b5534b868480945b6163
> > >
> > > Which is the Wikidata patch.
> > >
> > >
> > > Zuul:
> > > * received the patch for ContentTranslation
> > > * looked up the change-id and found the Wikidata patch
> > > * created git references in both repos to point to the proper patches
> > >
> > > Jenkins:
> > > * zuul-cloner cloned both repos and fetched the references created by
> > > the Zuul service
> > > * ran tests
> > > * SUCCESS
> > >
> > > That confirmed that the Wikidata patch actually fixed the issue for
> > > ContentTranslation. Hence we CR+2'd both and everything merged fine.
> > >
> > >
> > > Please take a moment to read upstream documentation:
> > >
> > >
> > >
> > > http://docs.openstack.org/infra/zuul/gating.html#cross-repository-dependencies
> > >
> > > Wikidata/ContentTranslation task:
> > > https://phabricator.wikimedia.org/T118263
> > >
> > >
> > > --
> > > Antoine "hashar" Musso
> > >
> > >
> > > ___
> > > Wikitech-l mailing list
> > > Wikitech-l@lists.wikimedia.org
> > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>




Re: [Wikitech-l] CI and cross repository dependencies

2015-11-19 Thread Brian Gerstle
>
> Jenkins as a huge collection of independent shell scripts
> that are waiting to be executed with appropriate parameters.


Right, that made me wonder... but then you said:

> OpenStack has a spec to get rid of Jenkins entirely and instead have
> Zuul create an Ansible playbook to run on a machine.  But really that
> is another topic.


Makes sense that we should drop Jenkins if we're not leveraging its
features.

Congrats again on contributing back to the OSS community!

On Thu, Nov 19, 2015 at 2:27 PM, Antoine Musso <hashar+...@free.fr> wrote:

> Le 19/11/2015 18:51, Brian Gerstle a écrit :
> > Nice work!
> >
> > Is this at all related to upstream/downstream Jenkins jobs?
>
> The Zuul system does not rely at all on Jenkins upstream/downstream. One
> can think of Jenkins as a huge collection of independent shell scripts
> that are waiting to be executed with appropriate parameters.
>
> OpenStack has a spec to get rid of Jenkins entirely and instead have
> Zuul create an Ansible playbook to run on a machine.  But really that
> is another topic.
>
>
> To elaborate a bit more:
>
> Gerrit does support dependencies between changes, but only in the same
> repository and branch. You can see that in the Gerrit web interface, and
> Gerrit will refuse to merge a change for which the parent is not merged
> yet.
>
> Zuul does the same but independently from Gerrit. It merely fills the
> gap left by Gerrit's lack of cross-repository dependencies.
>
> When a change is voted +2, it is enqueued in 'gate-and-submit'.  Zuul
> immediately verifies whether the dependencies are either merged or ahead
> in the queue, else it will reject the change and report back in Gerrit.
>
> So if you have change A and change B depending on A, you +2 A then B and
> the queue is:
>
>  A <-- B (depend on A)
>
> A is processed (no dependency)
> For B, Zuul finds the dependency A ahead and thus it is processed.
>
> If A fails the tests, B tests are automatically cancelled and the change
> dequeued.  Zuul knows B depends on A.
>
> Assuming all changes are merged by Zuul (via CR+2), Zuul's dependency
> handling comes on top of Gerrit and nicely enforces dependencies.
>
>
>
> --
> Antoine "hashar" Musso
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
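The gate-and-submit behaviour Antoine describes can be sketched roughly like
this. It is illustrative Python only, not Zuul's actual implementation; the
class and function names are made up for the example:

```python
# Rough sketch of Zuul's gate queue semantics as described above: a change
# merges only if every dependency has already merged (beforehand, or ahead
# of it in the queue) and its own tests pass; otherwise it is dequeued, and
# anything depending on it is knocked out in turn.

class Change:
    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)

def gate(queue, tests_pass, already_merged=()):
    """Process the gate queue in order. `tests_pass` maps change name to
    a boolean test result; returns (merged, dequeued) name lists."""
    merged = list(already_merged)
    dequeued = []
    for change in queue:
        if any(dep not in merged for dep in change.depends_on):
            dequeued.append(change.name)   # dependency missing or failed
        elif tests_pass[change.name]:
            merged.append(change.name)
        else:
            dequeued.append(change.name)   # its own tests failed
    return merged, dequeued
```

With a queue `A <-- B (depends on A)`, a failure of A automatically dequeues
B without running its tests, matching the behaviour described above.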




[Wikitech-l] mattermost.org (open source Slack alternative)

2015-10-22 Thread Brian Gerstle
Mattermost.org

Not only does it look pretty cool, but I really, really like their community
portal. It looks incredibly easy to file bugs, submit feature requests, vote
for and discuss existing feature requests, and find features ready to accept
volunteer contributions.

I'm curious what others think about it as a potential chat client* and as an
open source project in general.  Seems like it could be easy to evaluate,
especially if we use our fancy new kubernetes infrastructure to spin up a
container for one or more teams to try out.

Thoughts?

Brian

* FYI, there's already a feature request for an IRC bridge



Re: [Wikitech-l] Inhibitors for Mobile Content Service to use Parsoid output

2015-10-19 Thread Brian Gerstle
On Fri, Oct 16, 2015 at 5:59 PM, Gabriel Wicke <gwi...@wikimedia.org> wrote:

> On Fri, Oct 16, 2015 at 2:50 PM, Brian Gerstle <bgers...@wikimedia.org>
> wrote:
>
> > I've mentioned this idea before, but having a service which allowed you
> to
> > reliably get image thumbs for a given file at a specified width/height
> > would obviate the srcset.
>
>
>
> Our thumbs are already created on demand, based on the image width
> specified in the URL. Example for a 40px wide thumb:
>
>
> https://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Collage_of_Nine_Dogs.jpg/40px-Collage_of_Nine_Dogs.jpg
>
>
It's been mentioned elsewhere (I believe by Gergo) that these URLs aren't
stable, and can't be reliably constructed by clients. Is that still the
case?


>
> The corresponding Parsoid HTML contains the original height & width in data
> attributes:
>
> <img ... data-file-width="1665" data-file-height="1463" data-file-type="bitmap"
> height="228" width="260">
>
> Based on this information, it shouldn't be too hard to calculate 1.5x / 2x
> resolution thumb urls with a combination of multiplication & rounding.
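The multiplication and rounding Gabriel mentions could look something like
the sketch below. It assumes the `/thumb/` URL pattern shown earlier holds;
`srcset_widths` and `thumb_url` are hypothetical helpers, not an existing
API, and, as Brian notes, the URL pattern is not guaranteed stable for all
file names and types:

```python
def srcset_widths(display_width, file_width, factors=(1.5, 2.0)):
    """Candidate hi-DPI thumb widths, capped at the original file width
    (taken from the data-file-width attribute in the Parsoid HTML)."""
    return [min(round(display_width * f), file_width) for f in factors]

def thumb_url(base, filename, width):
    """Assemble a thumb URL following the pattern shown in the thread.
    This pattern is an assumption and known not to hold for every file."""
    return f"{base}/{filename}/{width}px-{filename}"
```

For the example image above (display width 260, file width 1665), this yields
390px and 520px candidates for the 1.5x and 2x slots.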
>
>
> > And prevent cache fragmentation on img resolutions.
> >
>
> Isn't the srcset using a limited set of resolution factors?
>
>
> >
> > On Friday, October 16, 2015, Dmitry Brant <dbr...@wikimedia.org> wrote:
> >
> > > We can indeed fall back to TTS if the spoken article is not available,
> or
> > > offer a choice between TTS and the spoken version. The intention was
> for
> > > this to be a quick win of surfacing a useful, if lesser-known, facet of
> > > Wikipedia content.
> > >
> > > That being said, this doesn't necessarily need to be a blocker for
> > > transitioning the Content Service to Parsoid. If all else fails, we can
> > > ascertain the audio URL on the client side based on the File page name.
> > As
> > > for transcodings of video files, we already make a separate API call to
> > > retrieve them, so perhaps we can continue to do that until we're able
> to
> > > get them directly from Parsoid?
> > > It sounds like a more pressing issue right now is the srcset
> > attributes...
> > >
> > >
> > > On Fri, Oct 16, 2015 at 2:30 PM, Luis Villa <lvi...@wikimedia.org>
> > > wrote:
> > >
> > > > On Fri, Oct 16, 2015 at 11:14 AM, Bernd Sitzmann <be...@wikimedia.org>
> > > > wrote:
> > > >
> > > > > It looks like Mobile Apps and Mobile Web have different priority
> > > > > > requirements from Parsoid here. Looking at
> > > > > > https://en.wikipedia.org/wiki/Wikipedia:Spoken_articles, I also
> > see
> > > > that
> > > > > > there are only 1243 spoken wikipedia articles (that are probably
> > not
> > > > all
> > > > > > the latest version of these articles). It also doesn't look like
> > the
> > > > > video
> > > > > > player works currently in mobile web or in mobile apps (except
> > maybe
> > > > > > Android ?).
> > > > >
> > > >
> > > > With due respect for the hard work people have put in on that
> project,
> > is
> > > > there any indication Spoken Articles has any traction and will grow
> > > beyond
> > > > that ~1K articles? Wouldn't using Android's TTS API to read the most
> > > > up-to-date version of the article be a much better user experience
> (35M
> > > > articles, always up-to-date, instead of 1K articles, almost always
> out
> > of
> > > > date?)
> > > >
> > > > Luis
> > > >
> > > >
> > > > --
> > > > Luis Villa
> > > > Sr. Director of Community Engagement
> > > > Wikimedia Foundation
> > > > *Working towards a world in which every single human being can freely
> > > share
> > > > in the sum of all knowledge.*
> > > > ___
> > > > Wikitech-l mailing list
> > > > Wikitech-l@lists.wikimedia.org
> > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > > >
> > >
> > >
> > >
> > > --
> > > Dmitry Brant
> > > Mobile Apps Team (Android)
> > > Wikimedia Foundation
> > > https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
> > > ___
> > > Wikitech-l mailing list
> > > Wikitech-l@lists.wikimedia.org
> > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> >
> >
> > --
> > EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
> > IRC: bgerstle
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
>
>
>
> --
> Gabriel Wicke
> Principal Engineer, Wikimedia Foundation
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>




Re: [Wikitech-l] Inhibitors for Mobile Content Service to use Parsoid output

2015-10-19 Thread Brian Gerstle
Right, but the point is to avoid an extra round trip just to get a URL.

On Mon, Oct 19, 2015 at 3:20 PM, Bartosz Dziewoński <matma@gmail.com>
wrote:

> On 2015-10-19 17:28, Brian Gerstle wrote:
>
>> It's been mentioned elsewhere (I believe by Gergo) that these URLs aren't
>> stable, and can't be reliably constructed by clients. Is that still the
>> case?
>>
>
> The URL looks different for very long file names and for different file
> types. There is an API endpoint to get it, though:
> https://en.wikipedia.org/wiki/Special:ApiSandbox#action=query&prop=imageinfo&format=json&iiprop=url&iiurlwidth=40&titles=File%3AExample.jpg
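For reference, a sketch of building that imageinfo request with only the
standard library. The parameter names follow the MediaWiki Action API's
imageinfo module; treat the exact parameter set as an assumption to verify
against the API docs:

```python
from urllib.parse import urlencode

def imageinfo_url(title, width, api="https://en.wikipedia.org/w/api.php"):
    """Build the query URL that asks the server for a stable thumb URL,
    sidestepping client-side URL construction entirely."""
    params = {
        "action": "query",
        "format": "json",
        "prop": "imageinfo",
        "iiprop": "url",
        "iiurlwidth": width,   # the server computes the correct thumb URL
        "titles": title,
    }
    return f"{api}?{urlencode(params)}"
```

The trade-off, as Brian points out just above, is that this costs an extra
round trip per image.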
>
> --
> Bartosz Dziewoński
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>




Re: [Wikitech-l] Inhibitors for Mobile Content Service to use Parsoid output

2015-10-16 Thread Brian Gerstle
I've mentioned this idea before, but having a service which allowed you to
reliably get image thumbs for a given file at a specified width/height
would obviate the srcset. And prevent cache fragmentation on img
resolutions.

On Friday, October 16, 2015, Dmitry Brant  wrote:

> We can indeed fall back to TTS if the spoken article is not available, or
> offer a choice between TTS and the spoken version. The intention was for
> this to be a quick win of surfacing a useful, if lesser-known, facet of
> Wikipedia content.
>
> That being said, this doesn't necessarily need to be a blocker for
> transitioning the Content Service to Parsoid. If all else fails, we can
> ascertain the audio URL on the client side based on the File page name.  As
> for transcodings of video files, we already make a separate API call to
> retrieve them, so perhaps we can continue to do that until we're able to
> get them directly from Parsoid?
> It sounds like a more pressing issue right now is the srcset attributes...
>
>
> On Fri, Oct 16, 2015 at 2:30 PM, Luis Villa  > wrote:
>
> > On Fri, Oct 16, 2015 at 11:14 AM, Bernd Sitzmann  >
> > wrote:
> >
> > > It looks like Mobile Apps and Mobile Web have different priority
> > > > requirements from Parsoid here. Looking at
> > > > https://en.wikipedia.org/wiki/Wikipedia:Spoken_articles, I also see
> > that
> > > > there are only 1243 spoken wikipedia articles (that are probably not
> > all
> > > > the latest version of these articles). It also doesn't look like the
> > > video
> > > > player works currently in mobile web or in mobile apps (except maybe
> > > > Android ?).
> > >
> >
> > With due respect for the hard work people have put in on that project, is
> > there any indication Spoken Articles has any traction and will grow
> beyond
> > that ~1K articles? Wouldn't using Android's TTS API to read the most
> > up-to-date version of the article be a much better user experience (35M
> > articles, always up-to-date, instead of 1K articles, almost always out of
> > date?)
> >
> > Luis
> >
> >
> > --
> > Luis Villa
> > Sr. Director of Community Engagement
> > Wikimedia Foundation
> > *Working towards a world in which every single human being can freely
> share
> > in the sum of all knowledge.*
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org 
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
>
>
>
> --
> Dmitry Brant
> Mobile Apps Team (Android)
> Wikimedia Foundation
> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org 
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l




[Wikitech-l] no more 350ms delay in webkit!

2015-10-15 Thread Brian Gerstle
x-post from /r/webdev https://trac.webkit.org/changeset/191072

Probably another reason to try using WKWebView in the iOS app.


Re: [Wikitech-l] Architecture Committee expansion

2015-10-09 Thread Brian Gerstle
Congrats Timo, well deserved!

On Fri, Oct 9, 2015 at 8:21 AM, aude  wrote:

> On Fri, Oct 9, 2015 at 12:17 AM, Tim Starling 
> wrote:
>
> > In a recent meeting of the MediaWiki Architecture Committee, it was
> > agreed that Timo Tijhof (Krinkle) would be invited to join the
> > committee. Timo accepted this invitation.
> >
> >
> thought Timo was already on the architecture committee :) anyway, yay!
>
> Cheers,
> Katie
>
>
>
> > Timo is a talented software engineer with experience in many areas,
> > especially the MediaWiki core and JavaScript frontend components such
> > as ResourceLoader and VisualEditor. He currently works for WMF in the
> > performance team. I look forward to working with him on the
> > Architecture Committee.
> >
> > -- Tim Starling
> >
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
>
>
> --
> @wikimediadc / @wikidata
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>




[Wikitech-l] GitLab CI (was RFC: Optional Travis Integration for Jenkins)

2015-10-07 Thread Brian Gerstle
(Tried sending this last weekend w/ an image attachment, which wikitech-l
apparently didn't like. Trying again)

Scott's RFC about Travis got me thinking about GitLab again.  To satisfy my
curiosity, I conducted a little experiment this weekend: I imported the
Wikipedia iOS GitHub repo into GitLab [0], configured it to use GitLab CI
[1], and added my own laptop as a runner.  I didn't do this with any
specific outcome in mind, but wanted to share the results as food for
thought.

While just a proof of concept, it took maybe an hour to set up and, perhaps
most notably, is powered entirely by FLOSS (aside from some components
specific to iOS development).  I've added my thoughts about the approach
below, but take them with a grain of salt as my experience was quite brief:

*Pros*

   - FLOSS
   - Versatile
   - This setup uses a shell runner on an OS X machine, but other setups
     could use shared runners, dedicated runners, runners on Linux inside
     Docker containers, etc.
   - Decent-looking number of contributors (though clearly one person is
   doing the lion's share on GitLab) [2]
   - Easy to set up

*Cons*

   - Small number of pre-made runners (e.g. compared to Travis)
      - Especially true for OS X, which appears to have no shared runners.
        Therefore, we'd have to provide our own (like I did with my own
        laptop). This could be facilitated via configuration management
        systems (e.g. Boxen or Ansible), but still not as "hassle free" as
        Travis (which provides OS X VMs as part of the service).

*Unknowns*

   - How to hook up runners that reside in our secure networks (e.g. the
   iOS Mac Mini)
   - ToU / ToS

Otherwise, my overall impression of GitLab was very positive.  It seems to
have all the same features as GitHub and then some: e.g. protected branches
and requiring approval from specific individuals for merge requests (similar
to our +1/+2 system).  Also, the UI is very modern and I found it quite
enjoyable to use.
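For context, GitLab CI is driven by a `.gitlab-ci.yml` file at the repo root.
A minimal sketch of the kind of shell-runner setup described above might look
like the following; the job name, tag, and xcodebuild invocation are
illustrative assumptions, not the actual config from my experiment:

```yaml
# Hypothetical .gitlab-ci.yml sketch for an iOS build on a shell runner.
stages:
  - test

ios-test:
  stage: test
  tags:
    - osx   # routes the job to a shell runner registered on an OS X machine
  script:
    - xcodebuild -scheme Wikipedia -sdk iphonesimulator test
```

The `tags` key is how GitLab matches jobs to specific runners, which is what
makes the "bring your own OS X box" approach workable.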

Hope you found this interesting and had some fun experiments of your own
this weekend :-)

Brian

0: https://gitlab.com/wikimedia/wikipedia-ios
1: https://gitlab.com/wikimedia/wikipedia-ios/merge_requests/1
2: http://contributors.gitlab.com/


Re: [Wikitech-l] GitLab CI (was RFC: Optional Travis Integration for Jenkins)

2015-10-07 Thread Brian Gerstle
Thanks for the correction & link, Greg, will check that out.  Is this
different than GitHub's approach?  Is it better or worse?

On Wed, Oct 7, 2015 at 11:43 AM, Tyler Romeo  wrote:

> I hate to say it, but this is exactly why we should be preferring copyleft
> licenses. GitLab began as completely FOSS, but only later on split into the
> CE and EE, which they were able to do because they used MIT.
>
>
> *-- *
> *Tyler Romeo*
> Stevens Institute of Technology, Class of 2016
> Major in Computer Science
>
> On Wed, Oct 7, 2015 at 11:26 AM, Greg Grossmeier 
> wrote:
>
> > Thanks Brian for your thoughts.
> >
> > Only commenting on one small part:
> >
> > 
> > > *Pros*
> > >
> > >- FLOSS
> >
> > ...ish. Not really. "Open Core".
> >
> > See: https://en.wikipedia.org/wiki/GitLab#History
> >
> > A view on why Open Core isn't healthy for FLOSS communities:
> > http://ebb.org/bkuhn/blog/2009/10/16/open-core-shareware.html
> >
> >
> > Greg
> >
> > --
> > | Greg Grossmeier            GPG: B2FA 27B1 F7EB D327 6B8E |
> > | identi.ca: @greg           A18D 1138 8E47 FAC8 1C7D |
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>




Re: [Wikitech-l] RFC: Optional Travis integration for Jenkins

2015-10-02 Thread Brian Gerstle
>
> * iOS Mobile app
> <https://lists.wikimedia.org/pipermail/wikitech-l/2015-September/083005.html>


I saw npm-travis when you announced it a while back and was intrigued.  The
master branch on the wikipedia-ios repo is already building successfully on
Travis, so feel free to try triggering builds if that helps with the
experiment—but please don't push to master :-).  I bet the Android team
would also be interested in trying this out to alleviate some of the burden
they've shouldered getting our own CI to run Android builds & tests.

Ideally,
> isolated CI instances would allow us to have a Zotero container that could
> be spun alongside Citoid during tests.
>

Gabriel beat me to it, but here's a link to docs about using Docker w/in
Travis <http://docs.travis-ci.com/user/docker/>.  Also, is Zotero
stateless?  I'd caution against having persistent staging/test service
instances to avoid problems caused by:

   - Accumulated state
   - Multiple tests being run in parallel

Regarding npm-travis specifically, who would own it?  It doesn't seem too
complex (push to a GitHub repo, get the job id, poll its state), but would
C. Scott continue working on it or would RelEng claim it as part of the
infrastructure they maintain?
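The push-then-poll loop mentioned above could be as simple as the following
sketch. It is generic on purpose: `fetch_state` stands in for whatever HTTP
call the Travis API actually requires (exactly the part that is still in
beta), and the set of terminal states is an assumption:

```python
import time

# Assumed terminal Travis build states; verify against the Travis API docs.
FINISHED = {"passed", "failed", "errored", "canceled"}

def poll_build(fetch_state, interval=30, max_polls=120):
    """Call fetch_state() (an injected HTTP GET against the CI's status
    endpoint) until the build reaches a terminal state or we give up."""
    for _ in range(max_polls):
        state = fetch_state()
        if state in FINISHED:
            return state
        time.sleep(interval)
    raise TimeoutError("build did not reach a terminal state")
```

Injecting `fetch_state` keeps the unstable API surface in one replaceable
function, which matters if Travis changes the beta endpoints.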

Lastly, it appears that Travis's build-triggering API is still in beta
<http://docs.travis-ci.com/user/triggering-builds/>, and there's no mention
of polling builds in this way.  If someone were to try using this with one
or more projects (e.g. Android) and decide to move forward, would we reach
out to Travis to ask when the current API will be marked as stable or if
they would be willing to work with us to develop an API more suited to our
needs?  I'm all about doing what it takes to get the job done*, but just
wanted to be sure that we're aware that this might not be the most stable
way to use Travis.  I also did a quick search to see if there's precedent
for this, and found:

   - travis-build-watcher
     <https://www.npmjs.com/package/travis-build-watcher>: this is the
     closest I could find to something that pushes to GitHub to trigger
     builds & monitor them
   - node-travis-ci <https://github.com/pwmckenna/node-travis-ci>: not
     equivalent to npm-travis, but might simplify it if we didn't want to
     re-implement a JS client for Travis


* iOS team is using a collection of tools called fastlane
<https://github.com/KrauseFx/fastlane> which build, test, and release our
app—including using the undocumented (AFAIK) API for iTunes Connect
(Apple's service for uploading & managing apps in the App Store).  That
said, it's widely used, has 90 contributors, and is being actively
maintained by the owner.

On Fri, Oct 2, 2015 at 9:15 AM, Marko Obrovac 
wrote:

> On 1 October 2015 at 22:43, C. Scott Ananian 
> wrote:
>
> > We currently have several projects which can not be tested with our
> > current Jenkins test infrastructure, and as a consequence are hosting
> > their primary code repositories on github:
> > * RESTBase <https://github.com/wikimedia/restbase> -- missing Cassandra
> > support
> > * iOS Mobile app
> > <https://lists.wikimedia.org/pipermail/wikitech-l/2015-September/083005.html>
> > -- missing OS X platform
> >
>
> While hosted officially on Gerrit, Citoid should be added to this list as
> well. Its proper functioning depends on Zotero being available, so the
> current CI tests for Citoid include only syntax checking via jshint. In
> this case, however, it is unlikely that Travis would help. Ideally,
> isolated CI instances would allow us to have a Zotero container that could
> be spun alongside Citoid during tests.
>
> Cheers,
> Marko
>
>
> >
> > Other projects can only run a small portion of their test suite via
> > Jenkins:
> > * mw-ocg-latexer
> > <https://github.com/wikimedia/mediawiki-extensions-Collection-OfflineContentGenerator-latex_renderer>
> > -- requires LaTeX from PPA, image utilities (`jpegtran`).
> >
> > An alternative to allow these apps to be hosted on Wikimedia
> > infrastructure (gerrit, eventually phabricator) is to allow travis
> > integration with jenkins as an optional service.
> >
> > npm-travis (https://github.com/cscott/npm-travis) is a tool which will
> > trigger Travis builds from NPM by pushing to a throwaway branch, which
> > is then cleaned up
> > after the tests complete.  It integrates well with the Gerrit access
> > control mechanism: the "Travis Bot" user can be granted push access only,
> > and only to branches prefixed with `npm-travis/`, so it cannot be used to
> > push changes to the master or deployment branches.
> >
> > This isn't a replacement for our jenkins test infrastructure, but it
> > allows us to accommodate oddball repositories without taxing our
> > infrastructure team or resorting to offsite repository hosting.
> >
> > There are WIP patches for integrating `npm-travis` with our jenkins
> > 

Re: [Wikitech-l] RFC: Optional Travis integration for Jenkins

2015-10-02 Thread Brian Gerstle
Also, sorry if this all should've gone in Phab and not email. I can
transcribe to Phab if we'd rather continue discussion there.

On Fri, Oct 2, 2015 at 11:50 AM, Brian Gerstle <bgers...@wikimedia.org>
wrote:

> * iOS Mobile app
>> <
>> https://lists.wikimedia.org/pipermail/wikitech-l/2015-September/083005.html
>> >
>
>
> I saw npm-travis when you announced it a while back and was intrigued.
> The master branch on the wikipedia-ios repo is already building
> successfully on Travis, so feel free to try triggering builds if that helps
> with the experiment—but please don't push to master :-).  I bet the Android
> team would also be interested in trying this out to alleviate some of the
> burden they've shouldered getting our own CI to run Android builds & tests.
>
> Ideally,
>> isolated CI instances would allow us to have a Zotero container that could
>> be spun alongside Citoid during tests.
>>
>
> Gabriel beat me to it, but here's a link to docs about using Docker w/in
> Travis <http://docs.travis-ci.com/user/docker/>.  Also, is Zotero
> stateless?  I'd caution against having persistent staging/test service
> instances to avoid problems caused by:
>
>- Accumulated state
>- Multiple tests being run in parallel
>
> Regarding npm-travis specifically, who would own it?  It doesn't seem too
> complex (push to a GitHub repo, get the job id, poll its state), but would
> C. Scott continue working on it or would RelEng claim it as part of the
> infrastructure they maintain?
>
> Lastly, it appears that Travis's build-triggering API is still in beta
> <http://docs.travis-ci.com/user/triggering-builds/>, and there's no
> mention of polling builds in this way.  If someone were to try using this
> with one or more projects (e.g. Android) and decide to move forward, would
> we reach out to Travis to ask when the current API will be marked as stable
> or if they would be willing to work with us to develop an API more suited
> to our needs?  I'm all about doing what it takes to get the job done*, but
> just wanted to be sure that were aware that this might not be the most
> stable way to use Travis.  I also did a quick search to see if there's
> precedent for this, and found:
>
>- travis-build-watcher
><https://www.npmjs.com/package/travis-build-watcher>: this is the
>closest I could find to something that pushes to GitHub to trigger builds &
>monitor them
>- node-travis-ci <https://github.com/pwmckenna/node-travis-ci>: not
>equivalent to npm-travis, but might simplify it if didn't want to
>re-implement a JS client for Travis
>
>
> * iOS team is using a collection of tools called fastlane
> <https://github.com/KrauseFx/fastlane> which build, test, and release our
> app—including using the undocumented (AFAIK) API for iTunes Connect
> (Apple's service for uploading & managing apps in the App Store).  That
> said, it's widely used, has 90 contributors, and is being actively
> maintained by the owner.
>
> On Fri, Oct 2, 2015 at 9:15 AM, Marko Obrovac <mobro...@wikimedia.org>
> wrote:
>
>> On 1 October 2015 at 22:43, C. Scott Ananian <canan...@wikimedia.org>
>> wrote:
>>
>> > We currently have several projects which can not be tested with our
>> current
>> > Jenkins test infrastructure, and as a consequence are hosting their
>> primary
>> > code repositories on github:
>> > * RESTBase <https://github.com/wikimedia/restbase> -- missing Cassandra
>> > support
>> > * iOS Mobile app
>> > <
>> >
>> https://lists.wikimedia.org/pipermail/wikitech-l/2015-September/083005.html
>> > >
>> > --
>> > missing OS X platform
>> >
>>
>> While hosted officially on Gerrit, Citoid should be added to this list as
>> well. Its proper functioning depends on Zotero being available, so the
>> current CI tests for Citoid include only syntax checking via jshint. In
>> this case, however, it is unlikely that Travis would help. Ideally,
>> isolated CI instances would allow us to have a Zotero container that could
>> be spun alongside Citoid during tests.
>>
>> Cheers,
>> Marko
>>
>>
>> >
>> > Other projects can only run a small portion of their test suite via
>> > Jenkins:
>> > * mw-ocg-latexer
>> > <
>> >
>> https://github.com/wikimedia/mediawiki-extensions-Collection-OfflineContentGenerator-latex_renderer
>> > >
>> > -- requires LaTeX from PPA, image utilities (`jpegtran`).
>> >
>> > An alternative to allow these apps to be ho

Re: [Wikitech-l] RFC: Optional Travis integration for Jenkins

2015-10-02 Thread Brian Gerstle
Right, but GitLab apparently has its own CI offering, so we wouldn't need
to wait for Travis to integrate.

On Friday, October 2, 2015, C. Scott Ananian <canan...@wikimedia.org> wrote:

> Travis is not integrated with gitlab, so it's not really relevant here.
>  *If* travis adds gitlab integration *and* we switch our current github
> mirror to gitlab *then* it could be something to look at.  Neither of those
> preconditions are currently met.
>
> We're not using any "new" github features, just the standard github mirror
> that gerrit already maintains (and presumably any future change review
> system would as well).
>  --scott
>
> On Fri, Oct 2, 2015 at 7:00 PM, Brian Gerstle <bgers...@wikimedia.org>
> wrote:
>
> > Ah of course, my misunderstanding. (I even mentioned pushing to git in my
> > last email).
> >
> >
> > > If we wanted to keep build
> > > artifacts around "forever", we could easily do so;
> >
> >
> > How would you get artifacts of a build done inside a Travis VM?
> >
> > In any case, this still involves going through GitHub.  Is anyone
> > evaluating GitLab-CI <https://about.gitlab.com/gitlab-ci/>?  Apparently
> > their "runner" service can run on OSX (and Windows & Linux).
> >
> > On Fri, Oct 2, 2015 at 1:20 PM, C. Scott Ananian <canan...@wikimedia.org>
> > wrote:
> >
> > > On Fri, Oct 2, 2015 at 11:50 AM, Brian Gerstle <bgers...@wikimedia.org>
> > > wrote:
> > >
> > > > Lastly, it appears that Travis's build-triggering API is still in
> beta
> > > >
> > > <http://docs.travis-ci.com/user/triggering-builds/>, and there's no
> > > mention
> > > > of polling builds in this way.
> > >
> > >
> > > I don't use this API, and it's not really suitable in any case.  I just
> > use
> > > the standard stable API for watching builds on a branch.  Trigger is
> > > automatic with the push, which is a boring old git push, via the git
> CLI.
> > >
> > > If someone were to try using this with one
> > > > or more projects (e.g. Android) and decide to move forward, would we
> > > reach
> > > > out to Travis to ask when the current API will be marked as stable or
> > if
> > > > they would be willing to work with us to develop an API more suited
> to
> > > our
> > > > needs?
> > >
> > >
> > > Not sure exactly what that would be.  The only oddity in the current
> > setup
> > > is that I clean up the branch right after the build is complete -- but
> > > that's not an inherent property of the process.  If we wanted to keep
> > build
> > > artifacts around "forever", we could easily do so; I just wanted to
> > ensure
> > > things were uncluttered in the github "branches" pulldown.
> > >
> > >   I'm all about doing what it takes to get the job done*, but just
> > > > wanted to be sure that you were aware that this might not be the most
> > stable
> > > > way to use Travis.
> > >
> > >
> > > There's nothing unusual about triggering travis builds by pushing to a
> > > branch.
> > >
> > > The only thing that's interesting at all is that our branch names have
> > > slashes in them, and that issue was resolved a year ago:
> > > https://github.com/travis-ci/travis-api/pull/146
> > >
> > > That property of our branch names was also optional; we used dashes
> > instead
> > > of slashes as a workaround. But it does let us integrate better with
> > gerrit
> > > access control mechanisms, which are built around slash-delimited
> branch
> > > names.
> > >   --scott
> > >
> > > --
> > > (http://cscott.net)
> > >
> >
> >
> >
> > --
> > EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
> > IRC: bgerstle
> >
>
>
>
> --
> (http://cscott.net)



-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC: Optional Travis integration for Jenkins

2015-10-02 Thread Brian Gerstle
Ah of course, my misunderstanding. (I even mentioned pushing to git in my
last email).


> If we wanted to keep build
> artifacts around "forever", we could easily do so;


How would you get artifacts of a build done inside a Travis VM?

In any case, this still involves going through GitHub.  Is anyone
evaluating GitLab-CI <https://about.gitlab.com/gitlab-ci/>?  Apparently
their "runner" service can run on OSX (and Windows & Linux).

On Fri, Oct 2, 2015 at 1:20 PM, C. Scott Ananian <canan...@wikimedia.org>
wrote:

> On Fri, Oct 2, 2015 at 11:50 AM, Brian Gerstle <bgers...@wikimedia.org>
> wrote:
>
> > Lastly, it appears that Travis's build-triggering API is still in beta
> >
> <http://docs.travis-ci.com/user/triggering-builds/>, and there's no
> mention
> > of polling builds in this way.
>
>
> I don't use this API, and it's not really suitable in any case.  I just use
> the standard stable API for watching builds on a branch.  Trigger is
> automatic with the push, which is a boring old git push, via the git CLI.
>
> If someone were to try using this with one
> > or more projects (e.g. Android) and decide to move forward, would we
> reach
> > out to Travis to ask when the current API will be marked as stable or if
> > they would be willing to work with us to develop an API more suited to
> our
> > needs?
>
>
> Not sure exactly what that would be.  The only oddity in the current setup
> is that I clean up the branch right after the build is complete -- but
> that's not an inherent property of the process.  If we wanted to keep build
> artifacts around "forever", we could easily do so; I just wanted to ensure
> things were uncluttered in the github "branches" pulldown.
>
>   I'm all about doing what it takes to get the job done*, but just
> > wanted to be sure that you were aware that this might not be the most stable
> > way to use Travis.
>
>
> There's nothing unusual about triggering travis builds by pushing to a
> branch.
>
> The only thing that's interesting at all is that our branch names have
> slashes in them, and that issue was resolved a year ago:
> https://github.com/travis-ci/travis-api/pull/146
>
> That property of our branch names was also optional; we used dashes instead
> of slashes as a workaround. But it does let us integrate better with gerrit
> access control mechanisms, which are built around slash-delimited branch
> names.
>   --scott
>
> --
> (http://cscott.net)
>
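The dash-for-slash workaround described above could be sketched roughly like this (illustrative only; the exact branch-naming scheme used by the gerrit-to-GitHub sync is an assumption):

```python
def travis_safe_branch(gerrit_branch):
    """Translate a slash-delimited gerrit branch name into the
    dash-delimited form that Travis tooling accepted at the time."""
    return gerrit_branch.replace("/", "-")

print(travis_safe_branch("build/ios/1234"))  # -> build-ios-1234
```

Once the linked travis-api fix shipped, the translation step becomes unnecessary and the slash-delimited names (which map onto gerrit's access controls) can be pushed as-is.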



-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: [Tools] Kubernetes picked to provide alternative to GridEngine

2015-09-17 Thread Brian Gerstle
Congrats on moving forward with a big decision!  I'm very optimistic about
containers, so it's exciting to see movement in this area.

Is there a larger arc of using this for our own services (Mediawiki,
RESTBase, etc.), potentially in production?


On Wed, Sep 16, 2015 at 9:28 PM, Yuvi Panda  wrote:

> FYI
>
>
> -- Forwarded message --
> From: Yuvi Panda 
> Date: Wed, Sep 16, 2015 at 6:25 PM
> Subject: [Tools] Kubernetes picked to provide alternative to GridEngine
> To: labs-annou...@lists.wikimedia.org
>
>
> We have picked kubernetes.io to provide an alternative to GridEngine
> on Tool Labs! See https://phabricator.wikimedia.org/T106475 for more
> details about the evaluation itself,
>
> https://docs.google.com/spreadsheets/d/1YkVsd8Y5wBn9fvwVQmp9Sf8K9DZCqmyJ-ew-PAOb4R4/edit?pli=1#gid=0
> for the spreadsheet with the evaluation matrix and
> https://lists.wikimedia.org/pipermail/labs-l/2015-August/003955.html
> for the previous email listing pros and cons of the others solutions
> that we have considered.
>
> == Rough Timeline ==
>
> * October: Start beta testing by opening up Kubernetes to whitelisted
> tools. Allows people to run arbitrary Docker images in Kubernetes,
> both for continuous jobs and web services. If you are interested in
> this, please add a comment to https://phabricator.wikimedia.org/T112824
> * December: Work on migrational tooling to assist in switching from
> GridEngine to Kubernetes should be in a good place. This will begin
> with a '--provider=kubernetes' parameter to the webservice command
> that will allow people to easily switch to Kubernetes for webservice.
> We will have something similar implemented for jsub / jstart.
> * January: Kubernetes cluster is opened up for all tools users. Fairly
> complete switching between GridEngine and Kubernetes for at least
> continuous jobs and webservices.
>
> == What does this give developers? ==
>
> # A software stack that is made up of widely-used and
> actively-developed software components. We are looking to dramatically
> reduce the surface of code that we write and maintain in-house.
> # (When out of beta) More stability and reliability.
> # It allows us to get rid of a lot of our customizations (the entire
> webservice setup, for example) which has proven to be a lot of work
> and flaky at times.
> # We can migrate tools that don't require NFS off of it, and since it
> has historically been one of our flakiest servies this allows tools to
> opt-out of it and get a lot more reliability.
> # Using Docker allows more user freedom in a structured way that is
> easier to support.
> # Has an active upstream / is used elsewhere as well (Google Container
> Engine, Microsoft Azure, RedHat OpenShift, AWS, etc.), so much more
> help / community available when users run into issues than we have now
> # Probably more :) It opens up a lot of possibilities and I'm sure
> developers will use it creatively :)
>
> == Do I need to change anything? ==
>
> No, especially if you do not want to. We think Docker and Kubernetes
> are exciting technologies, so we would like volunteers who are
> interested in exploring these platforms to have the option of getting
> their hands dirty. At the same time, it is important for us to avoid
> complicating life for people for whom the current setup works well. If
> we are successful, the migration to Docker and Kubernetes will be
> seamless, and users of Tool Labs will not have to know what either
> Docker or Kubernetes are, let alone how to operate them.
>
> We will begin by offering arbitrary Docker image execution only.
> Eventually we will make it super-easy to switch between the current
> setup and Kubernetes, and allow people to be able to use this without
> having to know what Docker is or what Kubernetes is. That will slowly
> be built over the next few months.
>
> Absolutely nothing changes for developers or users right now.
>
> == Will GridEngine be going away? ==
>
> Not anytime soon!
>
> Thanks!
>
> --
> Yuvi Panda T
> http://yuvi.in/blog
>
>
> --
> Yuvi Panda T
> http://yuvi.in/blog
>
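As a purely illustrative example of the "arbitrary Docker images" mentioned above, a tool might be packaged something like this (the base image, file layout, and entry point here are assumptions, not actual Tool Labs conventions):

```dockerfile
# Hypothetical packaging for a Tool Labs webservice; all names are illustrative.
FROM python:2.7-slim
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
# The platform (Kubernetes) would route web traffic to this process.
CMD ["python", "webservice.py"]
```

The point of the migration tooling is that most tool authors would never write such a file themselves; `webservice` / `jsub` would hide it behind the existing interface.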




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] [recommended reading] the hamburger menu is broken

2015-08-11 Thread Brian Gerstle
Interesting article about how various sites/apps switched away from the
hamburger and to the more conventional tab bar, with tangible results:

http://deep.design/the-hamburger-menu/

-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] [recommended reading] do terrible things to your code

2015-07-30 Thread Brian Gerstle
...or your users will!

Doing Terrible Things To Your Code
<http://blog.codinghorror.com/doing-terrible-things-to-your-code/> is a
good read on testing by Jeff Atwood on his blog, Coding Horror
<http://blog.codinghorror.com/>.  I also found the falsehood snippets
poignant—maybe we should come up with some for Wikipedia ;-).  Here are a
couple off the top of my head, at least for official Wikipedia instances:

   1. Wikipedia sites all have standard ISO/BCP prefixes (see sitematrix
   <https://www.mediawiki.org/wiki/Special:SiteMatrix>)
   2. A site's main page is always titled Main Page (also see sitematrix)
   3. A page's revision is a reliable snapshot of its content (nope:
   transclusions [and images?] can change independently of a page revision)
   4. API error messages are plain text (nope, can contain HTML
   <https://phabricator.wikimedia.org/T107082>)
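As an illustration of falsehood #4, a client that naively assumed plain-text errors might defensively strip markup before display (a sketch only; real MediaWiki API error handling is more involved than this):

```python
import html
import re

def plain_text_error(error_info):
    # MediaWiki API error strings can contain HTML (see T107082),
    # so strip tags and unescape entities before showing them to users.
    no_tags = re.sub(r"<[^>]+>", "", error_info)
    return html.unescape(no_tags).strip()

print(plain_text_error('Try <a href="/wiki/Help:Login">logging in</a> first'))
# -> Try logging in first
```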


Interested in hearing falsehoods you've encountered.

Cheers,

Brian

-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [WikimediaMobile] Wikipedia iOS app moving to GH

2015-07-28 Thread Brian Gerstle
Hello again!

Writing to inform you that the move has happened.  Here's a quick breakdown:


   - GitHub repo: https://github.com/wikimedia/wikipedia-ios
  - Main repo for development, code review, and continuous integration
  (via Travis & CodeCov)
   - Gerrit repo:
   https://gerrit.wikimedia.org/r/#/admin/projects/apps/ios/wikipedia
  - Code drops every release, developers will still get notifications
  from patches submitted
   - *new* Phabricator repo:
   https://phabricator.wikimedia.org/diffusion/APIOS/
  - Automatically synced with GitHub (Phab is the slave). Mostly just
  as redundant storage; not sure if we'll support any code review here
  for the time being (mostly due to iOS team unfamiliarity)

Hope to see a pull request (or patch) from you soon!

Brian

On Fri, Jul 24, 2015 at 10:25 AM, Brian Gerstle bgers...@wikimedia.org
wrote:

 FYI, spent some time this morning refactoring the epic
 https://phabricator.wikimedia.org/T98970, scope has been reduced to CI
 (moving CD into its own epic https://phabricator.wikimedia.org/T102547)
 and clarified/elaborated many other sections.

 On Thu, Jul 23, 2015 at 3:14 PM, Brian Gerstle bgers...@wikimedia.org
 wrote:

 The (rough) epic definition is already on Phabricator:
 https://phabricator.wikimedia.org/T98970.

 I've defined some metrics there already, but admittedly—and thanks for
 calling us out on this—we don't really have baselines*.  I think there are
 some feasible ways to get a rough starting point, which I can brainstorm w/
 the team.  We were planning (or I should say, I was hoping) to gather more
 code metrics anyway, so I'm glad to have an excuse to hook it up sooner ;-).

 FWIW, I also think having patches tested as part of code review
 https://github.com/bgerstle/apps-ios-wikipedia/pull/3 would also work
 as a sufficient definition of success.  Our goal here is to do that as
 quickly, easily, and cheaply as possible so we can get back to focusing on
 the app.

 * I think it's fair to say that the coverage at point of migration was
 already low (~10% based on my Travis-covered fork) and hasn't changed much.


 On Thu, Jul 23, 2015 at 2:04 PM, Greg Grossmeier g...@wikimedia.org
 wrote:

 Brian and the Reading team:

 On Wed, Jul 22, 2015 at 3:40 AM, Brian Gerstle bgers...@wikimedia.org
 wrote:

 By using GitHub with Travis CI, the team believes it will work faster,
 improve testing, grow developer confidence in making code changes, and,
 most importantly, deploy fewer bugs to production.


 Given that, I am requesting you/your team create a set of KPIs to review
 in 3 or 4 months to determine if this change had the intended outcome. It's
 hard to make these things quantifiable as useful KPIs (that prevent eg
 gaming the system) but I think it'd be a good exercise for your team given
 your team's decision making process thus far.

 Please post those KPIs somewhere public and trackable (wiki or phab).

 Thank you,

 Greg

 --
 Greg Grossmeier
 Release Team Manager




 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle




 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [WikimediaMobile] Wikipedia iOS app moving to GH

2015-07-24 Thread Brian Gerstle
FYI, spent some time this morning refactoring the epic
https://phabricator.wikimedia.org/T98970, scope has been reduced to CI
(moving CD into its own epic https://phabricator.wikimedia.org/T102547)
and clarified/elaborated many other sections.

On Thu, Jul 23, 2015 at 3:14 PM, Brian Gerstle bgers...@wikimedia.org
wrote:

 The (rough) epic definition is already on Phabricator:
 https://phabricator.wikimedia.org/T98970.

 I've defined some metrics there already, but admittedly—and thanks for
 calling us out on this—we don't really have baselines*.  I think there are
 some feasible ways to get a rough starting point, which I can brainstorm w/
 the team.  We were planning (or I should say, I was hoping) to gather more
 code metrics anyway, so I'm glad to have an excuse to hook it up sooner ;-).

 FWIW, I also think having patches tested as part of code review
 https://github.com/bgerstle/apps-ios-wikipedia/pull/3 would also work
 as a sufficient definition of success.  Our goal here is to do that as
 quickly, easily, and cheaply as possible so we can get back to focusing on
 the app.

 * I think it's fair to say that the coverage at point of migration was
 already low (~10% based on my Travis-covered fork) and hasn't changed much.


 On Thu, Jul 23, 2015 at 2:04 PM, Greg Grossmeier g...@wikimedia.org
 wrote:

 Brian and the Reading team:

 On Wed, Jul 22, 2015 at 3:40 AM, Brian Gerstle bgers...@wikimedia.org
 wrote:

 By using GitHub with Travis CI, the team believes it will work faster,
 improve testing, grow developer confidence in making code changes, and,
 most importantly, deploy fewer bugs to production.


 Given that, I am requesting you/your team create a set of KPIs to review
 in 3 or 4 months to determine if this change had the intended outcome. It's
 hard to make these things quantifiable as useful KPIs (that prevent eg
 gaming the system) but I think it'd be a good exercise for your team given
 your team's decision making process thus far.

 Please post those KPIs somewhere public and trackable (wiki or phab).

 Thank you,

 Greg

 --
 Greg Grossmeier
 Release Team Manager




 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [WikimediaMobile] Wikipedia iOS app moving to GH

2015-07-23 Thread Brian Gerstle
The (rough) epic definition is already on Phabricator:
https://phabricator.wikimedia.org/T98970.

I've defined some metrics there already, but admittedly—and thanks for
calling us out on this—we don't really have baselines*.  I think there are
some feasible ways to get a rough starting point, which I can brainstorm w/
the team.  We were planning (or I should say, I was hoping) to gather more
code metrics anyway, so I'm glad to have an excuse to hook it up sooner ;-).

FWIW, I also think having patches tested as part of code review
https://github.com/bgerstle/apps-ios-wikipedia/pull/3 would also work as
a sufficient definition of success.  Our goal here is to do that as
quickly, easily, and cheaply as possible so we can get back to focusing on
the app.

* I think it's fair to say that the coverage at point of migration was
already low (~10% based on my Travis-covered fork) and hasn't changed much.


On Thu, Jul 23, 2015 at 2:04 PM, Greg Grossmeier g...@wikimedia.org wrote:

 Brian and the Reading team:

 On Wed, Jul 22, 2015 at 3:40 AM, Brian Gerstle bgers...@wikimedia.org
 wrote:

 By using GitHub with Travis CI, the team believes it will work faster,
 improve testing, grow developer confidence in making code changes, and,
 most importantly, deploy fewer bugs to production.


 Given that, I am requesting you/your team create a set of KPIs to review
 in 3 or 4 months to determine if this change had the intended outcome. It's
 hard to make these things quantifiable as useful KPIs (that prevent eg
 gaming the system) but I think it'd be a good exercise for your team given
 your team's decision making process thus far.

 Please post those KPIs somewhere public and trackable (wiki or phab).

 Thank you,

 Greg

 --
 Greg Grossmeier
 Release Team Manager




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikipedia iOS app moving to GH

2015-07-23 Thread Brian Gerstle
On Thu, Jul 23, 2015 at 2:08 PM, Ricordisamoa ricordisa...@openmailbox.org
wrote:

 The even shorter answer is: you can't amend other people's pull requests
 without being explicitly allowed to.


Which is both good and bad. In gerrit, anyone can amend my patch,
potentially obliterating my changes, which means we need to manually sync
up to prevent conflicts and erasing each others' work:

I'm going to amend
OK!
Ok I amended!
Ok I'm stashing my changes, pulling, and re-applying!

As opposed to:

I pushed a commit to your branch
OK, I pulled the remote changes and had mine automatically rebased on top

Different strokes for different folks.




 Il 23/07/2015 11:57, Brian Gerstle ha scritto:

 The short answer is: yes. GitHub doesn't have the patch as a concept,
 only commits, branches, and forks. We only plan on encountering forks when
 merging volunteer contributions. Regardless of whether it's a fork, your
 ability to push to a branch comes down to whether you're a collaborator
 for that repo.

 On Wednesday, July 22, 2015, Ricordisamoa ricordisa...@openmailbox.org
 wrote:

  Il 22/07/2015 23:43, Brian Gerstle ha scritto:

  This isn't really about Gerrit vs. GitHub. To be clear, we're mainly
 doing
 this for CI (i.e. Travis).

 That said, we (the iOS team) plan for our workflow to play to GitHub's
 strengths—which also happen to be our personal preferences.  In short, this
 means amending patches becomes pushing another commit onto a branch.
 We've run into issues w/ rebasing & amending patches destroying our diff in
 Gerrit, and problems with multiple people collaborating on the same patch.

  With GitHub it is not possible to amend other people's patches, is it?

   We think GitHub will not only provide integrations for free CI, but, as an
 added bonus, also resolve some of the workflow deficiencies that we've
 personally encountered with Gerrit.


 On Wed, Jul 22, 2015 at 5:14 PM, Gergo Tisza gti...@wikimedia.org
 wrote:

   On Wed, Jul 22, 2015 at 4:39 AM, Petr Bena benap...@gmail.com
 wrote:

   Good job, you aren't the only one. Huggle team is using it for quite

 some time. To be honest I still feel that github is far superior to
 our gerrit installation and don't really understand why we don't use
 it for other projects too.

   GitHub is focused on small projects; for a project with lots of
 patches

 and committers it is problematic in many ways:
 * poor repository management (fun fact: GitHub does not even log force
 pushes, much less provides any ability to undo them)
 * noisy commit histories due to poor support of amend-based workflows,
 and
 also because poor message generation of the editing interface (Linus
 wrote
 a famous rant
 https://github.com/torvalds/linux/pull/17#issuecomment-5654674 on
 that)
 * no way to mark patches which depend on each other
 * diff view works poorly for large patches
 * CR interface works poorly for large patches (no way to write draft
 comments so you need to do two passes; discussions can be marked as
 obsolete by unrelated code changes in their vicinity)
 * hard to keep track of cherry-picks


 ___
 Mobile-l mailing list
 mobil...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/mobile-l



  ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l









-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikipedia iOS app moving to GH

2015-07-23 Thread Brian Gerstle
The short answer is: yes. GitHub doesn't have the patch as a concept,
only commits, branches, and forks. We only plan on encountering forks when
merging volunteer contributions. Regardless of whether it's a fork, your
ability to push to a branch comes down to whether you're a collaborator
for that repo.

On Wednesday, July 22, 2015, Ricordisamoa ricordisa...@openmailbox.org
wrote:

 Il 22/07/2015 23:43, Brian Gerstle ha scritto:

 This isn't really about Gerrit vs. GitHub. To be clear, we're mainly doing
 this for CI (i.e. Travis).

 That said, we (the iOS team) plan for our workflow to play to GitHub's
 strengths—which also happen to be our personal preferences.  In short, this
 means amending patches becomes pushing another commit onto a branch.
 We've run into issues w/ rebasing & amending patches destroying our diff in
 Gerrit, and problems with multiple people collaborating on the same patch.


 With GitHub it is not possible to amend other people's patches, is it?

  We think GitHub will not only provide integrations for free CI, but, as an
 added bonus, also resolve some of the workflow deficiencies that we've
 personally encountered with Gerrit.


 On Wed, Jul 22, 2015 at 5:14 PM, Gergo Tisza gti...@wikimedia.org
 wrote:

  On Wed, Jul 22, 2015 at 4:39 AM, Petr Bena benap...@gmail.com wrote:

  Good job, you aren't the only one. Huggle team is using it for quite
 some time. To be honest I still feel that github is far superior to
 our gerrit installation and don't really understand why we don't use
 it for other projects too.

  GitHub is focused on small projects; for a project with lots of patches
 and committers it is problematic in many ways:
 * poor repository management (fun fact: GitHub does not even log force
 pushes, much less provides any ability to undo them)
 * noisy commit histories due to poor support of amend-based workflows,
 and
 also because poor message generation of the editing interface (Linus
 wrote
 a famous rant
 https://github.com/torvalds/linux/pull/17#issuecomment-5654674 on
 that)
 * no way to mark patches which depend on each other
 * diff view works poorly for large patches
 * CR interface works poorly for large patches (no way to write draft
 comments so you need to do two passes; discussions can be marked as
 obsolete by unrelated code changes in their vicinity)
 * hard to keep track of cherry-picks










-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikipedia iOS app moving to GH

2015-07-22 Thread Brian Gerstle

 I have no problem with that, as long as everyone with +2 keeps his access


This should already be the case for the main iOS engineers, but I've made a
ticket https://phabricator.wikimedia.org/T106547 to make sure people
don't slip through the cracks.

Oh, and when
 it syncs with phabricator tickets


Phabricator ticket sync is something we're sad to lose, but it's part of
the trade-off we're making.  That said, it was only slightly beneficial as
we relied more on cards being in a Code Review column (w/ a WIP limit)
than looking at the gerrit updates on the cards themselves (which aren't
visible from the board view).  Not that GH will make this any easier, but
we're not losing too much here, IMHO.

On Wed, Jul 22, 2015 at 8:01 AM, Derk-Jan Hartman 
d.j.hartman+wmf...@gmail.com wrote:

 I have no problem with that, as long as everyone with +2 keeps his access
 and someone manages developer account additions and removals. Oh, and when
 it syncs with phabricator tickets

 DJ

 On Wed, Jul 22, 2015 at 1:39 PM, Petr Bena benap...@gmail.com wrote:

  Good job, you aren't the only one. Huggle team is using it for quite
  some time. To be honest I still feel that github is far superior to
  our gerrit installation and don't really understand why we don't use
  it for other projects too.
 
  GitHub's pull requests are more compliant with the original git
  philosophy of Linus; see this video:
  https://www.youtube.com/watch?v=4XpnKHJAok8 and would be sufficient
  replacement to our current git-review mechanism, which is very
  complex and unfriendly to new developers who probably find it very
  difficult to use.
 
  On Wed, Jul 22, 2015 at 12:40 PM, Brian Gerstle bgers...@wikimedia.org
  wrote:
   Hey everyone,
  
   I'm writing with plans for the Wikimedia iOS engineering team to move
 its
   workflow to GitHub with Travis CI, much like RESTbase.
  
   The Wikimedia iOS engineers have been maintaining their own CI and
 build
   server and using Gerrit for code review. The more time efficient and
   commonplace approach for open source iOS software development leans
  heavily
   on GitHub with Travis CI instead (e.g., WordPress[0][1] and
  Firefox[2][3]).
   By using GitHub with Travis CI, the team believes it will work faster,
   improve testing, grow developer confidence in making code changes, and,
   most importantly, deploy fewer bugs to production.
  
   Builds requiring sensitive information (e.g., prod certs) will
   continue to run on WMF's Mac Mini. As is done for Android, when betas
 are
   pushed, the team will notify mobile-l.
  
   Feel free to reply or email me directly with any questions or comments.
  
   Regards,
  
   Brian
  
   0: https://github.com/wordpress-mobile/WordPress-iOS
   1: https://travis-ci.org/wordpress-mobile/WordPress-iOS
   2: https://github.com/mozilla/firefox-ios
   3: https://travis-ci.org/mozilla/firefox-ios
  
   --
   EN Wikipedia user page:
 https://en.wikipedia.org/wiki/User:Brian.gerstle
   IRC: bgerstle
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

[Wikitech-l] Fwd: Wikipedia iOS app moving to GH

2015-07-22 Thread Brian Gerstle
sorry, didn't hit reply-all. (these lists aren't a subset of each other,
are they?)

Also, we're losing IRC notifications, but that should be easy enough to add
back via fastlane.

-- Forwarded message --
From: Brian Gerstle bgers...@wikimedia.org
Date: Wed, Jul 22, 2015 at 12:59 PM
Subject: Re: [Wikitech-l] Wikipedia iOS app moving to GH
To: Wikimedia developers wikitech-l@lists.wikimedia.org


I have no problem with that, as long as everyone with +2 keeps his access


This should already be the case for the main iOS engineers, but I've made a
ticket https://phabricator.wikimedia.org/T106547 to make sure people
don't slip through the cracks.

Oh, and when
 it syncs with phabricator tickets


Phabricator ticket sync is something we're sad to lose, but it's part of
the trade-off we're making.  That said, it was only slightly beneficial as
we relied more on cards being in a Code Review column (w/ a WIP limit)
than looking at the gerrit updates on the cards themselves (which aren't
visible from the board view).  Not that GH will make this any easier, but
we're not losing too much here, IMHO.

On Wed, Jul 22, 2015 at 8:01 AM, Derk-Jan Hartman 
d.j.hartman+wmf...@gmail.com wrote:

 I have no problem with that, as long as everyone with +2 keeps his access
 and someone manages developer account additions and removals. Oh, and when
 it syncs with phabricator tickets

 DJ

 On Wed, Jul 22, 2015 at 1:39 PM, Petr Bena benap...@gmail.com wrote:

  Good job, you aren't the only one. Huggle team is using it for quite
  some time. To be honest I still feel that github is far superior to
  our gerrit installation and don't really understand why we don't use
  it for other projects too.
 
  The GitHub's pull requests are more compliant with original git
  philosophy of Linus, see this video:
  https://www.youtube.com/watch?v=4XpnKHJAok8 and would be a sufficient
  replacement for our current git-review mechanism, which is very
  complex and unfriendly to new developers who probably find it very
  difficult to use.
 
  On Wed, Jul 22, 2015 at 12:40 PM, Brian Gerstle bgers...@wikimedia.org
  wrote:
   Hey everyone,
  
   I'm writing with plans for the Wikimedia iOS engineering team to move
 its
   workflow to GitHub with Travis CI, much like RESTbase.
  
   The Wikimedia iOS engineers have been maintaining their own CI and
 build
   server and using Gerrit for code review. The more time efficient and
   commonplace approach for open source iOS software development leans
  heavily
   on GitHub with Travis CI instead (e.g., WordPress[0][1] and
  Firefox[2][3]).
   By using GitHub with Travis CI, the team believes it will work faster,
   improve testing, grow developer confidence in making code changes, and,
   most importantly, deploy fewer bugs to production.
  
   Builds requiring sensitive information (e.g., prod certs) will
   continue to run on WMF's Mac Mini. As is done for Android, when betas
 are
   pushed, the team will notify mobile-l.
  
   Feel free to reply or email me directly with any questions or comments.
  
   Regards,
  
   Brian
  
   0: https://github.com/wordpress-mobile/WordPress-iOS
   1: https://travis-ci.org/wordpress-mobile/WordPress-iOS
   2: https://github.com/mozilla/firefox-ios
   3: https://travis-ci.org/mozilla/firefox-ios
  
   --
   EN Wikipedia user page:
 https://en.wikipedia.org/wiki/User:Brian.gerstle
   IRC: bgerstle




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle



-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

[Wikitech-l] Wikipedia iOS app moving to GH

2015-07-22 Thread Brian Gerstle
Hey everyone,

I'm writing with plans for the Wikimedia iOS engineering team to move its
workflow to GitHub with Travis CI, much like RESTbase.

The Wikimedia iOS engineers have been maintaining their own CI and build
server and using Gerrit for code review. The more time-efficient and
commonplace approach for open source iOS software development leans heavily
on GitHub with Travis CI instead (e.g., WordPress[0][1] and Firefox[2][3]).
By using GitHub with Travis CI, the team believes it will work faster,
improve testing, grow developer confidence in making code changes, and,
most importantly, deploy fewer bugs to production.

Builds requiring sensitive information (e.g., prod certs) will
continue to run on WMF's Mac Mini. As is done for Android, when betas are
pushed, the team will notify mobile-l.

Feel free to reply or email me directly with any questions or comments.

Regards,

Brian

0: https://github.com/wordpress-mobile/WordPress-iOS
1: https://travis-ci.org/wordpress-mobile/WordPress-iOS
2: https://github.com/mozilla/firefox-ios
3: https://travis-ci.org/mozilla/firefox-ios
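For readers curious what the Travis side of such a setup looks like, a minimal
configuration might resemble the sketch below. This is illustrative only: the
Xcode image, the "Wikipedia" workspace/scheme names, and the simulator
destination are assumptions, not the team's actual settings.

```yaml
# .travis.yml -- illustrative sketch only; workspace and scheme names are
# hypothetical placeholders, not the real project configuration.
language: objective-c
osx_image: xcode6.4
script:
  - xcodebuild test -workspace Wikipedia.xcworkspace -scheme Wikipedia -destination 'platform=iOS Simulator,name=iPhone 6'
```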

-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] [WikimediaMobile] Wikipedia iOS app moving to GH

2015-07-22 Thread Brian Gerstle
This isn't really about Gerrit vs. GitHub. To be clear, we're mainly doing
this for CI (i.e. Travis).

That said, we (the iOS team) plan for our workflow to play to GitHub's
strengths—which also happen to be our personal preferences.  In short, this
means amending patches becomes pushing another commit onto a branch.
We've run into issues w/ rebasing & amending patches destroying our diff in
Gerrit, and problems with multiple people collaborating on the same patch.
We think GitHub will not only provide integrations for free CI, but, as an
added bonus, also resolve some of the workflow deficiencies that we've
personally encountered with Gerrit.


On Wed, Jul 22, 2015 at 5:14 PM, Gergo Tisza gti...@wikimedia.org wrote:

 On Wed, Jul 22, 2015 at 4:39 AM, Petr Bena benap...@gmail.com wrote:

 Good job, you aren't the only one. Huggle team is using it for quite
 some time. To be honest I still feel that github is far superior to
 our gerrit installation and don't really understand why we don't use
 it for other projects too.


 GitHub is focused on small projects; for a project with lots of patches
 and committers it is problematic in many ways:
 * poor repository management (fun fact: GitHub does not even log force
 pushes, much less provides any ability to undo them)
 * noisy commit histories due to poor support of amend-based workflows, and
 also because of the poor messages generated by the editing interface (Linus wrote
 a famous rant
 https://github.com/torvalds/linux/pull/17#issuecomment-5654674 on that)
 * no way to mark patches which depend on each other
 * diff view works poorly for large patches
 * CR interface works poorly for large patches (no way to write draft
 comments so you need to do two passes; discussions can be marked as
 obsolete by unrelated code changes in their vicinity)
 * hard to keep track of cherry-picks


 ___
 Mobile-l mailing list
 mobil...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/mobile-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

[Wikitech-l] interesting read: what makes great PMs

2015-06-19 Thread Brian Gerstle
Thought some people would find this stream of tweets from Charlie Kindel [0]
http://ceklog.kindel.com/2015/06/18/what-it-means-to-be-great-product-manager/
interesting.  I'd also recommend perusing his other posts about leadership
& engineering culture.  Here's my take on a few snippets; curious to hear
your thoughts as well:

*the only work that truly matters is that of the engineers*

While engineers might be responsible for actually building things,
Charlie himself admits that the quality (and relevance) of our work is
highly dependent on multiple factors leading up to the first engineer's
keystroke.

*left to their own devices, engineers will do two things: 1) the most
complicated thing, 2) the thing they think is fun*

Guilty as charged, but I think engineers who are sold on the teams'
mission are capable of making good decisions about what to work on.  Our
current situation in the Readership vertical is a live experiment on this
subject.

Finally, I wholeheartedly agree that I do my best work when it's crystal
clear *who the customer is, where the customer is, why the customer cares,
why it’s important for the business, and when it’s relevant.*

Happy reading!

Brian

0:
http://ceklog.kindel.com/2015/06/18/what-it-means-to-be-great-product-manager/

-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

[Wikitech-l] interesting read: testing@LinkedIn

2015-06-19 Thread Brian Gerstle
Short case study at LinkedIn [0]
http://engineering.linkedin.com/developer-happiness/getting-code-production-less-friction-and-high-quality
about how they cut release latencies by 80-90% by reversing the ice cream
cone of death [1]
http://engineering.linkedin.com/sites/default/files/InitialState-Fun.png:

There's one particular snippet that strongly resonates with what I've
experienced at multiple jobs (emphasis mine):

*Team ownership of quality*



Quality is the responsibility of the *whole team*. Quality control is most
 efficiently achieved if software quality is considered at *every step in
 the development cycle*. A software quality process will benefit from an
 appropriate distribution of test automation ownership between teams
 cooperating in a software development effort.


In other words: QA aren't the only ones responsible for tests. I would go a
step (or several) further and explicitly suggest that testing needs to be
considered at—or an integral part of—the design & planning processes.
Rich Hickey goes even further in his talk about Hammock Driven Development
https://www.youtube.com/watch?v=f84n5oFoZBc (highly recommended: TL;DR:
people start work before fully understanding the problem space).

Things seem to be trending up at WMF, especially w/ the Web engineers' big
strides in end-to-end testing.  However, as the article suggests, you need
to attack the quality problem from both ends—perhaps even emphasizing unit
tests (shortest feedback, cheapest, least fragile).

0:
http://engineering.linkedin.com/developer-happiness/getting-code-production-less-friction-and-high-quality
1: http://engineering.linkedin.com/sites/default/files/InitialState-Fun.png,
thanks to Zeljko for introducing me to that fun term, much better than
upside-down pyramid

-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] [WikimediaMobile] WebM/Ogg playback: testing energy usage on iOS

2015-06-19 Thread Brian Gerstle
Any idea if your work would also support dynamic bitrate switching?

On Fri, Jun 19, 2015 at 12:57 PM, James Forrester jforres...@wikimedia.org
wrote:

 On 19 June 2015 at 02:08, Brion Vibber bvib...@wikimedia.org wrote:

 One of the reasons we've always worried about using the open Ogg and WebM
 formats on iPhones and iPads is that we don't get to make use of the
 hardware MP4/H.264 codec... using the main CPU cores is presumed to drain
 the battery faster.

 I've done a first-pass test measuring energy usage of native-code
 WebM/Ogg playback using the new energy reporting in the Xcode 7 / iOS 9
 beta:


 ​[Snip]​

 ​This is really impressive to see, thank you Brion. And yes, let's not try
 to achieve zero. :-)​

 ​J.​
 --
 James D. Forrester
 Product Manager, Editing
 Wikimedia Foundation, Inc.

 jforres...@wikimedia.org | @jdforrester





-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-18 Thread Brian Gerstle
I'm reluctant to interject again, but as a client of the API I feel the
need to speak up.  Let's work through an example: Clients A, B, & C are
using API 1 when the Server adds API 2:

   1. Current (change-the-default) approach:
  1. All clients start receiving warnings as *extra payload data* (how
  does this even work for APIs w/o a response payload?)
  2. Clients B & C start using API 2
  3. Server changes API 1 to behave like API 2
  4. Client A is broken, so switches to API 2
  5. Server removes API 2 (becomes the new default)
  6. Clients A, B, & C have to switch to API 1
   2. Possible alternative:
  1. Clients get an email (as they currently do) about the new API
  being available
  2. Clients A & B migrate to API 2
  3. Since there's only 1 client left (and/or the deadline has passed)
  Server removes API 1, *explicitly breaking* Client C
  4. Someone notices Client C is broken and moves it to API 2 (or
  whatever the currently-supported API is)

Am I missing something? What are your thoughts on 1 vs. 2?

On Thu, Jun 18, 2015 at 10:17 AM, John Mark Vandenberg jay...@gmail.com
wrote:

 On Thu, Jun 18, 2015 at 12:13 PM, Yuri Astrakhan
 yastrak...@wikimedia.org wrote:
  On Wed, Jun 17, 2015 at 7:44 PM, John Mark Vandenberg jay...@gmail.com
  wrote:
 
 
  The API currently emits a warning if a query continuation mode isn't
  selected.
 
  I guess on July 1 the API could emit an error, and not return any query
  data.
  Then the data isn't going to cause weird behaviour - it will break,
  properly.
 
 
  Hmm, this is actually an interesting idea - would it make sense to error
 on
  missing "continue" or "rawcontinue" for all action=query for about a
 month
  or two, so that everyone notices it right away and gets updated, and than
  resume with the new behavior?

 Not 'all', please, but perhaps all queries where there is actual
 continuation data like is used now for the existing warning.  I would
 really to avoid the API failing at all, ever, for
 ?action=query&meta=siteinfo - that is how clients get the API
 generator version, and only when we have the API generator version do
 we know what features are supported by the API.

 Or just not flick the switch on July 1, and only default to the new
 continuation mode for formatversion=2.

 --
 John Vandenberg





-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-18 Thread Brian Gerstle
I guess it comes down to is this: if we're going to continue supporting old
behavior, they should be accessible via the same old requests.  *This
removes the need for existing clients to be updated in the first place*.
If we eventually want to delete the old code, keeping the old behavior
separated from the new will make it clear & explicit what's being dropped
and what to use instead.  For example, dropping formatversion=1 means
clients need to use formatversion=2 (or whatever other supported versions).

Lastly, changing the default behavior to make things sane for new
developers is, IMO, a bad trade-off because they'll eventually get tripped
by us pulling the rug out from under their feet by *breaking backwards
compatibility of stable APIs*.  Those sorts of changes should be reserved for
experimental or even beta APIs.  Continuing queries seems like a stable—and
pervasive—part of the API.


On Thu, Jun 18, 2015 at 11:40 AM, Brad Jorsch (Anomie) 
bjor...@wikimedia.org wrote:

 On Thu, Jun 18, 2015 at 11:33 AM, MZMcBride z...@mzmcbride.com wrote:

  Brad Jorsch (Anomie) wrote:
  On Thu, Jun 18, 2015 at 10:37 AM, Brian Gerstle bgers...@wikimedia.org
 
  wrote:
  1. Current, change the default approach:
 1. All clients start receiving warnings as *extra payload data*
(how does this even work for APIs w/o a response payload?)
  
  What module in api.php doesn't have a response payload?
 action=opensearch
  with format=json is the only example I can think of.
 
  I would think any request to api.php using format=none would lack a
  response payload? However I'm not totally sure that I'm reading and using
  the term response payload in the same way that you two might be.
 

 Haha, true. But if someone is using format=none, they're explicitly not
 caring to know if their request worked or not. ;)

 --
 Brad Jorsch (Anomie)
 Software Engineer
 Wikimedia Foundation




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-04 Thread Brian Gerstle
I know about the phab ticket, but I'm glad you referenced it because I
found this:

(Anomie): I think feature flags to *select new behavior* and *a good
 deprecation process* will take care of most things that actually need
 improvement, to the point where we can do per-module versioning on an ad
 hoc basis rather than trying to force it everywhere.


IOW, why don't we continue w/ this feature flagging approach, which seems
like a decent way to version APIs and prevent breaking backwards
compatibility?




On Thu, Jun 4, 2015 at 1:05 PM, Legoktm legoktm.wikipe...@gmail.com wrote:

 On 06/04/2015 09:45 AM, Brian Gerstle wrote:
  While it is (a little bit) nicer for new developers, they'll just get
  burned (along with all the other current API users) when you change the
  defaults.
  What I'm trying to say is, changing the default seems like more work for
  more people with very little benefit. This is why people version APIs:
  see GitHub (https://developer.github.com/v3/), reddit
  (https://www.reddit.com/dev/api), Stripe
  (https://stripe.com/docs/api#charge_object), and LinkedIn
  (https://developer.linkedin.com/docs/rest-api).

 I'd recommend reading https://phabricator.wikimedia.org/T41592, which
 contains a pretty good rationale of why we currently don't version the API.

 -- Legoktm





-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-04 Thread Brian Gerstle
Just found the deprecation process document
https://www.mediawiki.org/wiki/Requests_for_comment/API_roadmap,
specifically:


1. If not already present, a request parameter will be added to
specifically request the old behavior.
2. The change will be announced:
   - A message will be sent to the mediawiki-api-announce
   http://lists.wikimedia.org/pipermail/mediawiki-api-announce/ mailing
   list.
   - Deprecation warnings will be output when neither the
   select-new-version nor the select-old-version flags are used. Logs will
   also be made.
3. *After a suitable timeframe, the new version will become the
default.*
4. *Any flag to select the new version explicitly may at some point be
removed, leading to unrecognized parameter warnings.*


My argument is that step #3 is unnecessary and #4 needs clarification in
that IMO APIs should only be removed when they are no longer supported,
otherwise you're just creating busy work for yourself and the clients.

On Thu, Jun 4, 2015 at 1:59 PM, Brian Gerstle bgers...@wikimedia.org
wrote:

 I know about the phab ticket, but I'm glad you referenced it because I
 found this:

 (Anomie): I think feature flags to *select new behavior* and *a good
 deprecation process* will take care of most things that actually need
 improvement, to the point where we can do per-module versioning on an ad
 hoc basis rather than trying to force it everywhere.


 IOW, why don't we continue w/ this feature flagging approach, which seems
 like a decent way to version APIs and prevent breaking backwards
 compatibility?




 On Thu, Jun 4, 2015 at 1:05 PM, Legoktm legoktm.wikipe...@gmail.com
 wrote:

 On 06/04/2015 09:45 AM, Brian Gerstle wrote:
  While it is (a little bit) nicer for new developers, they'll just get
  burned (along with all the other current API users) when you change the
  defaults.
  What I'm trying to say is, changing the default seems like more work for
  more people with very little benefit. This is why people version APIs:
  see GitHub (https://developer.github.com/v3/), reddit
  (https://www.reddit.com/dev/api), Stripe
  (https://stripe.com/docs/api#charge_object), and LinkedIn
  (https://developer.linkedin.com/docs/rest-api).

 I'd recommend reading https://phabricator.wikimedia.org/T41592, which
 contains a pretty good rationale of why we currently don't version the
 API.

 -- Legoktm





 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-04 Thread Brian Gerstle
Sorry, guess I'm also calling #1 into question, since the old behavior
should have already been the default or selectable by its own
flag/parameter/URL-path-component/content-type/whatever.

On Thu, Jun 4, 2015 at 2:08 PM, Brian Gerstle bgers...@wikimedia.org
wrote:

 Just found the deprecation process document
 https://www.mediawiki.org/wiki/Requests_for_comment/API_roadmap,
 specifically:


1. If not already present, a request parameter will be added to
specifically request the old behavior.
2. The change will be announced:
   - A message will be sent to the mediawiki-api-announce
   http://lists.wikimedia.org/pipermail/mediawiki-api-announce/ mailing
   list.
   - Deprecation warnings will be output when neither the
   select-new-version nor the select-old-version flags are used. Logs will
   also be made.
3. *After a suitable timeframe, the new version will become the
default.*
4. *Any flag to select the new version explicitly may at some point
be removed, leading to unrecognized parameter warnings.*


 My argument is that step #3 is unnecessary and #4 needs clarification in
 that IMO APIs should only be removed when they are no longer supported,
 otherwise you're just creating busy work for yourself and the clients.

 On Thu, Jun 4, 2015 at 1:59 PM, Brian Gerstle bgers...@wikimedia.org
 wrote:

 I know about the phab ticket, but I'm glad you referenced it because I
 found this:

 (Anomie): I think feature flags to *select new behavior* and *a good
 deprecation process* will take care of most things that actually need
 improvement, to the point where we can do per-module versioning on an ad
 hoc basis rather than trying to force it everywhere.


 IOW, why don't we continue w/ this feature flagging approach, which seems
 like a decent way to version APIs and prevent breaking backwards
 compatibility?




 On Thu, Jun 4, 2015 at 1:05 PM, Legoktm legoktm.wikipe...@gmail.com
 wrote:

 On 06/04/2015 09:45 AM, Brian Gerstle wrote:
  While it is (a little bit) nicer for new developers, they'll just get
  burned (along with all the other current API users) when you change the
  defaults.
  What I'm trying to say is, changing the default seems like more work
  for more people with very little benefit. This is why people version
  APIs: see GitHub (https://developer.github.com/v3/), reddit
  (https://www.reddit.com/dev/api), Stripe
  (https://stripe.com/docs/api#charge_object), and LinkedIn
  (https://developer.linkedin.com/docs/rest-api).

 I'd recommend reading https://phabricator.wikimedia.org/T41592, which
 contains a pretty good rationale of why we currently don't version the
 API.

 -- Legoktm





 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle




 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-04 Thread Brian Gerstle
-announce

Maybe we should have a reply-to for api-announce mailings that prevents
accidental chatter?

On Thu, Jun 4, 2015 at 12:45 PM, Brian Gerstle bgers...@wikimedia.org
wrote:



 On Wed, Jun 3, 2015 at 1:13 PM, Brad Jorsch (Anomie) 
 bjor...@wikimedia.org wrote:

 On Wed, Jun 3, 2015 at 10:04 AM, Brian Gerstle bgers...@wikimedia.org
 wrote:

  My question is: why does the default behavior need to change?  Wouldn't
  continuing with the default behavior allow people to continue using the
  rawcontinue behavior for as long as we want to support it—without
 making
  any changes?
 

 The decision to change the default came out of the same concerns that led
 to the improved action=help output and some of the other work I've been
 doing lately: We want to lower the barriers for using our API, which means
 that the default shouldn't be something user-hostile.


 While it is (a little bit) nicer for new developers, they'll just get
 burned (along with all the other current API users) when you change the
 defaults.
 What I'm trying to say is, changing the default seems like more work for
 more people with very little benefit. This is why people version APIs:
 see GitHub (https://developer.github.com/v3/), reddit
 (https://www.reddit.com/dev/api), Stripe
 (https://stripe.com/docs/api#charge_object), and LinkedIn
 (https://developer.linkedin.com/docs/rest-api).



 The raw continuation is deceptively simple: it looks straightforward, but
 if you're using it with a generator, multiple prop modules, and meta or
 list modules, your client code has to know when to ignore the returned
 continuation for the generator, when to remove a module from prop and then
 when to re-add it, and when to remove the meta or list modules. I wouldn't
 be that surprised to learn that more people have it wrong than correct if
 their code supports using prop modules with generators.

 The new continuation actually is simple: you send the equivalent of
 array_merge( $originalParams, $continueParams ) and it just works.


 Yes, some of the same could be said for making format=json&formatversion=2
 the default. In this case the formatversion=1 output is just annoying
 rather than actually hostile (although representing boolean true as a
 falsey string comes close), so at this time there's no plan to make that
 breaking change.
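A concrete illustration of the formatversion=1 quirk described above (values
are schematic, not actual API output): version 1 marks a boolean flag by the
*presence* of a key whose value is the empty string, which is falsy in most
languages, while formatversion=2 uses real booleans.

```python
# Schematic example (not actual API output) of the boolean-flag quirk.
v1_page = {"title": "Foo", "missing": ""}     # formatversion=1: "" is falsy
v2_page = {"title": "Foo", "missing": True}   # formatversion=2: real boolean

def is_missing_v1(page):
    # Correct v1 check: test for the key's *presence*, not its truthiness.
    return "missing" in page

def is_missing_naive(page):
    # The natural-looking check that silently fails on v1 output.
    return bool(page.get("missing"))

checks = (is_missing_v1(v1_page), is_missing_naive(v1_page), is_missing_naive(v2_page))
```

The naive check returns False for the v1 page even though the flag is set,
which is exactly the "falsey string" hostility being discussed.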


 That's my point, why *ever* make it a breaking change? It's such a low
 cost to add a few characters to the URL to specify an API version.  Our
 docs should tell developers what the current version is.



 --
 Brad Jorsch (Anomie)
 Software Engineer
 Wikimedia Foundation




 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-04 Thread Brian Gerstle
On Wed, Jun 3, 2015 at 1:13 PM, Brad Jorsch (Anomie) bjor...@wikimedia.org
wrote:

 On Wed, Jun 3, 2015 at 10:04 AM, Brian Gerstle bgers...@wikimedia.org
 wrote:

  My question is: why does the default behavior need to change?  Wouldn't
  continuing with the default behavior allow people to continue using the
  rawcontinue behavior for as long as we want to support it—without
 making
  any changes?
 

 The decision to change the default came out of the same concerns that led
 to the improved action=help output and some of the other work I've been
 doing lately: We want to lower the barriers for using our API, which means
 that the default shouldn't be something user-hostile.


While it is (a little bit) nicer for new developers, they'll just get
burned (along with all the other current API users) when you change the
defaults.  What I'm trying to say is, changing the default seems like more
work for more people with very little benefit. This is why people version
APIs: see GitHub (https://developer.github.com/v3/), reddit
(https://www.reddit.com/dev/api), Stripe
(https://stripe.com/docs/api#charge_object), and LinkedIn
(https://developer.linkedin.com/docs/rest-api).



 The raw continuation is deceptively simple: it looks straightforward, but
 if you're using it with a generator, multiple prop modules, and meta or
 list modules, your client code has to know when to ignore the returned
 continuation for the generator, when to remove a module from prop and then
 when to re-add it, and when to remove the meta or list modules. I wouldn't
 be that surprised to learn that more people have it wrong than correct if
 their code supports using prop modules with generators.

 The new continuation actually is simple: you send the equivalent of
 array_merge( $originalParams, $continueParams ) and it just works.
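That merge-and-repeat loop can be sketched in a few lines of Python. This is a
self-contained illustration with a faked fetch function, not actual api.php
client code; the parameter values are schematic.

```python
# Sketch of the "new" continue= query loop described above: merge the
# returned "continue" object back into the original parameters and repeat.
# `fetch` stands in for an HTTP GET against api.php; it is faked below so
# the merge logic can run self-contained.

def query_all(fetch, params):
    """Yield each api.php result page until no 'continue' block is returned."""
    params = dict(params, **{"continue": ""})  # opt in to the new continuation
    while True:
        result = fetch(params)
        yield result
        if "continue" not in result:
            break
        # The key step: equivalent of PHP's array_merge($originalParams, $continueParams)
        params = dict(params, **result["continue"])

# Fake two-page API response for demonstration:
_pages = [
    {"query": {"allpages": ["A", "B"]},
     "continue": {"apcontinue": "C", "continue": "-||"}},
    {"query": {"allpages": ["C"]}},
]

def fake_fetch(params):
    return _pages[1] if "apcontinue" in params else _pages[0]

pages = list(query_all(fake_fetch, {"action": "query", "list": "allpages"}))
```

The client never inspects the continuation contents; it just folds them back
into the request, which is the simplicity being claimed.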


 Yes, some of the same could be said for making format=json&formatversion=2
 the default. In this case the formatversion=1 output is just annoying
 rather than actually hostile (although representing boolean true as a
 falsey string comes close), so at this time there's no plan to make that
 breaking change.


That's my point, why *ever* make it a breaking change? It's such a low cost
to add a few characters to the URL to specify an API version.  Our docs
should tell developers what the current version is.



 --
 Brad Jorsch (Anomie)
 Software Engineer
 Wikimedia Foundation




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-03 Thread Brian Gerstle
- mediawiki-api-announce

Sorry, didn't mean to CC the api announcement list.  Seems like my message
got bounced anyway.

On Wed, Jun 3, 2015 at 10:04 AM, Brian Gerstle bgers...@wikimedia.org
wrote:

 First, kudos to the API team for going the extra mile and reaching out to
 the community to guide them through this process.  This doesn't affect the
 apps at the moment, but it's good to know you guys are thinking about
 clients and how API changes affect them.

 My question is: why does the default behavior need to change?  Wouldn't
 continuing with the default behavior allow people to continue using the
 rawcontinue behavior for as long as we want to support it—without making
 any changes?

 On the other hand, if we don't want to support the old behavior, would it
 be better to simply return an error (e.g. HTTP 400) instead of breaking
 clients in a less explicit way?  For example, as a client, I would prefer
 my code failed faster (bad request) instead of failing more-or-less
 silently.

 Cheers,

 Brian


 On Wed, Jun 3, 2015 at 7:50 AM, Yuri Astrakhan yastrak...@wikimedia.org
 wrote:

 I feel that bot operators should actively pay attention to the technical
 aspects of the community and the mailing lists. So, the bot operator who
 never updates their software, doesn't pay attention to the announcements,
 and ignores api warnings should be blocked after the deadline.  Bot
 operators do not operate in a vacuum, and should never run bots just for
 the sake of running them.
 Community should always be able to find and communicate with the bot
 operators.
 Obviously we should not make sudden changes (except in the
 security/breaking matters), and try to make the process as easy as
 possible. The rawcontinue param is exactly that, simply adding it will
 keep
 the logic as before.

 Lastly, I again would like to promote the idea discussed at the hackathon
 -- a client side minimalistic library that bigger frameworks like
 pywikibot
 rely on, and that is designed in part by the core developers. See the
 proposal at

 https://www.mediawiki.org/wiki/Requests_for_comment/Minimalistic_MW_API_Client_Lib_Specification
 On Jun 3, 2015 2:29 PM, John Mark Vandenberg jay...@gmail.com wrote:

  On Wed, Jun 3, 2015 at 3:42 AM, Brad Jorsch (Anomie)
  bjor...@wikimedia.org wrote:
   ...
   I've compiled a list of bots that have hit the deprecation warning
 more
   than 10,000 times over the course of the week May 23–29. If you are
   responsible for any of these bots, please fix them. If you know who
 is,
   please make sure they've seen this notification. Thanks.
 
  Thank you Brad for doing impact analysis and providing a list of the
  71 bots with more than 10,000 problems per week.  We can try to solve
  those by working with the bot operators.
 
  If possible, could you compile a list of bots affected at a lower
  threshold - maybe 1,000.  That will give us a better idea of the scale
  of bots operators that will be affected when this lands - currently in
  one months time.
 
  Will the deploy date be moved back if the impact doesn't diminish by
  bots being fixed?
 
  --
  John Vandenberg
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] API BREAKING CHANGE: Default continuation mode for action=query will change at the end of this month

2015-06-03 Thread Brian Gerstle
First, kudos to the API team for going the extra mile and reaching out to
the community to guide them through this process.  This doesn't affect the
apps at the moment, but it's good to know you guys are thinking about
clients and how API changes affect them.

My question is: why does the default behavior need to change?  Wouldn't
continuing with the default behavior allow people to continue using the
rawcontinue behavior for as long as we want to support it—without making
any changes?

On the other hand, if we don't want to support the old behavior, would it
be better to simply return an error (e.g. HTTP 400) instead of breaking
clients in a less explicit way?  For example, as a client, I would prefer
my code failed faster (bad request) instead of failing more-or-less
silently.

Cheers,

Brian


On Wed, Jun 3, 2015 at 7:50 AM, Yuri Astrakhan yastrak...@wikimedia.org
wrote:

 I feel that bot operators should actively pay attention to the technical
 aspects of the community and the mailing lists. So, the bot operator who
 never updates their software, doesn't pay attention to the announcements,
 and ignores api warnings should be blocked after the deadline.  Bot
 operators do not operate in a vacuum, and should never run bots just for
 the sake of running them.
 Community should always be able to find and communicate with the bot
 operators.
 Obviously we should not make sudden changes (except in the
 security/breaking matters), and try to make the process as easy as
 possible. The rawcontinue param is exactly that, simply adding it will keep
 the logic as before.

 Lastly, I again would like to promote the idea discussed at the hackathon
 -- a client side minimalistic library that bigger frameworks like pywikibot
 rely on, and that is designed in part by the core developers. See the
 proposal at

 https://www.mediawiki.org/wiki/Requests_for_comment/Minimalistic_MW_API_Client_Lib_Specification
 On Jun 3, 2015 2:29 PM, John Mark Vandenberg jay...@gmail.com wrote:

  On Wed, Jun 3, 2015 at 3:42 AM, Brad Jorsch (Anomie)
  bjor...@wikimedia.org wrote:
   ...
   I've compiled a list of bots that have hit the deprecation warning more
   than 10,000 times over the course of the week May 23–29. If you are
   responsible for any of these bots, please fix them. If you know who is,
   please make sure they've seen this notification. Thanks.
 
  Thank you Brad for doing impact analysis and providing a list of the
  71 bots with more than 10,000 problems per week.  We can try to solve
  those by working with the bot operators.
 
  If possible, could you compile a list of bots affected at a lower
  threshold - maybe 1,000.  That will give us a better idea of the scale
  of bots operators that will be affected when this lands - currently in
  one months time.
 
  Will the deploy date be moved back if the impact doesn't diminish by
  bots being fixed?
 
  --
  John Vandenberg
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] API returns empty strings for boolean fields?

2015-05-28 Thread Brian Gerstle
For example:

mainpage field is omitted when querying Barack Obama
https://en.wikipedia.org/wiki/Special:ApiSandbox#action=mobileviewformat=jsonpage=Barack_Obamaprop=pagepropspageprops=mainpage
:

 {
     "mobileview": {
         "pageprops": [],
         "sections": []
     }
 }


But, it's present when querying Main Page
https://en.wikipedia.org/wiki/Special:ApiSandbox#action=mobileviewformat=jsonpage=Main_Pageprop=pagepropspageprops=mainpage
:

 {
     "mobileview": {
         "pageprops": [],
         "mainpage": "",
         "sections": [ ... ]
     }
 }


Is this the desired behavior?
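For what it's worth, the formatversion=1 convention is that a boolean is true when the key is present (with an empty-string value) and false when the key is absent, so clients have to test for presence rather than truthiness. A small illustrative sketch (the dicts below are simplified from the responses above):

```python
# formatversion=1 booleans: a true value appears as a present-but-empty
# key ("mainpage": ""); false is the key being absent. Truthiness tests
# get this wrong, because bool("") is False in Python.

def api_bool(obj, key):
    """Interpret a formatversion=1 style boolean field."""
    return key in obj          # presence, not the value, carries the meaning

main_page = {"pageprops": [], "mainpage": "", "sections": []}  # simplified
article   = {"pageprops": [], "sections": []}                  # simplified
```

A naive `bool(main_page["mainpage"])` would report False for the Main Page too, which is exactly the trap discussed here.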

-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikipedia iOS, now Swift enabled!

2015-05-23 Thread Brian Gerstle
If you want to write some Swift in the main Wikipedia iOS app project,
please work off the swift branch.

Happy hacking!

Brian

-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Multimedia team?

2015-05-11 Thread Brian Gerstle
I'm also curious what our audio/video storage/transcoding/playback roadmap
is.  IMO it's a pretty fundamental feature that isn't well supported in all
the clients (especially mobile).  Could probably do some interesting audio
stuff (e.g. narration in many languages) for the visually impaired.

On Mon, May 11, 2015 at 7:42 AM, Jean-Frédéric jeanfrederic.w...@gmail.com
wrote:

 2015-05-11 10:29 GMT+01:00 Antoine Musso hashar+...@free.fr:

  On 11/05/15 02:18, Tim Starling wrote:
 
  On 10/05/15 07:06, Brian Wolff wrote:
 
  People have been talking about vr for a long time. I think there is
 more
  pressing concerns (e.g. video). I suspect VR will stay in the video
 game
  realm  or gimmick realm for a while yet
 
  Maybe VR is a gimmick, but VRML, or X3D as it is now called, could be
  a useful way to present 3D diagrams embedded in pages. Like SVG, we
  could use it with or without browser support.
 
 
  Hello,
 
  A potential use case for the encyclopedia, would be to display models of
  chemistry molecules. An example:
 
http://wiki.jmol.org/index.php/Jmol_MediaWiki_Extension
 

 See https://phabricator.wikimedia.org/project/profile/16/ and 
 https://phabricator.wikimedia.org/project/profile/804/

 --
 Jean-Frédéric
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit 2.11 release adds inline editing

2015-05-07 Thread Brian Gerstle
And objective-c syntax highlighting!

On Thu, May 7, 2015 at 8:39 AM, florian.schmidt.wel...@t-online.de 
florian.schmidt.wel...@t-online.de wrote:

 fyi,

 https://phabricator.wikimedia.org/T65847#1268528

 Best,
 Florian
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Connecting to github community

2015-05-05 Thread Brian Gerstle
I'd also like to mention some similar, third-party solutions that also
address this problem:

Github plugin for Gerrit https://gerrit.googlesource.com/plugins/github/:
seems to act as a translator between Gerrit/GH while maintaining sync
between the two. Last I checked, we won't use it because we'd have to
upgrade our Gerrit install, and as Greg mentioned, we're not willing to
upgrade Gerrit because we're trying to migrate to Differential.

Third-party site which does mirroring http://gerrithub.io/: this website
seems to allow you to have a gerrit interface to a GitHub repo (different
approach, AFAICT same end result that you can contribute & review via
gerrit or github). Probably less feasible for us because it's a third party
solution—though I'm not sure whether it's open source.

On Tue, May 5, 2015 at 4:28 PM, Bryan Davis bd...@wikimedia.org wrote:

 On Tue, May 5, 2015 at 1:05 AM, Amir Ladsgroup ladsgr...@gmail.com
 wrote:
  Hey,
  Github has a huge community of developers that collaborating with them
 can
  be beneficial for us and them but Wikimedia codes are in gerrit (and in
  future in phabricator) and our bug tracker is in phabrictor. sometimes It
  feels we are in another planet.
  Wikimedia has a mirror in github but we close pull requests immediately
 and
  we barely check issues raised there. Also there is a big notice in
  github[1], if you want to help, do it our way. Suddenly I got an idea
  that if we can synchronize github activities with gerrit and phabricator,
  it would help us by letting others help in their own way. It made me so
  excited that I wrote a bot yesterday to automatically duplicates patches
 of
  pull requests in gerrit and makes a comment in the pull request stating
 we
  made a patch in gerrit. I did a test in pywikibot and it worked well
 [2][3].
 
  Note that the bot doesn't create a pull request for every gerrit patch
 but
  it  creates a gerrit patch for every (open) pull requests.
 
  But before I go on we need to discuss on several important aspects of
 this
  idea:
  1- Is it really necessary to do this? Do you agree we need something like
  that?
  2-I think a bot to duplicate pull requests is not the best idea since it
  creates them under the bot account and not under original user account.
 We
  can create a plugin for phabrictor to do that but issues like privacy
 would
  bother us. (using OAuth wouldn't be a bad idea) What do you think? What
 do
  you suggest?
  3- Even if we create a plugin, still a bot to synchronize comments and
 code
  reviews is needed. I wrote my original code in a way that I can expand
 this
  to do this job too, but do you agree we need to do this?
  4- We can also expand this bot to create a phabricator task for each
 issue
  that has been created (except pull requests). Is it okay?
 
  I published my code in [4].
 
  [1]: https://github.com/wikimedia/pywikibot-core Github mirror of
  pywikibot/core - our actual code is hosted with Gerrit (please see
  https://www.mediawiki.org/wiki/Developer_access for contributing)
  [2]: https://github.com/wikimedia/pywikibot-core/pull/5
  [3]: https://gerrit.wikimedia.org/r/208906
  [4]: https://github.com/Ladsgroup/sync_github_bot

 I think this is a cool idea. What I like about this is the general
 idea of trying to lower the barriers to contribution while still
 preserving a single source of truth and reviewer workflow.

 RobLa and I talked a couple of times in the past about the potential
 usefulness of something similar. I'm actually more interested in
 seeing tools built to bridge GitHub and Phabricator than GitHub and
 Gerrit however. Gerrit's days as the Wikimedia code review system are
 numbered and Phabricator will be the next system we use.

 Facebook uses a bot to transfer pull requests from GitHub [5] to their
 Phabricator instance [6] for HHVM. Having a system like this for the
 Wikimedia projects would be nice. It would be interesting to see
 something similar built to transfer GitHub issues to Phabricator as
 well possibly with an additional status change on the GitHub side when
 the associated Phabricator task was resolved.

 [5]: https://github.com/facebook/hhvm/pull/4924#issuecomment-76651483
 [6]: https://reviews.facebook.net/D34215

 Bryan
 --
 Bryan Davis  Wikimedia Foundationbd...@wikimedia.org
 [[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
 irc: bd808v:415.839.6885 x6855

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reminder: Breaking change to API continuation planned for 1.26

2015-04-20 Thread Brian Gerstle
It doesn't seem like anyone's mentioned it yet, but it would be even nicer
IMHO to not even worry about this class of problems by versioning the API
itself.  This way, you can provide an explicit upgrade path for
actively-developed clients while keeping old versions around for legacy
clients.   As a result, marking an old (or previously deprecated) API
(version) obsolete will only effect clients who haven't upgraded to the new
version.
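As a concrete illustration of how cheap explicit versioning is for a client, here is a hedged sketch; the `version` parameter is hypothetical (on MediaWiki the real knobs at the time were parameters like `formatversion` and `rawcontinue`):

```python
# Sketch of a client that always pins an explicit version parameter, so a
# server-side change of defaults can never alter what it receives. The
# "version" parameter itself is hypothetical, not a real MediaWiki one.

from urllib.parse import urlencode

def build_url(base, params, version=2):
    query = dict(params, version=version)   # a few extra characters per request
    return base + "?" + urlencode(sorted(query.items()))

url = build_url("https://example.org/w/api.php", {"action": "query"})
```

The per-request cost really is just a few characters in the URL, while legacy clients keep getting the version they were written against.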

On Mon, Apr 20, 2015 at 1:51 PM, Yuri Astrakhan yastrak...@wikimedia.org
wrote:

 I still think that we should provide a simple API clients for JS, PHP, and
 python. JS version should support both Browser  node.js. The libs should
 handle the most rudimentary API functioning like logins, warnings, and
 continuation, in the way that API devs feel is best, but nothing specific
 to any of the modules (e.g. it should not have a separate function to get a
 list of all pages).

 On Mon, Apr 20, 2015 at 8:38 PM, Jon Robson jdlrob...@gmail.com wrote:

  Is there a phab task for that.. ? :-)
 
 
  On Mon, Apr 20, 2015 at 10:21 AM, Brad Jorsch (Anomie)
  bjor...@wikimedia.org wrote:
   On Mon, Apr 20, 2015 at 1:19 PM, Jon Robson jdlrob...@gmail.com
 wrote:
  
   I use mw.api so I suspect that to handle deprecation notices - does it
   not? If not why not?
  
  
   Because no one coded it for that framework yet?
  
  
   --
   Brad Jorsch (Anomie)
   Software Engineer
   Wikimedia Foundation
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 
 
  --
  Jon Robson
  * http://jonrobson.me.uk
  * https://www.facebook.com/jonrobson
  * @rakugojon
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] Wikimedia REST content API is now available in beta

2015-03-11 Thread Brian Gerstle
Really impressive work guys, love the generated docs!

On Tue, Mar 10, 2015 at 6:23 PM, Gabriel Wicke gwi...@wikimedia.org wrote:

 Hello all,

 I am happy to announce the beta release of the Wikimedia REST Content API
 at

 https://rest.wikimedia.org/

 Each domain has its own API documentation, which is auto-generated from
 Swagger API specs. For example, here is the link for the English Wikipedia:

 https://rest.wikimedia.org/en.wikipedia.org/v1/?doc

 At present, this API provides convenient and low-latency access to article
 HTML, page metadata and content conversions between HTML and wikitext.
 After extensive testing we are confident that these endpoints are ready for
 production use, but have marked them as 'unstable' until we have also
 validated this with production users. You can start writing applications
 that depend on it now, if you aren't afraid of possible minor changes
 before transitioning to 'stable' status. For the definition of the terms
 ‘stable’ and ‘unstable’ see https://www.mediawiki.org/wiki/API_versioning
 .

 While general and not specific to VisualEditor, the selection of endpoints
 reflects this release's focus on speeding up VisualEditor. By storing
 private Parsoid round-trip information separately, we were able to reduce
 the HTML size by about 40%. This in turn reduces network transfer and
 processing times, which will make loading and saving with VisualEditor
 faster. We are also switching from a cache to actual storage, which will
 eliminate slow VisualEditor loads caused by cache misses. Other users of
 Parsoid HTML like Flow, HTML dumps, the OCG PDF renderer or Content
 translation will benefit similarly.

 But, we are not done yet. In the medium term, we plan to further reduce
 the HTML size by separating out all read-write metadata. This should allow
 us to use Parsoid HTML with its semantic markup
 https://www.mediawiki.org/wiki/Parsoid/MediaWiki_DOM_spec directly for
 both views and editing without increasing the HTML size over the current
 output. Combined with performance work in VisualEditor, this has the
 potential to make switching to visual editing instantaneous and free of any
 scrolling.

 We are also investigating a sub-page-level edit API for
 micro-contributions and very fast VisualEditor saves. HTML saves don't
 necessarily have to wait for the page to re-render from wikitext, which
 means that we can potentially make them faster than wikitext saves. For
 this to work we'll need to minimize network transfer and processing time on
 both client and server.

 More generally, this API is intended to be the beginning of a
 multi-purpose content API. Its implementation (RESTBase
 http://www.mediawiki.org/wiki/RESTBase) is driven by a declarative
 Swagger API specification, which helps to make it straightforward to extend
 the API with new entry points. The same API spec is also used to
 auto-generate the aforementioned sandbox environment, complete with handy
 "try it" buttons. So, please give it a try and let us know what you think!

 This API is currently unmetered; we recommend that users not perform more
 than 200 requests per second and may implement limitations if necessary.

 I also want to use this opportunity to thank all contributors who made
 this possible:

 - Marko Obrovac, Eric Evans, James Douglas and Hardik Juneja on the
 Services team worked hard to build RESTBase, and to make it as extensible
 and clean as it is now.

 - Filippo Giunchedi, Alex Kosiaris, Andrew Otto, Faidon Liambotis, Rob
 Halsell and Mark Bergsma helped to procure and set up the Cassandra storage
 cluster backing this API.

 - The Parsoid team with Subbu Sastry, Arlo Breault, C. Scott Ananian and
 Marc Ordinas i Llopis is solving the extremely difficult task of converting
 between wikitext and HTML, and built a new API that lets us retrieve and
 pass in metadata separately.

 - On the MediaWiki core team, Brad Jorsch quickly created a minimal
 authorization API that will let us support private wikis, and Aaron Schulz,
 Alex Monk and Ori Livneh built and extended the VirtualRestService that
 lets VisualEditor and MediaWiki in general easily access external services.

 We welcome your feedback here:
 https://www.mediawiki.org/wiki/Talk:RESTBase - and in Phabricator
 https://phabricator.wikimedia.org/maniphest/task/create/?projects=RESTBasetitle=Feedback:
 .

 Sincerely --

 Gabriel Wicke

 Principal Software Engineer, Wikimedia Foundation

 ___
 Engineering mailing list
 engineer...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/engineering




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [gerrit] EUREKA!

2015-02-05 Thread Brian Gerstle
Very cool, Quim!  Can't wait until it's ready for early adoption ;-)

On Thu, Feb 5, 2015 at 4:59 AM, Federico Leva (Nemo) nemow...@gmail.com
wrote:

 This issue is tracked at https://phabricator.wikimedia.org/T40100 . Cf.
 https://phabricator.wikimedia.org/T55958 , https://phabricator.wikimedia.
 org/T47267#1017438

 Nemo


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Brian Gerstle
TL;DR: this discussion is great, but I think moving to docs/wikis/etc.
instead of continuing the thread could improve communication and give the
people who end up working on this something to reference later. could just
be my n00b-ness, but I thought others might share the sentiment.

I'm still new here, so please excuse me for possibly going against
convention, but does anyone else think it would be beneficial to move this
problem  proposal into a living document (RFC, wiki, google doc,
whatever)?  In doing so, I hope we can:

   1. Keep track of what the actual problems are along with the proposed
   solution(s)
   2. Group related concerns together, making them easier for those voicing
   them to be heard while also facilitating understanding and resolution
   3. Give us something concrete to go back to whenever we decide to
   dedicate resources to solving this problem, whether it's the next mobile
   apps sprint or something the mobile web team needs more urgently
   4. Prevent the points raised in the email (or the problem itself) from
   being forgotten or lost in the deluge of other emails we get every day

I don't know about you, but I can't mentally juggle the multiple problems,
implications, and the great points everyone is raising—which keeping it in
an email forces me to do.

Either way, looking forward to discussing this further and taking steps to
solve it in the near term.

- Brian


On Wed, Feb 4, 2015 at 10:22 AM, Brad Jorsch (Anomie) bjor...@wikimedia.org
 wrote:

 On Wed, Feb 4, 2015 at 2:33 AM, Erik Moeller e...@wikimedia.org wrote:

  If not, then I think one thing to keep in mind is how to organize the
  transformation code in a manner that it doesn't just become a
  server-side hodgepodge still only useful to one consumer, to avoid
  some of the pitfalls Brian mentions.


 I think the MobileFrontend extension has probably run into these pitfalls
 already.


  Say you want to reformat infoboxes on the mobile web, but not do all the
  other stuff the mobile app does. Can you just get that specific
  transformation? Are some transformations dependent on others?  Or say we
  want to make a change only for the output that gets fed into the PDF
  generator, but not for any other outputs. Can we do that?
 

 Maybe what we really need is a way to register transformation classes (e.g.
 something like $wgAPIModules). Then have ApiParse have a parameter to
 select transformations and apply them to wikitext before and to HTML after
 calling the parser. And we'd probably want to do the wikitext-before bit in
 ApiExpandTemplates too, and add a new action that takes HTML and applies
 only the HTML-after transforms to it.
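The registry idea sketched above can be illustrated in a few lines. This is a language-agnostic toy in Python with invented transform names, not MediaWiki's actual hook or module machinery:

```python
# Toy of the "registered transforms" idea: clients name the transforms
# they want, applied in order. Transform names and bodies are invented;
# this is not MediaWiki's real registration system.

TRANSFORMS = {}

def register(name):
    """Decorator that adds a transform to the shared registry."""
    def deco(fn):
        TRANSFORMS[name] = fn
        return fn
    return deco

@register("strip_editsection")
def strip_editsection(html):
    # Stand-in for removing [edit] links from section headings.
    return html.replace("[edit]", "")

@register("lowercase")
def lowercase(html):
    return html.lower()

def apply_transforms(html, names):
    """Apply only the transforms the caller selected, in the given order."""
    for name in names:
        html = TRANSFORMS[name](html)
    return html
```

Because each consumer (mobile web, apps, PDF generator) passes its own list of names, no consumer depends on transforms it didn't ask for.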

 Or we could go as far as giving ParserOptions (or the ParserEnvironment I
 recently heard Tim propose) a list of transformations, to allow for
 transformations at some of the points where we have parser hooks. Although
 that would probably cause problems for Parsoid.


 --
 Brad Jorsch (Anomie)
 Software Engineer
 Wikimedia Foundation
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] [gerrit] EUREKA!

2015-02-04 Thread Brian Gerstle
Go to a change https://gerrit.wikimedia.org/r/#/c/187879/3, click on the
gitblit
https://git.wikimedia.org/commit/apps%2Fios%2Fwikipedia/6532021b4f4b1f09390b1ffc3f09d149b2a8d9d1
link next to a patch set, then behold: MAGIC!!!
https://git.wikimedia.org/commitdiff/apps%2Fios%2Fwikipedia/712f033031c3c11fe8d521f7fdac4252986ee741
GitHub-like diff viewer! No more "All Side-by-Side" w/ 1e6 tabs open.

Enjoy!

Brian


-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-03 Thread Brian Gerstle
Thanks for getting this ball rolling, Dan! Couldn't agree more with the
points you raised—having in fact raised a few of them myself. Put me down
as one of the mobile/full-stack engineers who wants to work on this service
:-).

On Tue, Feb 3, 2015 at 8:46 PM, Dan Garry dga...@wikimedia.org wrote:

 *tl;dr: Mobile Apps will, in partnership with the Services, investigate
 building a content service for the Mobile Apps.*

 The Mobile Apps Team currently has quite a few pain points with the way we
 fetch article content currently:

- We have to make a lot of API requests to load an article: article
HTML, lead image, read more recommendations, and more
- We send the user HTML that we then discard, needlessly increasing data
usage
- We do transforms to the HTML in JavaScript on the client side, which
causes code duplication across the apps and degrades user-perceived
performance
- Trivial changes to the API (e.g. renaming a parameter) can break the
app, which is problematic since apps can't be hotfixed easily

 To address these challenges, we are considering performing some or all of
 these tasks in a service developed by the Mobile Apps Team with help from
 Services. This service will hit the APIs we currently hit on the client,
 aggregate the content we need on the server side, perform transforms we're
 currently doing on the client on the server instead, and serve the full
 response to the user via RESTBase. In addition to providing a public API
 end point, RESTBase would help with common tasks like monitoring, caching
 and authorisation.


 So the Mobile Apps Team is going to spend a bit of time investigating
 whether using RESTBase with Node.js is an option for building a content
 service for the Wikipedia app to replace our current method of retrieving
 article content. Our initial scope for this is feature parity with our
 current content retrieval method.

 Our action items are as follows:

- Wait for RESTBase to be deployed.
- Timescale: Weeks
   - Owner: All of us :-)

   - Figure out what information the service should serve for the first
iteration (i.e. for feature parity) and what APIs it needs to hit to do
 that
- Timescale: Wed 4th Feb
   - Owner: Dan Garry
- Start implementing the service and see whether it meets our needs
- Timescale: Planning a spike for next apps sprint (16th Feb - 27th Feb)
   to perform initial investigation
   - Owner: Currently undecided engineer from Mobile Apps, with Services
   engineers serving as consultants

 As always, feel free to ask if there are any questions.

 Dan
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Dev Summit debrief: SOA proliferation through specification

2015-01-28 Thread Brian Gerstle
JSON Schema is a recurring theme here which I'd like to encourage.  I've
thought it was a promising idea and would like to explore it further, both
on the client and server side.  If we can somehow keep data schema and API
specifications separate, it would be nice to develop both of these ideas in
parallel.
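The spec-walking style discussed below (a client whose entire surface is generated from a machine-readable spec, with no hand-written methods) can be demonstrated with a toy. The spec shape and the transport here are invented for illustration:

```python
# Toy illustration of generating a client by walking a machine-readable
# spec, in the style of chrome-remote-interface. Spec shape and transport
# are invented for this sketch.

SPEC = {
    "domains": [
        {"domain": "Page", "commands": [{"name": "enable"}, {"name": "reload"}]},
        {"domain": "Runtime", "commands": [{"name": "evaluate"}]},
    ]
}

class SpecClient:
    def __init__(self, spec, send):
        for dom in spec["domains"]:
            ns = type(dom["domain"], (), {})()      # empty namespace object
            for cmd in dom["commands"]:
                full = "%s.%s" % (dom["domain"], cmd["name"])
                # Bind `full` now; each generated method just forwards its
                # keyword arguments to the transport.
                setattr(ns, cmd["name"],
                        (lambda m: lambda **p: send(m, p))(full))
            setattr(self, dom["domain"], ns)

calls = []
client = SpecClient(SPEC,
                    lambda method, params: calls.append((method, params)) or method)
```

If a new spec file is published, the client picks up the new methods without a single line of code changing, which is the property Ori highlights.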

On Wed, Jan 28, 2015 at 10:57 PM, Ori Livneh o...@wikimedia.org wrote:

 On Wed, Jan 28, 2015 at 12:30 PM, James Douglas jdoug...@wikimedia.org
 wrote:

  Howdy all,
 
  It was a pleasure chatting with you at this year's Developer Summit[1]
  about how we might give SOA a shot in the arm by creating (and building
  from) specifications.
 
  The slides are available on the RESTBase project pages[2] and the session
  notes are available on Etherpad[3].
 

 Hi James,

 I missed your session at the developer summit, so the slides and notes are
 very useful. I think that having a formal specification for an API as a
 standalone, machine-readable document is a great idea. I have been poking
 at Chrome's Remote Debugging API this week and found this project, which is
 a cool demonstration of the power of this approach:
 https://github.com/cyrus-and/chrome-remote-interface

 The library consists of just two files: the protocol specification[0],
 which is represented as a JSON Schema, and the library code[1], which
 generates an API by walking the tree of objects and methods. This approach
 allows the code to be very concise. If future versions of the remote
 debugging protocol are published as JSON Schema files, the library could be
 updated without changing a single line of code.

 MediaWiki's API provides internal interfaces for API modules to describe
 their inputs and outputs, but that's not quite as powerful as having the
 specification truly decoupled from the code and published as a separate
 document. I'm glad to see that you are taking this approach with RESTBase.

   [0]:

 https://github.com/cyrus-and/chrome-remote-interface/blob/master/lib/protocol.json
   [1]:

 https://github.com/cyrus-and/chrome-remote-interface/blob/master/lib/chrome.js
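The approach described above — generating an API by walking a tree of domains and methods — can be sketched roughly like this. The protocol fragment below is made up for illustration; the real Chrome debugging protocol and chrome-remote-interface are considerably richer.

```javascript
// Hedged sketch of the spec-driven client idea: the protocol lives in a
// plain data structure, and the client API is generated by walking it.
// This spec fragment is invented, not the real Chrome protocol.
const protocol = {
  domains: [
    { domain: 'Page',    commands: [{ name: 'navigate' }, { name: 'reload' }] },
    { domain: 'Runtime', commands: [{ name: 'evaluate' }] },
  ],
};

// Build `client.Domain.command(params)` methods purely from the spec;
// `send` stands in for the real transport (e.g. a WebSocket).
function buildClient(spec, send) {
  const client = {};
  for (const { domain, commands } of spec.domains) {
    client[domain] = {};
    for (const { name } of commands) {
      client[domain][name] = (params = {}) =>
        send(`${domain}.${name}`, params);
    }
  }
  return client;
}

const sent = [];
const client = buildClient(protocol, (method, params) => {
  sent.push(method);           // record calls instead of hitting a socket
  return { method, params };
});
client.Page.navigate({ url: 'https://en.wikipedia.org' });
client.Runtime.evaluate({ expression: '1 + 1' });
```

If a new protocol version ships as an updated spec file, `buildClient` picks up the new methods without any code changes — which is exactly the property the email describes.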
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Github access?

2015-01-26 Thread Brian Gerstle
Can you add me to the Wikimedia org as well?

On Monday, January 26, 2015, Brion Vibber bvib...@wikimedia.org wrote:

 Turns out I actually do have rights to do this, just took me a while to
 figure out how to find it in github interface. :)

 Ok you should have rights on apps-ios-wikipedia and should be able to manip
 the pull reqs. Don't merge them directly though -- merge them manually via
 gerrit and close them out manually if github doesn't pick them up after
 (which it might not since the mirroring is broken atm).

 -- brion

 On Mon, Jan 26, 2015 at 3:46 PM, Corey Floyd cfl...@wikimedia.org
  wrote:

  Does anyone know who has the administrative privileges to add me to the
  github wikimedia organization?
 
  I need to cleanup/maintain our external pull requests.
 
  Thanks!
 
  --
  Corey Floyd
  Software Engineer
  Mobile Apps / iOS
  Wikimedia Foundation
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org javascript:;
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org javascript:;
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fun with code coverage

2015-01-15 Thread Brian Gerstle
I'd love to use Coveralls for the iOS app!  I've thought it (and Travis)
looked promising before, but they seem especially relevant for MediaWiki
projects, which are all OSS.

One other JS testing lib you guys should check out is JSVerify
(http://jsverify.github.io/), which is a port of Haskell's QuickCheck.
This allows you to do property-based testing which is great for re-thinking
your designs and program requirements as well as hitting edge cases that
aren't feasible to think of ahead of time.

Happy to discuss more if anyone's interested, or you can watch these two
interesting talks about test.check, a Clojure property-based testing
library:

https://www.youtube.com/watch?v=JMhNINPo__g
https://www.youtube.com/watch?v=HXGpBrmR70U
https://github.com/clojure/test.check
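For anyone unfamiliar, the core of property-based testing fits in a few lines of plain JavaScript. This is a hand-rolled toy, not JSVerify's API — real libraries add shrinking of counterexamples, sized generation, reproducible seeds, and a generator combinator library.

```javascript
// Toy property-based testing: generate many random inputs and check that a
// property holds for all of them, instead of hand-picking example cases.
function randomIntArray(maxLen = 20) {
  const len = Math.floor(Math.random() * maxLen);
  return Array.from({ length: len }, () => Math.floor(Math.random() * 100));
}

// Run `property` against `runs` random inputs; report the first failure.
function forAll(gen, property, runs = 200) {
  for (let i = 0; i < runs; i++) {
    const input = gen();
    if (!property(input)) return { ok: false, counterexample: input };
  }
  return { ok: true };
}

// Property: reversing an array twice is the identity.
const result = forAll(randomIntArray, (xs) => {
  const twice = [...xs].reverse().reverse();
  return twice.length === xs.length && twice.every((x, i) => x === xs[i]);
});
```

Stating the invariant ("reverse is its own inverse") rather than individual cases is what forces the re-thinking of designs and requirements mentioned above.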

- Brian

On Wed, Jan 14, 2015 at 9:51 PM, Subramanya Sastry ssas...@wikimedia.org
wrote:

 On 01/14/2015 06:57 PM, James Douglas wrote:

 Howdy all,

 Recently we've been playing with tracking our code coverage in Services
 projects, and so far it's been pretty interesting.


 Based on your coverage work for RESTBase, we added code coverage using the
 same Node.js tools (istanbul) and service (coveralls.io) for Parsoid as
 well (https://github.com/wikimedia/parsoid; latest build:
 https://coveralls.io/builds/1744803).

 So far, we learnt that our coverage (via parser tests + mocha for other
 bits) is pretty decent and that a lot of our uncovered areas are in code
 that isn't yet enabled in testing (ex: tracing, debugging, logging), or not
 tested sufficiently because that feature is not enabled in production yet.

 But, I've also seen that there are some edge cases and failure scenarios
 that aren't tested via our existing parser tests. The edge-case gaps are
 for scenarios that we saw in production but for which (at the time we fixed
 those issues in code) we didn't add a sufficiently reduced parser
 test. As for the failure scenarios, we might need testing via mocha to
 simulate them (ex: cache failures for selective serialization, or timeouts,
 etc.).

 Some of the edge case scenario and more aggressive testing is taken care
 of by our nightly round-trip testing on 160K articles.

 But, adding this has definitely revealed gaps in our test coverage that we
 should / will address in the coming weeks, but at the same time, it has
 verified my / our intuition that we have pretty high coverage via parser
 tests that we constantly update and add to.

 Subbu.


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l