Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread James Douglas
 In general I'm in favor of more ad-hoc project-specific teams rather than
completely siloing every service to the Services group, or every mobile UI
to the Mobile group.

I strongly agree.  Based on experience on both sides of this spectrum, I
recommend (when feasible) favoring feature teams over functional teams.

On Wed, Feb 4, 2015 at 3:00 PM, Brion Vibber bvib...@wikimedia.org wrote:

 I think the way we'd want to go is roughly to have a *partnership between*
 the Services and Mobile teams produce and maintain the service.

 (Note that the state of the art is that Mobile Apps are using Mobile Web's
 MobileFrontend extension as an intermediate API to aggregate and format page
 data -- which basically means Max has done the server-side API work for
 Mobile Apps so far.)

 I'd expect to see Max and/or someone else from the Mobile team
 collaborating with the Services team to create what y'all need:
 1) something that does what Mobile Apps needs it to...
 2) and can be maintained like Services needs it to.

 In general I'm in favor of more ad-hoc project-specific teams rather than
 completely siloing every service to the Services group, or every mobile UI
 to the Mobile group.

 -- brion

 On Wed, Feb 4, 2015 at 2:29 PM, Corey Floyd cfl...@wikimedia.org wrote:

  On Wed, Feb 4, 2015 at 11:41 AM, Gabriel Wicke gwi...@wikimedia.org
  wrote:
 
   Regarding general-purpose APIs vs. mobile: I think mobile is in some
  ways a
   special case as their content transformation needs are closely coupled
  with
   the way the apps are presenting the content. Additionally, at least
 until
   SPDY is deployed there is a strong performance incentive to bundle
   information in a single response tailored to the app's needs. One
  strategy
    employed by Netflix is to introduce a second API layer
    (http://techblog.netflix.com/2012/07/embracing-differences-inside-netflix.html)
    on top of the general content API to handle device-specific needs. I think
   this is a sound strategy, as it contains the volatility in a separate
  layer
   while ensuring that everything is ultimately consuming the
  general-purpose
   API. If the need for app-specific massaging disappears over time, we
 can
   simply shut down the custom service / API end point without affecting
 the
   general API.
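
To make the bundling idea concrete, here is a minimal sketch of such a
device-specific aggregation layer, assuming a hypothetical Express service in
front of two hypothetical general content API endpoints; it is not the actual
MobileFrontend or RESTBase design:

    var express = require('express');
    var request = require('request');

    var app = express();
    var CONTENT_API = 'https://rest.example.org/v1'; // hypothetical base URL

    function fetchJson(url, cb) {
        request({ url: url, json: true }, function (err, response, body) {
            cb(err, body);
        });
    }

    // One app-facing endpoint that fans out to the general content API and
    // returns a single bundled response, saving the app extra round trips.
    app.get('/mobile/page/:title', function (req, res) {
        var title = encodeURIComponent(req.params.title);
        fetchJson(CONTENT_API + '/page/html/' + title, function (err, html) {
            if (err) {
                return res.status(502).json({ error: 'upstream failure' });
            }
            fetchJson(CONTENT_API + '/page/metadata/' + title, function (err2, meta) {
                if (err2) {
                    return res.status(502).json({ error: 'upstream failure' });
                }
                res.json({ title: req.params.title, html: html, metadata: meta });
            });
        });
    });

    app.listen(8888);

If SPDY later removes the incentive to bundle, a layer like this can be shut
down without touching the general API, which is exactly the containment
property described above.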
  
 
 
  I can definitely understand the motivation for providing a mobile-specific
  service layer - so if the Services team wants to implement the API in this
  way and support that architecture, I am totally on board.
 
  My remaining hesitation here is that, from my reading of this proposal, the
  mobile team is the owner of implementing this service, not the Services
  team (maybe I am misreading?).
 
  This leads me to ask questions like:
  Why is the mobile apps team investigating which server-side technology is
  best? That seems outside of our domain knowledge.
  Who will be responsible for maintaining this code?
  Who will be testing it to make sure that it is performant?
 
  I'm new, so maybe these answers are obvious to others, but to me they
 seem
  fuzzy when responsibilities are divided between two teams.
 
  I would propose that this be a project that the Services Team owns, and
  that the Mobile Apps Team defines specs for what they need the new service
  to provide.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [gerrit] EUREKA!

2015-02-04 Thread James Douglas
Hooray!  Thank you for this!  Gerrit's multi-tab diff has been my biggest
pain point in migrating from GitHub.

On Wed, Feb 4, 2015 at 2:34 PM, Brian Gerstle bgers...@wikimedia.org
wrote:

 Go to a change (https://gerrit.wikimedia.org/r/#/c/187879/3), click on the
 gitblit link
 (https://git.wikimedia.org/commit/apps%2Fios%2Fwikipedia/6532021b4f4b1f09390b1ffc3f09d149b2a8d9d1)
 next to a patch set, then behold: MAGIC!!!

 https://git.wikimedia.org/commitdiff/apps%2Fios%2Fwikipedia/712f033031c3c11fe8d521f7fdac4252986ee741

 A GitHub-like diff viewer! No more All Side-by-Side w/ 1e6 tabs open.

 Enjoy!

 Brian


 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] C2.com switches to single-page app distributed nodejs backend

2015-02-02 Thread James Douglas
This is an interesting change.  I wonder how they keep the site accessible
by search engine indexers, and folks with older/limited/text-only browsers
or limited connectivity.

On Mon, Feb 2, 2015 at 8:20 AM, Gabriel Wicke gwi...@wikimedia.org wrote:

 The original wiki is getting a technical facelift:

- http://c2.com/cgi/wiki?WikiWikiSystemNotice
- http://c2.fed.wiki.org/view/welcome-visitors
- https://news.ycombinator.com/item?id=8983158

 Gabriel
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] C2.com switches to single-page app distributed nodejs backend

2015-02-02 Thread James Douglas
Oh, cool.  +1 for progressive enhancement!

On Mon, Feb 2, 2015 at 9:47 AM, Max Semenik maxsem.w...@gmail.com wrote:

 Actually, it has a nice HTML-only fallback, and Googlebot executes JS these
 days.

 On Mon, Feb 2, 2015 at 8:53 AM, James Douglas jdoug...@wikimedia.org
 wrote:

  This is an interesting change.  I wonder how they keep the site
 accessible
  by search engine indexers, and folks with older/limited/text-only
 browsers
  or limited connectivity.
 
  On Mon, Feb 2, 2015 at 8:20 AM, Gabriel Wicke gwi...@wikimedia.org
  wrote:
 
   The original wiki is getting a technical facelift:
  
  - http://c2.com/cgi/wiki?WikiWikiSystemNotice
  - http://c2.fed.wiki.org/view/welcome-visitors
  - https://news.ycombinator.com/item?id=8983158
  
   Gabriel
 



 --
 Best regards,
 Max Semenik ([[User:MaxSem]])

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] From Node.js to Go

2015-01-30 Thread James Douglas
I wonder whether Go's lack of parametric polymorphism might make it a
pretty tough sell.  Given the potential benefit of introducing a statically
typed language, it might be interesting to investigate and compare some of
the different options.

Regarding Yuri's point about tools, what would it take to integrate Hack
into the current MediaWiki build processes?  It *seems* like it wouldn't be
a huge diversion, but I'm quite unfamiliar with what's in place now.  Have
we dabbled in Hack since the HHVM switch?

On Thu, Jan 29, 2015 at 9:18 PM, Yuri Astrakhan yastrak...@wikimedia.org
wrote:

 Language fragmentation is always fun, but, as with any new language, my
 concerns lie in the environment - are there enough tools to make the
 advertised benefits worth it? Does it have a decent IDE with smart code
 completion, refactoring, and a good debugger? Does it have a
 packaging/dependency system? How extensive are the standard library and
 user-contributed packages? How well does it play with code written in
 other languages? The list could go on.  In short - we can always try new
 things as a small service ))  And yes, Rust also sounds interesting.
 On Jan 29, 2015 7:22 PM, Ori Livneh o...@wikimedia.org wrote:

  (Sorry, this was meant for wikitech-l.)
 
  On Thu, Jan 29, 2015 at 7:20 PM, Ori Livneh o...@wikimedia.org wrote:
 
   We should do the same, IMO.
   http://bowery.io/posts/Nodejs-to-Golang-Bowery/
  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Dev Summit debrief: SOA proliferation through specification

2015-01-29 Thread James Douglas
 As to JSON, IMHO YAML is better, more human-readable and less verbose

I agree.  As a human, YAML is rather nicer to read (and write) than JSON.
Fortunately it's pretty easy to convert one to the other.  We've even made
some tweaks to Swagger UI to support both YAML and JSON, and we're
currently generating our tests from a YAML-formatted version of our Swagger
spec.
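
For the curious, a YAML/JSON round trip takes only a few lines with a library
like js-yaml; this is a minimal sketch with hypothetical file names, not
necessarily the exact tooling referenced above:

    // Convert a Swagger spec from YAML to JSON and back using js-yaml.
    var fs = require('fs');
    var yaml = require('js-yaml');

    // YAML -> plain JS object -> JSON
    var spec = yaml.load(fs.readFileSync('swagger.yaml', 'utf8'));
    fs.writeFileSync('swagger.json', JSON.stringify(spec, null, 2));

    // ...and back again: JSON -> YAML
    var fromJson = JSON.parse(fs.readFileSync('swagger.json', 'utf8'));
    fs.writeFileSync('swagger.roundtrip.yaml', yaml.dump(fromJson));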


On Thu, Jan 29, 2015 at 12:31 AM, Marko Obrovac mobro...@wikimedia.org
wrote:

 On Wed, Jan 28, 2015 at 11:29 PM, Brian Gerstle bgers...@wikimedia.org
 wrote:

  JSON Schema is a recurring theme here which I'd like to encourage.  I've
  thought it was a promising idea and would like to explore it further,
 both
  on the client and server side.  If we can somehow keep data schema and
 API
  specifications separate, it would be nice to develop both of these ideas
 in
  parallel.
 
 That's exactly our idea for RESTBase/SOA. Each interface would be packaged
 separately as a swagger specification, which would then be required by and
 implemented by modules. Having such a clean and clear separation of the two
 would allow us to:
   - consult the interface independently of the implementation
   - have multiple modules implementing the same interface

 As to JSON, IMHO YAML is better, more human-readable and less verbose, but
 that's just a matter of personal preference - computers can read them all
 :P

 Marko


  On Wed, Jan 28, 2015 at 10:57 PM, Ori Livneh o...@wikimedia.org wrote:
 
   On Wed, Jan 28, 2015 at 12:30 PM, James Douglas 
 jdoug...@wikimedia.org
   wrote:
  
Howdy all,
   
It was a pleasure chatting with you at this year's Developer
 Summit[1]
about how we might give SOA a shot in the arm by creating (and
 building
from) specifications.
   
The slides are available on the RESTBase project pages[2] and the
  session
notes are available on Etherpad[3].
   
  
   Hi James,
  
   I missed your session at the developer summit, so the slides and notes
  are
   very useful. I think that having a formal specification for an API as a
   standalone, machine-readable document is a great idea. I have been
 poking
   at Chrome's Remote Debugging API this week and found this project,
 which
  is
   a cool demonstration of the power of this approach:
   https://github.com/cyrus-and/chrome-remote-interface
  
   The library consists of just two files: the protocol specification[0],
   which is represented as a JSON Schema, and the library code[1], which
   generates an API by walking the tree of objects and methods. This
  approach
   allows the code to be very concise. If future versions of the remote
   debugging protocol are published as JSON Schema files, the library
 could
  be
   updated without changing a single line of code.
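
To illustrate the pattern, here is a toy sketch of generating a client by
walking a made-up, much-simplified protocol description instead of
hand-writing one function per command; it is not the actual
chrome-remote-interface code:

    // A made-up, much-simplified protocol description.
    var protocol = {
        domains: [
            { domain: 'Page',    commands: [{ name: 'navigate' }, { name: 'reload' }] },
            { domain: 'Runtime', commands: [{ name: 'evaluate' }] }
        ]
    };

    // Build client.<Domain>.<command>(params, cb) methods by walking the
    // spec, delegating the actual transport to a pluggable send() function.
    function makeClient(spec, send) {
        var client = {};
        spec.domains.forEach(function (domain) {
            client[domain.domain] = {};
            domain.commands.forEach(function (command) {
                var method = domain.domain + '.' + command.name;
                client[domain.domain][command.name] = function (params, cb) {
                    send({ method: method, params: params || {} }, cb);
                };
            });
        });
        return client;
    }

    // Usage: here the transport just logs the message it would send.
    var client = makeClient(protocol, function (msg, cb) {
        console.log('would send', JSON.stringify(msg));
        if (cb) { cb(null, {}); }
    });
    client.Page.navigate({ url: 'https://example.org' });

Because the methods are derived from the data, publishing a newer protocol
description is enough to extend such a client without touching its code.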
  
   MediaWiki's API provides internal interfaces for API modules to
 describe
   their inputs and outputs, but that's not quite as powerful as having
 the
   specification truly decoupled from the code and published as a separate
   document. I'm glad to see that you are taking this approach with
   RESTBase.
  
  [0]: https://github.com/cyrus-and/chrome-remote-interface/blob/master/lib/protocol.json
  [1]: https://github.com/cyrus-and/chrome-remote-interface/blob/master/lib/chrome.js
  
 
 
 
  --
  EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
  IRC: bgerstle
 

 Marko Obrovac
 Senior Services Engineer
 Wikimedia Foundation

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Improving our code review efficiency

2015-01-29 Thread James Douglas
This is a situation where disciplined testing can come in really handy.

If I submit a patch, and the patch passes the tests that have been
specified for the feature it implements (or the bug it fixes), and the code
coverage is sufficiently high, then a reviewer has a running start in terms
of confidence in the correctness and completeness of the patch.

Practically speaking, this doesn't necessarily rely on the rest of the project
already having a very high level of code coverage; as long as there are tests
laid out for the feature in question, and the patch makes those tests pass,
it gives the code reviewer a real shot in the arm.
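
As a concrete, hypothetical example, "tests laid out for the feature in
question" could be as simple as a mocha spec that pins down the intended
behaviour, which the reviewer can run against the submitted patch:

    // Hypothetical feature spec, written before (or alongside) the patch.
    // The ../lib/slugify module is made up for illustration.
    var assert = require('assert');
    var slugify = require('../lib/slugify');

    describe('slugify', function () {
        it('lowercases and hyphenates titles', function () {
            assert.strictEqual(slugify('Code Review Efficiency'),
                               'code-review-efficiency');
        });

        it('strips characters that are not URL-safe', function () {
            assert.strictEqual(slugify('Hello, world!'), 'hello-world');
        });
    });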

On Thu, Jan 29, 2015 at 1:14 PM, Jon Robson jdlrob...@gmail.com wrote:

 Thanks for kicking off the conversation Brad :-)

 It's just the mean at the moment. I hacked it together and I'm more than
 happy to iterate on this and improve the reporting.

 On the subject of patch abandonment: Personally I think we should be
 abandoning inactive patches. They cause unnecessary confusion for
 someone coming into a new extension wanting to help out. We may want
 to change the name of 'abandon' to 'remove from code review queue', as
 abandon sounds rather nasty and that's not at all what it actually
 does - any abandoned patch can be restored at any time if necessary.


 On Thu, Jan 29, 2015 at 1:11 PM, Brad Jorsch (Anomie)
 bjor...@wikimedia.org wrote:
  On Thu, Jan 29, 2015 at 12:56 PM, Jon Robson jdlrob...@gmail.com
 wrote:
 
  The average time for code to go from submitted to merged appears to be
  29 days over a dataset of 524 patches, excluding all that were written
  by the L10n bot. There is a patchset there that has been _open_ for
   766 days - if you look at it, it was uploaded on Dec 23, 2012 12:23 PM,
   is -1ed by me, and needs a rebase.
 
 
  Mean or median?
 
  I recall talk a while back about someone else (Quim, I think?) doing this
  same sort of analysis, and considering the same issues over patches that
  seem to have been abandoned by their author and so on, which led to
  discussions of whether we should go around abandoning patches that have
  been -1ed for a long time, etc. Without proper consideration of those
 sorts
  of issues, the statistics don't seem particularly useful.
 
 
  --
  Brad Jorsch (Anomie)
  Software Engineer
  Wikimedia Foundation



 --
 Jon Robson
 * http://jonrobson.me.uk
 * https://www.facebook.com/jonrobson
 * @rakugojon


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Dev Summit debrief: SOA proliferation through specification

2015-01-28 Thread James Douglas
Howdy all,

It was a pleasure chatting with you at this year's Developer Summit[1]
about how we might give SOA a shot in the arm by creating (and building
from) specifications.

The slides are available on the RESTBase project pages[2] and the session
notes are available on Etherpad[3].

I'm eager to keep the conversation going on the mailing list, and want to
address a couple items that came up (or were missing) during the session,
as well as prompt for further discussion.

I mentioned after the presentation that we're using our spec to drive our
automated testing.  I added some info about that to slide #14 in the
slides[2].  The idea is that, since Swagger lets us add custom fields to a
spec, we can augment each endpoint specification with a functional
description of its expected inputs and outputs.  During testing, we parse
the spec and verify that these indeed hold true.  There's a lot of
opportunity for enhancement of our (currently very basic) approach to this,
but it's already proving pretty handy from a coverage standpoint.
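
As a rough sketch of the general shape of this (the 'x-expected' field name,
the path, and the supertest-based client below are illustrative assumptions,
not the exact conventions used in RESTBase), the test suite walks the spec and
emits one test per declared example:

    // Generate mocha tests from custom fields embedded in a Swagger spec.
    var request = require('supertest');   // assumed HTTP test client
    var app = require('../lib/server');   // hypothetical app under test

    var spec = {
        paths: {
            '/v1/{domain}/pages/{title}/html': {
                get: {
                    'x-expected': [{
                        request: { uri: '/v1/en.wikipedia.org/pages/Foo/html' },
                        response: { status: 200, contentType: /text\/html/ }
                    }]
                }
            }
        }
    };

    Object.keys(spec.paths).forEach(function (path) {
        Object.keys(spec.paths[path]).forEach(function (method) {
            var examples = spec.paths[path][method]['x-expected'] || [];
            describe(method.toUpperCase() + ' ' + path, function () {
                examples.forEach(function (ex) {
                    it('returns ' + ex.response.status + ' for ' + ex.request.uri,
                       function (done) {
                        request(app)[method](ex.request.uri)
                            .expect(ex.response.status)
                            .expect('content-type', ex.response.contentType)
                            .end(done);
                    });
                });
            });
        });
    });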

There was a question in the notes[3] about Swagger's support for
internationalization, but I'm not familiar with the use case in mind.  How
might an API differ, aside from the content of fields in a specified model,
under different localizations?  Might users want the models themselves (or
parameter names, etc.) to vary?

Cheers!
James

[1] https://www.mediawiki.org/wiki/MediaWiki_Developer_Summit_2015
[2]
http://wikimedia.github.io/restbase/docs/presentations/wm-dev-summit-2015/soa-via-specs/index.html
[3] https://etherpad.wikimedia.org/p/mwds15-spec-oriented-architecture
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-20 Thread James Douglas
 But there's plenty of room for other initiatives (some could even make
money out of this :)).

For what it's worth, this brings to mind a couple interesting examples of
this pattern:
* https://travis-ci.com/ (and its free counterpart, https://travis-ci.org/),
the hosted version of https://github.com/travis-ci/travis-ci
* https://gitlab.com/, the hosted version of
https://github.com/gitlabhq/gitlabhq


On Tue, Jan 20, 2015 at 3:37 PM, Markus Glaser gla...@hallowelt.biz wrote:

 Hello everyone,

 On 19/01/15 06:47, Tim Starling wrote:
  As long as there are no actual reasons for dropping pure-PHP core
 functionality,
  the idea of WMF versus shared hosting is a false dichotomy.
 I kind of agree. Instead of seeing things in black and white, aka shared
 hosting or not, we should take a look at the needs of users who run their
 MW on shared hosting. What exactly do they consider core functionality?
 I don't think we actually know the answer yet. To me, it seems very likely
 that MWs on shared hosts are a starting base into the MW world. Probably,
 their admins are mostly not technologically experienced. In the near
 future, most of them will want to see VE on their instances for improved
 user experience. But do they really care about wikicode? Or do they care
 about some functionality that solves their problems. I could imagine,
 templating is one of the main reasons to ask for wikicode. Can we, then,
 support templating in pure HTML versions of parsoid? Are there other
 demands and solutions? What I mean is: there are many shades of [any color
 you like], in order to make users outside the WMF happy.

 I see shared hosts somewhere in a middle position in terms of skills
 needed to run an instance and in terms of dedication to the site. On the lower
 ground, there are test instances run on local computers. These can be
 supported, for example, with vagrant setups, in order to make it very easy
 to get started. On the upper level, there are instances that run on
 servers with root access, vms, in clouds, etc. They can be supported, for
 instance, with modular setup instructions, packages, predefined machines,
 puppet and other install scripts in order to get a proper setup. So shared
 hosting is a special case, then, but it seems to have a significant base of
 users and supporters.

 While the current SOA approach makes things more complex in terms of
 technologies one needs to support in order to run a setup that matches one
 of the top 5 websites, it also makes things easier in terms of modularity,
 if we do it right. So, for example, we (tm) could provide a lightweight PHP
 implementation of parsoid. Or someone (tm) could run a parsoid service
 somewhere on the net.

 The question is, then, who that someone should be. Naturally, WMF seems to be
 predestined to lay the ground, e.g. by publishing setup instructions,
 interfaces, puppet rules, etc. But there's plenty of room for other
 initiatives (some could even make money out of this :)). An ecosystem
 around MediaWiki can help do the trick. But here's the crucial bit: We will
 only get a healthy ecosystem around MediaWiki if things are reliable in some
 way - if the developer community and the WMF commit to supporting such an
 environment. In the current situation, there's so much uncertainty that I
 doubt anyone will seriously consider putting a lot of effort into, say, a PHP
 parsoid port (I'd be happy if someone proves me wrong).

 So, to make a long story short: Let's look forward and try to find
 solutions. MediaWiki is an amazing piece of software, and we should never
 stop saluting and supporting the hundreds of thousands of people that are
 using it as a basis for furthering the cause of free knowledge.

 Best,
 Markus

 -----Original Message-----
 From: wikitech-l-boun...@lists.wikimedia.org [mailto:
 wikitech-l-boun...@lists.wikimedia.org] On behalf of Tim Starling
 Sent: Monday, 19 January 2015 06:47
 To: wikitech-l@lists.wikimedia.org
 Subject: Re: [Wikitech-l] The future of shared hosting

 On 16/01/15 17:38, Bryan Davis wrote:
  The solution to these issues proposed in the RFC is to create
  independent services (eg Parsoid, RESTBase) to implement features that
  were previously handled by the core MediaWiki application. Thus far
  Parsoid is only required if a wiki wants to use VisualEditor. There
  has been discussion however of it being required in some future
  version of MediaWiki where HTML is the canonical representation of
  articles {{citation needed}}.

 Parsoid depends on the MediaWiki parser; it calls it via api.php. It's not
 a complete, standalone implementation of wikitext to HTML transformation.

 HTML storage would be a pretty simple feature, and would allow third-party
 users to use VE without Parsoid. It's not so simple to use Parsoid without
 the MediaWiki parser, especially if you want to support all existing
 extensions.

 So, as currently proposed, HTML storage is actually a way to reduce the
 dependency on services for non-WMF wikis, not to 

Re: [Wikitech-l] Fun with code coverage

2015-01-15 Thread James Douglas
+1 for property-based testing.  JSVerify's Haskell-like syntax makes it
super easy to conjure up arbitrary generators.
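
For anyone who hasn't tried it, a property looks roughly like this; the
slugify function under test is hypothetical, and jsc.assert shrinks any
failure down to a minimal counterexample:

    // A couple of JSVerify properties over generated ASCII strings.
    var jsc = require('jsverify');
    var slugify = require('../lib/slugify'); // hypothetical function under test

    // Slugs should never contain spaces...
    var noSpaces = jsc.forall(jsc.asciistring, function (s) {
        return slugify(s).indexOf(' ') === -1;
    });

    // ...and slugifying should be idempotent.
    var idempotent = jsc.forall(jsc.asciistring, function (s) {
        return slugify(slugify(s)) === slugify(s);
    });

    jsc.assert(noSpaces);    // throws with a shrunk counterexample on failure
    jsc.assert(idempotent);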

On Thu, Jan 15, 2015 at 7:44 AM, Brian Gerstle bgers...@wikimedia.org
wrote:

 I'd love to use coveralls for the iOS app!  I've thought it (and Travis)
  looked promising before, but they seem especially relevant for MediaWiki
  projects, which are all OSS.

  One other JS testing lib you guys should check out is JSVerify
  (http://jsverify.github.io/), which is a port of Haskell's QuickCheck.
  This allows you to do property-based testing, which is great for re-thinking
  your designs and program requirements as well as hitting edge cases that
  aren't feasible to think of ahead of time.

  Happy to discuss more if anyone's interested, or you can watch these two
  interesting talks (https://www.youtube.com/watch?v=JMhNINPo__g and
  https://www.youtube.com/watch?v=HXGpBrmR70U) about test.check
  (https://github.com/clojure/test.check), a Clojure property-based testing
  library.

 - Brian

 On Wed, Jan 14, 2015 at 9:51 PM, Subramanya Sastry ssas...@wikimedia.org
 wrote:

  On 01/14/2015 06:57 PM, James Douglas wrote:
 
  Howdy all,
 
  Recently we've been playing with tracking our code coverage in Services
  projects, and so far it's been pretty interesting.
 
 
   Based on your coverage work for restbase, we added code coverage using the
   same nodejs tools (istanbul) and service (coveralls.io) for Parsoid as
   well (https://github.com/wikimedia/parsoid; latest build:
   https://coveralls.io/builds/1744803).
 
  So far, we learnt that our coverage (via parser tests + mocha for other
  bits) is pretty decent and that a lot of our uncovered areas are in code
  that isn't yet enabled in testing (ex: tracing, debugging, logging), or
 not
  tested sufficiently because that feature is not enabled in production
 yet.
 
  But, I've also seen that there are some edge cases and failure scenarios
   that aren't tested via our existing parser tests. The edge case gaps
   are for scenarios that we saw in production but (at the time when we fixed
   those issues in code) for which we didn't add a sufficiently reduced parser
  test. As for the failure scenarios, we might need testing via mocha to
  simulate them (ex: cache failures for selective serialization, or
 timeouts,
  etc.).
 
   Some of the edge case scenarios and more aggressive testing are taken care
   of by our nightly round-trip testing on 160K articles.
 
  But, adding this has definitely revealed gaps in our test coverage that
 we
  should / will address in the coming weeks, but at the same time, it has
  verified my / our intuition that we have pretty high coverage via parser
  tests that we constantly update and add to.
 
  Subbu.
 
 
 



 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Fun with code coverage

2015-01-14 Thread James Douglas
Howdy all,

Recently we've been playing with tracking our code coverage in Services
projects, and so far it's been pretty interesting.

We've learned about where the gaps are in our testing (which has even
revealed holes in our understanding of our own specifications and use
cases), and had fun watching the coverage climb with (nearly) each pull
request.

I've slapped together some notes about our experience here:

https://github.com/wikimedia/restbase/tree/master/doc/coverage#code-coverage

I'd love to hear your thoughts and learn about your related experiences.
What are your favorite code coverage tools and services?

Cheers!
James
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l