Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-11-04 Thread David Mitchell
Thanks Colin

I probably should've added that this is a large enterprise doing what's 
essentially an in-house startup - building an app using Clojure.  The app 
initially sat somewhat off to the side of the corporation's main line of 
business, but there was always the intent to incorporate the new app into 
the existing core infrastructure at some future date.  It was left open as 
to how that would actually happen, and now we're having to confront that 
problem directly.  While we have something of a startup mentality within 
the project with respect to what gets done day to day and how it gets done, 
there's an overarching set of tools/processes/requirements/constraints that 
come hand in hand with working for a large enterprise.

One of those is that we've got an existing pool of testers already within 
the enterprise, and need to find some alignment between their existing 
tools/capabilities/processes/... and the requirements of our new shiny 
Clojure app.  We've got testers who've been on the project virtually since 
it began (who've NOT used Clojure but instead have focused on testing 
interfaces using SoapUI and the web client) but now the project has grown 
to the point where we need (a) more testers just to keep pace with 
development, (b) testers with different backgrounds (mobile being the big 
one), and (c) for those testers to come from the existing enterprise pool. 
 I'm sure we COULD cross train at least some of these testers to do their 
work within Clojure, but there's currently zero long-term tangible benefit 
for either their career path or the enterprise in doing so.  If the 
enterprise had a pool of upcoming Clojure projects, then it would probably 
be a different story.

Absolutely agree that there's a lot of time and expense in testing that's 
not always focused appropriately, and that's ultimately what I'm trying to 
fix/avoid in this project.  Personally, I wish we'd all stop using the 
phrase "testing" and instead use "risk mitigation", to try to keep the 
desired outcome of the activity clear in everyone's mind at key decision 
points, but this isn't the forum for THAT discussion ;-)

Thanks again for your response

Dave M.



On Wednesday, 29 October 2014 20:07:34 UTC+11, Colin Yates wrote:

 I also meant to say that it is worth doing a cost-benefit analysis on 
 testing. When I did my consultant thing I frequently saw massive investment 
 in tools, process, emotion(!) etc. in things that on paper were a good 
 idea but in reality weren't mitigating any real risk.

 All testing isn't equal - what risks do you currently face right now (e.g. 
 are you playing whack-a-mole due to developers' poor impact analysis, or 
 for example, are the developers consistently producing great code which 
 doesn't piece together well and so on)? Put your energies into resolving 
 those risks.

 Most testing processes and resources I see are by definition reactive - 
 they find effects (bugs) of the cause (which is typically developer 
 insufficiency in terms of requirements). In my view resources should all be 
 about mitigating the cause. Why not get your testers (although I really 
 don't like segmenting resources by calling them "testers") to sit with a 
 developer just before they do a new piece of work and think through the 
 impact analysis? Have your testers take the use cases and start building 
 the test scenarios immediately. Have your testers review the developers' 
 unit tests - if it doesn't make sense to a (technically orientated) tester 
 then the developer is probably doing it wrong, and so on. 

 Simply spotting effects is helpful but all resources should be focused on 
 mitigating the cause of those effects and more often than not I see a whole 
 bunch of "testing" activity which isn't really solving any real problems 
 but is certainly slowing down the flow. Please let me be clear - I am not 
 challenging the _requirements_ of the traditional testing process (e.g. 
 ensuring quality); I am claiming that the way most people do it is 
 incredibly expensive and inefficient.

 I can hear the internet taking a breath, saying "No! He didn't just go 
 there" :).

 On Wednesday, 29 October 2014 00:12:52 UTC, David Mitchell wrote:

 Hi Colin

 Thanks for your reply.

 My post is almost exclusively technology oriented because I think the 
 technology is what's killing us!

 We've got what you'd probably call "BDD lite" working, in that we've got 
 a mutant form of agile process running whereby we work in 2 week sprints, 
 but there's rarely an installable product that emerges at the end of the 2 
 weeks.  I won't go into detail as to what I feel are the root causes in a 
 public forum - however I'm convinced that our adoption of Clojure is at 
 least partly to blame.

 Just to make it clear, I absolutely believe Clojure is a good tool to 
 use for this project, and personally I'll be actively seeking out other 
 Clojure projects in the future.  I'm saying that from the viewpoint of 
 someone who's employed in the testing area, but who also has quite a bit of 
 Clojure development experience.  There's just this gulf at present between 
 the people who know Clojure (almost exclusively developers) and other 
 technical staff involved in the application lifecycle (testers, 
 infrastructure owners, all the various technical managers) that we're 
 finding very difficult to manage.

 For example, it'd be great if we could pair up our testers and 
 developers, have them working side by side and rapidly iterating through 
 e.g. Cucumber feature definition, coding and testing.  That would be 
 absolutely ideal for this particular project, where a complete set of test 
 cases can't be 100% defined up front and lots of minor questions arise even 
 within an iteration.  If this working arrangement was viable, every time we 
 hit a point that needed clarification, the tester could engage the product 
 owner, get clarification and jump back in to their normal work with minimal 
 disruption.  However, our testers simply can't provide enough useful input 
 into development - they're currently stuck waiting for developers to hand 
 their code over *in a form that the testers can test*, and often there's 
 a lot of extra (wasted?) effort involved to take working Clojure code and 
 make it testable using non-Clojure tools.  

 To say this is an inefficient working model would be a massive 
 understatement.  What we're seeing is that our developers work like mad for 
 the first week of a 2 week iteration, while the testers are largely idle; 
 then code gets handed over and the developers are largely idle while the 
 testers work like mad trying to finish their work before the end of the 
 iteration.  Our automation testers are valiantly trying to use SoapUI and 
 Groovy and (to a small extent) Cucumber/Ruby to test our Clojure code, but 
 those tools require that there are exposed HTTP endpoints (SoapUI) or Java 
 classes (Groovy or *JRuby*) that the tool can use to exercise the 
 underlying Clojure code.  These endpoints exist, but only at a very high 
 level - our UI testing, which works very well, is already hitting those 
 same endpoints.
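
 To make the bridging effort concrete: Clojure's :gen-class can compile an 
 ordinary Java class that Groovy or JRuby tests can drive directly. A rough 
 sketch only - every name below is invented, and it needs AOT compilation 
 (e.g. :aot [myapp.java-facade] in project.clj):

   (ns myapp.java-facade
     (:require [myapp.orders :as orders])   ; invented app namespace
     (:gen-class
      :name com.example.OrderFacade         ; invented class name
      :methods [^:static [validateOrder [String] String]]))

   ;; Backs the static method declared above; gen-class links the two
   ;; via the "-" prefix on the function name.
   (defn -validateOrder [order-json]
     (orders/validate-json order-json))     ; invented Clojure function

 Groovy-side, a test could then call 
 com.example.OrderFacade.validateOrder(json) like any other static Java 
 method - but someone still has to write and maintain that facade.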

 Additionally, our QA manager wants our testers to be able to do more 
 exploratory testing, based on his personal experience of using Ruby's 
 interactive shell, and "simply trying stuff out".  That approach makes a 
 lot of sense for this project, and I know that using a Clojure REPL could 
 provide a great platform for this type of testing, but doing that would 
 require a sizeable investment in our testers learning to use Clojure.
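
 To illustrate what that buys you, an exploratory session might look like 
 this (all namespaces, functions and return values are invented):

   user=> (require '[myapp.orders :as orders])  ; load the code under test
   nil
   user=> (orders/validate {:id 42 :qty -1})    ; poke an edge case by hand
   {:valid? false, :errors [:qty-must-be-positive]}
   user=> (doc orders/validate)                 ; read its contract in place
   ...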

 I'm starting to wonder whether there's actually any point trying to do 
 *any* system testing of Clojure apps under active development, as maybe 
 that risk exposure can best be addressed by enforcing suitable coding 
 standards (e.g. :pre and :post conditions), and then extending what would 
 normally be unit tests to address whole-of-system functionality.  After 
 all, for an app written in a functional language - where you've basically 
 only got functions that take parameters and return a result, minimal state 
 to manage, and usually a small set of functions having side effects like 
 database IO - surely a lot of your traditional functional test scenarios 
 would simply be tightly-targeted unit tests anyway.
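
 To make the coding-standards idea concrete, :pre and :post conditions are 
 built into defn; a sketch using an invented function:

   (defn transfer
     "Moves amount between two account maps; returns the updated pair."
     [from to amount]
     {:pre  [(pos? amount)                           ; reject bad input early
             (>= (:balance from) amount)]
      :post [(= (+ (:balance (first %)) (:balance (second %)))
                (+ (:balance from) (:balance to)))]} ; money is conserved
     [(update from :balance - amount)
      (update to :balance + amount)])

   ;; (transfer {:balance 10} {:balance 0} 50) => throws AssertionError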

 Maybe we should be handing off our single-system functional testing 
 entirely to developers, and only engaging our dedicated QA people once we 
 get to integrating all the different streams of development together.  That 
 seems to be the approach that Craig's project (thanks Craig!) is taking, 
 and it'd definitely be easier to work with compared to our current 
 processes.  Due to the lack of oversight and up-front objective 
 requirements, there could be an increased risk that our developers are 
 writing code to solve the wrong problem, but maybe that's just something we 
 need to live with.

 If anyone else has any thoughts, I'd REALLY appreciate hearing about 
 them.  Thanks again to Colin and Craig

 On Tuesday

Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-10-28 Thread David Mitchell
 in http://specificationbyexample.com/. If you haven't, I strongly 
 recommend you do, as the win in this situation is that they separate the 
 required behaviour of the system (i.e. the specification) being tested from 
 the technical geekery of asserting that behaviour. In brief, this process, 
 when done well:
  - defines behaviour in readable text documents (albeit restricted by the 
 Gherkin grammar)
  - the same specification is consumed by the stakeholders and the 
 computer (and, if you want bonus points, is produced by/with the 
 stakeholders :))
  - provides access to many libraries to interpret and execute those specs (
 http://cukes.info/ being the main one etc.)

 Once you get into the whole vibe of freeing your specs from implementation 
 a whole new world opens up. http://fitnesse.org/ for example, is another 
 approach.

 I am suggesting the tension in your post around "how do we collate all our 
 resources around an unfamiliar tool" might be best addressed by using a new 
 tool - the shared artifacts are readable English textual specifications 
 which everybody collaborates on. The geeks do their thing (using Ruby, 
 Clojure, Groovy, Scala, Selenium, A.N.Other etc.) to execute those same 
 specs.

 On Monday, 27 October 2014 04:21:07 UTC, David Mitchell wrote:

 Hi group,

 Apologies for the somewhat cryptic subject line - I'll try to explain... 
  Apologies also for the length of the post, but I'm sure others will hit 
 the same problem if they haven't already done so, and hopefully this 
 discussion will help them find a way out of a sticky situation.

 We've got a (notionally agile) Clojure app under heavy development.  The 
 project itself follows the Agile Manifesto to a degree, but is constrained 
 in having to interface with other applications that are following a 
 waterfall process.  Yep, it's awkward, but that's not what I'm asking about.

 Simplifying it as much as possible, we started with a pre-existing, 
 somewhat clunky, Java app, then extended the server side extensively using 
 Clojure, and added a web client.  There's loads of (non-Clojure) supporting 
 infrastructure - database cluster, queue servers, identity management, etc. 
  At any point, we've got multiple streams of Clojure development going on, 
 hitting different parts of the app.  The web client development is 
 "traditional" in that it's not using ClojureScript, and probably won't in 
 the foreseeable future.  As mentioned above, a key point is that the app 
 has a significant requirement to interface to legacy systems - other Java 
 apps, SAP, Oracle identity management stack and so on.

 From a testing perspective, for this app we've got unit tests written in 
 Clojure/midje which are maintained by the app developers (as you'd expect). 
  These work well and midje is a good fit for the app.  However, given all 
 the various infrastructure requirements of the app, it's hard to see how we 
 can use midje to go all the way up the testing stack (unit -> system -> 
 integration -> pre-production -> production).
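
 For anyone who hasn't seen midje, the unit layer reads roughly like this 
 (the functions under test are invented; fact, =>, contains and provided 
 are all real midje.sweet):

   (ns myapp.orders-test
     (:require [midje.sweet :refer :all]    ; midje's main namespace
               [myapp.orders :as orders]))  ; invented app namespace

   ;; facts read as "expression => expected value"
   (fact "orders with a non-positive quantity are rejected"
     (orders/validate {:qty -1}) => (contains {:valid? false}))

   ;; collaborators can be stubbed inline with provided
   (fact "pricing applies the current tax rate"
     (orders/price-with-tax {:net 100}) => 110
     (provided (orders/tax-rate) => 1/10))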

 From the web client perspective, we've got UI automation tests written 
 using Ruby/Capybara, a toolset which I suspect was chosen based on the 
 existing skillset of the pool of testers.  Again this works well for us.

 The problem is with the middle ground between the two extremes of unit 
 and UI testing - our glaring problem at present is with integration 
 testing, but there's also a smaller problem with system testing.  We're 
 struggling to find an approach that works here, given the skillsets we have 
 on hand - fundamentally, we've got a (small) pool of developers who know 
 Clojure, a (small) pool of testers who know Ruby, and a larger pool of 
 testers who do primarily non-automated testing.

 In an ideal world, we'd probably use Clojure for all automated testing. 
  It seems relatively straightforward to use Stuart Sierra's component 
 library (https://github.com/stuartsierra/component) to mock out 
 infrastructure components such as databases, queues, email servers etc., 
 and doing so would let us address our system-level testing.  

 On the integration front, we could conceivably also leverage the same 
 component library to manage the state of all the various infrastructure 
 components that the app depends on, and thus ensure that we had a suitably 
 production-like environment for integration testing.  This would be a 
 non-trivial piece of work.
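
 To sketch what that might look like (component is the real library; the 
 Database protocol and the MockDb and App records are all invented for 
 illustration), the same system wiring could take a mock database for 
 system tests or a real one for the production-like integration rig:

   (ns myapp.system
     (:require [com.stuartsierra.component :as component]))

   ;; Invented port the rest of the app would code against.
   (defprotocol Database
     (fetch [db id]))

   ;; Mock component for system-level tests: no real lifecycle, just a map.
   (defrecord MockDb [rows]
     component/Lifecycle
     (start [this] this)
     (stop [this] this)
     Database
     (fetch [_ id] (get rows id)))

   ;; Invented application component; depends on whichever :db it is given.
   (defrecord App [db]
     component/Lifecycle
     (start [this] this)
     (stop [this] this))

   ;; One wiring for both jobs: hand it a MockDb for system tests, or a
   ;; real JDBC-backed record for the integration environment.
   (defn make-system [db]
     (component/system-map
      :db db
      :app (component/using (map->App {}) [:db])))

   (comment
     (def sys (component/start (make-system (->MockDb {1 {:name "test"}}))))
     (fetch (:db sys) 1)   ;=> {:name "test"}
     (component/stop sys))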

 Our big problem really boils down to just not having enough skilled 
 Clojure people available to the project.  You could point to any of the 
 following areas that are probably common to any non-trivial Clojure 
 application: either we don't have enough Clojure developers to address the 
 various requirements of system and integration testing, or our techops guys 
 don't have the necessary skills to expose a Clojure/component interface to 
 the various test/development environments, or our testers don't know 
 Clojure and aren't willing to take the word