Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-11-04 Thread David Mitchell
Thanks Colin

I probably should've added that this is a large enterprise doing what's 
essentially an in-house startup - building an app using Clojure.  The app 
initially sat somewhat off to the side of the corporation's main line of 
business, but there was always the intent to incorporate the new app into 
the existing core infrastructure at some future date.  It was left open as 
to how that would actually happen, and now we're having to confront that 
problem directly.  While we have something of a startup mentality within 
the project with respect to what gets done day to day and how it gets done, 
there's an overarching set of tools/processes/requirements/constraints that 
come hand in hand with working for a large enterprise.

One of those is that we've got an existing pool of testers already within 
the enterprise, and need to find some alignment between their existing 
tools/capabilities/processes/... and the requirements of our new shiny 
Clojure app.  We've got testers who've been on the project virtually since 
it began (who've NOT used Clojure but instead have focused on testing 
interfaces using SoapUI and the web client) but now the project has grown 
to the point where we need (a) more testers just to keep pace with 
development, (b) testers with different backgrounds (mobile being the big 
one) and (c) the testers need to come from the existing enterprise pool. 
 I'm sure we COULD cross-train at least some of these testers to do their 
work within Clojure, but there's currently zero long-term tangible benefit 
for either their career path or the enterprise in doing so.  If the 
enterprise had a pool of upcoming Clojure projects, then it would probably 
be a different story.

Absolutely agree that there's a lot of time and expense in testing that's 
not always focused appropriately, and that's ultimately what I'm trying to 
fix/avoid in this project.  Personally, I wish we'd all stop using the 
phrase "testing" and instead use "risk mitigation" to try to keep the 
desired outcome of the activity clear in everyone's mind at key decision 
points, but this isn't the forum for THAT discussion ;-)

Thanks again for your response

Dave M.



On Wednesday, 29 October 2014 20:07:34 UTC+11, Colin Yates wrote:

 I also meant to say that it is worth doing a cost-benefit analysis on 
 testing. When I did my consultant thing I frequently saw massive investment 
 in tools, process, emotion(!) etc. in things that on paper were a good 
 thing but in reality weren't mitigating any real risk.

 All testing isn't equal - what risks do you currently face right now (e.g. 
 are you playing whack-a-mole due to developers' poor impact analysis, or, 
 for example, are the developers consistently producing great code which 
 doesn't piece together well, and so on)? Put your energies into resolving 
 those risks.

 Most testing processes and resources I see are by definition reactive - 
 they find effects (bugs) of the cause (which is typically developer 
 insufficiency in terms of requirements). In my view resources should all be 
 about mitigating the cause. Why not get your testers (although I really 
 don't like segmenting resources by calling them "testers") to sit with a 
 developer just before they do a new piece of work and think through the 
 impact analysis? Have your testers take the use cases and start building 
 the test scenarios immediately. Have your testers review the developers' 
 unit tests - if it doesn't make sense to a (technically orientated) tester 
 then the developer is probably doing it wrong, and so on. 

 Simply spotting effects is helpful but all resources should be focused on 
 mitigating the cause of those effects and more often than not I see a whole 
 bunch of testing activity which isn't really solving any real problems 
 but is certainly slowing down the flow. Please let me be clear - I am not 
 challenging the _requirements_ of the traditional testing process (e.g. 
 ensuring quality), I am claiming the way most people do it is incredibly 
 expensive and inefficient.

 I can hear the internet taking a breath, saying "No!, he didn't just go 
 there" :).

 On Wednesday, 29 October 2014 00:12:52 UTC, David Mitchell wrote:

 Hi Colin

 Thanks for your reply.

 My post is almost exclusively technology oriented because I think the 
 technology is what's killing us!

 We've got what you'd probably call "BDD lite" working, in that we've got 
 a mutant form of agile process running whereby we work in 2 week sprints, 
 but there's rarely an installable product that emerges at the end of the 2 
 weeks.  I won't go into detail as to what I feel are the root causes in a 
 public forum - however I'm convinced that our adoption of Clojure is at 
 least partly to blame.

 Just to make it clear, I absolutely believe Clojure is a good tool to use 
 for this project, and personally I'll be actively seeking out other Clojure 
 projects in the future.  I'm saying that from the viewpoint of someone 
 who's 

Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-11-04 Thread David Mitchell
Thanks Linus,

You make a really good point - why can't testers use the REPL?  I'd like to 
think that was possible too - after all, *I* can do it so why can't anyone 
else?

That said, I'm slightly blessed in that I've done a lot of work in Erlang 
and R over many years, so I was already comfortable with functional 
programming.  Even then, I found it quite a learning curve to get a level 
of competency with Clojure - the language itself is pretty easy, but 
working out what libraries exist for a specific domain and how best to 
leverage them is hard work.

If we had a bunch of pre-canned test scripts that the testers had to run 
and check the results, then the REPL wouldn't be at all important - testers 
would simply e.g. type commands into the REPL and make sure the responses 
were appropriate.  Unfortunately, we want our testers to do a lot more 
exploratory-style testing; for example, send some data through a published 
API and confirm it gets written to the database correctly, then send some 
incorrect data through the same API and confirm it DOESN'T get written to 
the database.  Doing these sorts of tasks requires either (a) a fair grasp 
of Clojure *and* several libraries if the tester is working alone, or (b) a 
fairly tight partnership with a developer who would essentially tell the 
tester what to type, or (c) a set of domain-specific tools built to 
simplify this testing to the point where the tester can do these tasks with 
minimal assistance.  
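To make that concrete, here's roughly the session I'd want a tester to be 
able to drive unaided - just a sketch, with a hypothetical endpoint, table 
and db-spec, and assuming clj-http and clojure.java.jdbc are on the 
classpath:

    ;; Sketch of an exploratory REPL session; all names are hypothetical.
    (require '[clj-http.client :as http]
             '[clojure.java.jdbc :as jdbc])

    (def db-spec {:connection-uri "jdbc:postgresql://test-db/app"})

    ;; 1. Send valid data through the published API...
    (http/post "http://test-env/api/orders"
               {:form-params {:id 42 :qty 1} :content-type :json})

    ;; 2. ...and confirm it reached the database.
    (jdbc/query db-spec ["select * from orders where id = ?" 42])

    ;; 3. Send invalid data (negative quantity), keeping the 4xx response...
    (http/post "http://test-env/api/orders"
               {:form-params {:id 43 :qty -1} :content-type :json
                :throw-exceptions false})

    ;; 4. ...and confirm it did NOT reach the database.
    (jdbc/query db-spec ["select * from orders where id = ?" 43])  ;=> ()

Four forms, but every one of them assumes a library our testers have never 
seen - which is exactly the gap.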

Now, if you or I were given this testing task to perform *and* we knew it 
was going to be an ongoing demand on our time, we'd probably take option 
(c) and build some tools to make our lives as simple as possible.  That 
presupposes that we have the Clojure and domain knowledge to do so, which 
our testers don't have.  Option (b) is what we're essentially doing today - 
pairing each tester so closely with a developer is inherently non-scalable, 
and we're now hitting the limitations of how far this approach will take 
us.  Option (a) - ... well, I'd love it if we had testers who already knew 
Clojure, or a stream of Clojure projects whereby acquiring these skills 
made sense for the testers' long term career prospects.
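For what it's worth, option (c) might not even be much code. A sketch of 
the kind of wrapper I have in mind (all names hypothetical):

    (ns testkit.orders
      "Domain-specific helpers so testers never touch clj-http or JDBC directly."
      (:require [clj-http.client :as http]
                [clojure.java.jdbc :as jdbc]))

    (def api-base "http://test-env/api")                              ;; hypothetical
    (def db-spec  {:connection-uri "jdbc:postgresql://test-db/app"})  ;; hypothetical

    (defn submit-order!
      "Send an order map through the published API; returns the HTTP response."
      [order]
      (http/post (str api-base "/orders")
                 {:form-params order
                  :content-type :json
                  :throw-exceptions false}))

    (defn order-in-db?
      "True if an order with the given id was written to the database."
      [id]
      (boolean (seq (jdbc/query db-spec
                                ["select * from orders where id = ?" id]))))

A tester's whole session then shrinks to (submit-order! {:id 42 :qty 1}) 
followed by (order-in-db? 42), which feels learnable without a Clojure 
background.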

What's really frustrating is that the app we're building is actually pretty 
simple, and a competent tester doesn't require deep domain knowledge in 
order to work out what needs to be tested.  The problem is in their working 
out how to attack it, within a set of constraints that are probably pretty 
common to most work environments.

Thanks again

Dave M.

On Thursday, 30 October 2014 17:55:48 UTC+11, Linus Ericsson wrote:


 I really can't see how the testers could NOT be able to use a REPL to do 
 some exploratory testing.

 Clojure's strength is really that you can align the code very closely to 
 the domain, although this modelling is (as always) challenging.

 And the application logic does not have to be tested through an 
 HTTP-based interface; sometimes it's a good start to test it through the 
 REPL.

 If this is hard to use, then everything will be hard with the application.

 /Linus
 On 29 Oct 2014 10:07, Colin Yates colin...@gmail.com wrote:

 I also meant to say that it is worth doing a cost-benefit analysis on 
 testing. When I did my consultant thing I frequently saw massive investment 
 in tools, process, emotion(!) etc. in things that on paper were a good 
 thing but in reality weren't mitigating any real risk.

 All testing isn't equal - what risks do you currently face right now 
 (e.g. are you playing whack-a-mole due to developers' poor impact 
 analysis, or, for example, are the developers consistently producing great 
 code which doesn't piece together well, and so on)? Put your energies into 
 resolving those risks.

 Most testing processes and resources I see are by definition reactive - 
 they find effects (bugs) of the cause (which is typically developer 
 insufficiency in terms of requirements). In my view resources should all be 
 about mitigating the cause. Why not get your testers (although I really 
 don't like segmenting resources by calling them "testers") to sit with a 
 developer just before they do a new piece of work and think through the 
 impact analysis? Have your testers take the use cases and start building 
 the test scenarios immediately. Have your testers review the developers' 
 unit tests - if it doesn't make sense to a (technically orientated) tester 
 then the developer is probably doing it wrong, and so on. 

 Simply spotting effects is helpful but all resources should be focused on 
 mitigating the cause of those effects and more often than not I see a whole 
 bunch of testing activity which isn't really solving any real problems 
 but is certainly slowing down the flow. Please let me be clear - I am not 
 challenging the _requirements_ of the traditional testing process (e.g. 
 ensuring quality), I am claiming the way most people do it is incredibly 
 expensive and 

Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-10-30 Thread Linus Ericsson
I really can't see how the testers could NOT be able to use a REPL to do
some exploratory testing.

Clojure's strength is really that you can align the code very closely to
the domain, although this modelling is (as always) challenging.

And the application logic does not have to be tested through an HTTP-based
interface; sometimes it's a good start to test it through the REPL.
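A minimal sketch of that separation, with hypothetical names - the point 
being that the domain function is callable without any HTTP plumbing:

    ;; Domain logic as a plain function...
    (defn place-order
      "Validate an order map; accept or reject it."
      [order]
      (if (pos? (:qty order 0))
        {:status :accepted :order order}
        {:status :rejected :reason :non-positive-qty}))

    ;; ...with the HTTP layer as a thin Ring-style adapter over it.
    (defn handler [request]
      (let [result (place-order (:params request))]
        {:status (if (= :accepted (:status result)) 200 400)
         :body   (pr-str result)}))

    ;; At the REPL a tester skips HTTP entirely:
    (place-order {:qty 1})   ;=> {:status :accepted, :order {:qty 1}}
    (place-order {:qty -1})  ;=> {:status :rejected, :reason :non-positive-qty}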

If this is hard to use, then everything will be hard with the application.

/Linus
On 29 Oct 2014 10:07, Colin Yates colin.ya...@gmail.com wrote:

 I also meant to say that it is worth doing a cost-benefit analysis on
 testing. When I did my consultant thing I frequently saw massive investment
 in tools, process, emotion(!) etc. in things that on paper were a good
 thing but in reality weren't mitigating any real risk.

 All testing isn't equal - what risks do you currently face right now (e.g.
 are you playing whack-a-mole due to developers' poor impact analysis, or,
 for example, are the developers consistently producing great code which
 doesn't piece together well, and so on)? Put your energies into resolving
 those risks.

 Most testing processes and resources I see are by definition reactive -
 they find effects (bugs) of the cause (which is typically developer
 insufficiency in terms of requirements). In my view resources should all be
 about mitigating the cause. Why not get your testers (although I really
 don't like segmenting resources by calling them "testers") to sit with a
 developer just before they do a new piece of work and think through the
 impact analysis? Have your testers take the use cases and start building
 the test scenarios immediately. Have your testers review the developers'
 unit tests - if it doesn't make sense to a (technically orientated) tester
 then the developer is probably doing it wrong, and so on.

 Simply spotting effects is helpful but all resources should be focused on
 mitigating the cause of those effects and more often than not I see a whole
 bunch of testing activity which isn't really solving any real problems
 but is certainly slowing down the flow. Please let me be clear - I am not
 challenging the _requirements_ of the traditional testing process (e.g.
 ensuring quality), I am claiming the way most people do it is incredibly
 expensive and inefficient.

 I can hear the internet taking a breath, saying "No!, he didn't just go
 there" :).

 On Wednesday, 29 October 2014 00:12:52 UTC, David Mitchell wrote:

 Hi Colin

 Thanks for your reply.

 My post is almost exclusively technology oriented because I think the
 technology is what's killing us!

 We've got what you'd probably call "BDD lite" working, in that we've got
 a mutant form of agile process running whereby we work in 2 week sprints,
 but there's rarely an installable product that emerges at the end of the 2
 weeks.  I won't go into detail as to what I feel are the root causes in a
 public forum - however I'm convinced that our adoption of Clojure is at
 least partly to blame.

 Just to make it clear, I absolutely believe Clojure is a good tool to use
 for this project, and personally I'll be actively seeking out other Clojure
 projects in the future.  I'm saying that from the viewpoint of someone
 who's employed in the testing area, but who also has quite a bit of Clojure
 development experience.  There's just this gulf at present between the
 people who know Clojure (almost exclusively developers) and other technical
 staff involved in the application lifecycle (testers, infrastructure
 owners, all the various technical managers) that we're finding very
 difficult to manage.

 For example, it'd be great if we could pair up our testers and
 developers, have them working side by side and rapidly iterating through
 e.g. Cucumber feature definition, coding and testing.  That would be
 absolutely ideal for this particular project, where a complete set of test
 cases can't be 100% defined up front and lots of minor questions arise even
 within an iteration.  If this working arrangement was viable, every time we
 hit a point that needed clarification, the tester could engage the product
 owner, get clarification and jump back in to their normal work with minimal
 disruption.  However, our testers simply can't provide enough useful input
 into development - they're currently stuck waiting for developers to hand
 their code over *in a form that the testers can test it*, and often there's
 a lot of extra (wasted?) effort involved to take working Clojure code and
 make it testable using non-Clojure tools.

 To say this is an inefficient working model would be a massive
 understatement.  What we're seeing is that our developers work like mad for
 the first week of a 2 week iteration, while the testers are largely idle;
 then code gets handed over and the developers are largely idle while the
 testers work like mad trying to finish their work before the end of the
 iteration.  Our automation testers are valiantly trying to use SoapUI and
 Groovy 

Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-10-29 Thread Colin Yates
Hi David,

Not to overstep the implied etiquette of these situations/patronise or 
condescend(!), but reading between the lines, I don't think you have a 
technology constraint; I think you have a process constraint. You dropped 
the word "agile" in there (which is usually a license for people to 
mercilessly butcher a process, removing everything they personally don't 
like, and call it a "lightweight agile process" :)) but what you describe 
doesn't really relate to my definition of agile. The word "agile" also 
doesn't really mean anything, as it has been abused and associated with so 
many things.

For me, agile (http://www.agilemanifesto.org/) is first and foremost about 
getting the right people in the right place at the right time. Agility is 
about responding to change, whether that is a change in requirements, 
tooling, process, large/small design, implementation etc. I have never seen 
anything so efficient as this when it is done right. The main criterion of 
success for implementing agile is mindset and culture.

TOC (the Theory of Constraints) also tells us to focus *only* on the 
one thing that is holding up the flow the most. Finding the answer to that 
is the interesting part, but testers idle waiting for programmers and 
programmers idle waiting for testers can't be right.

If I were in your shoes, I would ask myself the following questions:

- what is the main cause of this problem? Process or tech?
- can I cause the necessary change? (If not, get out now!)
- what is the constraint of the system, and how can I help them (e.g. get 
everybody else to leave them the heck alone/provide whatever they need)?

I would also push for the following changes:
- working software is key. The software works. Always. If you aren't 
producing software that works then what are people paid for? Everything 
should subordinate to that.
- developers write code that works, period. There is no separate quality 
enforcement, there is only development. Developers' definition of "done" 
is sufficient. Of course you still might want a tick-box quality test, but 
it should be a safety check which consistently says "yeah, all fine".
- people need to have skills, not job titles. If you are really good at 
finding overlooked edge cases then great - come here for a bit and kick the 
tyres on this. I don't care what your title is. Don't segment your 
resources - it is all one team.
- figure out what actions need to be done to get software to the clients, 
reframe everybody's purpose in terms of addressing those actions. Often the 
right thing is to have resources sitting idle so they can immediately 
subordinate to the constraint.
- process, just like everything else, is just another chunk of 
inventory/investment which should be scrutinised and refactored ruthlessly.

I don't know what else to say to help you move forward; this is one guy's 
opinion, and I can guarantee that there will be many contradictory ones, and 
that is great. The point being there are very few silver bullets (except 
Clojure, obviously ;)); process is incredibly context sensitive so what 
works for me might not work for you. 

As I say, these are just my thoughts and might not work out for you, but 
even if everybody became Clojure experts I am not sure that is solving your 
biggest constraint. I also fully expect somebody else to come and explain 
why these things (which have worked for me for a good while now) are 
completely wrong :). It is all context sensitive. I am not sure continuing 
this (process discussion) on a Clojure thread is the right way forward 
either but please feel free to email me if this is helpful.

If you wanted some more reading, I can highly recommend The Pragmatic 
Programmer[1] and The Clean Coder[2] (not Clean Code though - sorry Bob 
:)).

[1] https://pragprog.com/book/tpp/the-pragmatic-programmer
[2] 
http://www.amazon.co.uk/The-Clean-Coder-Professional-Programmers/dp/0137081073%3FSubscriptionId%3DAKIAILSHYYTFIVPWUY6Q%26tag%3Dduc08-21%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3D0137081073

On Wednesday, 29 October 2014 00:12:52 UTC, David Mitchell wrote:

 Hi Colin

 Thanks for your reply.

 My post is almost exclusively technology oriented because I think the 
 technology is what's killing us!

 We've got what you'd probably call "BDD lite" working, in that we've got a 
 mutant form of agile process running whereby we work in 2 week sprints, but 
 there's rarely an installable product that emerges at the end of the 2 
 weeks.  I won't go into detail as to what I feel are the root causes in a 
 public forum - however I'm convinced that our adoption of Clojure is at 
 least partly to blame.

 Just to make it clear, I absolutely believe Clojure is a good tool to use 
 for this project, and personally I'll be actively seeking out other Clojure 
 projects in the future.  I'm saying that from the viewpoint of someone 
 who's employed in the testing area, but who also has quite a bit of Clojure 
 development experience.  

Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-10-29 Thread Colin Yates
I also meant to say that it is worth doing a cost-benefit analysis on 
testing. When I did my consultant thing I frequently saw massive investment 
in tools, process, emotion(!) etc. in things that on paper were a good 
thing but in reality weren't mitigating any real risk.

All testing isn't equal - what risks do you currently face right now (e.g. 
are you playing whack-a-mole due to developers' poor impact analysis, or, 
for example, are the developers consistently producing great code which 
doesn't piece together well, and so on)? Put your energies into resolving 
those risks.

Most testing processes and resources I see are by definition reactive - 
they find effects (bugs) of the cause (which is typically developer 
insufficiency in terms of requirements). In my view resources should all be 
about mitigating the cause. Why not get your testers (although I really 
don't like segmenting resources by calling them "testers") to sit with a 
developer just before they do a new piece of work and think through the 
impact analysis? Have your testers take the use cases and start building 
the test scenarios immediately. Have your testers review the developers' 
unit tests - if it doesn't make sense to a (technically orientated) tester 
then the developer is probably doing it wrong, and so on. 

Simply spotting effects is helpful but all resources should be focused on 
mitigating the cause of those effects and more often than not I see a whole 
bunch of testing activity which isn't really solving any real problems 
but is certainly slowing down the flow. Please let me be clear - I am not 
challenging the _requirements_ of the traditional testing process (e.g. 
ensuring quality), I am claiming the way most people do it is incredibly 
expensive and inefficient.

I can hear the internet taking a breath, saying "No!, he didn't just go 
there" :).

On Wednesday, 29 October 2014 00:12:52 UTC, David Mitchell wrote:

 Hi Colin

 Thanks for your reply.

 My post is almost exclusively technology oriented because I think the 
 technology is what's killing us!

 We've got what you'd probably call "BDD lite" working, in that we've got a 
 mutant form of agile process running whereby we work in 2 week sprints, but 
 there's rarely an installable product that emerges at the end of the 2 
 weeks.  I won't go into detail as to what I feel are the root causes in a 
 public forum - however I'm convinced that our adoption of Clojure is at 
 least partly to blame.

 Just to make it clear, I absolutely believe Clojure is a good tool to use 
 for this project, and personally I'll be actively seeking out other Clojure 
 projects in the future.  I'm saying that from the viewpoint of someone 
 who's employed in the testing area, but who also has quite a bit of Clojure 
 development experience.  There's just this gulf at present between the 
 people who know Clojure (almost exclusively developers) and other technical 
 staff involved in the application lifecycle (testers, infrastructure 
 owners, all the various technical managers) that we're finding very 
 difficult to manage.

 For example, it'd be great if we could pair up our testers and developers, 
 have them working side by side and rapidly iterating through e.g. Cucumber 
 feature definition, coding and testing.  That would be absolutely ideal for 
 this particular project, where a complete set of test cases can't be 100% 
 defined up front and lots of minor questions arise even within an 
 iteration.  If this working arrangement was viable, every time we hit a 
 point that needed clarification, the tester could engage the product owner, 
 get clarification and jump back in to their normal work with minimal 
 disruption.  However, our testers simply can't provide enough useful input 
 into development - they're currently stuck waiting for developers to hand 
 their code over *in a form that the testers can test it*, and often there's 
 a lot of extra (wasted?) effort involved to take working Clojure code and 
 make it testable using non-Clojure tools.  

 To say this is an inefficient working model would be a massive 
 understatement.  What we're seeing is that our developers work like mad for 
 the first week of a 2 week iteration, while the testers are largely idle; 
 then code gets handed over and the developers are largely idle while the 
 testers work like mad trying to finish their work before the end of the 
 iteration.  Our automation testers are valiantly trying to use SoapUI and 
 Groovy and (to a small extent) Cucumber/Ruby to test our Clojure code, but 
 those tools require that there are exposed HTTP endpoints (SoapUI) or Java 
 classes (Groovy or *JRuby*) that the tool can use to exercise the 
 underlying Clojure code.  These endpoints exist, but only at a very high 
 level - our UI testing, which works very well, is already hitting those 
 same endpoints.

 Additionally, our QA manager wants our testers to be able to do more 
 exploratory testing, based on his personal 

Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-10-28 Thread Colin Yates
Hi David,

Your post is very technology orientated (which is fine!). Have you looked 
into BDD type specifications? I am talking specifically the process 
described in http://specificationbyexample.com/. If you haven't, I strongly 
recommend you do as the win in this situation is they separate the required 
behaviour of the system (i.e. the specification) being tested from the 
technical geekery of asserting that behaviour. In brief, this process, when 
done well:
 - defines behaviour in readable text documents (albeit restricted by the 
Gherkin grammar)
 - the same specification is consumed by the stakeholders and the computer 
(and, if you want bonus points, is produced by/with the stakeholders :))
 - provides access to many libraries to interpret and execute those specs 
(http://cukes.info/ being the main one etc.)
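For example, a (hypothetical) specification in that grammar might read:

    Feature: Order submission
      Scenario: A valid order is accepted
        Given an empty orders database
        When I submit an order for 1 widget
        Then the order is stored in the database

      Scenario: An invalid order is rejected
        Given an empty orders database
        When I submit an order for -1 widgets
        Then no order is stored in the database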

Once you get into the whole vibe of freeing your specs from implementation 
a whole new world opens up. http://fitnesse.org/ for example, is another 
approach.

I am suggesting the tension in your post around "how do we collate all our 
resources around an unfamiliar tool" might be best addressed by using a new 
tool - the shared artifacts are readable English textual specifications 
which everybody collaborates on. The geeks do their thing (using Ruby, 
Clojure, groovy, Scala, Selenium, A.N.Other etc.) to execute those same 
specs.

On Monday, 27 October 2014 04:21:07 UTC, David Mitchell wrote:

 Hi group,

 Apologies for the somewhat cryptic subject line - I'll try to explain... 
  Apologies also for the length of the post, but I'm sure others will hit 
 the same problem if they haven't already done so, and hopefully this 
 discussion will help them find a way out of a sticky situation.

 We've got a (notionally agile) Clojure app under heavy development.  The 
 project itself follows the Agile Manifesto to a degree, but is constrained 
 in having to interface with other applications that are following a 
 waterfall process.  Yep, it's awkward, but that's not what I'm asking about.

 Simplifying it as much as possible, we started with a pre-existing, 
 somewhat clunky, Java app, then extended the server side extensively using 
 Clojure, and added a web client.  There's loads of (non-Clojure) supporting 
 infrastructure - database cluster, queue servers, identity management, etc. 
  At any point, we've got multiple streams of Clojure development going on, 
 hitting different parts of the app.  The web client development is 
 "traditional" in that it's not using ClojureScript, and probably won't in 
 the foreseeable future.  As mentioned above, a key point is that the app 
 has a significant requirement to interface to legacy systems - other Java 
 apps, SAP, Oracle identity management stack and so on.

 From a testing perspective, for this app we've got unit tests written in 
 Clojure/midje which are maintained by the app developers (as you'd expect). 
  These work well and midje is a good fit for the app.  However, given all 
 the various infrastructure requirements of the app, it's hard to see how we 
 can use midje to go all the way up the testing stack (unit -> system -> 
 integration -> pre-production -> production).

 From the web client perspective, we've got UI automation tests written 
 using Ruby/Capybara, a toolset which I suspect was chosen based on the 
 existing skillset of the pool of testers.  Again this works well for us.

 The problem is with the middle ground between the two extremes of unit 
 and UI testing - our glaring problem at present is with integration 
 testing, but there's also a smaller problem with system testing.  We're 
 struggling to find an approach that works here, given the skillsets we have 
 on hand - fundamentally, we've got a (small) pool of developers who know 
 Clojure, a (small) pool of testers who know Ruby, and a larger pool of 
 testers who do primarily non-automated testing.

 In an ideal world, we'd probably use Clojure for all automated testing. 
  It seems relatively straightforward to use Stuart Sierra's component 
 library (https://github.com/stuartsierra/component) to mock out 
 infrastructure components such as databases, queues, email servers etc., 
 and doing so would let us address our system-level testing.  
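 A minimal sketch of that idea, with hypothetical names (StubDatabase 
 stands in for a real database component, App for the application):

    ;; Swap a real database for an in-memory stub via the same system wiring.
    (ns myapp.test-system
      (:require [com.stuartsierra.component :as component]))

    (defrecord StubDatabase [store]
      component/Lifecycle
      (start [this] (assoc this :store (atom {})))
      (stop  [this] (assoc this :store nil)))

    (defrecord App [db])  ;; stand-in for the real application component

    (defn test-system []
      (component/system-map
       :db  (->StubDatabase nil)
       :app (component/using (->App nil) [:db])))

    ;; (def sys (component/start (test-system)))
    ;; ...exercise system-level behaviour against sys, then:
    ;; (component/stop sys)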

 On the integration front, we could conceivably also leverage the same 
 component library to manage the state of all the various infrastructure 
 components that the app depends on, and thus ensure that we had a suitably 
 production-like environment for integration testing.  This would be a 
 non-trivial piece of work.

 Our big problem really boils down to just not having enough skilled 
 Clojure people available to the project.  You could point to any of the 
 following areas that are probably common to any non-trivial Clojure 
 application: either we don't have enough Clojure developers to address the 
 various requirements of system and integration testing, or our techops guys 
 don't have the necessary skills to expose a 

Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-10-28 Thread David Mitchell
Hi Colin

Thanks for your reply.

My post is almost exclusively technology oriented because I think the 
technology is what's killing us!

We've got what you'd probably call "BDD lite" working, in that we've got a 
mutant form of agile process running whereby we work in 2 week sprints, but 
there's rarely an installable product that emerges at the end of the 2 
weeks.  I won't go into detail as to what I feel are the root causes in a 
public forum - however I'm convinced that our adoption of Clojure is at 
least partly to blame.

Just to make it clear, I absolutely believe Clojure is a good tool to use 
for this project, and personally I'll be actively seeking out other Clojure 
projects in the future.  I'm saying that from the viewpoint of someone 
who's employed in the testing area, but who also has quite a bit of Clojure 
development experience.  There's just this gulf at present between the 
people who know Clojure (almost exclusively developers) and other technical 
staff involved in the application lifecycle (testers, infrastructure 
owners, all the various technical managers) that we're finding very 
difficult to manage.

For example, it'd be great if we could pair up our testers and developers, 
have them working side by side and rapidly iterating through e.g. Cucumber 
feature definition, coding and testing.  That would be absolutely ideal for 
this particular project, where a complete set of test cases can't be 100% 
defined up front and lots of minor questions arise even within an 
iteration.  If this working arrangement was viable, every time we hit a 
point that needed clarification, the tester could engage the product owner, 
get clarification and jump back in to their normal work with minimal 
disruption.  However, our testers simply can't provide enough useful input 
into development - they're currently stuck waiting for developers to hand 
their code over *in a form that the testers can test it*, and often there's 
a lot of extra (wasted?) effort involved to take working Clojure code and 
make it testable using non-Clojure tools.  

To say this is an inefficient working model would be a massive 
understatement.  What we're seeing is that our developers work like mad for 
the first week of a 2 week iteration, while the testers are largely idle; 
then code gets handed over and the developers are largely idle while the 
testers work like mad trying to finish their work before the end of the 
iteration.  Our automation testers are valiantly trying to use SoapUI and 
Groovy and (to a small extent) Cucumber/Ruby to test our Clojure code, but 
those tools require that there are exposed HTTP endpoints (SoapUI) or Java 
classes (Groovy or *JRuby*) that the tool can use to exercise the 
underlying Clojure code.  These endpoints exist, but only at a very high 
level - our UI testing, which works very well, is already hitting those 
same endpoints.
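One option we haven't fully explored would be to AOT-compile a small 
gen-class facade, so the Groovy side sees an ordinary Java class. A sketch, 
with hypothetical names (it would need e.g. :aot [myapp.testing.facade] in 
project.clj):

    (ns myapp.testing.facade
      "Java-callable facade over Clojure code, for Groovy/SoapUI testers."
      (:gen-class
       :name myapp.testing.Facade
       :methods [^:static [validOrder [int] boolean]]))

    (defn- valid-order? [qty]   ;; stand-in for the real domain logic
      (pos? qty))

    (defn -validOrder [qty]     ;; static methods take no 'this' argument
      (boolean (valid-order? qty)))

A Groovy script could then call myapp.testing.Facade.validOrder(1) without 
knowing any Clojure.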

Additionally, our QA manager wants our testers to be able to do more 
exploratory testing, based on his personal experience of using Ruby's 
interactive shell, and simply "trying stuff out".  That approach makes a 
lot of sense for this project, and I know that using a Clojure REPL could 
provide a great platform for this type of testing, but doing that would 
require a sizeable investment in our testers learning to use Clojure.

I'm starting to wonder whether there's actually any point trying to do 
*any* system testing of Clojure apps under active development, as maybe 
that risk exposure can best be addressed by enforcing suitable coding 
standards (e.g. :pre and :post conditions), and then extending what would 
normally be unit tests to address whole-of-system functionality.  After 
all, for an app written in a functional language - where you've basically 
only got functions that take parameters and return a result, minimal state 
to manage, and usually a small set of functions having side effects like 
database IO - surely a lot of your traditional functional test scenarios 
would simply be tightly-targeted unit tests anyway.
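To illustrate the kind of coding standard I mean (a hypothetical function):

    ;; :pre/:post conditions make the contract executable - a violation
    ;; throws an AssertionError naming the failed form.
    (defn withdraw
      "Deduct amount from balance; the balance must cover it."
      [balance amount]
      {:pre  [(number? amount) (pos? amount) (<= amount balance)]
       :post [(>= % 0)]}
      (- balance amount))

    (withdraw 100 30)    ;=> 70
    ;; (withdraw 100 -5) throws AssertionError: Assert failed: (pos? amount)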

Maybe we should be handing off our single-system functional testing 
entirely to developers, and only engaging our dedicated QA people once we 
get to integrating all the different streams of development together.  That 
seems to be the approach that Craig's project (thanks Craig!) is taking, 
and it'd definitely be easier to work with compared to our current 
processes.  Due to the lack of oversight and up-front objective 
requirements, there could be an increased risk that our developers are 
writing code to solve the wrong problem, but maybe that's just something we 
need to live with.

If anyone else has any thoughts, I'd REALLY appreciate hearing about them. 
 Thanks again to Colin and Craig

On Tuesday, 28 October 2014 20:04:39 UTC+11, Colin Yates wrote:

 Hi David,

 Your post is very technology orientated (which is fine!). Have you looked 
 into BDD type specifications? I am talking specifically the process 
 described in 

Re: Testing and infrastructure management for large Clojure app - how are you doing it?

2014-10-27 Thread Craig Brozefsky
On our non-trivial application, we have broken our testing into the
following sets:

* Unit Tests -- written by devs, run as part of our integration builder and
when doing dev
* Integration Tests -- automated, hitting our external APIs, written in
clojure, maintained by the devs mostly, run as part of integration builder,
and occasionally during dev
* System Test -- Performed by QA team, manual and automated (using Ruby and
Python) against our external APIs, performed as the final phase of our 2 week
release cycle

Domain knowledge (malware analysis) is a bigger limiter than clojure
experience when it comes to our testing.  Our QA team doesn't know clojure
at all.  They use the same public facing APIs for their tests that we
expose to the customers.

As far as infrastructure management, the same mechanism we use to deploy a
production instance is used to deploy a staging and testing instance.
There is nothing clojure-specific about this; in fact it's Arch Linux
packages, systemd, and our own hand-rolled tachikoma orchestration tool.
I would not try to pull everything into your system and have
deployment/configuration managed by clojure.

I would not have testers writing unit tests.  I would not have coders
writing System Tests.  Integration tests, depending on what interface they
are targeting, can be done in any language.  Most of ours are in clojure
because we wrote an HTTP client early on, clj-mook, and didn't have a
dedicated QA team.  Since then, we have integration tests implemented in a
few different languages.

In short, our system seems quite similar to yours, but we are not trying
to unify our test stack, or take over deploy/config in clojure (even though
we're at the core of the whole system).  We leave that to the tools our Ops
and QA team selects, and I don't see sufficient win in unifying all of them
under clojure to justify that.



On Mon, Oct 27, 2014 at 12:21 AM, David Mitchell monch1...@gmail.com
wrote:

 Hi group,

 Apologies for the somewhat cryptic subject line - I'll try to explain...
 Apologies also for the length of the post, but I'm sure others will hit the
 same problem if they haven't already done so, and hopefully this discussion
 will help them find a way out of a sticky situation.

 We've got a (notionally agile) Clojure app under heavy development.  The
 project itself follows the Agile Manifesto to a degree, but is constrained
 in having to interface with other applications that are following a
 waterfall process.  Yep, it's awkward, but that's not what I'm asking about.

 Simplifying it as much as possible, we started with a pre-existing,
 somewhat clunky, Java app, then extended the server side extensively using
 Clojure, and added a web client.  There's loads of (non-Clojure) supporting
 infrastructure - database cluster, queue servers, identity management,
 etc.  At any point, we've got multiple streams of Clojure development going
 on, hitting different parts of the app.  The web client development is
 "traditional" in that it's not using ClojureScript, and probably won't in
 the foreseeable future.  As mentioned above, a key point is that the app
 has a significant requirement to interface to legacy systems - other Java
 apps, SAP, Oracle identity management stack and so on.

 From a testing perspective, for this app we've got unit tests written in
 Clojure/midje which are maintained by the app developers (as you'd
 expect).  These work well and midje is a good fit for the app.  However,
 given all the various infrastructure requirements of the app, it's hard to
 see how we can use midje to go all the way up the testing stack (unit -
 system - integration - pre-production - production).

 From the web client perspective, we've got UI automation tests written
 using Ruby/Capybara, a toolset which I suspect was chosen based on the
 existing skillset of the pool of testers.  Again this works well for us.

 The problem is with the middle ground between the two extremes of unit
 and UI testing - our glaring problem at present is with integration
 testing, but there's also a smaller problem with system testing.  We're
 struggling to find an approach that works here, given the skillsets we have
 on hand - fundamentally, we've got a (small) pool of developers who know
 Clojure, a (small) pool of testers who know Ruby, and a larger pool of
 testers who do primarily non-automated testing.

 In an ideal world, we'd probably use Clojure for all automated testing.
 It seems relatively straightforward to use Stuart Sierra's component
 library (https://github.com/stuartsierra/component) to mock out
 infrastructure components such as databases, queues, email servers etc.,
 and doing so would let us address our system-level testing.

 On the integration front, we could conceivably also leverage the same
 component library to manage the state of all the various infrastructure
 components that the app depends on, and thus ensure that we had a suitably
 production-like environment for