Hi all,

A while ago I opened a JIRA ticket for various test improvements I had been 
thinking about:

https://issues.apache.org/jira/browse/LIBCLOUD-289

It comes with some code, too.


There were a few key points I wanted to address:

 * It is very easy to break things when refactoring. I've managed to slip quite 
a few bugs past the tests and review so far, and this is something I want to 
stop! Anything I can do to make the tests better at spotting my bugs, I have to 
do.

 * The tests don't enforce our interface. The danger here is that we implement 
50 compute abstractions that sort of work the same. For me, subtle and 
surprising differences are worse than using multiple different libraries.

 * The mocks should make more assertions. It's possible to call a server API 
but, for example, not pass any parameters and still have the correct data 
returned!
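
To make that concrete, here is roughly the failure mode (the names below are 
made up for this email, not real driver code): a static-fixture mock hands 
back the canned response no matter what the client actually sent.

    class StaticFixtureMockHttp:
        """Stand-in for an HTTP connection in tests (illustrative only)."""

        def request(self, method, url, body=None, headers=None):
            # The fixture is returned unconditionally -- nothing here
            # checks that `body` contains the balancer name or port.
            return 200, '{"id": "lb-1", "name": "web", "port": 80}'

    mock = StaticFixtureMockHttp()
    status, data = mock.request("POST", "/loadbalancers")  # no body at all
    assert status == 200  # passes even though we sent no parameters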


For the code referenced in the ticket:

 * There is a shared set of tests that all LB drivers (that I have updated) 
use. This helps to pin down the interface and ensure all drivers have the same 
behaviour.

 * It throws out static fixtures in favour of a mock that "works": if you call 
create twice and then list, you will see 2 records.

 * These mocks check much more: first of all, that parameters are actually 
transmitted to the server, and in the AWS case I do some checks of the auth. 
There's a rough sketch of the idea just below.
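
A minimal sketch of that kind of mock (again, made-up names for the email; 
the real code is attached to the ticket):

    import json

    class StatefulLBMock:
        """A mock that keeps state and asserts on its inputs."""

        def __init__(self):
            self._balancers = {}
            self._next_id = 1

        def create(self, body):
            params = json.loads(body)
            # Fail loudly if the driver never transmitted its parameters,
            # instead of handing back a canned record regardless.
            assert "name" in params, "driver did not send a name"
            assert isinstance(params.get("port"), int), "port must be an int"
            record = dict(params, id=str(self._next_id))
            self._balancers[record["id"]] = record
            self._next_id += 1
            return record

        def list(self):
            return list(self._balancers.values())

    mock = StatefulLBMock()
    mock.create(json.dumps({"name": "a", "port": 80}))
    mock.create(json.dumps({"name": "b", "port": 81}))
    assert len(mock.list()) == 2  # both creates really show up in list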


The change in approach uncovered bugs and incompatible interfaces. For example, 
some implementations use strings for ports and some use integers; if memory 
serves (I wrote this at the start of February), some use both. Some return 
values are of different types too.
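
The shared tests are what pin this sort of thing down. Roughly (illustrative 
names again, not the patch itself), each driver's test case mixes in the 
shared suite and only has to provide its own driver:

    import unittest

    class LBDriverInterfaceTests:
        """Mixed into every driver's test case, so each driver is
        checked against the same interface contract."""

        def test_balancer_port_is_an_int(self):
            balancer = self.make_driver().list_balancers()[0]
            # Pins down the inconsistency above: ports are ints, not strings.
            self.assertIsInstance(balancer.port, int)

    class FakeBalancer:
        def __init__(self, port):
            self.port = port

    class FakeDriver:
        """Hypothetical driver wired up to a mock backend."""
        def list_balancers(self):
            return [FakeBalancer(port=80)]

    class FakeDriverTests(LBDriverInterfaceTests, unittest.TestCase):
        def make_driver(self):
            return FakeDriver()

    if __name__ == "__main__":
        unittest.main()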


The code isn't finished yet, but it has reached a point where I wanted some 
feedback before continuing. Initially I was going to update just the load 
balancers, and then do DNS and the others as separate tickets.

Cheers,
John
