On Mar 19, 2008, at 1:03 PM, David Chelimsky wrote:

> On Wed, Mar 19, 2008 at 10:42 AM, Glenn Ford <[EMAIL PROTECTED]>  
> wrote:
>>>> [Big Snip]
>>
>>
>>>
>>> There are a few bad assumptions in your colleague's response, so to
>>> set the record straight:
>>>
>>> * test coverage and tests which use the interaction-based test
>>> approach are not mutually exclusive
>>> * you can have crappy tests which take the state-based approach and
>>> crappy tests which use an interaction-based approach
>>> * interaction-based testing is not merely limited to contrived
>>> examples on people's blogs; it is a real practice which adds value
>>> on lots of "real-world" projects
>>> * using factories to generate required objects in tests has several
>>> pros over the use of fixtures, and very very very few cons
>>>
>>> State-based testing and interaction-based testing both have their
>>> place. There are a number of reasons why they are both useful, but
>>> I'm going to pick two: object decomposition (and coordinators) and
>>> integration testing. Others have mentioned the value of writing
>>> tests with the interface you want, so I'm going to leave that out.
>>>
>>> As an application grows in features and complexity (business logic,
>>> of course) good developers will decompose the problem into a number
>>> of simple objects. Some of these objects are responsible for doing
>>> the work and others are responsible for coordinating other objects
>>> to do the work. Objects which are responsible for coordinating are
>>> great candidates for interaction-based testing, because you are
>>> concerned with the interaction, not the "state".
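>>>
>>> (A minimal sketch of what such a spec can look like; OrderProcessor
>>> and its collaborators are hypothetical names, not from any real
>>> project:)
>>>
>>>   # A coordinator: it does no work itself, it only delegates.
>>>   class OrderProcessor
>>>     def initialize(payment, shipping)
>>>       @payment, @shipping = payment, shipping
>>>     end
>>>
>>>     def process(order)
>>>       @shipping.dispatch(order) if @payment.charge(order)
>>>     end
>>>   end
>>>
>>>   describe OrderProcessor do
>>>     it "asks its collaborators to do the work" do
>>>       order    = mock("order")
>>>       payment  = mock("payment gateway")
>>>       shipping = mock("shipping service")
>>>       payment.should_receive(:charge).with(order).and_return(true)
>>>       shipping.should_receive(:dispatch).with(order)
>>>       OrderProcessor.new(payment, shipping).process(order)
>>>     end
>>>   end
>>>
>>> (The spec passes or fails based on which messages get sent, not on
>>> any resulting state.)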
>>>
>>> If you don't have integration tests then using an interaction-based
>>> testing approach is not worth it, because you need something that
>>> is going to test real objects working with real objects. In Rails
>>> you can write integration tests as Rails'
>>> ActionController::IntegrationTest tests, Rails' functional tests,
>>> RSpec stories, or RSpec controller specs with view isolation turned
>>> off.
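>>>
>>> (For that last option, rspec-rails provides the integrate_views
>>> directive, which renders the real templates instead of stubbing
>>> them out. A sketch, with StocksController as a made-up name:)
>>>
>>>   describe StocksController do
>>>     integrate_views  # render real templates, not stubs
>>>
>>>     it "renders the index through the real view" do
>>>       get :index
>>>       response.should be_success
>>>     end
>>>   end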
>>>
>>> IMO, one false benefit of only using a state-based approach when
>>> writing a full-fledged application is that every test is
>>> essentially an integration test at some level. You are always
>>> testing everything with everything that it touches. This can lead
>>> to one failure in one model making several other model tests fail,
>>> and it can make several controller tests fail (as well as tests for
>>> any other object which touches the failing model). I see this as a
>>> big negative because it makes it more difficult to pinpoint the
>>> issue. People will end up tracking it down, but it can be time
>>> consuming and frustrating.
>>>
>>> Now on the flip side people will complain that they renamed a model
>>> method and re-ran all of their tests and everything passed, but when
>>> running the application a bug exists. Doh, we forgot to update the
>>> controller that relied on calling that model method. It is normal to
>>> say/think, "well, that should have failed because the method doesn't
>>> exist on the model". (It sounds like David Chelimsky may have
>>> something in trunk to help with this.) The main problem here,
>>> though, is that no integration test failed to expose that you
>>> weren't done with your change.
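>>>
>>> (To make that failure mode concrete, a hypothetical sketch: a stub
>>> answers for any message, whether or not the real class still
>>> defines it.)
>>>
>>>   # Stock.find_by_symbol was renamed in the model, but this spec
>>>   # still passes, because the stub never checks that the stubbed
>>>   # method actually exists on Stock.
>>>   Stock.stub!(:find_by_symbol).and_return(mock_model(Stock))
>>>   get :show, :symbol => "AAPL"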
>>>
>>> Thinking back to coordinating objects, my controllers don't contain
>>> business logic, because they are application layer classes; they
>>> aren't a part of the domain of my software. They are only used by
>>> the application to allow the software to fulfill the requirements
>>> of my customer. Controllers are coordinators, not DOERS. They ask
>>> other objects to fulfill a business requirement for them, like
>>> moving stocks from one portfolio to another. So I use
>>> interaction-based testing here to ensure that my controller is
>>> finding a stock, finding a portfolio, and asking a portfolio
>>> manager to move the stock to the designated portfolio. I don't need
>>> to have those things written or even fully implemented to ensure my
>>> controller works as I expect. I should be able to see that my
>>> controller does what it should be doing, even if the pieces it will
>>> use to do the work in the application aren't finished. Now if those
>>> aren't implemented I should have an integration test which fails,
>>> showing me that the feature for moving stocks from one portfolio to
>>> another is not completed, but that isn't what I'm testing in my
>>> controller.
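>>>
>>> (A sketch of what that controller spec could look like; the action,
>>> route, and class names are assumptions for illustration, not code
>>> from a real project:)
>>>
>>>   describe PortfoliosController, "moving a stock" do
>>>     it "finds the records and delegates the move" do
>>>       stock     = mock_model(Stock)
>>>       portfolio = mock_model(Portfolio)
>>>       Stock.should_receive(:find).with("1").and_return(stock)
>>>       Portfolio.should_receive(:find).with("2").and_return(portfolio)
>>>       PortfolioManager.should_receive(:move).with(stock, portfolio)
>>>       put :move, :stock_id => "1", :portfolio_id => "2"
>>>     end
>>>   end
>>>
>>> (Note that none of Stock, Portfolio, or PortfolioManager needs to
>>> be fully implemented for this spec to pass.)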
>>>
>>> Also, after my controller works as expected I can go make sure the
>>> PortfolioManager works as expected, and then I can go down and make
>>> sure the Stock model does what I expect. When these objects are
>>> working correctly individually, I run my integration tests to
>>> ensure they work well together.
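>>>
>>> (Those lower-level specs are a natural fit for state-based testing.
>>> A hypothetical sketch, assuming portfolios have many stocks:)
>>>
>>>   describe PortfolioManager, "moving a stock" do
>>>     it "removes the stock from the source, adds it to the target" do
>>>       source = Portfolio.create!
>>>       target = Portfolio.create!
>>>       stock  = source.stocks.create!(:symbol => "AAPL")
>>>
>>>       PortfolioManager.move(stock, target)
>>>
>>>       source.reload.stocks.should_not include(stock)
>>>       target.reload.stocks.should include(stock)
>>>     end
>>>   end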
>>>
>>> Another drawback of only using state-based testing is that you
>>> always have to develop bottom up. You have to start with the low
>>> level components and work your way out. I used to write code this
>>> way. I think I have progressed beyond that, and now I write things
>>> in an Acceptance Test Driven Development style. I start by writing
>>> an integration test from the user's perspective proving that the
>>> feature doesn't work, then I move to the view, then to the
>>> controller, then to any manager/factory/presenter/service objects
>>> that are required, and then down to any domain level objects
>>> (models and non-models alike). You can't take this approach with
>>> state-based testing only. There is a lot of value to be gained by
>>> developing software this way.
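>>>
>>> (The integration test that kicks this off can be an RSpec
>>> plain-text story. A sketch, reusing the hypothetical portfolio
>>> feature from above; the step definitions are left out:)
>>>
>>>   Story: moving a stock between portfolios
>>>     As a trader
>>>     I want to move a stock from one portfolio to another
>>>     So that my holdings stay organized
>>>
>>>     Scenario: successful move
>>>       Given a stock "AAPL" in portfolio "Tech"
>>>       When I move "AAPL" to portfolio "Long Term"
>>>       Then portfolio "Tech" should not contain "AAPL"
>>>       And portfolio "Long Term" should contain "AAPL"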
>>>
>>> In short: interaction-based testing allows you to ensure that an
>>> object is doing what you expect, without the underlying
>>> implementation having to exist yet at all or in full. It is great
>>> for application layer objects which typically only coordinate
>>> domain layer objects, where the correct interaction is what is
>>> important. It also helps you develop interfaces, and it can scream
>>> loudly when you have an object doing way too much.
>>>
>>> * "Blaming "brittleness" of tests upon interaction-based testing  
>>> is a
>>> red herring. Both interaction-based tests and state-based tests  
>>> become
>>> brittle if they make assertions upon implementation details and  
>>> overly
>>> constrain the interfaces between modules." - Nat Pryce
>>>
>>> * http://nat.truemesh.com/archives/000342.html - a wonderful read on
>>> interaction-based vs state-based testing
>>>
>>> --
>>> Zach Dennis
>>> http://www.continuousthinking.com
>>
>>
>> A lot of what you say makes me wish I was more experienced in this
>> department :)  I am very new to this!  A part of me wishes I had the
>> knowledge to write in the order of story -> view spec -> controller
>> spec -> model spec.  However, most of the time (I emphasize MOST) I
>> don't have the foresight to do that.  The problem I'm trying to
>> solve is almost always too complicated for me to know right away
>> where to really start (my latest client has some crazy ideas).
>> Maybe the problem is that I make things too complicated for
>> myself :)  However, I have been a developer (just not using RSpec)
>> for a very long time, so I know fairly well how to recognize when
>> things need to be complicated and when they don't.  This means
>> .should_receive is often out of the question, because I have no
>> idea what the model should receive!
>>
>> My primary concern when writing my specs that are to cover
>> complicated features is that I do NOT want false confidence.  If I
>> write a spec, and it passes, I want that to mean it works in my app.
>> When the spec goes green, my next step is to go hit Refresh in my
>> browser.  If it doesn't work in my browser, then in my opinion, my
>> spec is crap.  It's telling me things work when they don't.
>
> Sounds like you're thinking of specs as application-level
> specifications. They *can* be, but that is not the intent. They are
> intended to be examples of how individual objects work in isolation.
> So I disagree that if the spec passes and the application fails that
> the spec is crap. It's just isolated.

I see your point here, very true.  If my usage of RSpec improves with  
better integration testing, I'm sure I'll be able to use them more  
appropriately in this manner.

>> I hear the concern being voiced that if you break one thing and 15
>> specs fail then you're not mocking enough.  Well, since this is BDD,
>> after all, we should be working closely to the current spec we're
>> trying to make pass.  I change code, and 15 specs break; well, I
>> have a good idea of what code got broken, because it's the top-most
>> file in my editor!  I hit save, Autotest screamed at me, I'm going
>> to go hit undo now.
>
> That's great that you can back out, but you now have a small problem
> to solve that has a bigger short-term impact than you want. Ideally,
> you'd be able to solve the small problems, one at a time, until the
> big problem is solved. The way you know you're solving small problems
> is the object-level examples pass. The way you know you're solving big
> problems is that the application-level examples (stories, or even
> manual-in-browser testing) pass.

I can see your point here, but I don't believe it disagrees with my
idea so much.  My approach still involves small specs passing,
building up to larger functionality in the end.  The limitation is
just that I start from the bottom level and work my way up.  I just
don't have the option of working in the other direction, which from a
design perspective can certainly be limiting, like you're pointing
out.  I think it's my own fault more than anything else that I can't
ever seem to plan in the other direction :)  It's typically not until
I find a solution at the model level that I understand what will be
coming back at the top level.  Experience will probably change this
so that I can see it more like you do.

>> Sometimes I make noob decisions and give model A a certain
>> responsibility when it should have been done by model B.  I get it
>> to work in the short term, my specs pass, but later I need to add
>> another feature and realize that old responsibility needs to be
>> moved from A to B.  Now I have a red spec and X green specs.  I move
>> that responsibility, X specs are green, with still the same 1 red
>> spec.  I implement the new feature, X+1 green specs.  I refresh in
>> my browser, sure enough, it all works.  I didn't have to go change
>> all of my stubs and should_receive's everywhere that just got moved.
>> There's no need to, because my specs cover the true business logic
>> behavior, and not the model-specific "behavior".
>
> Again - this is a matter of granularity. The whole point of having
> granular examples is to enable you to make changes to the system
> easily via refactoring. Sometimes refactoring requires moving examples
> around along with the implementation code. This is refactoring 101
> stuff, and an accepted part of the refactoring process in all of my
> experience prior to working in Ruby. It's only people in the Ruby
> community that I see expressing this concern. I think it's because the
> refactoring tools for Java and C# are superior, so they automate a lot
> of the granular steps you need to take when refactoring manually.
>
> The problem with decreasing the granularity is that it makes fault
> isolation more difficult. It means less work right now for lots more
> work down the road.

Perhaps my example simplified my problem too much.  The chore that I
was referring to unfortunately wasn't about refactoring.  I wish it
had been that easy!  Instead it was really the behavior that got
transferred, but the logic and code were quite different.  The goal
was to ensure the end result was still the same; that way I would
know I hadn't broken anything but still had the new structure that I
needed.  Without my specs covering the resulting state I would have
had no guidance to help me here.

>> While I do certainly believe the ability to spread everything out
>> well enough so that 1 error = 1 broken spec comes from great wisdom
>> and experience, I certainly don't have it, and I don't want to
>> encourage others like me to try to strive for that, because I don't
>> know how to teach them from my own example.  What I do know is that
>> I use a lot of real models, and I don't spend any time fixing specs
>> that are broken by working code.  I did that on my last project
>> and, in my opinion, it wasn't worth it.  I'd change a variable
>> assignment plus a .save into a .update_attribute call, and then I
>> had a broken spec.
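>>
>> (A hypothetical example of the kind of breakage I mean; the names
>> are made up.  A message expectation pins the spec to one
>> implementation, while a state-based check survives the change:)
>>
>>   # brittle: fails the moment save is swapped for update_attribute
>>   user.should_receive(:save)
>>   put :update, :id => user.id, :user => { :admin => true }
>>
>>   # resilient: only the outcome is asserted
>>   put :update, :id => user.id, :user => { :admin => true }
>>   user.reload.admin.should be_true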
>>
>> My fear is that I'll write green specs, pat myself on the back, and
>> then my company loses money because the site has bugs that my specs
>> show green for because I don't have enough integration tests or
>> whatever.
>
> You throw that out like integration testing is an afterthought and
> specs are king. In my view, they are equal partners.

Chalk that one up to ignorance, I suppose; I don't have much good
integration testing, and I don't even know much about good ways to do
it.  Even with integration testing, however, I'd still have my
original spec broken and in need of repair when I don't believe it
should need fixing.

>> But I don't want to have to double my tests for the same
>> amount of coverage.
>
> You just suggested that you fear that your green specs don't provide
> enough coverage, but that the addition of integration testing would
> provide sufficient additional coverage. How is that "the same amount
> of coverage"?

What I meant was that if I wrote them with fewer real models and more
mocks, I would feel I had less coverage, and that the only way to
match coverage without real models would be to up the number of tests
(aka lots more integration testing).  If I can just write my specs in
a way that's easier to write anyway, why should I feel required to
write additional tests on top of that to re-verify that things work?

>> I should have 1 spec for 1 feature, and when that feature is
>> working, that spec is green, plain and simple.  I admit I may be
>> ignorant of a lot of the power behind RSpec, but I like this level
>> of simplicity and straightforwardness.
>
> This is not really about RSpec. It is about an approach to using tests
> to drive out the behaviour of your application, and then help you to
> maintain it by simply living on as regression tests.
>
> I appreciate the motivation to do less work now. Especially with
> Rails, which makes it so damned easy to prototype something that the
> extra effort of automated testing seems painful. All I can say is that
> I've consistently had a much easier time of maintaining an application
> over time when I've been disciplined about both high and low level
> testing. If you find that you can live with only one level of testing,
> then more power to you.

I don't mean to make it sound so much like a short-term laziness  
thing.  Maybe it comes out that way!  I'm certainly more encouraged  
now to better learn about the high-level testing options.  Thank you  
for the time you spent offering your feedback.  I hope you can also  
find value in the perspectives of those who come in struggling for  
direction.  Even as hard as I try to read and research everything  
(while still managing to find development time in there) it's often  
difficult to see what all is available/possible/best.

Thank you!
Glenn

> FWIW,
> David
>
>>
>> Glenn Ford
>> http://www.glennfu.com
>>
>>

_______________________________________________
rspec-users mailing list
rspec-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/rspec-users
