Hi Adam,

Great feedback. Only one comment is that a mixin could potentially depend on
an environmental interface (instance variables or methods) that exists only
in some of the classes it is mixed into. I think it'd be valid to ask
whether that is a good design or not, but I could imagine use cases where a
mixin would work correctly in some objects and not others.
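
For instance (a hypothetical sketch; the file, variable and query names are
made up), a save() method mixed in via cfinclude only works where the host
component provides the environment it expects:

<!--- save.cfm: a mixin that assumes the host sets variables.dsn in init() --->
<cffunction name="save" returntype="void" output="false">
    <cfargument name="id" type="numeric" required="true">
    <!--- throws an undefined variable error if the host never set variables.dsn --->
    <cfquery datasource="#variables.dsn#">
        UPDATE users SET active = 1
        WHERE id = <cfqueryparam value="#arguments.id#" cfsqltype="cf_sql_integer">
    </cfquery>
</cffunction>

If UserService.cfc sets variables.dsn in its init() and ProductService.cfc
doesn't, the identical mixed-in method works in the first and blows up at
runtime in the second.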

Definitely interesting stuff.

Best Wishes,
Peter


On 10/30/07 8:16 PM, "Adam Haskell" <[EMAIL PROTECTED]> wrote:

> First to your earlier post about CF runtime in CFE:
> FusionDebug correlates execution of the bytecode back to CFM lines, so code
> coverage should be possible without needing a CF runtime. That being said, I
> wouldn't honestly have a clue where to start...
> 
> Now on to the fun stuff... You bring up some really valid points, Peter. On
> mixins: shouldn't a mixin take in data and return data regardless of the
> object it is used in? It should unit test the same, no? In theory you should be
> able to create a mock object for the mixin to unit test it. Now when you look
> at mixins from an integration level (also very valid tests with xUnit) code
> coverage probably would be much less meaningful but the use of the Profiler in
> CF8 might be able to lead to dynamic language metrics like Mixin Coverage.
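> 
> Something like this rough sketch (component and method names are made up) is
> what I have in mind - inject the mixin into a bare stub so the test only sees
> data in, data out:
> 
> <!--- priceMixin.cfm: the mixin under test; its only environmental
>       dependency is variables.taxRate --->
> <cffunction name="totalWithTax" returntype="numeric" output="false">
>     <cfargument name="amount" type="numeric" required="true">
>     <cfreturn arguments.amount * (1 + variables.taxRate)>
> </cffunction>
> 
> <!--- in the test: TestStub.cfc does nothing but set variables.taxRate = 0.05
>       in its init(), standing in for any real service --->
> <cfset stub = createObject("component", "TestStub").init()>
> <cfinclude template="priceMixin.cfm">
> <cfset stub.totalWithTax = totalWithTax>
> <cfif stub.totalWithTax(100) NEQ 105>
>     <cfthrow message="totalWithTax broken: expected 105">
> </cfif>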
> 
> As for "code coverage doesn't mean everything" (I paraphrased ;), I
> completely agree; it's a starting point though. With some Java tools you can
> actually get insight into complexity and see how well exercised your code is
> in those areas. The more complex your code is the more exercise it needs. This
> can also help in pointing out potential spots in need of refactoring. Code
> coverage is subjective and it is often up to the developer to help a manager
> understand an acceptable level of code coverage. In some applications code
> coverage at 70% may be fine; in others 95% is good. It's a matter of finding
> the happy spot with diminishing returns.
> 
> Beyond unit test code coverage I honestly think looking at code coverage at
> different levels of testing is also important. Look at separate code coverage
> stats when running, say, UA (user acceptance) tests. This gives you insight into how much
> of your code is covered in actual application usage, thus giving better
> confidence in mixin/injected code.
> 
> Adam Haskell
> 
> 
> On 10/30/07, Peter Bell <[EMAIL PROTECTED]> wrote:
>> 
>>> the question I'd like to ask then is how do you know if you've written
>>> enough unit tests?
>> 
>> Errr, my eyes are bleeding and my fingers are sore?
>> 
>> :->
>> 
>> More seriously, code coverage is cool but doesn't solve the problem either.
>> It lets you know which lines are exercised, but the more meta-programming,
>> dynamic generation, mixins and the like you do, the less meaningful those
>> metrics are. Do you count code coverage of the classes that you gen, write
>> to disk and then include at runtime? What about coverage of object based
>> mixins or class based mixins on a per class basis? Maybe you mixin a method
>> that's tested in UserService but that would break within the ProductService
>> class...
>> 
>> Also, it only checks that you exercise the code - not that you fully
>> exercise it. You may run a method, passing it a positive integer, so you've
>> covered the code. But what about 0's, negative integers, invalid types or
>> even passing objects in there? What about fractions? All could break
>> "covered code" (I'm assuming we DON'T have control flow code for these -
>> ironically, the better your code is in terms of having special code to catch
>> exceptions, the more likely that code coverage tools will show your tests
>> aren't exercising the code for the special cases - fi you don't HAVE code
>> for the exceptions, you'll pass the code coverage and not know there is a
>> problem until your site crashes when someone says that they have -2
>> dependent children.
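>> 
>> To make that concrete (hypothetical component and method - the point is the
>> calls, not the names), one happy-path call is all a line coverage tool asks
>> for:
>> 
>> <!--- this single call marks every line of setDependentChildren() as covered --->
>> <cfset svc = createObject("component", "TaxService").init()>
>> <cfset svc.setDependentChildren(2)>
>> 
>> <!--- none of these are needed to hit 100% line coverage, yet any of them
>>       could be the call that takes the site down --->
>> <cfset svc.setDependentChildren(0)>      <!--- boundary --->
>> <cfset svc.setDependentChildren(-2)>     <!--- should be rejected --->
>> <cfset svc.setDependentChildren(1.5)>    <!--- fraction --->
>> <cfset svc.setDependentChildren("two")>  <!--- wrong type --->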
>> 
>> The best answer to this flows from the requirements. You come up with a
>> collection of use cases, for each method that could be called by each
>> screen, you come up with a list of possible inputs/assertions and expected
>> outputs. There is no practical way to prove correctness of an app (unless
>> you program in Z or something), but going through each method in each use
>> case is a good starting point and will probably give you better coverage
>> than a simple code coverage tool.
>> 
>> Not to say code coverage isn't useful, but it isn't the whole story, and I
>> wonder if it isn't a bit of a red herring in a dynamic language like CF.
>> Thoughts?
>> 
>> Best Wishes,
>> Peter
>> 
>> 
>> 
>> On 10/30/07 5:28 PM, "Barry Beattie" <[EMAIL PROTECTED]> wrote:
>> 
>>>> It seems like this question comes up more and more often.
>>>
>>> that's because some ppl have seen it put to good use in the Java world
>>> (inc me) and find the case for their use worthwhile, if not
>>> compelling.
>>>
>>> the question I'd like to ask then is how do you know if you've written
>>> enough unit tests?


