Tim Ellison wrote:
Regis wrote:
I think real data is always better than mock data.
Mark Hindess wrote:
On 16 April 2008 at 18:54, Regis <[EMAIL PROTECTED]> wrote:
Ideally, I would use "real" schema data for testing. As we saw, the
data is big, 80k+ in serialized form; I can't imagine how many
hashmap.set() calls would be needed.
IMHO, if we use mock data, hashmap.set() is the best choice. Maybe
using text data is the best choice for me now.
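(To illustrate the trade-off being discussed: a minimal sketch, with hypothetical key/value names, of building small mock schema data in code rather than loading a large serialized resource file. Note that Java's HashMap API actually uses put(), not set().)

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a hand-built mock schema table replacing an
// 80k+ serialized resource file. A handful of entries is usually
// enough to exercise lookup, overwrite, and absent-key paths.
public class MockSchemaData {
    static Map<String, String> buildMockSchema() {
        Map<String, String> schema = new HashMap<String, String>();
        schema.put("TABLE_NAME", "VARCHAR");   // illustrative entries only
        schema.put("COLUMN_COUNT", "INTEGER");
        schema.put("IS_NULLABLE", "BOOLEAN");
        return schema;
    }

    public static void main(String[] args) {
        Map<String, String> schema = buildMockSchema();
        System.out.println(schema.size());            // 3
        System.out.println(schema.get("TABLE_NAME")); // VARCHAR
    }
}
```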
You say "ideally", but I don't really see the benefit of "real" schema
data over a set of mock data constructed to have the same test coverage.
Is 80k+ really needed to achieve test coverage?
No, we could use very small mock data to achieve test coverage. But
achieving test coverage is not the only purpose of writing unit
tests, right? If the data file is too big for the unit test, I could
use mock data and rewrite the corresponding tests as scenario tests
added to the BTI.
Mark will answer for himself of course, but I think the question is:
How are the tests better for using 80k of data compared to, say, using
1k? Is the volume useful? I can believe it might be.
Regards,
Tim
Real data and mock data are both OK for me. And I have created a new patch
on JIRA which uses mock data and removes the big resource file, so the
test case seems clearer and more readable now.
Best Regards,
Regis.