I think real data is always better than mock data.

Mark Hindess wrote:
On 16 April 2008 at 18:54, Regis <[EMAIL PROTECTED]> wrote:
Ideally, I would use "real" schema data for testing. As we saw, the data is big,
80k+ in serialized form; I can't imagine how many hashmap.set() calls would be
needed. IMHO, if we use mock data, hashmap.set() is the best choice. Maybe using
text data is the best choice for me now.

You say "ideally", but I don't really see the benefit of "real" schema
data over a set of mock data constructed to have the same test coverage.
Is 80k+ really needed to achieve test coverage?
No, we could use very small mock data to achieve test coverage. But achieving test coverage is not the only purpose of writing unit tests, right? If the data file is too big for the unit tests, I could use mock data there and rewrite the corresponding tests as scenario tests added
to bti.

-Mark.

Tony Wu wrote:
What we actually need is some hashmaps. Why can't we create them in the
Java source code, e.g. in a setUp method, through several (or a lot of)
hashmap.set() calls? What is your concern with this approach?
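A minimal sketch of what that setUp-style construction might look like (the class and entry names here are hypothetical illustrations, not the actual Harmony test code):

```java
import java.util.HashMap;

// Hypothetical sketch: build a small mock schema table by hand, mirroring
// the hashmap-of-hashmaps shape described later in this thread. Each outer
// key is a schema category; each inner map holds one definition's fields.
public class MockSchemaTable {
    public static HashMap<String, HashMap<String, String>> build() {
        HashMap<String, HashMap<String, String>> schemaTable = new HashMap<>();

        HashMap<String, String> attr = new HashMap<>();
        attr.put("NAME", "cn");                               // assumed sample attribute
        attr.put("SYNTAX", "1.3.6.1.4.1.1466.115.121.1.15");  // Directory String syntax OID
        schemaTable.put("attributetypes", attr);

        return schemaTable;
    }
}
```

For real coverage you would repeat the put() calls for each schema entry a test needs, which is exactly the volume concern raised above.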

On 4/16/08, Regis <[EMAIL PROTECTED]> wrote:
I agree that the large binary file is hard to modify and maintain. Maybe we
have another choice for constructing the schemaTable: using the raw text data
returned from the server. We need to parse the raw text data first, and then
construct the hashmap. The data is in text format, so it's easy to understand
and modify.
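As a rough illustration of that parsing step (assuming a simple "key: value" line format; the real LDAP schema grammar is richer, and these names are hypothetical):

```java
import java.util.HashMap;

// Hypothetical sketch: turn raw "key: value" schema text into a hashmap.
// Lines without a ':' separator are skipped.
public class SchemaTextParser {
    public static HashMap<String, String> parse(String raw) {
        HashMap<String, String> table = new HashMap<>();
        for (String line : raw.split("\n")) {
            int sep = line.indexOf(':');
            if (sep > 0) {
                table.put(line.substring(0, sep).trim(),
                          line.substring(sep + 1).trim());
            }
        }
        return table;
    }
}
```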

What concerns me about this approach is that, for a unit test, it seems a bit
complex and involves the schema-parsing process, which should be tested
separately.

Best Regards,
Regis.


Tim Ellison wrote:
Regis Xu (JIRA) wrote:

Let me explain.

The file contains the data necessary to construct an LdapSchemaContextImpl
instance for testing.
LdapSchemaContextImpl is constructed using schema data queried from the LDAP
server; it then caches all the schema data in "schemaTable", which makes
searching fast.
It's best if we could test LdapSchemaContextImpl with "real data"
returned from an LDAP server. So we serialize the "real" schemaTable to the
file and deserialize it when testing; then we get a "real"
LdapSchemaContextImpl without connecting to a server.
For 1), we could construct the table from Java code, but the schema data is
too large; I don't think it's reasonable to put it in a Java file.
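The deserialization side of that approach could be as small as the sketch below (the class name, file path, and raw Hashtable cast are assumptions for illustration):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.util.Hashtable;

// Hypothetical sketch: load a serialized schema table from disk so tests
// need no live LDAP server. The cast assumes the file holds a Hashtable.
public class SchemaLoader {
    @SuppressWarnings("unchecked")
    public static Hashtable<Object, Object> load(String path)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                new ObjectInputStream(new FileInputStream(path))) {
            return (Hashtable<Object, Object>) in.readObject();
        }
    }
}
```

The trade-off Tim raises below still applies: the .ser file is opaque, so anyone who wants different test data must regenerate it rather than edit it.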

Maybe we could remove .ser from the file name, which causes confusion with
testing serialization capabilities.
I hope I have explained myself clearly.

Yes, it is a clear explanation Regis, but I have a concern that this
serialized file is a large binary blob that is hard for people to modify if
they choose to change the tests.
I realize that we don't want to require an LDAP server to test against,
and your idea of capturing the "real data" as a serialized data object
makes sense.  But what if there is a bug or enhancement that somebody else
needs to make, and they don't have your LDAP server available to re-create
the "real data"?  I think it will be a maintenance problem.
Maybe I'm being too unreasonable, and right now I can't think of a
different way to do it (other than writing the Java source code to create the
"real data" hashmap of hashmaps I saw when I deserialized the blob).
Regards,
Tim


Best Regards,
Regis.