Re: [classlib] Testing conventions - a proposal

2006-07-26 Thread George Harley

Alexei Zakharov wrote:

Hi George,

Sorry for the late reply.



Hi Alexei,

Not a problem. Especially when my reply to you is even later (sorry).



It looks like you are using an "os.any" group for those test methods
(the majority) which may be run anywhere. That's a different approach to
what I have been doing. I have been thinking more along the lines of
avoiding the creation of groups that cover the majority of tests and
trying to focus on groups that identify the edge cases:
platform-specific, temporarily broken, temporarily broken on platform
"os.blah", etc. This means my tests that can be run anywhere and are
public API tests (as opposed to being specific to the Harmony
implementation) are just annotated with @Test. I guess that the
equivalent in your scheme would be annotated as @Test(groups =
{"os.any", "type.api"}) ?


Well, in general I like the idea of having a standalone @Test that
denotes something, for example "os.any" and "type.api". The simpler the
better, as you have already said. But at the time I wrote my
previous message I didn't know how to implement this idea technically.
TestNG just filters away all tests that don't have a group attribute
if a group include filter is specified. It seems Richard had the
same problem.


Right. That was the point when the BeanShell option began to look good 
to me.




This is why the "os.any" group has appeared in my
script. After a few experiments I've realized we can avoid using the
include filter and use only "excludedGroups" instead. In that way, we
may modify the above script:

<condition property="not_my_platform1" value="os.win.IA32">
    <not><os family="Windows"/></not>
</condition>
<condition property="not_my_platform2" value="os.linux.IA32">
    <not><and>
        <os name="linux"/>
        <os family="unix"/>
    </and></not>
</condition>
<condition property="not_my_platform3" value="os.mac">
    <not><os family="mac"/></not>
</condition>

<property name="not_my_platform1" value=""/>
<property name="not_my_platform2" value=""/>
<property name="not_my_platform3" value=""/>
<property name="not_my_platforms"
          value="${not_my_platform1},${not_my_platform2},${not_my_platform3}"/>

<target name="run" description="Run tests">
    <taskdef name="testng" classname="org.testng.TestNGAntTask"
             classpath="${jdk15.testng.jar}"/>
    <testng classpathref="run.cp"
            outputdir="${testng.report.dir}"
            excludedGroups="state.broken.*,${not_my_platforms}"
            enableAssert="false"
            jvm="${test.jvm}">
        <classfileset dir="." includes="**/*.class"/>
    </testng>
</target>

All tests marked with a simple @Test will be included in the test run.
However, this script is IMHO less elegant than the first one, and
@Test(groups="os.any") is probably more self-explanatory than a
simple @Test. But we will save time and reduce the size of the
resulting source code by using the simple @Test.
Any thoughts?

Regards,



I spent some time a few days ago investigating your earlier idea that 
used the "os.any" group and really liked the simplicity it brought to 
the Ant script, as well as the fact that it removes the need for a TestNG 
XML file to define the tests. As you say, the exclude-only approach set 
out in your more recent post is not as elegant. My vote would be for 
your first approach, using the "os.any" group.


While I personally don't have a hang-up about delegating what gets tested 
to a separate artefact like a testng.xml file, it is one more file 
format to learn and (if BeanShell gets used inside it) one more language 
required. Your "os.any" approach keeps the whole test narrative firmly 
within the Ant file, which is more familiar to us all and so that bit 
easier to maintain. I'm not completely off the idea of using a 
testng.xml file, but I think its introduction should be held off until we 
*really* need it.


Best regards,
George



2006/7/20, George Harley [EMAIL PROTECTED]:

Alexei Zakharov wrote:
 George,

 I remember my past experience with BeanShell - I was trying to create
 a custom BeanShell task for Ant 1.6.1. I can't say I didn't
 succeed, but I remember it as a rather unpleasant experience. At
 that time BeanShell appeared to me to be a not very well tested
 framework. Please don't throw rocks at me now, I am just talking about
 my old impressions. Probably BeanShell has become better since then.


Hi Alexei,

No rocks. I promise :-)


 But... Do we really need BS here? Why can't we manage everything from
 build.xml without extra testng.xml files? I mean something like this:

 <!-- determines the OS -->
 <condition property="platform" value="win.IA32">
     <os family="Windows"/>
 </condition>
 <condition property="platform" value="linux.IA32">
     <and>
         <os name="linux"/>
         <os family="unix"/>
     </and>
 </condition>

 <property name="groups.included" value="os.any, os.${platform}"/>
 <property name="groups.excluded"
           value="state.broken, state.broken.${platform}"/>

 <target name="run" description="Run tests">
     <taskdef name="testng" classname="org.testng.TestNGAntTask"
              classpath="${jdk15.testng.jar}"/>
     <testng classpathref="run.cp"
             outputdir="${testng.report.dir}"
             groups="${groups.included}"
             excludedGroups="${groups.excluded}">
         <classfileset dir="." includes="**/*.class"/>
     </testng>
 </target>

 Does this make sense?

 

Re: [classlib] Testing conventions - a proposal

2006-07-26 Thread Paulex Yang
FYI, I haven't studied it yet, but it seems the new TestNG 5 supports an Ant 
task with a JVM parameter [1].


[1] http://www.theserverside.com/news/thread.tss?thread_id=41479
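
If so, pointing the task at the Harmony launcher should just be a matter of 
adding the "jvm" attribute to the kind of invocation already sketched in this 
thread. A rough sketch (the ${testng-5.0.jar} property name is an assumption; 
the other property names are reused from the fragments further down):

<taskdef name="testng" classname="org.testng.TestNGAntTask"
         classpath="${testng-5.0.jar}"/>
<testng classpathref="run.cp"
        outputdir="${testng.report.dir}"
        groups="${groups.included}"
        excludedGroups="${groups.excluded}"
        jvm="${test.jre.home}/bin/java">
    <classfileset dir="." includes="**/*.class"/>
</testng>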

Richard Liang wrote:

Just thinking about using TestNG to execute Harmony test cases.  :-)

Look at our build.xml files (e.g., modules/luni/build.xml) and you will see 
something like:

..
<junit fork="yes"
       forkmode="once"
       printsummary="withOutAndErr"
       errorproperty="test.errors"
       failureproperty="test.failures"
       showoutput="on"
       dir="${hy.luni.bin.test}"
       jvm="${test.jre.home}/bin/java">

    <jvmarg value="-showversion" />
..

My question is that the TestNG Ant task <testng> does not support the 
"fork" and "jvm" attributes, so how do we run our tests under the Harmony VM? 
Thanks a lot.
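
One interim workaround, suggested further down the thread, is Ant's <java> 
task, which does accept a "jvm" attribute when forking. A rough sketch (the 
-d option and the trailing suite-file argument follow TestNG's command-line 
usage; the testng.xml file name here is only illustrative):

<java classname="org.testng.TestNG"
      classpathref="run.cp"
      fork="yes"
      jvm="${test.jre.home}/bin/java"
      failonerror="false">
    <jvmarg value="-showversion"/>
    <arg value="-d"/>
    <arg value="${testng.report.dir}"/>
    <arg value="testng.xml"/>
</java>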


Best regards,
Richard

Alexei Zakharov wrote:

Hmm, do we have problems with launching ant? I thought we have
problems with launching TestNG. Just checked - running tests for beans
on j9+fresh classlib works fine. I.e.
ant -Dbuild.module=beans
-Dbuild.compiler=org.eclipse.jdt.core.JDTCompilerAdapter test

2006/7/19, Richard Liang [EMAIL PROTECTED]:

According to TestNG Ant Task [1], it seems that the TestNG Ant task
does not support forking a new JVM; that is, we must launch Ant using
Harmony itself. Any comments? Thanks a lot.

[1]http://testng.org/doc/ant.html

Best regards,
Richard

George Harley wrote:
 Andrew Zhang wrote:
 On 7/18/06, George Harley [EMAIL PROTECTED] wrote:

 Oliver Deakin wrote:
  George Harley wrote:
  SNIP!
 
  Here the annotation on MyTestClass applies to all of its test
 methods.
 
  So what are the well-known TestNG groups that we could define 
for

 use
  inside Harmony ? Here are some of my initial thoughts:
 
 
  * type.impl  --  tests that are specific to Harmony
 
  So tests are implicitly API unless specified otherwise?
 
  I'm slightly confused by your definition of impl tests as 
tests that

 are
  specific to Harmony. Does this mean that impl tests are only
  those that test classes in org.apache.harmony packages?
  I thought that impl was our way of saying tests that need to 
go on

  the bootclasspath.
 
  I think I just need a little clarification...
 

 Hi Oliver,

 I was using the definition of implementation-specific tests that we
 currently have on the Harmony testing conventions web page. That 
is,

 implementation-specific tests are those that are dependent on some
 aspect of the Harmony implementation and would therefore not 
pass when
 run against the RI or other conforming implementations. It's 
orthogonal

 to the classpath/bootclasspath issue.


  * state.broken.platform id  --  tests bust on a specific 
platform

 
  * state.broken  --  tests broken on every platform but we 
want to

  decide whether or not to run from our suite configuration
 
  * os.platform id  --  tests that are to be run only on the
  specified platform (a test could be member of more than one of
 these)
 
  And the defaults for these are an unbroken state and runs on any
  platform.
  That makes sense...
 
  Will the platform ids be organised in a similar way to the
 platform ids
  we've discussed before for organisation of native code [1]?
 

 The actual string used to identify a particular platform can be
 whatever
 we want it to be, just so long as we are consistent. So, yes, 
the ids
 mentioned in the referenced email would seem a good starting 
point. Do

 we need to include a 32-bit/64-bit identifier ?


  So all tests are, by default, in an all-platforms (or shared) 
group.

  If a test fails on all Windows platforms, it is marked with
  state.broken.windows.
  If a test fails on Windows but only on, say, amd hardware,
  it is marked state.broken.windows.amd.
 

 Yes. Agreed.


  Then when you come to run tests on your windows amd machine,
  you want to include all tests in the all-platform (shared) group,
  os.windows and os.windows.amd, and exclude all tests in
  the state.broken, state.broken.windows and 
state.broken.windows.amd

  groups.
 
  Does this tally with what you were thinking?
 

 Yes, that is the idea.


 
 
  What does everyone else think ? Does such a scheme sound
 reasonable ?
 
  I think so - it seems to cover our current requirements. 
Thanks for

  coming up with this!
 

 Thanks, but I don't see it as final yet really. It would be 
great to

 prove the worth of this by doing a trial on one of the existing
 modules,
 ideally something that contains tests that are platform-specific.


 Hello George, how about doing a trial on NIO module?

 So far as I know, there are several platform dependent tests in NIO
 module.
 :)

 The assert statements are commented out in these tests, with FIXME
 mark.

 Furthurmore, I also find some platform dependent behaviours of
 FileChannel.
 If TestNG is applied on NIO, I will supplement new tests for
 FileChannel and
 fix the bug of source code.

 What's your opnion? Any 

Re: [classlib] Testing conventions - a proposal

2006-07-25 Thread Stepan Mishura

On 7/20/06, George Harley  wrote:


SNIP!
Anyway, the point I guess that I am trying to make here is that it is
possible in TestNG to select the methods to test dynamically using a
little bit of scripting that (a) gives us a lot more power than the
include/exclude technique and (b) will work the same across every
platform we test on. Because BeanShell allows us to instantiate and use
Java objects of any type on the classpath then the possibility of using
more than just group membership to decide on tests to run becomes
available to us. Please refer to the TestNG documentation for more on
the capabilities of BeanShell and the TestNG API. I had never heard of
it before, never mind used it, but still managed to get stuff working in a
relatively short space of time.

I hope this helps. Maybe I need to write a page on the wiki or something ?



Hi George,

It would be great to have your proposal for using TestNG on the web site, like 
we have for the testing conventions [1].

Thanks,
Stepan.

[1]
http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html

Best regards,

George



 Best regards,
 George



 Thanks for reading this far.

 Best regards,
 George



 George Harley wrote:
 Hi,

 Just seen Tim's note on test support classes and it really caught
 my attention as I have been mulling over this issue for a little
 while now. I think that it is a good time for us to return to the
 topic of class library test layouts.

 The current proposal [1] sets out to segment our different types
 of test by placing them in different file locations. After looking
 at the recent changes to the LUNI module tests (where the layout
 guidelines were applied) I have a real concern that there are
 serious problems with this approach. We have started down a track
 of just continually growing the number of test source folders as
 new categories of test are identified and IMHO that is going to
 bring complexity and maintenance issues with these tests.

 Consider the dimensions of tests that we have ...

 API
 Harmony-specific
 Platform-specific
 Run on classpath
 Run on bootclasspath
 Behaves different between Harmony and RI
 Stress
 ...and so on...


 If you weigh up all of the different possible permutations and
 then consider that the above list is highly likely to be extended
 as things progress it is obvious that we are eventually heading
 for large amounts of related test code scattered or possibly
 duplicated across numerous hard wired source directories. How
 maintainable is that going to be ?

 If we want to run different tests in different configurations then
 IMHO we need to be thinking a whole lot smarter. We need to be
 thinking about keeping tests for specific areas of functionality
 together (thus easing maintenance); we need something quick and
 simple to re-configure if necessary (pushing whole directories of
 files around the place does not seem a particularly lightweight
 approach); and something that is not going to potentially mess up
 contributed patches when the file they patch is found to have been
 recently pushed from source folder A to B.

 To connect into another recent thread, there have been some posts
 lately about handling some test methods that fail on Harmony and
 have meant that entire test case classes have been excluded from
 our test runs. I have also been noticing some API test methods
 that pass fine on Harmony but fail when run against the RI. Are
 the different behaviours down to errors in the Harmony
 implementation ? An error in the RI implementation ? A bug in the
 RI Javadoc ? Only after some investigation has been carried out do
 we know for sure. That takes time. What do we do with the test
 methods in the meantime ? Do we push them round the file system
 into yet another new source folder ? IMHO we need a testing
 strategy that enables such problem methods to be tracked easily
 without disruption to the rest of the other tests.

 A couple of weeks ago I mentioned that the TestNG framework [2]
 seemed like a reasonably good way of allowing us to both group
 together different kinds of tests and permit the exclusion of
 individual tests/groups of tests [3]. I would like to strongly
 propose that we consider using TestNG as a means of providing the
 different test configurations required by Harmony. Using a
 combination of annotations and XML to capture the kinds of
 sophisticated test configurations that people need, and that
 allows us to specify down to the individual method, has got to be
 more scalable and flexible than where we are headed now.

 Thanks for reading this far.

 Best regards,
 George


 [1]

http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html

 [2] http://testng.org
 [3]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL
 PROTECTED]






--
Thanks,
Stepan Mishura
Intel Middleware Products Division


Re: [classlib] Testing conventions - a proposal

2006-07-24 Thread Alexei Zakharov

Hi George,

Sorry for the late reply.


It looks like you are using an "os.any" group for those test methods
(the majority) which may be run anywhere. That's a different approach to
what I have been doing. I have been thinking more along the lines of
avoiding the creation of groups that cover the majority of tests and
trying to focus on groups that identify the edge cases:
platform-specific, temporarily broken, temporarily broken on platform
"os.blah", etc. This means my tests that can be run anywhere and are
public API tests (as opposed to being specific to the Harmony
implementation) are just annotated with @Test. I guess that the
equivalent in your scheme would be annotated as @Test(groups =
{"os.any", "type.api"}) ?


Well, in general I like the idea of having a standalone @Test that
denotes something, for example "os.any" and "type.api". The simpler the
better, as you have already said. But at the time I wrote my
previous message I didn't know how to implement this idea technically.
TestNG just filters away all tests that don't have a group attribute
if a group include filter is specified. It seems Richard had the
same problem. This is why the "os.any" group has appeared in my
script. After a few experiments I've realized we can avoid using the
include filter and use only "excludedGroups" instead. In that way, we
may modify the above script:

 <condition property="not_my_platform1" value="os.win.IA32">
     <not><os family="Windows"/></not>
 </condition>
 <condition property="not_my_platform2" value="os.linux.IA32">
     <not><and>
         <os name="linux"/>
         <os family="unix"/>
     </and></not>
 </condition>
 <condition property="not_my_platform3" value="os.mac">
     <not><os family="mac"/></not>
 </condition>

 <property name="not_my_platform1" value=""/>
 <property name="not_my_platform2" value=""/>
 <property name="not_my_platform3" value=""/>
 <property name="not_my_platforms"
           value="${not_my_platform1},${not_my_platform2},${not_my_platform3}"/>

 <target name="run" description="Run tests">
     <taskdef name="testng" classname="org.testng.TestNGAntTask"
              classpath="${jdk15.testng.jar}"/>
     <testng classpathref="run.cp"
             outputdir="${testng.report.dir}"
             excludedGroups="state.broken.*,${not_my_platforms}"
             enableAssert="false"
             jvm="${test.jvm}">
         <classfileset dir="." includes="**/*.class"/>
     </testng>
 </target>

All tests marked with a simple @Test will be included in the test run.
However, this script is IMHO less elegant than the first one, and
@Test(groups="os.any") is probably more self-explanatory than a
simple @Test. But we will save time and reduce the size of the
resulting source code by using the simple @Test.
Any thoughts?

Regards,


2006/7/20, George Harley [EMAIL PROTECTED]:

Alexei Zakharov wrote:
 George,

 I remember my past experience with BeanShell - I was trying to create
 a custom BeanShell task for Ant 1.6.1. I can't say I didn't
 succeed, but I remember it as a rather unpleasant experience. At
 that time BeanShell appeared to me to be a not very well tested
 framework. Please don't throw rocks at me now, I am just talking about
 my old impressions. Probably BeanShell has become better since then.


Hi Alexei,

No rocks. I promise :-)


 But... Do we really need BS here? Why can't we manage everything from
 build.xml without extra testng.xml files? I mean something like this:

 <!-- determines the OS -->
 <condition property="platform" value="win.IA32">
     <os family="Windows"/>
 </condition>
 <condition property="platform" value="linux.IA32">
     <and>
         <os name="linux"/>
         <os family="unix"/>
     </and>
 </condition>

 <property name="groups.included" value="os.any, os.${platform}"/>
 <property name="groups.excluded"
           value="state.broken, state.broken.${platform}"/>

 <target name="run" description="Run tests">
     <taskdef name="testng" classname="org.testng.TestNGAntTask"
              classpath="${jdk15.testng.jar}"/>
     <testng classpathref="run.cp"
             outputdir="${testng.report.dir}"
             groups="${groups.included}"
             excludedGroups="${groups.excluded}">
         <classfileset dir="." includes="**/*.class"/>
     </testng>
 </target>

 Does this make sense?

 Thanks,

Yes, that makes sense and if it gives the degree of control that we need
then I would be all for it. The simpler the better.

It looks like you are using an "os.any" group for those test methods
(the majority) which may be run anywhere. That's a different approach to
what I have been doing. I have been thinking more along the lines of
avoiding the creation of groups that cover the majority of tests and
trying to focus on groups that identify the edge cases:
platform-specific, temporarily broken, temporarily broken on platform
"os.blah", etc. This means my tests that can be run anywhere and are
public API tests (as opposed to being specific to the Harmony
implementation) are just annotated with @Test. I guess that the
equivalent in your scheme would be annotated as @Test(groups =
{"os.any", "type.api"}) ?

If I have inferred correctly from your Ant fragment then I think it means
requiring more information 

Re: [classlib] Testing conventions - a proposal

2006-07-20 Thread Alexei Zakharov

Hi George,

Wow, they are fast guys! Thanks for the link. Do you know when they
plan to release 5.0 officially?

Regards,

2006/7/19, George Harley [EMAIL PROTECTED]:

Hi Alexei,

I just downloaded the latest working build of TestNG 5.0 [1] and support
for the jvm attribute is in there. This is not the official release
build.

Best regards,
George

[1] http://testng.org/testng-5.0.zip


Alexei Zakharov wrote:
 Hi George,

 Agreed, we may experience problems in case of a VM hang or crash. I
 suggest this only as a temporary solution. BTW, the fact that the TestNG
 Ant task still doesn't have such attributes looks like a sign to me -
 TestNG may still be immature in some aspects. I'm still comparing TestNG
 and JUnit.

 Regards,

 2006/7/19, George Harley [EMAIL PROTECTED]:
 Hi Alexei,

 It's encouraging to hear that (Ant + TestNG + sample tests) all worked
 fine together on Harmony. In answer to your question I suppose that the
 ability to fork the tests in a separate VM means that we do not run the
 risk of possible bugs in Harmony affecting the test harness and
 therefore the outcome of the tests.

 Best regards,
 George


 Alexei Zakharov wrote:
  Probably my previous message was not clear enough.
  Why can't we just invoke everything including ant on top of Harmony
  for now? At least I was able to build and run test-14 examples from
  TestNG 4.7 distribution solely on top of j9 + our classlib today.
 
  C:\Java\testng-4.7\test-14set
  JAVA_HOME=c:\Java\harmony\enhanced\classlib\trunk
  \deploy\jdk\jre
 
  C:\Java\testng-4.7\test-14ant
  -Dbuild.compiler=org.eclipse.jdt.core.JDTCompiler
  Adapter run
  Buildfile: build.xml
 
  prepare:
 
  compile:
  [echo]  -- Compiling JDK 1.4
 tests --
 
  run:
  [echo]  -- Running JDK 1.4
 tests   --
  [echo]  --
 testng-4.7-jdk14.jar  --
 
  [testng-14] ===
  [testng-14] TestNG JDK 1.4
  [testng-14] Total tests run: 179, Failures: 10, Skips: 0
  [testng-14] ===
  ...
 
  Exactly the same results as with Sun JDK 1.4.
  Note: you may need to hatch the build.xml a little bit to achieve
 this.
 
  Thanks,
 
  2006/7/19, George Harley [EMAIL PROTECTED]:
  Hi Richard,
 
  Actually the Ant task always runs the tests in a forked VM. At
 present,
  however, the task does not support specifying the forked VM (i.e.
 there
  is no equivalent to the JUnit Ant task's jvm attribute). This
 matter
  has already been raised with the TestNG folks who seem happy to
  introduce this.
 
  In the meantime we could run the tests using the Ant java task.
 
 
  Best regards,
  George
 
 
 
  Richard Liang wrote:
   According to TestNG Ant Task [1], it seems that the TestNG Ant
 task
   does not support to fork a new JVM, that is, we must launch ant
 using
   Harmony itself. Any comments? Thanks a lot.
  
   [1]http://testng.org/doc/ant.html
  
   Best regards,
   Richard
  
   George Harley wrote:
   Andrew Zhang wrote:
   On 7/18/06, George Harley [EMAIL PROTECTED] wrote:
  
   Oliver Deakin wrote:
George Harley wrote:
SNIP!
   
Here the annotation on MyTestClass applies to all of its test
   methods.
   
So what are the well-known TestNG groups that we could define
   for use
inside Harmony ? Here are some of my initial thoughts:
   
   
* type.impl  --  tests that are specific to Harmony
   
So tests are implicitly API unless specified otherwise?
   
I'm slightly confused by your definition of impl tests as
 tests
   that
   are
specific to Harmony. Does this mean that impl tests are only
those that test classes in org.apache.harmony packages?
I thought that impl was our way of saying tests that need to
  go on
the bootclasspath.
   
I think I just need a little clarification...
   
  
   Hi Oliver,
  
   I was using the definition of implementation-specific tests
 that we
   currently have on the Harmony testing conventions web page. That
  is,
   implementation-specific tests are those that are dependent on
 some
   aspect of the Harmony implementation and would therefore not
  pass when
   run against the RI or other conforming implementations. It's
   orthogonal
   to the classpath/bootclasspath issue.
  
  
* state.broken.platform id  --  tests bust on a specific
  platform
   
* state.broken  --  tests broken on every platform but we
  want to
decide whether or not to run from our suite configuration
   
* os.platform id  --  tests that are to be run only on the
specified platform (a test could be member of more than
 one of
   these)
   
And the defaults for these are an unbroken state and runs
 on any
platform.
That makes sense...
   
Will the platform ids be organised in a similar way to the
   platform ids
we've discussed before for organisation of native code [1]?
   
  
   The actual string used 

Re: [classlib] Testing conventions - a proposal

2006-07-20 Thread George Harley

Richard Liang wrote:



George Harley wrote:

Richard Liang wrote:



George Harley wrote:

Hi,

If annotations were to be used to help us categorise tests in order 
to simplify the definition of test configurations - what's included 
and excluded etc - then a core set of annotations would need to be 
agreed by the project. Consider the possibilities that the TestNG 
@Test annotation offers us in this respect.


First, if a test method was identified as being broken and needed 
to be excluded from all test runs while awaiting investigation then 
it would be a simple matter of setting its enabled field like this:


   @Test(enabled=false)
   public void myTest() {
   ...
   }

Temporarily disabling a test method in this way means that it can 
be left in its original class and we do not have to refer to it in 
any suite configuration (e.g. in the suite xml file).


If a test method was identified as being broken on a specific 
platform then we could make use of the groups field of the @Test 
type by making the method a member of a group that identifies its 
predicament. Something like this:


   @Test(groups={"state.broken.win.IA32"})
   public void myOtherTest() {
   ...
   }

The configuration for running tests on Windows would then 
specifically exclude any test method (or class) that was a member 
of that group.


Making a test method or type a member of a well-known group 
(well-known in the sense that the name and meaning has been agreed 
within the project) is essentially adding some descriptive 
attributes to the test. Like adjectives (the groups) and nouns (the 
tests) in the English language. To take another example, if there 
was a test class that contained methods only intended to be run on 
Windows and that were all specific to Harmony (i.e. not API tests) 
then  one could envisage the following kind of annotation:



@Test(groups={"type.impl", "os.win.IA32"})
public class MyTestClass {

   public void testOne() {
   ...
   }

   public void testTwo() {
   ...
   }

   @Test(enabled=false)
   public void brokenTest() {
   ...
   }
}

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for 
use inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony

* state.broken.<platform id>  --  tests that are broken on a specific platform

* state.broken  --  tests broken on every platform but where we want to 
decide whether or not to run them from our suite configuration


* os.<platform id>  --  tests that are to be run only on the 
specified platform (a test could be a member of more than one of these)



What does everyone else think ? Does such a scheme sound reasonable ?

Just one question: What's the default test annotation? I mean the 
successful api tests which will be run on every platform. Thanks a lot.


Best regards,
Richard


Hi Richard,

I think that just the basic @Test annotation on its own will suffice. 
Any better suggestions are welcome.



Just thinking about how to filter out the target test groups :-)

I tried to use the following groups to define the win.IA32 API tests, 
but it seems that the tests with the default annotation @Test cannot 
be selected. Do I miss anything? Thanks a lot.


   <groups>
       <run>
           <include name=".*" />
           <include name="os.win.IA32" />
           <exclude name="type.impl" />
           <exclude name="state.broken" />
           <exclude name="state.broken.win.IA32" />
           <exclude name="os.linux.IA32" />
       </run>
   </groups>

The groups I defined:
@Test
@Test(groups={"os.win.IA32"})
@Test(groups={"os.win.IA32", "state.broken.win.IA32"})
@Test(groups={"type.impl"})
@Test(groups={"state.broken"})
@Test(groups={"os.linux.IA32"})
@Test(groups={"state.broken.linux.IA32"})

Best regards,
Richard.


Hi Richard,

Infuriating isn't it ?

The approach I have adopted so far is to aim for a single testng.xml 
file per module that could be used for all platforms that we run tests 
on. The thought of multiple testng.xml files for each module, with each 
XML file including platform-specific data duplicated across the files 
(save for a few platform identifiers) seemed less than optimal.


So how do we arrive at this single testng.xml file with awareness of its 
runtime platform ? And how can that knowledge be applied in the file to 
filter just the particular test groups that we want ? Well, the approach 
that seems to work best for me so far is to make use of some 
BeanShell script in which we can detect the platform id as a system 
property and then use that inside some pretty straightforward 
Java/BeanShell code to select precisely the groups we want to run in a 
particular test.
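
A sketch of what such a testng.xml might contain (the "test.platform" system 
property name and the package filter are assumptions; "groups" is the map of 
group names that TestNG exposes to BeanShell method selectors, and the group 
names follow the conventions discussed in this thread):

<suite name="Module tests">
    <test name="platform-aware run">
        <method-selectors>
            <method-selector>
                <script language="beanshell"><![CDATA[
                    // keep a method if it is a plain @Test, an "os.any" test or a
                    // test for the current platform, and it is not marked broken
                    !groups.containsKey("state.broken")
                        && !groups.containsKey("state.broken." + System.getProperty("test.platform"))
                        && (groups.isEmpty()
                            || groups.containsKey("os.any")
                            || groups.containsKey("os." + System.getProperty("test.platform")))
                ]]></script>
            </method-selector>
        </method-selectors>
        <packages>
            <package name="org.apache.harmony.luni.tests.*"/>
        </packages>
    </test>
</suite>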


For example, in the following Ant fragment we use the testng task to 
launch the tests pointing at a specific testng.xml file 
(testng-with-beanshell.xml) and also setting the platform identifier as 
a system property 
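
Along those lines, such a fragment might look something like this (the nested 
<sysproperty> and <xmlfileset> elements and the "test.platform" key are 
assumptions; the ${platform} property is the one computed by the <condition> 
elements shown earlier in the thread):

<target name="run" description="Run tests through testng-with-beanshell.xml">
    <taskdef name="testng" classname="org.testng.TestNGAntTask"
             classpath="${jdk15.testng.jar}"/>
    <testng classpathref="run.cp"
            outputdir="${testng.report.dir}">
        <!-- hand the platform id to the BeanShell selector in the suite file -->
        <sysproperty key="test.platform" value="${platform}"/>
        <xmlfileset dir="." includes="testng-with-beanshell.xml"/>
    </testng>
</target>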

Re: [classlib] Testing conventions - a proposal

2006-07-20 Thread George Harley

Alexei Zakharov wrote:

Hi George,

Wow, they are fast guys! Thanks for the link. Do you know when they
plan to release 5.0 officially?

Regards,



Hi Alexei,

Actually, I just saw this announcement in my news reader about 15 
minutes ago ...


http://beust.com/weblog/archives/000400.html

Best regards,
George




2006/7/19, George Harley [EMAIL PROTECTED]:

Hi Alexei,

I just downloaded the latest working build of TestNG 5.0 [1] and support
for the jvm attribute is in there. This is not the official release
build.

Best regards,
George

[1] http://testng.org/testng-5.0.zip


Alexei Zakharov wrote:
 Hi George,

 Agreed, we may experience problems in case of a VM hang or crash. I
 suggest this only as a temporary solution. BTW, the fact that the TestNG
 Ant task still doesn't have such attributes looks like a sign to me -
 TestNG may still be immature in some aspects. I'm still comparing TestNG
 and JUnit.

 Regards,

 2006/7/19, George Harley [EMAIL PROTECTED]:
 Hi Alexei,

 It's encouraging to hear that (Ant + TestNG + sample tests) all 
worked
 fine together on Harmony. In answer to your question I suppose 
that the
 ability to fork the tests in a separate VM means that we do not 
run the

 risk of possible bugs in Harmony affecting the test harness and
 therefore the outcome of the tests.

 Best regards,
 George


 Alexei Zakharov wrote:
  Probably my previous message was not clear enough.
  Why can't we just invoke everything including ant on top of Harmony
  for now? At least I was able to build and run test-14 examples from
  TestNG 4.7 distribution solely on top of j9 + our classlib today.
 
  C:\Java\testng-4.7\test-14set
  JAVA_HOME=c:\Java\harmony\enhanced\classlib\trunk
  \deploy\jdk\jre
 
  C:\Java\testng-4.7\test-14ant
  -Dbuild.compiler=org.eclipse.jdt.core.JDTCompiler
  Adapter run
  Buildfile: build.xml
 
  prepare:
 
  compile:
  [echo]  -- Compiling JDK 1.4
 tests --
 
  run:
  [echo]  -- Running JDK 1.4
 tests   --
  [echo]  --
 testng-4.7-jdk14.jar  --
 
  [testng-14] ===
  [testng-14] TestNG JDK 1.4
  [testng-14] Total tests run: 179, Failures: 10, Skips: 0
  [testng-14] ===
  ...
 
  Exactly the same results as with Sun JDK 1.4.
  Note: you may need to hatch the build.xml a little bit to achieve
 this.
 
  Thanks,
 
  2006/7/19, George Harley [EMAIL PROTECTED]:
  Hi Richard,
 
  Actually the Ant task always runs the tests in a forked VM. At
 present,
  however, the task does not support specifying the forked VM (i.e.
 there
  is no equivalent to the JUnit Ant task's jvm attribute). This
 matter
  has already been raised with the TestNG folks who seem happy to
  introduce this.
 
  In the meantime we could run the tests using the Ant java task.
 
 
  Best regards,
  George
 
 
 
  Richard Liang wrote:
   According to TestNG Ant Task [1], it seems that the TestNG Ant
 task
   does not support to fork a new JVM, that is, we must launch ant
 using
   Harmony itself. Any comments? Thanks a lot.
  
   [1]http://testng.org/doc/ant.html
  
   Best regards,
   Richard
  
   George Harley wrote:
   Andrew Zhang wrote:
   On 7/18/06, George Harley [EMAIL PROTECTED] 
wrote:

  
   Oliver Deakin wrote:
George Harley wrote:
SNIP!
   
Here the annotation on MyTestClass applies to all of 
its test

   methods.
   
So what are the well-known TestNG groups that we could 
define

   for use
inside Harmony ? Here are some of my initial thoughts:
   
   
* type.impl  --  tests that are specific to Harmony
   
So tests are implicitly API unless specified otherwise?
   
I'm slightly confused by your definition of impl tests as
 tests
   that
   are
specific to Harmony. Does this mean that impl tests are 
only

those that test classes in org.apache.harmony packages?
I thought that impl was our way of saying tests that 
need to

  go on
the bootclasspath.
   
I think I just need a little clarification...
   
  
   Hi Oliver,
  
   I was using the definition of implementation-specific tests
 that we
   currently have on the Harmony testing conventions web 
page. That

  is,
   implementation-specific tests are those that are dependent on
 some
   aspect of the Harmony implementation and would therefore not
  pass when
   run against the RI or other conforming implementations. It's
   orthogonal
   to the classpath/bootclasspath issue.
  
  
* state.broken.platform id  --  tests bust on a specific
  platform
   
* state.broken  --  tests broken on every platform but we
  want to
decide whether or not to run from our suite configuration
   
* os.platform id  --  tests that are to be run only 
on the

specified platform (a test could be member of more than
 one of
   these)
   
And the defaults for these are an unbroken state and runs
 on any

Re: [classlib] Testing conventions - a proposal

2006-07-20 Thread Alexei Zakharov

George,

I remember my past experience with BeanShell - I was trying to create
a custom BeanShell task for Ant 1.6.1. I can't say I didn't
succeed, but I remember it as a rather unpleasant experience. At
that time BeanShell appeared to me to be a not very well tested
framework. Please don't throw rocks at me now, I am just talking about
my old impressions. Probably BeanShell has become better since then.

But... Do we really need BS here? Why can't we manage everything from
build.xml without extra testng.xml files? I mean something like this:

<!-- determines the OS -->
<condition property="platform" value="win.IA32">
    <os family="Windows"/>
</condition>
<condition property="platform" value="linux.IA32">
    <and>
        <os name="linux"/>
        <os family="unix"/>
    </and>
</condition>

<property name="groups.included" value="os.any, os.${platform}"/>
<property name="groups.excluded"
          value="state.broken, state.broken.${platform}"/>

<target name="run" description="Run tests">
    <taskdef name="testng" classname="org.testng.TestNGAntTask"
             classpath="${jdk15.testng.jar}"/>
    <testng classpathref="run.cp"
            outputdir="${testng.report.dir}"
            groups="${groups.included}"
            excludedGroups="${groups.excluded}">
        <classfileset dir="." includes="**/*.class"/>
    </testng>
</target>

Does this make sense?

Thanks,

2006/7/20, George Harley [EMAIL PROTECTED]:

Richard Liang wrote:


 George Harley wrote:
 Richard Liang wrote:


 George Harley wrote:
 Hi,

 If annotations were to be used to help us categorise tests in order
 to simplify the definition of test configurations - what's included
 and excluded etc - then a core set of annotations would need to be
 agreed by the project. Consider the possibilities that the TestNG
 @Test annotation offers us in this respect.

 First, if a test method was identified as being broken and needed
 to be excluded from all test runs while awaiting investigation then
 it would be a simple matter of setting its enabled field like this:

@Test(enabled=false)
public void myTest() {
...
}

 Temporarily disabling a test method in this way means that it can
 be left in its original class and we do not have to refer to it in
 any suite configuration (e.g. in the suite xml file).

 If a test method was identified as being broken on a specific
 platform then we could make use of the groups field of the @Test
 type by making the method a member of a group that identifies its
 predicament. Something like this:

   @Test(groups={"state.broken.win.IA32"})
public void myOtherTest() {
...
}

 The configuration for running tests on Windows would then
 specifically exclude any test method (or class) that was a member
 of that group.

 Making a test method or type a member of a well-known group
 (well-known in the sense that the name and meaning has been agreed
 within the project) is essentially adding some descriptive
 attributes to the test. Like adjectives (the groups) and nouns (the
 tests) in the English language. To take another example, if there
 was a test class that contained methods only intended to be run on
 Windows and that were all specific to Harmony (i.e. not API tests)
 then  one could envisage the following kind of annotation:


 @Test(groups={"type.impl", "os.win.IA32"})
 public class MyTestClass {

public void testOne() {
...
}

public void testTwo() {
...
}

@Test(enabled=false)
public void brokenTest() {
...
}
 }

 Here the annotation on MyTestClass applies to all of its test methods.

 So what are the well-known TestNG groups that we could define for
 use inside Harmony ? Here are some of my initial thoughts:


 * type.impl  --  tests that are specific to Harmony

 * state.broken.platform id  --  tests bust on a specific platform

 * state.broken  --  tests broken on every platform but we want to
 decide whether or not to run from our suite configuration

 * os.platform id  --  tests that are to be run only on the
 specified platform (a test could be member of more than one of these)


 What does everyone else think ? Does such a scheme sound reasonable ?

 Just one question: What's the default test annotation? I mean the
 successful api tests which will be run on every platform. Thanks a lot.

 Best regards,
 Richard

 Hi Richard,

 I think that just the basic @Test annotation on its own will suffice.
 Any better suggestions are welcome.

 Just thinking about how to filter out the target test groups :-)

 I tried to use the following groups to define the win.IA32 API tests,
 but it seems that the tests with the default annotation @Test cannot
 be selected. Do I miss anything? Thanks a lot.

<groups>
    <run>
        <include name=".*" />
        <include name="os.win.IA32" />
        <exclude name="type.impl" />
        <exclude name="state.broken" />
        <exclude name="state.broken.win.IA32" />
        <exclude name="os.linux.IA32" />
    </run>
</groups>

 The groups I defined:
 @Test
 

Re: [classlib] Testing conventions - a proposal

2006-07-20 Thread George Harley

Alexei Zakharov wrote:

George,

I remember my past experience with BeanShell - I was trying to create
a custom BeanShell task for Ant 1.6.1. I can't say I didn't
succeed, but I remember it as a rather unpleasant experience. At
that time BeanShell appeared to me to be a not very well tested
framework. Please don't throw rocks at me now, I am just talking about
my old impressions. Probably BeanShell has become better since then.



Hi Alexei,

No rocks. I promise :-)



But... Do we really need BS here? Why can't we manage everything from
build.xml without extra testng.xml files? I mean something like this:

<!-- determines the OS -->
<condition property="platform" value="win.IA32">
    <os family="Windows"/>
</condition>
<condition property="platform" value="linux.IA32">
    <and>
        <os name="linux"/>
        <os family="unix"/>
    </and>
</condition>

<property name="groups.included" value="os.any, os.${platform}"/>
<property name="groups.excluded"
          value="state.broken, state.broken.${platform}"/>

<target name="run" description="Run tests">
    <taskdef name="testng" classname="org.testng.TestNGAntTask"
             classpath="${jdk15.testng.jar}"/>
    <testng classpathref="run.cp"
            outputdir="${testng.report.dir}"
            groups="${groups.included}"
            excludedGroups="${groups.excluded}">
        <classfileset dir="." includes="**/*.class"/>
    </testng>
</target>

Does this make sense?

Thanks,


Yes, that makes sense and if it gives the degree of control that we need 
then I would be all for it. The simpler the better.


It looks like you are using an "os.any" group for those test methods 
(the majority) which may be run anywhere. That's a different approach to 
what I have been doing. I have been thinking more along the lines of 
avoiding the creation of groups that cover the majority of tests and 
trying to focus on groups that identify the edge cases: 
platform-specific, temporarily broken, temporarily broken on platform 
"os.blah", etc. This means my tests that can be run anywhere and are 
public API tests (as opposed to being specific to the Harmony 
implementation) are just annotated with @Test. I guess that the 
equivalent in your scheme would be annotated as @Test(groups = 
{"os.any", "type.api"}) ?


If I have inferred correctly from your Ant fragment then I think it means 
requiring more information on the annotations. I'm not throwing rocks at 
that idea (remember my promise ?), just trying to draw out the 
differences in our approaches. When I get a chance I will try to 
explore your idea further.


I really appreciate your input here.

Best regards,
George




2006/7/20, George Harley [EMAIL PROTECTED]:

Richard Liang wrote:


 George Harley wrote:
 Richard Liang wrote:


 George Harley wrote:
 Hi,

 If annotations were to be used to help us categorise tests in order
 to simplify the definition of test configurations - what's included
 and excluded etc - then a core set of annotations would need to be
 agreed by the project. Consider the possibilities that the TestNG
 @Test annotation offers us in this respect.

 First, if a test method was identified as being broken and needed
 to be excluded from all test runs while awaiting investigation then
 it would be a simple matter of setting its enabled field like this:

@Test(enabled=false)
public void myTest() {
...
}

 Temporarily disabling a test method in this way means that it can
 be left in its original class and we do not have to refer to it in
 any suite configuration (e.g. in the suite xml file).

 If a test method was identified as being broken on a specific
 platform then we could make use of the groups field of the @Test
 type by making the method a member of a group that identifies its
 predicament. Something like this:

@Test(groups={"state.broken.win.IA32"})
public void myOtherTest() {
...
}

 The configuration for running tests on Windows would then
 specifically exclude any test method (or class) that was a member
 of that group.

 Making a test method or type a member of a well-known group
 (well-known in the sense that the name and meaning has been agreed
 within the project) is essentially adding some descriptive
 attributes to the test. Like adjectives (the groups) and nouns (the
 tests) in the English language. To take another example, if there
 was a test class that contained methods only intended to be run on
 Windows and that were all specific to Harmony (i.e. not API tests)
 then  one could envisage the following kind of annotation:


 @Test(groups={"type.impl", "os.win.IA32"})
 public class MyTestClass {

public void testOne() {
...
}

public void testTwo() {
...
}

@Test(enabled=false)
public void brokenTest() {
...
}
 }

 Here the annotation on MyTestClass applies to all of its test 
methods.


 So what are the well-known TestNG groups that we could define for
 use inside Harmony ? Here are some of my initial thoughts:


 * type.impl  --  tests that are specific to Harmony

 * state.broken.platform id  --  tests bust on a 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread Richard Liang



George Harley wrote:

Hi,

If annotations were to be used to help us categorise tests in order to 
simplify the definition of test configurations - what's included and 
excluded etc - then a core set of annotations would need to be agreed 
by the project. Consider the possibilities that the TestNG @Test 
annotation offers us in this respect.


First, if a test method was identified as being broken and needed to 
be excluded from all test runs while awaiting investigation then it 
would be a simple matter of setting its enabled field like this:


   @Test(enabled=false)
   public void myTest() {
   ...
   }

Temporarily disabling a test method in this way means that it can be 
left in its original class and we do not have to refer to it in any 
suite configuration (e.g. in the suite xml file).


If a test method was identified as being broken on a specific platform 
then we could make use of the groups field of the @Test type by 
making the method a member of a group that identifies its predicament. 
Something like this:


   @Test(groups={"state.broken.win.IA32"})
   public void myOtherTest() {
   ...
   }

The configuration for running tests on Windows would then specifically 
exclude any test method (or class) that was a member of that group.


Making a test method or type a member of a well-known group 
(well-known in the sense that the name and meaning has been agreed 
within the project) is essentially adding some descriptive attributes 
to the test. Like adjectives (the groups) and nouns (the tests) in the 
English language. To take another example, if there was a test class 
that contained methods only intended to be run on Windows and that 
were all specific to Harmony (i.e. not API tests) then  one could 
envisage the following kind of annotation:



@Test(groups={"type.impl", "os.win.IA32"})
public class MyTestClass {

   public void testOne() {
   ...
   }

   public void testTwo() {
   ...
   }

   @Test(enabled=false)
   public void brokenTest() {
   ...
   }
}

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for use 
inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony

* state.broken.<platform id>  --  tests that are broken on a specific platform

* state.broken  --  tests broken on every platform but where we want to 
decide whether or not to run them from our suite configuration


* os.<platform id>  --  tests that are to be run only on the specified 
platform (a test could be a member of more than one of these)



What does everyone else think ? Does such a scheme sound reasonable ?

Just one question: what's the default test annotation? I mean the 
passing API tests which will be run on every platform. Thanks a lot.


Best regards,
Richard

Thanks for reading this far.

Best regards,
George



George Harley wrote:

Hi,

Just seen Tim's note on test support classes and it really caught my 
attention as I have been mulling over this issue for a little while 
now. I think that it is a good time for us to return to the topic of 
class library test layouts.


The current proposal [1] sets out to segment our different types of 
test by placing them in different file locations. After looking at 
the recent changes to the LUNI module tests (where the layout 
guidelines were applied) I have a real concern that there are serious 
problems with this approach. We have started down a track of just 
continually growing the number of test source folders as new 
categories of test are identified and IMHO that is going to bring 
complexity and maintenance issues with these tests.


Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI
Stress
...and so on...


If you weigh up all of the different possible permutations and then 
consider that the above list is highly likely to be extended as 
things progress it is obvious that we are eventually heading for 
large amounts of related test code scattered or possibly duplicated 
across numerous hard wired source directories. How maintainable is 
that going to be ?


If we want to run different tests in different configurations then 
IMHO we need to be thinking a whole lot smarter. We need to be 
thinking about keeping tests for specific areas of functionality 
together (thus easing maintenance); we need something quick and 
simple to re-configure if necessary (pushing whole directories of 
files around the place does not seem a particularly lightweight 
approach); and something that is not going to potentially mess up 
contributed patches when the file they patch is found to have been 
recently pushed from source folder A to B.


To connect into another recent thread, there have been some posts 
lately about handling some test methods that fail on Harmony and have 
meant that entire test case classes have been 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread Richard Liang
According to TestNG Ant Task [1], it seems that the TestNG Ant task 
does not support forking a new JVM; that is, we must launch Ant using 
Harmony itself. Any comments? Thanks a lot.


[1]http://testng.org/doc/ant.html

Best regards,
Richard

George Harley wrote:

Andrew Zhang wrote:

On 7/18/06, George Harley [EMAIL PROTECTED] wrote:


Oliver Deakin wrote:
 George Harley wrote:
 SNIP!

 Here the annotation on MyTestClass applies to all of its test 
methods.


 So what are the well-known TestNG groups that we could define for 
use

 inside Harmony ? Here are some of my initial thoughts:


 * type.impl  --  tests that are specific to Harmony

 So tests are implicitly API unless specified otherwise?

 I'm slightly confused by your definition of impl tests as tests that
are
 specific to Harmony. Does this mean that impl tests are only
 those that test classes in org.apache.harmony packages?
 I thought that impl was our way of saying tests that need to go on
 the bootclasspath.

 I think I just need a little clarification...


Hi Oliver,

I was using the definition of implementation-specific tests that we
currently have on the Harmony testing conventions web page. That is,
implementation-specific tests are those that are dependent on some
aspect of the Harmony implementation and would therefore not pass when
run against the RI or other conforming implementations. It's orthogonal
to the classpath/bootclasspath issue.


 * state.broken.platform id  --  tests bust on a specific platform

 * state.broken  --  tests broken on every platform but we want to
 decide whether or not to run from our suite configuration

 * os.platform id  --  tests that are to be run only on the
 specified platform (a test could be member of more than one of 
these)


 And the defaults for these are an unbroken state and runs on any
 platform.
 That makes sense...

 Will the platform ids be organised in a similar way to the 
platform ids

 we've discussed before for organisation of native code [1]?


The actual string used to identify a particular platform can be 
whatever

we want it to be, just so long as we are consistent. So, yes, the ids
mentioned in the referenced email would seem a good starting point. Do
we need to include a 32-bit/64-bit identifier ?


 So all tests are, by default, in an all-platforms (or shared) group.
 If a test fails on all Windows platforms, it is marked with
 state.broken.windows.
 If a test fails on Windows but only on, say, amd hardware,
 it is marked state.broken.windows.amd.


Yes. Agreed.


 Then when you come to run tests on your windows amd machine,
 you want to include all tests in the all-platform (shared) group,
 os.windows and os.windows.amd, and exclude all tests in
 the state.broken, state.broken.windows and state.broken.windows.amd
 groups.

 Does this tally with what you were thinking?


Yes, that is the idea.




 What does everyone else think ? Does such a scheme sound 
reasonable ?


 I think so - it seems to cover our current requirements. Thanks for
 coming up with this!


Thanks, but I don't see it as final yet really. It would be great to
prove the worth of this by doing a trial on one of the existing 
modules,

ideally something that contains tests that are platform-specific.



Hello George, how about doing a trial on NIO module?

So far as I know, there are several platform dependent tests in NIO 
module.

:)

The assert statements are commented out in these tests, with FIXME 
mark.


Furthermore, I have also found some platform-dependent behaviours of 
FileChannel.
If TestNG is applied to NIO, I will supplement new tests for 
FileChannel and
fix the bugs in the source code.

What's your opinion? Any suggestions/comments?

Thanks!



Hi Andrew,

That sounds like a very good idea. If there is agreement in the 
project that 5.0 annotations are the way to go (as opposed to the 
pre-5.0 Javadoc comment support offered by TestNG) then to the best of 
my knowledge all that is stopping us from doing this trial is the lack 
of a 5.0 VM to run the Harmony tests on. Hopefully that will be 
addressed soon. When it is I would be happy to get stuck into this trial.


Best regards,
George



Best regards,

George


 Regards,
 Oliver

 [1]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





 Thanks for reading this far.

 Best regards,
 George



 George Harley wrote:
 Hi,

 Just seen Tim's note on test support classes and it really 
caught my

 attention as I have been mulling over this issue for a little while
 now. I think that it is a good time for us to return to the 
topic of

 class library test layouts.

 The current proposal [1] sets out to segment our different types of
 test by placing them in different file locations. After looking at
 the recent changes to the LUNI module tests (where the layout
 guidelines were applied) I have a real concern that there are
 serious problems with this approach. We have started down a 
track of

 just continually 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread Alexei Zakharov

Hmm, do we have problems with launching Ant? I thought we had
problems with launching TestNG. Just checked - running tests for beans
on j9+fresh classlib works fine. I.e.
ant -Dbuild.module=beans
-Dbuild.compiler=org.eclipse.jdt.core.JDTCompilerAdapter test

2006/7/19, Richard Liang [EMAIL PROTECTED]:

According to TestNG Ant Task [1], it seems that the TestNG Ant task
does not support to fork a new JVM, that is, we must launch ant using
Harmony itself. Any comments? Thanks a lot.

[1]http://testng.org/doc/ant.html

Best regards,
Richard

George Harley wrote:
 Andrew Zhang wrote:
 On 7/18/06, George Harley [EMAIL PROTECTED] wrote:

 Oliver Deakin wrote:
  George Harley wrote:
  SNIP!
 
  Here the annotation on MyTestClass applies to all of its test
 methods.
 
  So what are the well-known TestNG groups that we could define for
 use
  inside Harmony ? Here are some of my initial thoughts:
 
 
  * type.impl  --  tests that are specific to Harmony
 
  So tests are implicitly API unless specified otherwise?
 
  I'm slightly confused by your definition of impl tests as tests that
 are
  specific to Harmony. Does this mean that impl tests are only
  those that test classes in org.apache.harmony packages?
  I thought that impl was our way of saying tests that need to go on
  the bootclasspath.
 
  I think I just need a little clarification...
 

 Hi Oliver,

 I was using the definition of implementation-specific tests that we
 currently have on the Harmony testing conventions web page. That is,
 implementation-specific tests are those that are dependent on some
 aspect of the Harmony implementation and would therefore not pass when
 run against the RI or other conforming implementations. It's orthogonal
 to the classpath/bootclasspath issue.


  * state.broken.platform id  --  tests bust on a specific platform
 
  * state.broken  --  tests broken on every platform but we want to
  decide whether or not to run from our suite configuration
 
  * os.platform id  --  tests that are to be run only on the
  specified platform (a test could be member of more than one of
 these)
 
  And the defaults for these are an unbroken state and runs on any
  platform.
  That makes sense...
 
  Will the platform ids be organised in a similar way to the
 platform ids
  we've discussed before for organisation of native code [1]?
 

 The actual string used to identify a particular platform can be
 whatever
 we want it to be, just so long as we are consistent. So, yes, the ids
 mentioned in the referenced email would seem a good starting point. Do
 we need to include a 32-bit/64-bit identifier ?


  So all tests are, by default, in an all-platforms (or shared) group.
  If a test fails on all Windows platforms, it is marked with
  state.broken.windows.
  If a test fails on Windows but only on, say, amd hardware,
  it is marked state.broken.windows.amd.
 

 Yes. Agreed.


  Then when you come to run tests on your windows amd machine,
  you want to include all tests in the all-platform (shared) group,
  os.windows and os.windows.amd, and exclude all tests in
  the state.broken, state.broken.windows and state.broken.windows.amd
  groups.
 
  Does this tally with what you were thinking?
 

 Yes, that is the idea.


 
 
  What does everyone else think ? Does such a scheme sound
 reasonable ?
 
  I think so - it seems to cover our current requirements. Thanks for
  coming up with this!
 

 Thanks, but I don't see it as final yet really. It would be great to
 prove the worth of this by doing a trial on one of the existing
 modules,
 ideally something that contains tests that are platform-specific.


 Hello George, how about doing a trial on NIO module?

 So far as I know, there are several platform dependent tests in NIO
 module.
 :)

 The assert statements are commented out in these tests, with FIXME
 mark.

 Furthermore, I have also found some platform-dependent behaviours of
 FileChannel.
 If TestNG is applied to NIO, I will supplement new tests for
 FileChannel and
 fix the bugs in the source code.

 What's your opinion? Any suggestions/comments?

 Thanks!


 Hi Andrew,

 That sounds like a very good idea. If there is agreement in the
 project that 5.0 annotations are the way to go (as opposed to the
 pre-5.0 Javadoc comment support offered by TestNG) then to the best of
 my knowledge all that is stopping us from doing this trial is the lack
 of a 5.0 VM to run the Harmony tests on. Hopefully that will be
 addressed soon. When it is I would be happy to get stuck into this trial.

 Best regards,
 George


 Best regards,
 George


  Regards,
  Oliver
 
  [1]
 
 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL 
PROTECTED]

 
 
 
  Thanks for reading this far.
 
  Best regards,
  George
 
 
 
  George Harley wrote:
  Hi,
 
  Just seen Tim's note on test support classes and it really
 caught my
  attention as I have been mulling over this issue for a little while
  now. I think that it is a good time for us to return to 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread Richard Liang

Just thinking about using TestNG to execute Harmony test cases.  :-)

Look at our build.xml (e.g., modules/luni/build.xml), you will see 
something like:

..
<junit fork="yes"
       forkmode="once"
       printsummary="withOutAndErr"
       errorproperty="test.errors"
       failureproperty="test.failures"
       showoutput="on"
       dir="${hy.luni.bin.test}"
       jvm="${test.jre.home}/bin/java">

    <jvmarg value="-showversion" />
..

My question is that the TestNG Ant task <testng> does not support the 
fork and jvm attributes, so how do we run our tests under the Harmony VM? Thanks a lot.


Best regards,
Richard

Alexei Zakharov wrote:

Hmm, do we have problems with launching Ant? I thought we had
problems with launching TestNG. Just checked - running tests for beans
on j9 + fresh classlib works fine. I.e.
ant -Dbuild.module=beans
-Dbuild.compiler=org.eclipse.jdt.core.JDTCompilerAdapter test

2006/7/19, Richard Liang [EMAIL PROTECTED]:

According to TestNG Ant Task [1], it seems that the TestNG Ant task
does not support to fork a new JVM, that is, we must launch ant using
Harmony itself. Any comments? Thanks a lot.

[1]http://testng.org/doc/ant.html

Best regards,
Richard

George Harley wrote:
 Andrew Zhang wrote:
 On 7/18/06, George Harley [EMAIL PROTECTED] wrote:

 Oliver Deakin wrote:
  George Harley wrote:
  SNIP!
 
  Here the annotation on MyTestClass applies to all of its test
 methods.
 
  So what are the well-known TestNG groups that we could define for
 use
  inside Harmony ? Here are some of my initial thoughts:
 
 
  * type.impl  --  tests that are specific to Harmony
 
  So tests are implicitly API unless specified otherwise?
 
  I'm slightly confused by your definition of impl tests as 
tests that

 are
  specific to Harmony. Does this mean that impl tests are only
  those that test classes in org.apache.harmony packages?
  I thought that impl was our way of saying tests that need to 
go on

  the bootclasspath.
 
  I think I just need a little clarification...
 

 Hi Oliver,

 I was using the definition of implementation-specific tests that we
 currently have on the Harmony testing conventions web page. That is,
 implementation-specific tests are those that are dependent on some
 aspect of the Harmony implementation and would therefore not pass 
when
 run against the RI or other conforming implementations. It's 
orthogonal

 to the classpath/bootclasspath issue.


  * state.broken.platform id  --  tests bust on a specific 
platform

 
  * state.broken  --  tests broken on every platform but we want to
  decide whether or not to run from our suite configuration
 
  * os.platform id  --  tests that are to be run only on the
  specified platform (a test could be member of more than one of
 these)
 
  And the defaults for these are an unbroken state and runs on any
  platform.
  That makes sense...
 
  Will the platform ids be organised in a similar way to the
 platform ids
  we've discussed before for organisation of native code [1]?
 

 The actual string used to identify a particular platform can be
 whatever
 we want it to be, just so long as we are consistent. So, yes, the 
ids
 mentioned in the referenced email would seem a good starting 
point. Do

 we need to include a 32-bit/64-bit identifier ?


  So all tests are, by default, in an all-platforms (or shared) 
group.

  If a test fails on all Windows platforms, it is marked with
  state.broken.windows.
  If a test fails on Windows but only on, say, amd hardware,
  it is marked state.broken.windows.amd.
 

 Yes. Agreed.


  Then when you come to run tests on your windows amd machine,
  you want to include all tests in the all-platform (shared) group,
  os.windows and os.windows.amd, and exclude all tests in
  the state.broken, state.broken.windows and 
state.broken.windows.amd

  groups.
 
  Does this tally with what you were thinking?
 

 Yes, that is the idea.


 
 
  What does everyone else think ? Does such a scheme sound
 reasonable ?
 
  I think so - it seems to cover our current requirements. Thanks 
for

  coming up with this!
 

 Thanks, but I don't see it as final yet really. It would be great to
 prove the worth of this by doing a trial on one of the existing
 modules,
 ideally something that contains tests that are platform-specific.


 Hello George, how about doing a trial on NIO module?

 So far as I know, there are several platform dependent tests in NIO
 module.
 :)

 The assert statements are commented out in these tests, with FIXME
 mark.

 Furthermore, I have also found some platform-dependent behaviours of
 FileChannel.
 If TestNG is applied to NIO, I will supplement new tests for
 FileChannel and
 fix the bugs in the source code.

 What's your opinion? Any suggestions/comments?

 Thanks!


 Hi Andrew,

 That sounds like a very good idea. If there is agreement in the
 project that 5.0 annotations are the way to go (as opposed to the
 pre-5.0 Javadoc comment 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread George Harley

Richard Liang wrote:



George Harley wrote:

Hi,

If annotations were to be used to help us categorise tests in order 
to simplify the definition of test configurations - what's included 
and excluded etc - then a core set of annotations would need to be 
agreed by the project. Consider the possibilities that the TestNG 
@Test annotation offers us in this respect.


First, if a test method was identified as being broken and needed to 
be excluded from all test runs while awaiting investigation then it 
would be a simple matter of setting its enabled field like this:


   @Test(enabled=false)
   public void myTest() {
   ...
   }

Temporarily disabling a test method in this way means that it can be 
left in its original class and we do not have to refer to it in any 
suite configuration (e.g. in the suite xml file).


If a test method was identified as being broken on a specific 
platform then we could make use of the groups field of the @Test 
type by making the method a member of a group that identifies its 
predicament. Something like this:


    @Test(groups={"state.broken.win.IA32"})
   public void myOtherTest() {
   ...
   }

The configuration for running tests on Windows would then 
specifically exclude any test method (or class) that was a member of 
that group.
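
(For illustration only, a minimal testng.xml sketch of such an exclusion;
the suite, test and package names below are placeholders rather than
anything agreed within the project:)

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="classlib-tests">
    <test name="api-tests-windows">
        <groups>
            <run>
                <!-- Drop anything known to be broken on this platform -->
                <exclude name="state.broken.win.IA32" />
            </run>
        </groups>
        <packages>
            <!-- Placeholder package pattern -->
            <package name="org.apache.harmony.tests.*" />
        </packages>
    </test>
</suite>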


Making a test method or type a member of a well-known group 
(well-known in the sense that the name and meaning has been agreed 
within the project) is essentially adding some descriptive attributes 
to the test. Like adjectives (the groups) and nouns (the tests) in 
the English language. To take another example, if there was a test 
class that contained methods only intended to be run on Windows and 
that were all specific to Harmony (i.e. not API tests) then  one 
could envisage the following kind of annotation:



@Test(groups={"type.impl", "os.win.IA32"})
public class MyTestClass {

   public void testOne() {
   ...
   }

   public void testTwo() {
   ...
   }

   @Test(enabled=false)
   public void brokenTest() {
   ...
   }
}

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for use 
inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony

* state.broken.<platform id>  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to 
decide whether or not to run from our suite configuration


* os.<platform id>  --  tests that are to be run only on the 
specified platform (a test could be member of more than one of these)



What does everyone else think ? Does such a scheme sound reasonable ?

Just one question: What's the default test annotation? I mean the 
successful api tests which will be run on every platform. Thanks a lot.


Best regards,
Richard


Hi Richard,

I think that just the basic @Test annotation on its own will suffice. 
Any better suggestions are welcome.


Best regards,
George




Thanks for reading this far.

Best regards,
George



George Harley wrote:

Hi,

Just seen Tim's note on test support classes and it really caught my 
attention as I have been mulling over this issue for a little while 
now. I think that it is a good time for us to return to the topic of 
class library test layouts.


The current proposal [1] sets out to segment our different types of 
test by placing them in different file locations. After looking at 
the recent changes to the LUNI module tests (where the layout 
guidelines were applied) I have a real concern that there are 
serious problems with this approach. We have started down a track of 
just continually growing the number of test source folders as new 
categories of test are identified and IMHO that is going to bring 
complexity and maintenance issues with these tests.


Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI
Stress
...and so on...


If you weigh up all of the different possible permutations and then 
consider that the above list is highly likely to be extended as 
things progress it is obvious that we are eventually heading for 
large amounts of related test code scattered or possibly duplicated 
across numerous hard wired source directories. How maintainable is 
that going to be ?


If we want to run different tests in different configurations then 
IMHO we need to be thinking a whole lot smarter. We need to be 
thinking about keeping tests for specific areas of functionality 
together (thus easing maintenance); we need something quick and 
simple to re-configure if necessary (pushing whole directories of 
files around the place does not seem a particularly lightweight 
approach); and something that is not going to potentially mess up 
contributed patches when the file they patch is found to have been 
recently pushed from source folder A to B.


To connect into 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread George Harley

Hi Richard,

Actually the Ant task always runs the tests in a forked VM. At present, 
however, the task does not support specifying the forked VM (i.e. there 
is no equivalent to the JUnit Ant task's jvm attribute). This matter 
has already been raised with the TestNG folks who seem happy to 
introduce this.


In the meantime we could run the tests using the Ant <java> task.
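
(A rough sketch of that idea, not a tested recipe: the property names,
the testng.jar location and the testng.xml file below are placeholders,
and the -d option simply names the report output directory:)

<java classname="org.testng.TestNG"
      fork="true"
      jvm="${test.jre.home}/bin/java"
      failonerror="true">
    <classpath>
        <pathelement location="${testng.jar}" />
        <pathelement location="${hy.luni.bin.test}" />
    </classpath>
    <arg value="-d" />
    <arg value="${testng.report.dir}" />
    <arg value="testng.xml" />
</java>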


Best regards,
George



Richard Liang wrote:
According to TestNG Ant Task [1], it seems that the TestNG Ant task 
does not support to fork a new JVM, that is, we must launch ant using 
Harmony itself. Any comments? Thanks a lot.


[1]http://testng.org/doc/ant.html

Best regards,
Richard

George Harley wrote:

Andrew Zhang wrote:

On 7/18/06, George Harley [EMAIL PROTECTED] wrote:


Oliver Deakin wrote:
 George Harley wrote:
 SNIP!

 Here the annotation on MyTestClass applies to all of its test 
methods.


 So what are the well-known TestNG groups that we could define 
for use

 inside Harmony ? Here are some of my initial thoughts:


 * type.impl  --  tests that are specific to Harmony

 So tests are implicitly API unless specified otherwise?

 I'm slightly confused by your definition of impl tests as tests 
that

are
 specific to Harmony. Does this mean that impl tests are only
 those that test classes in org.apache.harmony packages?
 I thought that impl was our way of saying tests that need to go on
 the bootclasspath.

 I think I just need a little clarification...


Hi Oliver,

I was using the definition of implementation-specific tests that we
currently have on the Harmony testing conventions web page. That is,
implementation-specific tests are those that are dependent on some
aspect of the Harmony implementation and would therefore not pass when
run against the RI or other conforming implementations. It's 
orthogonal

to the classpath/bootclasspath issue.


 * state.broken.platform id  --  tests bust on a specific platform

 * state.broken  --  tests broken on every platform but we want to
 decide whether or not to run from our suite configuration

 * os.platform id  --  tests that are to be run only on the
 specified platform (a test could be member of more than one of 
these)


 And the defaults for these are an unbroken state and runs on any
 platform.
 That makes sense...

 Will the platform ids be organised in a similar way to the 
platform ids

 we've discussed before for organisation of native code [1]?


The actual string used to identify a particular platform can be 
whatever

we want it to be, just so long as we are consistent. So, yes, the ids
mentioned in the referenced email would seem a good starting point. Do
we need to include a 32-bit/64-bit identifier ?


 So all tests are, by default, in an all-platforms (or shared) group.
 If a test fails on all Windows platforms, it is marked with
 state.broken.windows.
 If a test fails on Windows but only on, say, amd hardware,
 it is marked state.broken.windows.amd.


Yes. Agreed.


 Then when you come to run tests on your windows amd machine,
 you want to include all tests in the all-platform (shared) group,
 os.windows and os.windows.amd, and exclude all tests in
 the state.broken, state.broken.windows and state.broken.windows.amd
 groups.

 Does this tally with what you were thinking?


Yes, that is the idea.




 What does everyone else think ? Does such a scheme sound 
reasonable ?


 I think so - it seems to cover our current requirements. Thanks for
 coming up with this!


Thanks, but I don't see it as final yet really. It would be great to
prove the worth of this by doing a trial on one of the existing 
modules,

ideally something that contains tests that are platform-specific.



Hello George, how about doing a trial on NIO module?

So far as I know, there are several platform dependent tests in NIO 
module.

:)

The assert statements are commented out in these tests, with FIXME 
mark.


Furthermore, I have also found some platform-dependent behaviours of
FileChannel.
If TestNG is applied to NIO, I will supplement new tests for
FileChannel and

fix the bugs in the source code.

What's your opinion? Any suggestions/comments?

Thanks!



Hi Andrew,

That sounds like a very good idea. If there is agreement in the 
project that 5.0 annotations are the way to go (as opposed to the 
pre-5.0 Javadoc comment support offered by TestNG) then to the best 
of my knowledge all that is stopping us from doing this trial is the 
lack of a 5.0 VM to run the Harmony tests on. Hopefully that will be 
addressed soon. When it is I would be happy to get stuck into this 
trial.


Best regards,
George



Best regards,

George


 Regards,
 Oliver

 [1]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





 Thanks for reading this far.

 Best regards,
 George



 George Harley wrote:
 Hi,

 Just seen Tim's note on test support classes and it really 
caught my
 attention as I have been mulling over this issue for a little 
while
 now. I think that it is 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread Alexei Zakharov

Probably my previous message was not clear enough.
Why can't we just invoke everything, including Ant, on top of Harmony
for now? At least I was able to build and run the test-14 examples from
the TestNG 4.7 distribution solely on top of j9 + our classlib today.

C:\Java\testng-4.7\test-14> set JAVA_HOME=c:\Java\harmony\enhanced\classlib\trunk
\deploy\jdk\jre

C:\Java\testng-4.7\test-14> ant -Dbuild.compiler=org.eclipse.jdt.core.JDTCompiler
Adapter run
Buildfile: build.xml

prepare:

compile:
[echo]  -- Compiling JDK 1.4 tests --

run:
[echo]  -- Running JDK 1.4 tests   --
[echo]  -- testng-4.7-jdk14.jar  --

[testng-14] ===
[testng-14] TestNG JDK 1.4
[testng-14] Total tests run: 179, Failures: 10, Skips: 0
[testng-14] ===
...

Exactly the same results as with Sun JDK 1.4.
Note: you may need to tweak the build.xml a little bit to achieve this.

Thanks,

2006/7/19, George Harley [EMAIL PROTECTED]:

Hi Richard,

Actually the Ant task always runs the tests in a forked VM. At present,
however, the task does not support specifying the forked VM (i.e. there
is no equivalent to the JUnit Ant task's jvm attribute). This matter
has already been raised with the TestNG folks who seem happy to
introduce this.

In the meantime we could run the tests using the Ant java task.


Best regards,
George



Richard Liang wrote:
 According to TestNG Ant Task [1], it seems that the TestNG Ant task
 does not support to fork a new JVM, that is, we must launch ant using
 Harmony itself. Any comments? Thanks a lot.

 [1]http://testng.org/doc/ant.html

 Best regards,
 Richard

 George Harley wrote:
 Andrew Zhang wrote:
 On 7/18/06, George Harley [EMAIL PROTECTED] wrote:

 Oliver Deakin wrote:
  George Harley wrote:
  SNIP!
 
  Here the annotation on MyTestClass applies to all of its test
 methods.
 
  So what are the well-known TestNG groups that we could define
 for use
  inside Harmony ? Here are some of my initial thoughts:
 
 
  * type.impl  --  tests that are specific to Harmony
 
  So tests are implicitly API unless specified otherwise?
 
  I'm slightly confused by your definition of impl tests as tests
 that
 are
  specific to Harmony. Does this mean that impl tests are only
  those that test classes in org.apache.harmony packages?
  I thought that impl was our way of saying tests that need to go on
  the bootclasspath.
 
  I think I just need a little clarification...
 

 Hi Oliver,

 I was using the definition of implementation-specific tests that we
 currently have on the Harmony testing conventions web page. That is,
 implementation-specific tests are those that are dependent on some
 aspect of the Harmony implementation and would therefore not pass when
 run against the RI or other conforming implementations. It's
 orthogonal
 to the classpath/bootclasspath issue.


  * state.broken.platform id  --  tests bust on a specific platform
 
  * state.broken  --  tests broken on every platform but we want to
  decide whether or not to run from our suite configuration
 
  * os.platform id  --  tests that are to be run only on the
  specified platform (a test could be member of more than one of
 these)
 
  And the defaults for these are an unbroken state and runs on any
  platform.
  That makes sense...
 
  Will the platform ids be organised in a similar way to the
 platform ids
  we've discussed before for organisation of native code [1]?
 

 The actual string used to identify a particular platform can be
 whatever
 we want it to be, just so long as we are consistent. So, yes, the ids
 mentioned in the referenced email would seem a good starting point. Do
 we need to include a 32-bit/64-bit identifier ?


  So all tests are, by default, in an all-platforms (or shared) group.
  If a test fails on all Windows platforms, it is marked with
  state.broken.windows.
  If a test fails on Windows but only on, say, amd hardware,
  it is marked state.broken.windows.amd.
 

 Yes. Agreed.


  Then when you come to run tests on your windows amd machine,
  you want to include all tests in the all-platform (shared) group,
  os.windows and os.windows.amd, and exclude all tests in
  the state.broken, state.broken.windows and state.broken.windows.amd
  groups.
 
  Does this tally with what you were thinking?
 

 Yes, that is the idea.


 
 
  What does everyone else think ? Does such a scheme sound
 reasonable ?
 
  I think so - it seems to cover our current requirements. Thanks for
  coming up with this!
 

 Thanks, but I don't see it as final yet really. It would be great to
 prove the worth of this by doing a trial on one of the existing
 modules,
 ideally something that contains tests that are platform-specific.


 Hello George, how about doing a trial on NIO module?

 So far as I know, there are several platform dependent tests in NIO
 module.
 :)

 The assert 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread George Harley

Hi Alexei,

It's encouraging to hear that (Ant + TestNG + sample tests) all worked 
fine together on Harmony. In answer to your question I suppose that the 
ability to fork the tests in a separate VM means that we do not run the 
risk of possible bugs in Harmony affecting the test harness and 
therefore the outcome of the tests.


Best regards,
George


Alexei Zakharov wrote:

Probably my previous message was not clear enough.
Why can't we just invoke everything including ant on top of Harmony
for now? At least I was able to build and run test-14 examples from
TestNG 4.7 distribution solely on top of j9 + our classlib today.

C:\Java\testng-4.7\test-14set 
JAVA_HOME=c:\Java\harmony\enhanced\classlib\trunk

\deploy\jdk\jre

C:\Java\testng-4.7\test-14ant 
-Dbuild.compiler=org.eclipse.jdt.core.JDTCompiler

Adapter run
Buildfile: build.xml

prepare:

compile:
[echo]  -- Compiling JDK 1.4 tests --

run:
[echo]  -- Running JDK 1.4 tests   --
[echo]  -- testng-4.7-jdk14.jar  --

[testng-14] ===
[testng-14] TestNG JDK 1.4
[testng-14] Total tests run: 179, Failures: 10, Skips: 0
[testng-14] ===
...

Exactly the same results as with Sun JDK 1.4.
Note: you may need to tweak the build.xml a little bit to achieve this.

Thanks,

2006/7/19, George Harley [EMAIL PROTECTED]:

Hi Richard,

Actually the Ant task always runs the tests in a forked VM. At present,
however, the task does not support specifying the forked VM (i.e. there
is no equivalent to the JUnit Ant task's jvm attribute). This matter
has already been raised with the TestNG folks who seem happy to
introduce this.

In the meantime we could run the tests using the Ant java task.


Best regards,
George



Richard Liang wrote:
 According to TestNG Ant Task [1], it seems that the TestNG Ant task
 does not support to fork a new JVM, that is, we must launch ant using
 Harmony itself. Any comments? Thanks a lot.

 [1]http://testng.org/doc/ant.html

 Best regards,
 Richard

 George Harley wrote:
 Andrew Zhang wrote:
 On 7/18/06, George Harley [EMAIL PROTECTED] wrote:

 Oliver Deakin wrote:
  George Harley wrote:
  SNIP!
 
  Here the annotation on MyTestClass applies to all of its test
 methods.
 
  So what are the well-known TestNG groups that we could define
 for use
  inside Harmony ? Here are some of my initial thoughts:
 
 
  * type.impl  --  tests that are specific to Harmony
 
  So tests are implicitly API unless specified otherwise?
 
  I'm slightly confused by your definition of impl tests as tests
 that
 are
  specific to Harmony. Does this mean that impl tests are only
  those that test classes in org.apache.harmony packages?
  I thought that impl was our way of saying tests that need to 
go on

  the bootclasspath.
 
  I think I just need a little clarification...
 

 Hi Oliver,

 I was using the definition of implementation-specific tests that we
 currently have on the Harmony testing conventions web page. That 
is,

 implementation-specific tests are those that are dependent on some
 aspect of the Harmony implementation and would therefore not 
pass when

 run against the RI or other conforming implementations. It's
 orthogonal
 to the classpath/bootclasspath issue.


  * state.broken.platform id  --  tests bust on a specific 
platform

 
  * state.broken  --  tests broken on every platform but we 
want to

  decide whether or not to run from our suite configuration
 
  * os.platform id  --  tests that are to be run only on the
  specified platform (a test could be member of more than one of
 these)
 
  And the defaults for these are an unbroken state and runs on any
  platform.
  That makes sense...
 
  Will the platform ids be organised in a similar way to the
 platform ids
  we've discussed before for organisation of native code [1]?
 

 The actual string used to identify a particular platform can be
 whatever
 we want it to be, just so long as we are consistent. So, yes, 
the ids
 mentioned in the referenced email would seem a good starting 
point. Do

 we need to include a 32-bit/64-bit identifier ?


  So all tests are, by default, in an all-platforms (or shared) 
group.

  If a test fails on all Windows platforms, it is marked with
  state.broken.windows.
  If a test fails on Windows but only on, say, amd hardware,
  it is marked state.broken.windows.amd.
 

 Yes. Agreed.


  Then when you come to run tests on your windows amd machine,
  you want to include all tests in the all-platform (shared) group,
  os.windows and os.windows.amd, and exclude all tests in
  the state.broken, state.broken.windows and 
state.broken.windows.amd

  groups.
 
  Does this tally with what you were thinking?
 

 Yes, that is the idea.


 
 
  What does everyone else think ? Does such a scheme sound
 reasonable ?
 
  I think so - it seems to cover our current requirements. 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread Alexei Zakharov

Hi George,

Agreed, we may experience problems if the VM hangs or crashes. I
suggest this only as a temporary solution. BTW, the fact that the TestNG
Ant task still doesn't have such attributes looks like a sign to me that
TestNG may still be immature in some respects. I am still comparing TestNG
and JUnit.

Regards,

2006/7/19, George Harley [EMAIL PROTECTED]:

Hi Alexei,

It's encouraging to hear that (Ant + TestNG + sample tests) all worked
fine together on Harmony. In answer to your question I suppose that the
ability to fork the tests in a separate VM means that we do not run the
risk of possible bugs in Harmony affecting the test harness and
therefore the outcome of the tests.

Best regards,
George


Alexei Zakharov wrote:
 Probably my previous message was not clear enough.
 Why can't we just invoke everything including ant on top of Harmony
 for now? At least I was able to build and run test-14 examples from
 TestNG 4.7 distribution solely on top of j9 + our classlib today.

 C:\Java\testng-4.7\test-14set
 JAVA_HOME=c:\Java\harmony\enhanced\classlib\trunk
 \deploy\jdk\jre

 C:\Java\testng-4.7\test-14ant
 -Dbuild.compiler=org.eclipse.jdt.core.JDTCompiler
 Adapter run
 Buildfile: build.xml

 prepare:

 compile:
 [echo]  -- Compiling JDK 1.4 tests --

 run:
 [echo]  -- Running JDK 1.4 tests   --
 [echo]  -- testng-4.7-jdk14.jar  --

 [testng-14] ===
 [testng-14] TestNG JDK 1.4
 [testng-14] Total tests run: 179, Failures: 10, Skips: 0
 [testng-14] ===
 ...

 Exactly the same results as with Sun JDK 1.4.
 Note: you may need to tweak the build.xml a little bit to achieve this.

 Thanks,

 2006/7/19, George Harley [EMAIL PROTECTED]:
 Hi Richard,

 Actually the Ant task always runs the tests in a forked VM. At present,
 however, the task does not support specifying the forked VM (i.e. there
 is no equivalent to the JUnit Ant task's jvm attribute). This matter
 has already been raised with the TestNG folks who seem happy to
 introduce this.

 In the meantime we could run the tests using the Ant java task.


 Best regards,
 George



 Richard Liang wrote:
  According to TestNG Ant Task [1], it seems that the TestNG Ant task
  does not support to fork a new JVM, that is, we must launch ant using
  Harmony itself. Any comments? Thanks a lot.
 
  [1]http://testng.org/doc/ant.html
 
  Best regards,
  Richard
 
  George Harley wrote:
  Andrew Zhang wrote:
  On 7/18/06, George Harley [EMAIL PROTECTED] wrote:
 
  Oliver Deakin wrote:
   George Harley wrote:
   SNIP!
  
   Here the annotation on MyTestClass applies to all of its test
  methods.
  
   So what are the well-known TestNG groups that we could define
  for use
   inside Harmony ? Here are some of my initial thoughts:
  
  
   * type.impl  --  tests that are specific to Harmony
  
   So tests are implicitly API unless specified otherwise?
  
   I'm slightly confused by your definition of impl tests as tests
  that
  are
   specific to Harmony. Does this mean that impl tests are only
   those that test classes in org.apache.harmony packages?
   I thought that impl was our way of saying tests that need to
 go on
   the bootclasspath.
  
   I think I just need a little clarification...
  
 
  Hi Oliver,
 
  I was using the definition of implementation-specific tests that we
  currently have on the Harmony testing conventions web page. That
 is,
  implementation-specific tests are those that are dependent on some
  aspect of the Harmony implementation and would therefore not
 pass when
  run against the RI or other conforming implementations. It's
  orthogonal
  to the classpath/bootclasspath issue.
 
 
   * state.broken.platform id  --  tests bust on a specific
 platform
  
   * state.broken  --  tests broken on every platform but we
 want to
   decide whether or not to run from our suite configuration
  
   * os.platform id  --  tests that are to be run only on the
   specified platform (a test could be member of more than one of
  these)
  
   And the defaults for these are an unbroken state and runs on any
   platform.
   That makes sense...
  
   Will the platform ids be organised in a similar way to the
  platform ids
   we've discussed before for organisation of native code [1]?
  
 
  The actual string used to identify a particular platform can be
  whatever
  we want it to be, just so long as we are consistent. So, yes,
 the ids
  mentioned in the referenced email would seem a good starting
 point. Do
  we need to include a 32-bit/64-bit identifier ?
 
 
   So all tests are, by default, in an all-platforms (or shared)
 group.
   If a test fails on all Windows platforms, it is marked with
   state.broken.windows.
   If a test fails on Windows but only on, say, amd hardware,
   it is marked state.broken.windows.amd.
  
 
  Yes. Agreed.
 
 
   Then when you come to run 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread George Harley

Hi Alexei,

I just downloaded the latest working build of TestNG 5.0 [1] and support 
for the jvm attribute is in there. This is not the official release 
build.


Best regards,
George

[1] http://testng.org/testng-5.0.zip


Alexei Zakharov wrote:

Hi George,

Agreed, we may experience problems if the VM hangs or crashes. I
suggest this only as a temporary solution. BTW, the fact that the TestNG
Ant task still doesn't have such attributes looks like a sign to me that
TestNG may still be immature in some respects. I am still comparing TestNG
and JUnit.

Regards,

2006/7/19, George Harley [EMAIL PROTECTED]:

Hi Alexei,

It's encouraging to hear that (Ant + TestNG + sample tests) all worked
fine together on Harmony. In answer to your question I suppose that the
ability to fork the tests in a separate VM means that we do not run the
risk of possible bugs in Harmony affecting the test harness and
therefore the outcome of the tests.

Best regards,
George


Alexei Zakharov wrote:
 Probably my previous message was not clear enough.
 Why can't we just invoke everything including ant on top of Harmony
 for now? At least I was able to build and run test-14 examples from
 TestNG 4.7 distribution solely on top of j9 + our classlib today.

 C:\Java\testng-4.7\test-14set
 JAVA_HOME=c:\Java\harmony\enhanced\classlib\trunk
 \deploy\jdk\jre

 C:\Java\testng-4.7\test-14ant
 -Dbuild.compiler=org.eclipse.jdt.core.JDTCompiler
 Adapter run
 Buildfile: build.xml

 prepare:

 compile:
 [echo]  -- Compiling JDK 1.4 
tests --


 run:
 [echo]  -- Running JDK 1.4 
tests   --
 [echo]  -- 
testng-4.7-jdk14.jar  --


 [testng-14] ===
 [testng-14] TestNG JDK 1.4
 [testng-14] Total tests run: 179, Failures: 10, Skips: 0
 [testng-14] ===
 ...

 Exactly the same results as with Sun JDK 1.4.
 Note: you may need to tweak the build.xml a little bit to achieve
this.


 Thanks,

 2006/7/19, George Harley [EMAIL PROTECTED]:
 Hi Richard,

 Actually the Ant task always runs the tests in a forked VM. At 
present,
 however, the task does not support specifying the forked VM (i.e. 
there
 is no equivalent to the JUnit Ant task's jvm attribute). This 
matter

 has already been raised with the TestNG folks who seem happy to
 introduce this.

 In the meantime we could run the tests using the Ant java task.


 Best regards,
 George



 Richard Liang wrote:
  According to TestNG Ant Task [1], it seems that the TestNG Ant 
task
  does not support to fork a new JVM, that is, we must launch ant 
using

  Harmony itself. Any comments? Thanks a lot.
 
  [1]http://testng.org/doc/ant.html
 
  Best regards,
  Richard
 
  George Harley wrote:
  Andrew Zhang wrote:
  On 7/18/06, George Harley [EMAIL PROTECTED] wrote:
 
  Oliver Deakin wrote:
   George Harley wrote:
   SNIP!
  
   Here the annotation on MyTestClass applies to all of its test
  methods.
  
   So what are the well-known TestNG groups that we could define
  for use
   inside Harmony ? Here are some of my initial thoughts:
  
  
   * type.impl  --  tests that are specific to Harmony
  
   So tests are implicitly API unless specified otherwise?
  
   I'm slightly confused by your definition of impl tests as 
tests

  that
  are
   specific to Harmony. Does this mean that impl tests are only
   those that test classes in org.apache.harmony packages?
   I thought that impl was our way of saying tests that need to
 go on
   the bootclasspath.
  
   I think I just need a little clarification...
  
 
  Hi Oliver,
 
  I was using the definition of implementation-specific tests 
that we

  currently have on the Harmony testing conventions web page. That
 is,
  implementation-specific tests are those that are dependent on 
some

  aspect of the Harmony implementation and would therefore not
 pass when
  run against the RI or other conforming implementations. It's
  orthogonal
  to the classpath/bootclasspath issue.
 
 
   * state.broken.platform id  --  tests bust on a specific
 platform
  
   * state.broken  --  tests broken on every platform but we
 want to
   decide whether or not to run from our suite configuration
  
   * os.platform id  --  tests that are to be run only on the
   specified platform (a test could be member of more than 
one of

  these)
  
   And the defaults for these are an unbroken state and runs 
on any

   platform.
   That makes sense...
  
   Will the platform ids be organised in a similar way to the
  platform ids
   we've discussed before for organisation of native code [1]?
  
 
  The actual string used to identify a particular platform can be
  whatever
  we want it to be, just so long as we are consistent. So, yes,
 the ids
  mentioned in the referenced email would seem a good starting
 point. Do
  we need to include a 32-bit/64-bit identifier ?
 
 
   So all tests are, by default, in an 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread Richard Liang



George Harley wrote:

Hi Alexei,

It's encouraging to hear that (Ant + TestNG + sample tests) all worked 
fine together on Harmony. In answer to your question I suppose that 
the ability to fork the tests in a separate VM means that we do not 
run the risk of possible bugs in Harmony affecting the test harness 
and therefore the outcome of the tests.


Do you think it's reasonable to launch Ant using Harmony, and then have 
TestNG fork another Harmony VM to run our tests? Thanks a lot.


Best regards,
Richard.


Best regards,
George


Alexei Zakharov wrote:

Probably my previous message was not clear enough.
Why can't we just invoke everything including ant on top of Harmony
for now? At least I was able to build and run test-14 examples from
TestNG 4.7 distribution solely on top of j9 + our classlib today.

C:\Java\testng-4.7\test-14set 
JAVA_HOME=c:\Java\harmony\enhanced\classlib\trunk

\deploy\jdk\jre

C:\Java\testng-4.7\test-14ant 
-Dbuild.compiler=org.eclipse.jdt.core.JDTCompiler

Adapter run
Buildfile: build.xml

prepare:

compile:
[echo]  -- Compiling JDK 1.4 
tests --


run:
[echo]  -- Running JDK 1.4 
tests   --

[echo]  -- testng-4.7-jdk14.jar  --

[testng-14] ===
[testng-14] TestNG JDK 1.4
[testng-14] Total tests run: 179, Failures: 10, Skips: 0
[testng-14] ===
...

Exactly the same results as with Sun JDK 1.4.
Note: you may need to tweak the build.xml a little bit to achieve this.

Thanks,

2006/7/19, George Harley [EMAIL PROTECTED]:

Hi Richard,

Actually the Ant task always runs the tests in a forked VM. At present,
however, the task does not support specifying the forked VM (i.e. there
is no equivalent to the JUnit Ant task's jvm attribute). This matter
has already been raised with the TestNG folks who seem happy to
introduce this.

In the meantime we could run the tests using the Ant java task.


Best regards,
George



Richard Liang wrote:
 According to TestNG Ant Task [1], it seems that the TestNG Ant task
 does not support to fork a new JVM, that is, we must launch ant using
 Harmony itself. Any comments? Thanks a lot.

 [1]http://testng.org/doc/ant.html

 Best regards,
 Richard

 George Harley wrote:
 Andrew Zhang wrote:
 On 7/18/06, George Harley [EMAIL PROTECTED] wrote:

 Oliver Deakin wrote:
  George Harley wrote:
  SNIP!
 
  Here the annotation on MyTestClass applies to all of its test
 methods.
 
  So what are the well-known TestNG groups that we could define
 for use
  inside Harmony ? Here are some of my initial thoughts:
 
 
  * type.impl  --  tests that are specific to Harmony
 
  So tests are implicitly API unless specified otherwise?
 
  I'm slightly confused by your definition of impl tests as tests
 that
 are
  specific to Harmony. Does this mean that impl tests are only
  those that test classes in org.apache.harmony packages?
  I thought that impl was our way of saying tests that need to 
go on

  the bootclasspath.
 
  I think I just need a little clarification...
 

 Hi Oliver,

 I was using the definition of implementation-specific tests 
that we
 currently have on the Harmony testing conventions web page. 
That is,

 implementation-specific tests are those that are dependent on some
 aspect of the Harmony implementation and would therefore not 
pass when

 run against the RI or other conforming implementations. It's
 orthogonal
 to the classpath/bootclasspath issue.


  * state.broken.platform id  --  tests bust on a specific 
platform

 
  * state.broken  --  tests broken on every platform but we 
want to

  decide whether or not to run from our suite configuration
 
  * os.platform id  --  tests that are to be run only on the
  specified platform (a test could be member of more than one of
 these)
 
  And the defaults for these are an unbroken state and runs on any
  platform.
  That makes sense...
 
  Will the platform ids be organised in a similar way to the
 platform ids
  we've discussed before for organisation of native code [1]?
 

 The actual string used to identify a particular platform can be
 whatever
 we want it to be, just so long as we are consistent. So, yes, 
the ids
 mentioned in the referenced email would seem a good starting 
point. Do

 we need to include a 32-bit/64-bit identifier ?


  So all tests are, by default, in an all-platforms (or shared) 
group.

  If a test fails on all Windows platforms, it is marked with
  state.broken.windows.
  If a test fails on Windows but only on, say, amd hardware,
  it is marked state.broken.windows.amd.
 

 Yes. Agreed.


  Then when you come to run tests on your windows amd machine,
  you want to include all tests in the all-platform (shared) 
group,

  os.windows and os.windows.amd, and exclude all tests in
  the state.broken, state.broken.windows and 
state.broken.windows.amd

  groups.
 
  Does this tally with 

Re: [classlib] Testing conventions - a proposal

2006-07-19 Thread Richard Liang



George Harley wrote:

Richard Liang wrote:



George Harley wrote:

Hi,

If annotations were to be used to help us categorise tests in order 
to simplify the definition of test configurations - what's included 
and excluded etc - then a core set of annotations would need to be 
agreed by the project. Consider the possibilities that the TestNG 
@Test annotation offers us in this respect.


First, if a test method was identified as being broken and needed to 
be excluded from all test runs while awaiting investigation then it 
would be a simple matter of setting its enabled field like this:


   @Test(enabled=false)
   public void myTest() {
   ...
   }

Temporarily disabling a test method in this way means that it can be 
left in its original class and we do not have to refer to it in any 
suite configuration (e.g. in the suite xml file).


If a test method was identified as being broken on a specific 
platform then we could make use of the groups field of the @Test 
type by making the method a member of a group that identifies its 
predicament. Something like this:


    @Test(groups={"state.broken.win.IA32"})
   public void myOtherTest() {
   ...
   }

The configuration for running tests on Windows would then 
specifically exclude any test method (or class) that was a member of 
that group.


Making a test method or type a member of a well-known group 
(well-known in the sense that the name and meaning has been agreed 
within the project) is essentially adding some descriptive 
attributes to the test. Like adjectives (the groups) and nouns (the 
tests) in the English language. To take another example, if there 
was a test class that contained methods only intended to be run on 
Windows and that were all specific to Harmony (i.e. not API tests) 
then  one could envisage the following kind of annotation:



@Test(groups={"type.impl", "os.win.IA32"})
public class MyTestClass {

   public void testOne() {
   ...
   }

   public void testTwo() {
   ...
   }

   @Test(enabled=false)
   public void brokenTest() {
   ...
   }
}

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for 
use inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony

* state.broken.<platform id>  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to 
decide whether or not to run from our suite configuration


* os.<platform id>  --  tests that are to be run only on the 
specified platform (a test could be member of more than one of these)



What does everyone else think ? Does such a scheme sound reasonable ?

Just one question: What's the default test annotation? I mean the 
successful api tests which will be run on every platform. Thanks a lot.


Best regards,
Richard


Hi Richard,

I think that just the basic @Test annotation on its own will suffice. 
Any better suggestions are welcome.



Just thinking about how to filter out the target test groups :-)

I tried to use the following groups to define the win.IA32 API tests, 
but it seems that the tests with the default annotation @Test cannot be 
selected. Am I missing anything? Thanks a lot.


   <groups>
       <run>
           <include name=".*" />
           <include name="os.win.IA32" />
           <exclude name="type.impl" />
           <exclude name="state.broken" />
           <exclude name="state.broken.win.IA32" />
           <exclude name="os.linux.IA32" />
       </run>
   </groups>

The groups I defined:
@Test
@Test(groups={"os.win.IA32"})
@Test(groups={"os.win.IA32", "state.broken.win.IA32"})
@Test(groups={"type.impl"})
@Test(groups={"state.broken"})
@Test(groups={"os.linux.IA32"})
@Test(groups={"state.broken.linux.IA32"})
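
(A possible sketch, for illustration only and assuming that a group include
filter only ever selects methods that actually carry a group: an exclude-only
run block might let the plain @Test methods through as well. Same group names
as in the list above:)

   <groups>
       <run>
           <exclude name="type.impl" />
           <exclude name="state.broken" />
           <exclude name="state.broken.win.IA32" />
           <exclude name="os.linux.IA32" />
       </run>
   </groups>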

Best regards,
Richard.

Best regards,
George




Thanks for reading this far.

Best regards,
George



George Harley wrote:

Hi,

Just seen Tim's note on test support classes and it really caught 
my attention as I have been mulling over this issue for a little 
while now. I think that it is a good time for us to return to the 
topic of class library test layouts.


The current proposal [1] sets out to segment our different types of 
test by placing them in different file locations. After looking at 
the recent changes to the LUNI module tests (where the layout 
guidelines were applied) I have a real concern that there are 
serious problems with this approach. We have started down a track 
of just continually growing the number of test source folders as 
new categories of test are identified and IMHO that is going to 
bring complexity and maintenance issues with these tests.


Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI
Stress
...and so on...


If you weigh up all of the different possible permutations and then 
consider that the above 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread Oliver Deakin

George Harley wrote:

SNIP!

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for use 
inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony


So tests are implicitly API unless specified otherwise?

I'm slightly confused by your definition of impl tests as tests that are
specific to Harmony. Does this mean that impl tests are only
those that test classes in org.apache.harmony packages?
I thought that impl was our way of saying tests that need to go on
the bootclasspath.

I think I just need a little clarification...


* state.broken.<platform id>  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to 
decide whether or not to run from our suite configuration


* os.<platform id>  --  tests that are to be run only on the specified 
platform (a test could be member of more than one of these)


And the defaults for these are an unbroken state and runs on any platform.
That makes sense...

Will the platform ids be organised in a similar way to the platform ids
we've discussed before for organisation of native code [1]?

So all tests are, by default, in an all-platforms (or shared) group.
If a test fails on all Windows platforms, it is marked with
state.broken.windows.
If a test fails on Windows but only on, say, amd hardware,
it is marked state.broken.windows.amd.

Then when you come to run tests on your windows amd machine,
you want to include all tests in the all-platform (shared) group,
os.windows and os.windows.amd, and exclude all tests in
the state.broken, state.broken.windows and state.broken.windows.amd
groups.

Does this tally with what you were thinking?
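
(Purely as an illustration of that run configuration, a groups block in the
suite file might look something like this; "shared" is only a stand-in name
for the all-platforms group, and the other names follow the examples above:)

   <groups>
       <run>
           <include name="shared" />
           <include name="os.windows" />
           <include name="os.windows.amd" />
           <exclude name="state.broken" />
           <exclude name="state.broken.windows" />
           <exclude name="state.broken.windows.amd" />
       </run>
   </groups>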




What does everyone else think ? Does such a scheme sound reasonable ?


I think so - it seems to cover our current requirements. Thanks for
coming up with this!

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]




Thanks for reading this far.

Best regards,
George



George Harley wrote:

Hi,

Just seen Tim's note on test support classes and it really caught my 
attention as I have been mulling over this issue for a little while 
now. I think that it is a good time for us to return to the topic of 
class library test layouts.


The current proposal [1] sets out to segment our different types of 
test by placing them in different file locations. After looking at 
the recent changes to the LUNI module tests (where the layout 
guidelines were applied) I have a real concern that there are serious 
problems with this approach. We have started down a track of just 
continually growing the number of test source folders as new 
categories of test are identified and IMHO that is going to bring 
complexity and maintenance issues with these tests.


Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI
Stress
...and so on...


If you weigh up all of the different possible permutations and then 
consider that the above list is highly likely to be extended as 
things progress it is obvious that we are eventually heading for 
large amounts of related test code scattered or possibly duplicated 
across numerous hard wired source directories. How maintainable is 
that going to be ?


If we want to run different tests in different configurations then 
IMHO we need to be thinking a whole lot smarter. We need to be 
thinking about keeping tests for specific areas of functionality 
together (thus easing maintenance); we need something quick and 
simple to re-configure if necessary (pushing whole directories of 
files around the place does not seem a particularly lightweight 
approach); and something that is not going to potentially mess up 
contributed patches when the file they patch is found to have been 
recently pushed from source folder A to B.


To connect into another recent thread, there have been some posts 
lately about handling some test methods that fail on Harmony and have 
meant that entire test case classes have been excluded from our test 
runs. I have also been noticing some API test methods that pass fine 
on Harmony but fail when run against the RI. Are the different 
behaviours down to errors in the Harmony implementation ? An error in 
the RI implementation ? A bug in the RI Javadoc ? Only after some 
investigation has been carried out do we know for sure. That takes 
time. What do we do with the test methods in the meantime ? Do we 
push them round the file system into yet another new source folder ? 
IMHO we need a testing strategy that enables such problem methods 
to be tracked easily without disruption to the rest of the other tests.


A couple of weeks ago I mentioned that the TestNG framework [2] 
seemed like a reasonably good way of allowing us to both group 
together 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread George Harley

Oliver Deakin wrote:

George Harley wrote:

SNIP!

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for use 
inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony


So tests are implicitly API unless specified otherwise?

I'm slightly confused by your definition of impl tests as tests that are
specific to Harmony. Does this mean that impl tests are only
those that test classes in org.apache.harmony packages?
I thought that impl was our way of saying tests that need to go on
the bootclasspath.

I think I just need a little clarification...



Hi Oliver,

I was using the definition of implementation-specific tests that we 
currently have on the Harmony testing conventions web page. That is, 
implementation-specific tests are those that are dependent on some 
aspect of the Harmony implementation and would therefore not pass when 
run against the RI or other conforming implementations. It's orthogonal 
to the classpath/bootclasspath issue.




* state.broken.<platform id>  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to 
decide whether or not to run from our suite configuration


* os.<platform id>  --  tests that are to be run only on the 
specified platform (a test could be member of more than one of these)


And the defaults for these are an unbroken state and runs on any 
platform.

That makes sense...

Will the platform ids be organised in a similar way to the platform ids
we've discussed before for organisation of native code [1]?



The actual string used to identify a particular platform can be whatever 
we want it to be, just so long as we are consistent. So, yes, the ids 
mentioned in the referenced email would seem a good starting point. Do 
we need to include a 32-bit/64-bit identifier ?




So all tests are, by default, in an all-platforms (or shared) group.
If a test fails on all Windows platforms, it is marked with
state.broken.windows.
If a test fails on Windows but only on, say, amd hardware,
it is marked state.broken.windows.amd.



Yes. Agreed.



Then when you come to run tests on your windows amd machine,
you want to include all tests in the all-platform (shared) group,
os.windows and os.windows.amd, and exclude all tests in
the state.broken, state.broken.windows and state.broken.windows.amd
groups.

Does this tally with what you were thinking?



Yes, that is the idea.





What does everyone else think ? Does such a scheme sound reasonable ?


I think so - it seems to cover our current requirements. Thanks for
coming up with this!



Thanks, but I don't see it as final yet really. It would be great to 
prove the worth of this by doing a trial on one of the existing modules, 
ideally something that contains tests that are platform-specific.


Best regards,
George



Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





Thanks for reading this far.

Best regards,
George



George Harley wrote:

Hi,

Just seen Tim's note on test support classes and it really caught my 
attention as I have been mulling over this issue for a little while 
now. I think that it is a good time for us to return to the topic of 
class library test layouts.


The current proposal [1] sets out to segment our different types of 
test by placing them in different file locations. After looking at 
the recent changes to the LUNI module tests (where the layout 
guidelines were applied) I have a real concern that there are 
serious problems with this approach. We have started down a track of 
just continually growing the number of test source folders as new 
categories of test are identified and IMHO that is going to bring 
complexity and maintenance issues with these tests.


Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI
Stress
...and so on...


If you weigh up all of the different possible permutations and then 
consider that the above list is highly likely to be extended as 
things progress it is obvious that we are eventually heading for 
large amounts of related test code scattered or possibly duplicated 
across numerous hard wired source directories. How maintainable is 
that going to be ?


If we want to run different tests in different configurations then 
IMHO we need to be thinking a whole lot smarter. We need to be 
thinking about keeping tests for specific areas of functionality 
together (thus easing maintenance); we need something quick and 
simple to re-configure if necessary (pushing whole directories of 
files around the place does not seem a particularly lightweight 
approach); and something that is not going to potentially mess up 
contributed patches when the file they patch is found to have been 
recently pushed from 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread Oliver Deakin

George Harley wrote:

Oliver Deakin wrote:

George Harley wrote:

SNIP!

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for 
use inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony


So tests are implicitly API unless specified otherwise?

I'm slightly confused by your definition of impl tests as tests that 
are

specific to Harmony. Does this mean that impl tests are only
those that test classes in org.apache.harmony packages?
I thought that impl was our way of saying tests that need to go on
the bootclasspath.

I think I just need a little clarification...



Hi Oliver,

I was using the definition of implementation-specific tests that we 
currently have on the Harmony testing conventions web page. That is, 
implementation-specific tests are those that are dependent on some 
aspect of the Harmony implementation and would therefore not pass when 
run against the RI or other conforming implementations. It's 
orthogonal to the classpath/bootclasspath issue.


OK, that's what I imagined you meant. IMHO using api and impl
in this way makes the most sense (since, as you say, they do not
really relate to the classpath/bootclasspath issue).

So do we also need a pair of groups for classpath/bootclasspath
tests? I'm assuming that this is how we would handle this distinction,
rather than organising them into separate directories in the file system.





* state.broken.platform id  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to 
decide whether or not to run from our suite configuration


* os.platform id  --  tests that are to be run only on the 
specified platform (a test could be member of more than one of these)


And the defaults for these are an unbroken state and runs on any 
platform.

That makes sense...

Will the platform ids be organised in a similar way to the platform ids
we've discussed before for organisation of native code [1]?



The actual string used to identify a particular platform can be 
whatever we want it to be, just so long as we are consistent. So, yes, 
the ids mentioned in the referenced email would seem a good starting 
point. Do we need to include a 32-bit/64-bit identifier ?


I cannot immediately think of any obvious 32/64-bit specific tests that we
might require in the future (although I'd be interested to know if anyone
can think of any!). However, if the need did arise, then I would
suggest that this is incorporated as another tag on the end of the
group name e.g. os.linux.ppc.32.
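
If we did go down that route, the annotation would presumably just grow an 
extra segment. Purely as an illustration (the exact id and the need for it 
are not agreed, and the class name is made up):

import org.testng.annotations.Test;

public class WordSizeExampleTest {

    // Hypothetical group id following the os.<platform>.<word size> naming
    // suggested above: 32-bit Linux/PPC only.
    @Test(groups = {"os.linux.ppc.32"})
    public void exercisesThirtyTwoBitBehaviour() {
        // 32-bit specific behaviour
    }
}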






So all tests are, by default, in an all-platforms (or shared) group.
If a test fails on all Windows platforms, it is marked with
state.broken.windows.
If a test fails on Windows but only on, say, amd hardware,
it is marked state.broken.windows.amd.



Yes. Agreed.



Then when you come to run tests on your windows amd machine,
you want to include all tests in the all-platform (shared) group,
os.windows and os.windows.amd, and exclude all tests in
the state.broken, state.broken.windows and state.broken.windows.amd
groups.

Does this tally with what you were thinking?



Yes, that is the idea.





What does everyone else think ? Does such a scheme sound reasonable ?


I think so - it seems to cover our current requirements. Thanks for
coming up with this!



Thanks, but I don't see it as final yet really. It would be great to 
prove the worth of this by doing a trial on one of the existing 
modules, ideally something that contains tests that are 
platform-specific.


Thanks for volunteering... ;)

...but seriously, do any of our modules currently contain platform 
specific tests?

Have you attempted a TestNG trial on any of the modules (with or
without platform specific tests) and, if so, was it 
simpler/harder/better/worse

than our current setup?

Regards,
Oliver



Best regards,
George



Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





Thanks for reading this far.

Best regards,
George



George Harley wrote:

Hi,

Just seen Tim's note on test support classes and it really caught 
my attention as I have been mulling over this issue for a little 
while now. I think that it is a good time for us to return to the 
topic of class library test layouts.


The current proposal [1] sets out to segment our different types of 
test by placing them in different file locations. After looking at 
the recent changes to the LUNI module tests (where the layout 
guidelines were applied) I have a real concern that there are 
serious problems with this approach. We have started down a track 
of just continually growing the number of test source folders as 
new categories of test are identified and IMHO that is going to 
bring complexity and maintenance issues with these tests.


Consider the dimensions of tests that we have ...

API

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread Andrew Zhang

On 7/18/06, George Harley [EMAIL PROTECTED] wrote:


Oliver Deakin wrote:
 George Harley wrote:
 SNIP!

 Here the annotation on MyTestClass applies to all of its test methods.

 So what are the well-known TestNG groups that we could define for use
 inside Harmony ? Here are some of my initial thoughts:


 * type.impl  --  tests that are specific to Harmony

 So tests are implicitly API unless specified otherwise?

 I'm slightly confused by your definition of impl tests as tests that
are
 specific to Harmony. Does this mean that impl tests are only
 those that test classes in org.apache.harmony packages?
 I thought that impl was our way of saying tests that need to go on
 the bootclasspath.

 I think I just need a little clarification...


Hi Oliver,

I was using the definition of implementation-specific tests that we
currently have on the Harmony testing conventions web page. That is,
implementation-specific tests are those that are dependent on some
aspect of the Harmony implementation and would therefore not pass when
run against the RI or other conforming implementations. It's orthogonal
to the classpath/bootclasspath issue.


 * state.broken.platform id  --  tests bust on a specific platform

 * state.broken  --  tests broken on every platform but we want to
 decide whether or not to run from our suite configuration

 * os.platform id  --  tests that are to be run only on the
 specified platform (a test could be member of more than one of these)

 And the defaults for these are an unbroken state and runs on any
 platform.
 That makes sense...

 Will the platform ids be organised in a similar way to the platform ids
 we've discussed before for organisation of native code [1]?


The actual string used to identify a particular platform can be whatever
we want it to be, just so long as we are consistent. So, yes, the ids
mentioned in the referenced email would seem a good starting point. Do
we need to include a 32-bit/64-bit identifier ?


 So all tests are, by default, in an all-platforms (or shared) group.
 If a test fails on all Windows platforms, it is marked with
 state.broken.windows.
 If a test fails on Windows but only on, say, amd hardware,
 it is marked state.broken.windows.amd.


Yes. Agreed.


 Then when you come to run tests on your windows amd machine,
 you want to include all tests in the all-platform (shared) group,
 os.windows and os.windows.amd, and exclude all tests in
 the state.broken, state.broken.windows and state.broken.windows.amd
 groups.

 Does this tally with what you were thinking?


Yes, that is the idea.




 What does everyone else think ? Does such a scheme sound reasonable ?

 I think so - it seems to cover our current requirements. Thanks for
 coming up with this!


Thanks, but I don't see it as final yet really. It would be great to
prove the worth of this by doing a trial on one of the existing modules,
ideally something that contains tests that are platform-specific.



Hello George, how about doing a trial on NIO module?

So far as I know, there are several platform dependent tests in NIO module.
:)

The assert statements are commented out in these tests, with a FIXME mark.

Furthermore, I have also found some platform-dependent behaviours of FileChannel.
If TestNG is applied to NIO, I will supplement new tests for FileChannel and
fix the bugs in the source code.

What's your opinion? Any suggestions/comments?

Thanks!

Best regards,

George


 Regards,
 Oliver

 [1]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL
 PROTECTED]



 Thanks for reading this far.

 Best regards,
 George



 George Harley wrote:
 Hi,

 Just seen Tim's note on test support classes and it really caught my
 attention as I have been mulling over this issue for a little while
 now. I think that it is a good time for us to return to the topic of
 class library test layouts.

 The current proposal [1] sets out to segment our different types of
 test by placing them in different file locations. After looking at
 the recent changes to the LUNI module tests (where the layout
 guidelines were applied) I have a real concern that there are
 serious problems with this approach. We have started down a track of
 just continually growing the number of test source folders as new
 categories of test are identified and IMHO that is going to bring
 complexity and maintenance issues with these tests.

 Consider the dimensions of tests that we have ...

 API
 Harmony-specific
 Platform-specific
 Run on classpath
 Run on bootclasspath
 Behaves different between Harmony and RI
 Stress
 ...and so on...


 If you weigh up all of the different possible permutations and then
 consider that the above list is highly likely to be extended as
 things progress it is obvious that we are eventually heading for
 large amounts of related test code scattered or possibly duplicated
 across numerous hard wired source directories. How maintainable is
 that going to be ?

 If we want to run different tests in 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread Alexei Zakharov

Hi,

George wrote:

 Thanks, but I don't see it as final yet really. It would be great to
 prove the worth of this by doing a trial on one of the existing modules,
 ideally something that contains tests that are platform-specific.


I volunteer to do this trial for the beans module. I'm not sure that beans
contains any platform-specific tests, but I know for sure it has a lot of
failed tests - so we can try TestNG with a real workload. I would also
like to do the same job with JUnit 4.0 and compare the results -
exactly what is simpler/harder/better etc. in real life.

If Andrew does the same job for nio we will have two separate
experiences that help us to move further in choosing the right testing
framework.
Any thoughts, objections?

Thanks,

2006/7/18, Andrew Zhang [EMAIL PROTECTED]:

On 7/18/06, George Harley [EMAIL PROTECTED] wrote:

 Oliver Deakin wrote:
  George Harley wrote:
  SNIP!
 
  Here the annotation on MyTestClass applies to all of its test methods.
 
  So what are the well-known TestNG groups that we could define for use
  inside Harmony ? Here are some of my initial thoughts:
 
 
  * type.impl  --  tests that are specific to Harmony
 
  So tests are implicitly API unless specified otherwise?
 
  I'm slightly confused by your definition of impl tests as tests that
 are
  specific to Harmony. Does this mean that impl tests are only
  those that test classes in org.apache.harmony packages?
  I thought that impl was our way of saying tests that need to go on
  the bootclasspath.
 
  I think I just need a little clarification...
 

 Hi Oliver,

 I was using the definition of implementation-specific tests that we
 currently have on the Harmony testing conventions web page. That is,
 implementation-specific tests are those that are dependent on some
 aspect of the Harmony implementation and would therefore not pass when
 run against the RI or other conforming implementations. It's orthogonal
 to the classpath/bootclasspath issue.


  * state.broken.platform id  --  tests bust on a specific platform
 
  * state.broken  --  tests broken on every platform but we want to
  decide whether or not to run from our suite configuration
 
  * os.platform id  --  tests that are to be run only on the
  specified platform (a test could be member of more than one of these)
 
  And the defaults for these are an unbroken state and runs on any
  platform.
  That makes sense...
 
  Will the platform ids be organised in a similar way to the platform ids
  we've discussed before for organisation of native code [1]?
 

 The actual string used to identify a particular platform can be whatever
 we want it to be, just so long as we are consistent. So, yes, the ids
 mentioned in the referenced email would seem a good starting point. Do
 we need to include a 32-bit/64-bit identifier ?


  So all tests are, by default, in an all-platforms (or shared) group.
  If a test fails on all Windows platforms, it is marked with
  state.broken.windows.
  If a test fails on Windows but only on, say, amd hardware,
  it is marked state.broken.windows.amd.
 

 Yes. Agreed.


  Then when you come to run tests on your windows amd machine,
  you want to include all tests in the all-platform (shared) group,
  os.windows and os.windows.amd, and exclude all tests in
  the state.broken, state.broken.windows and state.broken.windows.amd
  groups.
 
  Does this tally with what you were thinking?
 

 Yes, that is the idea.


 
 
  What does everyone else think ? Does such a scheme sound reasonable ?
 
  I think so - it seems to cover our current requirements. Thanks for
  coming up with this!
 

 Thanks, but I don't see it as final yet really. It would be great to
 prove the worth of this by doing a trial on one of the existing modules,
 ideally something that contains tests that are platform-specific.


Hello George, how about doing a trial on NIO module?

So far as I know, there are several platform dependent tests in NIO module.
:)

The assert statements are commented out in these tests, with a FIXME mark.

Furthermore, I have also found some platform-dependent behaviours of FileChannel.
If TestNG is applied to NIO, I will supplement new tests for FileChannel and
fix the bugs in the source code.

What's your opinion? Any suggestions/comments?

Thanks!

Best regards,
 George


  Regards,
  Oliver
 
  [1]
 
 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL 
PROTECTED]
 
 
 
  Thanks for reading this far.
 
  Best regards,
  George
 
 
 
  George Harley wrote:
  Hi,
 
  Just seen Tim's note on test support classes and it really caught my
  attention as I have been mulling over this issue for a little while
  now. I think that it is a good time for us to return to the topic of
  class library test layouts.
 
  The current proposal [1] sets out to segment our different types of
  test by placing them in different file locations. After looking at
  the recent changes to the LUNI module tests (where the layout
  guidelines were applied) I 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread Andrew Zhang

On 7/18/06, Alexei Zakharov [EMAIL PROTECTED] wrote:


Hi,

George wrote:
  Thanks, but I don't see it as final yet really. It would be great to
  prove the worth of this by doing a trial on one of the existing
modules,
  ideally something that contains tests that are platform-specific.

I volunteer to do this trial for the beans module. I'm not sure that beans
contains any platform-specific tests, but I know for sure it has a lot of
failed tests - so we can try TestNG with a real workload. I would also
like to do the same job with JUnit 4.0 and compare the results -
exactly what is simpler/harder/better etc. in real life.



Alexei, great! :)

If Andrew does the same job for nio we will have two separate

experiences that help us to move further in choosing the right testing
framework.



So shall we move to the next step now? That is to say, integrate TestNG and define the
annotations (George has given the first version :) ).

If no one objects, I volunteer to have a try on nio module. :)
Thanks!

Any thoughts, objections?


Thanks,

2006/7/18, Andrew Zhang [EMAIL PROTECTED]:
 On 7/18/06, George Harley [EMAIL PROTECTED] wrote:
 
  Oliver Deakin wrote:
   George Harley wrote:
   SNIP!
  
   Here the annotation on MyTestClass applies to all of its test
methods.
  
   So what are the well-known TestNG groups that we could define for
use
   inside Harmony ? Here are some of my initial thoughts:
  
  
   * type.impl  --  tests that are specific to Harmony
  
   So tests are implicitly API unless specified otherwise?
  
   I'm slightly confused by your definition of impl tests as tests
that
  are
   specific to Harmony. Does this mean that impl tests are only
   those that test classes in org.apache.harmony packages?
   I thought that impl was our way of saying tests that need to go on
   the bootclasspath.
  
   I think I just need a little clarification...
  
 
  Hi Oliver,
 
  I was using the definition of implementation-specific tests that we
  currently have on the Harmony testing conventions web page. That is,
  implementation-specific tests are those that are dependent on some
  aspect of the Harmony implementation and would therefore not pass when
  run against the RI or other conforming implementations. It's
orthogonal
  to the classpath/bootclasspath issue.
 
 
   * state.broken.platform id  --  tests bust on a specific platform
  
   * state.broken  --  tests broken on every platform but we want to
   decide whether or not to run from our suite configuration
  
   * os.platform id  --  tests that are to be run only on the
   specified platform (a test could be member of more than one of
these)
  
   And the defaults for these are an unbroken state and runs on any
   platform.
   That makes sense...
  
   Will the platform ids be organised in a similar way to the platform
ids
   we've discussed before for organisation of native code [1]?
  
 
  The actual string used to identify a particular platform can be
whatever
  we want it to be, just so long as we are consistent. So, yes, the ids
  mentioned in the referenced email would seem a good starting point. Do
  we need to include a 32-bit/64-bit identifier ?
 
 
   So all tests are, by default, in an all-platforms (or shared) group.
   If a test fails on all Windows platforms, it is marked with
   state.broken.windows.
   If a test fails on Windows but only on, say, amd hardware,
   it is marked state.broken.windows.amd.
  
 
  Yes. Agreed.
 
 
   Then when you come to run tests on your windows amd machine,
   you want to include all tests in the all-platform (shared) group,
   os.windows and os.windows.amd, and exclude all tests in
   the state.broken, state.broken.windows and state.broken.windows.amd
   groups.
  
   Does this tally with what you were thinking?
  
 
  Yes, that is the idea.
 
 
  
  
   What does everyone else think ? Does such a scheme sound reasonable
?
  
   I think so - it seems to cover our current requirements. Thanks for
   coming up with this!
  
 
  Thanks, but I don't see it as final yet really. It would be great to
  prove the worth of this by doing a trial on one of the existing
modules,
  ideally something that contains tests that are platform-specific.


 Hello George, how about doing a trial on NIO module?

 So far as I know, there are several platform dependent tests in NIO
module.
 :)

 The assert statements are commented out in these tests, with a FIXME mark.

 Furthermore, I have also found some platform-dependent behaviours of FileChannel.
 If TestNG is applied to NIO, I will supplement new tests for FileChannel and
 fix the bugs in the source code.

 What's your opinion? Any suggestions/comments?

 Thanks!

 Best regards,
  George
 
 
   Regards,
   Oliver
  
   [1]
  
 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL
 PROTECTED]
  
  
  
   Thanks for reading this far.
  
   Best regards,
   George
  
  
  
   George Harley wrote:
   Hi,
  
   Just seen Tim's note on test support classes and it really caught
my
  

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread George Harley

Oliver Deakin wrote:

George Harley wrote:

Oliver Deakin wrote:

George Harley wrote:

SNIP!

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for 
use inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony


So tests are implicitly API unless specified otherwise?

I'm slightly confused by your definition of impl tests as tests 
that are

specific to Harmony. Does this mean that impl tests are only
those that test classes in org.apache.harmony packages?
I thought that impl was our way of saying tests that need to go on
the bootclasspath.

I think I just need a little clarification...



Hi Oliver,

I was using the definition of implementation-specific tests that we 
currently have on the Harmony testing conventions web page. That is, 
implementation-specific tests are those that are dependent on some 
aspect of the Harmony implementation and would therefore not pass 
when run against the RI or other conforming implementations. It's 
orthogonal to the classpath/bootclasspath issue.


OK, that's what I imagined you meant. IMHO using api and impl
in this way makes the most sense (since, as you say, they do not
really relate to the classpath/bootclasspath issue).

So do we also need a pair of groups for classpath/bootclasspath
tests? I'm assuming that this is how we would handle this distinction,
rather than organising them into separate directories in the file system.



Hi Oliver,


I guess that would be possible but, given that the intended classloader 
of a test is probably not going to vary much, I would have no objection 
to that distinction being made in separate directories in the file 
system. In other words, I see the benefits of a suite-driven system 
(TestNG or whatever) as being more applicable to those attributes of 
tests that we identify as being more susceptible to change. Examples 
like currently only works on platform X or broken on platform Y or 
only works when run against Harmony seem like good candidates, 
especially when the time comes to grow Harmony on other platforms. Right 
now, I don't see the runtime classloader fitting into that category. 
Maybe I'm wrong.







* state.broken.platform id  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to 
decide whether or not to run from our suite configuration


* os.platform id  --  tests that are to be run only on the 
specified platform (a test could be member of more than one of these)


And the defaults for these are an unbroken state and runs on any 
platform.

That makes sense...

Will the platform ids be organised in a similar way to the platform ids
we've discussed before for organisation of native code [1]?



The actual string used to identify a particular platform can be 
whatever we want it to be, just so long as we are consistent. So, 
yes, the ids mentioned in the referenced email would seem a good 
starting point. Do we need to include a 32-bit/64-bit identifier ?


I cannot immediately think of any obvious 32/64-bit specific tests 
that we

might require in the future (although I'd be interested to know if anyone
can think of any!). However, if the need did arise, then I would
suggest that this is incorporated as another tag on the end of the
group name e.g. os.linux.ppc.32.




Right.





So all tests are, by default, in an all-platforms (or shared) group.
If a test fails on all Windows platforms, it is marked with
state.broken.windows.
If a test fails on Windows but only on, say, amd hardware,
it is marked state.broken.windows.amd.



Yes. Agreed.



Then when you come to run tests on your windows amd machine,
you want to include all tests in the all-platform (shared) group,
os.windows and os.windows.amd, and exclude all tests in
the state.broken, state.broken.windows and state.broken.windows.amd
groups.

Does this tally with what you were thinking?



Yes, that is the idea.





What does everyone else think ? Does such a scheme sound reasonable ?


I think so - it seems to cover our current requirements. Thanks for
coming up with this!



Thanks, but I don't see it as final yet really. It would be great to 
prove the worth of this by doing a trial on one of the existing 
modules, ideally something that contains tests that are 
platform-specific.


Thanks for volunteering... ;)

...but seriously, do any of our modules currently contain platform 
specific tests?

Have you attempted a TestNG trial on any of the modules (with or
without platform specific tests) and, if so, was it 
simpler/harder/better/worse

than our current setup?

Regards,
Oliver



Yes, I believe that there are platform specific tests out there. NIO and 
auth are two examples that spring to mind. My own experiments with 
TestNG have so far not been with our modules but in separate projects. 
Running the tests didn't seem all that different to what we currently 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread George Harley

Andrew Zhang wrote:

On 7/18/06, George Harley [EMAIL PROTECTED] wrote:


Oliver Deakin wrote:
 George Harley wrote:
 SNIP!

 Here the annotation on MyTestClass applies to all of its test 
methods.


 So what are the well-known TestNG groups that we could define for use
 inside Harmony ? Here are some of my initial thoughts:


 * type.impl  --  tests that are specific to Harmony

 So tests are implicitly API unless specified otherwise?

 I'm slightly confused by your definition of impl tests as tests that
are
 specific to Harmony. Does this mean that impl tests are only
 those that test classes in org.apache.harmony packages?
 I thought that impl was our way of saying tests that need to go on
 the bootclasspath.

 I think I just need a little clarification...


Hi Oliver,

I was using the definition of implementation-specific tests that we
currently have on the Harmony testing conventions web page. That is,
implementation-specific tests are those that are dependent on some
aspect of the Harmony implementation and would therefore not pass when
run against the RI or other conforming implementations. It's orthogonal
to the classpath/bootclasspath issue.


 * state.broken.platform id  --  tests bust on a specific platform

 * state.broken  --  tests broken on every platform but we want to
 decide whether or not to run from our suite configuration

 * os.platform id  --  tests that are to be run only on the
 specified platform (a test could be member of more than one of these)

 And the defaults for these are an unbroken state and runs on any
 platform.
 That makes sense...

 Will the platform ids be organised in a similar way to the platform 
ids

 we've discussed before for organisation of native code [1]?


The actual string used to identify a particular platform can be whatever
we want it to be, just so long as we are consistent. So, yes, the ids
mentioned in the referenced email would seem a good starting point. Do
we need to include a 32-bit/64-bit identifier ?


 So all tests are, by default, in an all-platforms (or shared) group.
 If a test fails on all Windows platforms, it is marked with
 state.broken.windows.
 If a test fails on Windows but only on, say, amd hardware,
 it is marked state.broken.windows.amd.


Yes. Agreed.


 Then when you come to run tests on your windows amd machine,
 you want to include all tests in the all-platform (shared) group,
 os.windows and os.windows.amd, and exclude all tests in
 the state.broken, state.broken.windows and state.broken.windows.amd
 groups.

 Does this tally with what you were thinking?


Yes, that is the idea.




 What does everyone else think ? Does such a scheme sound reasonable ?

 I think so - it seems to cover our current requirements. Thanks for
 coming up with this!


Thanks, but I don't see it as final yet really. It would be great to
prove the worth of this by doing a trial on one of the existing modules,
ideally something that contains tests that are platform-specific.



Hello George, how about doing a trial on NIO module?

So far as I know, there are several platform dependent tests in NIO 
module.

:)

The assert statements are commented out in these tests, with a FIXME mark.

Furthermore, I have also found some platform-dependent behaviours of FileChannel.
If TestNG is applied to NIO, I will supplement new tests for FileChannel and
fix the bugs in the source code.

What's your opinion? Any suggestions/comments?

Thanks!



Hi Andrew,

That sounds like a very good idea. If there is agreement in the project 
that 5.0 annotations are the way to go (as opposed to the pre-5.0 
Javadoc comment support offered by TestNG) then to the best of my 
knowledge all that is stopping us from doing this trial is the lack of a 
5.0 VM to run the Harmony tests on. Hopefully that will be addressed 
soon. When it is I would be happy to get stuck into this trial.


Best regards,
George



Best regards,

George


 Regards,
 Oliver

 [1]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





 Thanks for reading this far.

 Best regards,
 George



 George Harley wrote:
 Hi,

 Just seen Tim's note on test support classes and it really caught my
 attention as I have been mulling over this issue for a little while
 now. I think that it is a good time for us to return to the topic of
 class library test layouts.

 The current proposal [1] sets out to segment our different types of
 test by placing them in different file locations. After looking at
 the recent changes to the LUNI module tests (where the layout
 guidelines were applied) I have a real concern that there are
 serious problems with this approach. We have started down a track of
 just continually growing the number of test source folders as new
 categories of test are identified and IMHO that is going to bring
 complexity and maintenance issues with these tests.

 Consider the dimensions of tests that we have ...

 API
 Harmony-specific
 Platform-specific
 Run on 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread Oliver Deakin

George Harley wrote:

Oliver Deakin wrote:

George Harley wrote:

Oliver Deakin wrote:

George Harley wrote:

SNIP!

Here the annotation on MyTestClass applies to all of its test 
methods.


So what are the well-known TestNG groups that we could define for 
use inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony


So tests are implicitly API unless specified otherwise?

I'm slightly confused by your definition of impl tests as tests 
that are

specific to Harmony. Does this mean that impl tests are only
those that test classes in org.apache.harmony packages?
I thought that impl was our way of saying tests that need to go on
the bootclasspath.

I think I just need a little clarification...



Hi Oliver,

I was using the definition of implementation-specific tests that we 
currently have on the Harmony testing conventions web page. That is, 
implementation-specific tests are those that are dependent on some 
aspect of the Harmony implementation and would therefore not pass 
when run against the RI or other conforming implementations. It's 
orthogonal to the classpath/bootclasspath issue.


OK, that's what I imagined you meant. IMHO using api and impl
in this way makes the most sense (since, as you say, they do not
really relate to the classpath/bootclasspath issue).

So do we also need a pair of groups for classpath/bootclasspath
tests? I'm assuming that this is how we would handle this distinction,
rather than organising them into separate directories in the file 
system.




Hi Oliver,


I guess that would be possible but, given that the intended 
classloader of a test is probably not going to vary much, I would have 
no objection to that distinction being made in separate directories in 
the file system. In other words, I see the benefits of a suite-driven 
system (TestNG or whatever) as being more applicable to those 
attributes of tests that we identify as being more susceptible to 
change. Examples like currently only works on platform X or broken 
on platform Y or only works when run against Harmony seem like good 
candidates, especially when the time comes to grow Harmony on other 
platforms. Right now, I don't see the runtime classloader fitting into 
that category. Maybe I'm wrong.




Right - the only thing that prompted me to ask about it was the 
possibility of
uniting tests for a particular class into a single test class. With a 
separate
directory structure for classpath and bootclasspath tests there are 
often two

test classes for each class-under-test - making this distinction with
groups allows us to keep all tests for a class in a single file.
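
Purely to illustrate that point (the loader group names below are hypothetical 
- nothing like them has been agreed in this thread - and the class is made up), 
the two test classes could collapse into one:

import org.testng.annotations.Test;

public class VectorExampleTest {

    // Ordinary API test, happy to run from the classpath.
    @Test
    public void addGrowsTheVector() {
        // behaviour visible through the public API
    }

    // Needs the bootclasspath to reach package-private internals;
    // "loader.boot" is a made-up group name used only for illustration.
    @Test(groups = {"loader.boot"})
    public void internalCapacityDoubling() {
        // implementation detail only reachable on the bootclasspath
    }
}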

However, I don't feel strongly about this - as long as the distinction 
is clear

and easy to maintain then I'm happy. Also, I seem to remember a
discussion about splitting the tests up into separate classpath and
bootclasspath directories so that IDEs could compile them into
separate bin directories, and run them with the right config when not
using the Ant scripts [1]. This seems like a decent reason to go
with separate directories.

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]








* state.broken.platform id  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to 
decide whether or not to run from our suite configuration


* os.platform id  --  tests that are to be run only on the 
specified platform (a test could be member of more than one of these)


And the defaults for these are an unbroken state and runs on any 
platform.

That makes sense...

Will the platform ids be organised in a similar way to the platform 
ids

we've discussed before for organisation of native code [1]?



The actual string used to identify a particular platform can be 
whatever we want it to be, just so long as we are consistent. So, 
yes, the ids mentioned in the referenced email would seem a good 
starting point. Do we need to include a 32-bit/64-bit identifier ?


I cannot immediately think of any obvious 32/64-bit specific tests 
that we

might require in the future (although I'd be interested to know if anyone
can think of any!). However, if the need did arise, then I would
suggest that this is incorporated as another tag on the end of the
group name e.g. os.linux.ppc.32.




Right.





So all tests are, by default, in an all-platforms (or shared) group.
If a test fails on all Windows platforms, it is marked with
state.broken.windows.
If a test fails on Windows but only on, say, amd hardware,
it is marked state.broken.windows.amd.



Yes. Agreed.



Then when you come to run tests on your windows amd machine,
you want to include all tests in the all-platform (shared) group,
os.windows and os.windows.amd, and exclude all tests in
the state.broken, state.broken.windows and state.broken.windows.amd
groups.

Does this tally with what you were thinking?



Yes, that is the idea.





What 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread George Harley

Andrew Zhang wrote:

On 7/18/06, Alexei Zakharov [EMAIL PROTECTED] wrote:


Hi,

George wrote:
  Thanks, but I don't see it as final yet really. It would be great to
  prove the worth of this by doing a trial on one of the existing
modules,
  ideally something that contains tests that are platform-specific.

I volunteer to do this trial for the beans module. I'm not sure that beans
contains any platform-specific tests, but I know for sure it has a lot of
failed tests - so we can try TestNG with a real workload. I would also
like to do the same job with JUnit 4.0 and compare the results -
exactly what is simpler/harder/better etc. in real life.



Alexei, great! :)

If Andrew does the same job for nio we will have two separate

experiences that help us to move further in choosing the right testing
framework.



So shall we move to the next step now? That is to say, integrate TestNG and
define the annotations (George has given the first version :) ).

If no one objects, I volunteer to have a try on nio module. :)
Thanks!

Any thoughts, objections?



Hi Andrew,

I thought that Oliver had volunteered me to do it :-)

It would be terrific if you were happy to proceed with this trial on 
NIO. Please note that if you intend to use the TestNG annotations 
approach then you will need to wait for a 5.0 VM for Harmony.


Best regards,
George




Thanks,

2006/7/18, Andrew Zhang [EMAIL PROTECTED]:
 On 7/18/06, George Harley [EMAIL PROTECTED] wrote:
 
  Oliver Deakin wrote:
   George Harley wrote:
   SNIP!
  
   Here the annotation on MyTestClass applies to all of its test
methods.
  
   So what are the well-known TestNG groups that we could define for
use
   inside Harmony ? Here are some of my initial thoughts:
  
  
   * type.impl  --  tests that are specific to Harmony
  
   So tests are implicitly API unless specified otherwise?
  
   I'm slightly confused by your definition of impl tests as tests
that
  are
   specific to Harmony. Does this mean that impl tests are only
   those that test classes in org.apache.harmony packages?
   I thought that impl was our way of saying tests that need to 
go on

   the bootclasspath.
  
   I think I just need a little clarification...
  
 
  Hi Oliver,
 
  I was using the definition of implementation-specific tests that we
  currently have on the Harmony testing conventions web page. That is,
  implementation-specific tests are those that are dependent on some
  aspect of the Harmony implementation and would therefore not pass 
when

  run against the RI or other conforming implementations. It's
orthogonal
  to the classpath/bootclasspath issue.
 
 
   * state.broken.platform id  --  tests bust on a specific 
platform

  
   * state.broken  --  tests broken on every platform but we want to
   decide whether or not to run from our suite configuration
  
   * os.platform id  --  tests that are to be run only on the
   specified platform (a test could be member of more than one of
these)
  
   And the defaults for these are an unbroken state and runs on any
   platform.
   That makes sense...
  
   Will the platform ids be organised in a similar way to the 
platform

ids
   we've discussed before for organisation of native code [1]?
  
 
  The actual string used to identify a particular platform can be
whatever
  we want it to be, just so long as we are consistent. So, yes, the 
ids
  mentioned in the referenced email would seem a good starting 
point. Do

  we need to include a 32-bit/64-bit identifier ?
 
 
   So all tests are, by default, in an all-platforms (or shared) 
group.

   If a test fails on all Windows platforms, it is marked with
   state.broken.windows.
   If a test fails on Windows but only on, say, amd hardware,
   it is marked state.broken.windows.amd.
  
 
  Yes. Agreed.
 
 
   Then when you come to run tests on your windows amd machine,
   you want to include all tests in the all-platform (shared) group,
   os.windows and os.windows.amd, and exclude all tests in
   the state.broken, state.broken.windows and 
state.broken.windows.amd

   groups.
  
   Does this tally with what you were thinking?
  
 
  Yes, that is the idea.
 
 
  
  
   What does everyone else think ? Does such a scheme sound 
reasonable

?
  
   I think so - it seems to cover our current requirements. Thanks 
for

   coming up with this!
  
 
  Thanks, but I don't see it as final yet really. It would be great to
  prove the worth of this by doing a trial on one of the existing
modules,
  ideally something that contains tests that are platform-specific.


 Hello George, how about doing a trial on NIO module?

 So far as I know, there are several platform dependent tests in NIO
module.
 :)

 The assert statements are commented out in these tests, with a FIXME mark.

 Furthermore, I have also found some platform-dependent behaviours of FileChannel.
 If TestNG is applied to NIO, I will supplement new tests for FileChannel and
 fix the bugs in the source code.

 What's your opinion? Any suggestions/comments?

 Thanks!

 Best 

Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread George Harley

Alexei Zakharov wrote:

Hi,

George wrote:

 Thanks, but I don't see it as final yet really. It would be great to
 prove the worth of this by doing a trial on one of the existing 
modules,

 ideally something that contains tests that are platform-specific.


I volunteer to do this trial for the beans module. I'm not sure that beans
contains any platform-specific tests, but I know for sure it has a lot of
failed tests - so we can try TestNG with a real workload. I would also
like to do the same job with JUnit 4.0 and compare the results -
exactly what is simpler/harder/better etc. in real life.

If Andrew does the same job for nio we will have two separate
experiences that help us to move further in choosing the right testing
framework.
Any thoughts, objections?

Thanks,



Hi Alexei,

Thank you very much for volunteering.

I know that I've mentioned it a few times already in this thread but we 
are dependent on a 5.0 VM being available to run Harmony on. Hopefully 
that will materialise sometime soon.


Best regards,
George



2006/7/18, Andrew Zhang [EMAIL PROTECTED]:

On 7/18/06, George Harley [EMAIL PROTECTED] wrote:

 Oliver Deakin wrote:
  George Harley wrote:
  SNIP!
 
  Here the annotation on MyTestClass applies to all of its test 
methods.

 
  So what are the well-known TestNG groups that we could define 
for use

  inside Harmony ? Here are some of my initial thoughts:
 
 
  * type.impl  --  tests that are specific to Harmony
 
  So tests are implicitly API unless specified otherwise?
 
  I'm slightly confused by your definition of impl tests as tests 
that

 are
  specific to Harmony. Does this mean that impl tests are only
  those that test classes in org.apache.harmony packages?
  I thought that impl was our way of saying tests that need to go on
  the bootclasspath.
 
  I think I just need a little clarification...
 

 Hi Oliver,

 I was using the definition of implementation-specific tests that we
 currently have on the Harmony testing conventions web page. That is,
 implementation-specific tests are those that are dependent on some
 aspect of the Harmony implementation and would therefore not pass when
 run against the RI or other conforming implementations. It's 
orthogonal

 to the classpath/bootclasspath issue.


  * state.broken.platform id  --  tests bust on a specific platform
 
  * state.broken  --  tests broken on every platform but we want to
  decide whether or not to run from our suite configuration
 
  * os.platform id  --  tests that are to be run only on the
  specified platform (a test could be member of more than one of 
these)

 
  And the defaults for these are an unbroken state and runs on any
  platform.
  That makes sense...
 
  Will the platform ids be organised in a similar way to the 
platform ids

  we've discussed before for organisation of native code [1]?
 

 The actual string used to identify a particular platform can be 
whatever

 we want it to be, just so long as we are consistent. So, yes, the ids
 mentioned in the referenced email would seem a good starting point. Do
 we need to include a 32-bit/64-bit identifier ?


  So all tests are, by default, in an all-platforms (or shared) group.
  If a test fails on all Windows platforms, it is marked with
  state.broken.windows.
  If a test fails on Windows but only on, say, amd hardware,
  it is marked state.broken.windows.amd.
 

 Yes. Agreed.


  Then when you come to run tests on your windows amd machine,
  you want to include all tests in the all-platform (shared) group,
  os.windows and os.windows.amd, and exclude all tests in
  the state.broken, state.broken.windows and state.broken.windows.amd
  groups.
 
  Does this tally with what you were thinking?
 

 Yes, that is the idea.


 
 
  What does everyone else think ? Does such a scheme sound 
reasonable ?

 
  I think so - it seems to cover our current requirements. Thanks for
  coming up with this!
 

 Thanks, but I don't see it as final yet really. It would be great to
 prove the worth of this by doing a trial on one of the existing 
modules,

 ideally something that contains tests that are platform-specific.


Hello George, how about doing a trial on NIO module?

So far as I know, there are several platform dependent tests in NIO 
module.

:)

The assert statements are commented out in these tests, with a FIXME mark.

Furthermore, I have also found some platform-dependent behaviours of FileChannel.
If TestNG is applied to NIO, I will supplement new tests for FileChannel and
fix the bugs in the source code.

What's your opinion? Any suggestions/comments?

Thanks!

Best regards,
 George


  Regards,
  Oliver
 
  [1]
 
 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 


 
 
 
  Thanks for reading this far.
 
  Best regards,
  George
 
 
 
  George Harley wrote:
  Hi,
 
  Just seen Tim's note on test support classes and it really 
caught my
  attention as I have been mulling over this issue for a little 
while
  now. I think that 

Re: [classlib] Testing conventions - a proposal

2006-07-17 Thread George Harley

Andrew Zhang wrote:

On 7/14/06, George Harley [EMAIL PROTECTED] wrote:


Hi,

If annotations were to be used to help us categorise tests in order to
simplify the definition of test configurations - what's included and
excluded etc - then a core set of annotations would need to be agreed by
the project. Consider the possibilities that the TestNG @Test
annotation offers us in this respect.

First, if a test method was identified as being broken and needed to be
excluded from all test runs while awaiting investigation then it would
be a simple matter of setting its enabled field like this:

   @Test(enabled=false)
   public void myTest() {
   ...
   }

Temporarily disabling a test method in this way means that it can be
left in its original class and we do not have to refer to it in any
suite configuration (e.g. in the suite xml file).

If a test method was identified as being broken on a specific platform
then we could make use of the groups field of the @Test type by making
the method a member of a group that identifies its predicament.
Something like this:

    @Test(groups={"state.broken.win.IA32"})
   public void myOtherTest() {
   ...
   }

The configuration for running tests on Windows would then specifically
exclude any test method (or class) that was a member of that group.

Making a test method or type a member of a well-known group (well-known
in the sense that the name and meaning has been agreed within the
project) is essentially adding some descriptive attributes to the test.
Like adjectives (the groups) and nouns (the tests) in the English
language.



It's the same in the Chinese language. :)

To take another example, if there was a test class that

contained methods only intended to be run on Windows and that were all
specific to Harmony (i.e. not API tests) then  one could envisage the
following kind of annotation:


@Test(groups={"type.impl", "os.win.IA32"})
public class MyTestClass {

   public void testOne() {
   ...
   }

   public void testTwo() {
   ...
   }

   @Test(enabled=false)
   public void brokenTest() {
   ...
   }
}

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for use
inside Harmony ? Here are some of my initial thoughts:


* type.impl  --  tests that are specific to Harmony

* state.broken.platform id  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to decide
whether or not to run from our suite configuration

* os.platform id  --  tests that are to be run only on the specified
platform (a test could be member of more than one of these)



Hi George, I have a small question.
Is os.platform id a duplicate of state.broken.platform id?
Does os.platform id equal state.broken.other platform ids? (i.e. os.platform
ids = all platforms - state.broken.platform ids)


Hi Andrew,


The way I see it, membership of the group os.platform id (e.g. 
os.win.IA32) means that the annotated type - class or method - is 
being flagged as specific to the identified platform. So, for instance 
the following annotation on a test class means it should only be run on 
the Win 32 platform:


@Test(groups={"os.win.IA32"})
public class Foo {
...various test methods...
}


The annotation state.broken.platform id (e.g. state.broken.win.IA32) 
flags a test or entire test class as being broken *only* on the 
identified platform. The test is intended to be run everywhere but has 
been identified as failing on certain platforms. So, for instance, if 
there was a test in the NIO module that should be run everywhere 
but failed on Linux then that test method could be annotated as:


@Test(groups={"state.broken.linux.IA32"})
public void testBaa() {
...
}

It is then simple to ensure that when the NIO tests are run on Windows 
that the above test *is* included but when run on Linux it is

specifically *excluded* because it is broken on that platform.

To my mind, there is a big distinction between having platform-specific 
tests (membership of a group os.*) and having tests that are intended to 
be run everywhere but are in fact broken on one or more platforms 
(membership of a group state.broken.*).



Best regards,
George






What does everyone else think ? Does such a scheme sound reasonable ?



+0.02$.  I can't wait to see these annotations in Harmony. :)

I have also found some platform-dependent tests in the NIO module. Looking
forward to dealing with them when TestNG is integrated. :)


Thanks for reading this far.


Best regards,
George



George Harley wrote:
 Hi,

 Just seen Tim's note on test support classes and it really caught my
 attention as I have been mulling over this issue for a little while
 now. I think that it is a good time for us to return to the topic of
 class library test layouts.

 The current proposal [1] sets out to segment our different types of
 test by placing them in different file locations. After looking at the
 recent changes 

Re: [classlib] Testing conventions - a proposal

2006-07-14 Thread George Harley

Hi,

If annotations were to be used to help us categorise tests in order to 
simplify the definition of test configurations - what's included and 
excluded etc - then a core set of annotations would need to be agreed by 
the project. Consider the possibilities that the TestNG @Test 
annotation offers us in this respect.


First, if a test method was identified as being broken and needed to be 
excluded from all test runs while awaiting investigation then it would 
be a simple matter of setting its enabled field like this:


   @Test(enabled=false)
   public void myTest() {
   ...
   }

Temporarily disabling a test method in this way means that it can be 
left in its original class and we do not have to refer to it in any 
suite configuration (e.g. in the suite xml file).


If a test method was identified as being broken on a specific platform 
then we could make use of the groups field of the @Test type by making 
the method a member of a group that identifies its predicament. 
Something like this:


    @Test(groups={"state.broken.win.IA32"})
   public void myOtherTest() {
   ...
   }

The configuration for running tests on Windows would then specifically 
exclude any test method (or class) that was a member of that group.


Making a test method or type a member of a well-known group (well-known 
in the sense that the name and meaning has been agreed within the 
project) is essentially adding some descriptive attributes to the test. 
Like adjectives (the groups) and nouns (the tests) in the English 
language. To take another example, if there was a test class that 
contained methods only intended to be run on Windows and that were all 
specific to Harmony (i.e. not API tests) then  one could envisage the 
following kind of annotation:



@Test(groups={"type.impl", "os.win.IA32"})
public class MyTestClass {

   public void testOne() {
   ...
   }

   public void testTwo() {
   ...
   }

   @Test(enabled=false)
   public void brokenTest() {
   ...
   }
}

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for use 
inside Harmony ? Here are some of my initial thoughts:



* type.impl  --  tests that are specific to Harmony

* state.broken.platform id  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to decide 
whether or not to run from our suite configuration


* os.platform id  --  tests that are to be run only on the specified 
platform (a test could be member of more than one of these)



What does everyone else think ? Does such a scheme sound reasonable ?

Thanks for reading this far.

Best regards,
George



George Harley wrote:

Hi,

Just seen Tim's note on test support classes and it really caught my 
attention as I have been mulling over this issue for a little while 
now. I think that it is a good time for us to return to the topic of 
class library test layouts.


The current proposal [1] sets out to segment our different types of 
test by placing them in different file locations. After looking at the 
recent changes to the LUNI module tests (where the layout guidelines 
were applied) I have a real concern that there are serious problems 
with this approach. We have started down a track of just continually 
growing the number of test source folders as new categories of test 
are identified and IMHO that is going to bring complexity and 
maintenance issues with these tests.


Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI
Stress
...and so on...


If you weigh up all of the different possible permutations and then 
consider that the above list is highly likely to be extended as things 
progress it is obvious that we are eventually heading for large 
amounts of related test code scattered or possibly duplicated across 
numerous hard wired source directories. How maintainable is that 
going to be ?


If we want to run different tests in different configurations then 
IMHO we need to be thinking a whole lot smarter. We need to be 
thinking about keeping tests for specific areas of functionality 
together (thus easing maintenance); we need something quick and simple 
to re-configure if necessary (pushing whole directories of files 
around the place does not seem a particularly lightweight approach); 
and something that is not going to potentially mess up contributed 
patches when the file they patch is found to have been recently pushed 
from source folder A to B.


To connect into another recent thread, there have been some posts 
lately about handling some test methods that fail on Harmony and have 
meant that entire test case classes have been excluded from our test 
runs. I have also been noticing some API test methods that pass fine 
on Harmony but fail when run against the RI. Are the different 
behaviours down to errors in the 

Re: [classlib] Testing conventions - a proposal

2006-07-14 Thread Andrew Zhang

On 7/14/06, George Harley [EMAIL PROTECTED] wrote:


Hi,

If annotations were to be used to help us categorise tests in order to
simplify the definition of test configurations - what's included and
excluded etc - then a core set of annotations would need to be agreed by
the project. Consider the possibilities that the TestNG @Test
annotation offers us in this respect.

First, if a test method was identified as being broken and needed to be
excluded from all test runs while awaiting investigation then it would
be a simple matter of setting its enabled field like this:

   @Test(enabled=false)
   public void myTest() {
   ...
   }

Temporarily disabling a test method in this way means that it can be
left in its original class and we do not have to refer to it in any
suite configuration (e.g. in the suite xml file).

If a test method was identified as being broken on a specific platform
then we could make use of the groups field of the @Test type by making
the method a member of a group that identifies its predicament.
Something like this:

    @Test(groups={"state.broken.win.IA32"})
    public void myOtherTest() {
        ...
    }

The configuration for running tests on Windows would then specifically
exclude any test method (or class) that was a member of that group.

Making a test method or type a member of a well-known group (well-known
in the sense that the name and meaning have been agreed within the 
project) is essentially adding some descriptive attributes to the test.
Like adjectives (the groups) and nouns (the tests) in the English
language.



It's the same in the Chinese language. :)

To take another example, if there was a test class that

contained methods only intended to be run on Windows and that were all
specific to Harmony (i.e. not API tests) then  one could envisage the
following kind of annotation:


@Test(groups={"type.impl", "os.win.IA32"})
public class MyTestClass {

    public void testOne() {
        ...
    }

    public void testTwo() {
        ...
    }

    @Test(enabled=false)
    public void brokenTest() {
        ...
    }
}

Here the annotation on MyTestClass applies to all of its test methods.

So what are the well-known TestNG groups that we could define for use
inside Harmony ? Here are some of my initial thoughts:


* type.impl  --  tests that are specific to Harmony

* state.broken.<platform id>  --  tests bust on a specific platform

* state.broken  --  tests broken on every platform but we want to decide
whether or not to run them from our suite configuration

* os.<platform id>  --  tests that are to be run only on the specified
platform (a test could be a member of more than one of these)



Hi George, I have a small question.
Is os.<platform id> a duplicate of state.broken.<platform id>?
Does os.<platform id> equal state.broken.<other platform ids>? (i.e.
os.<platform ids> = all platforms - state.broken.<platform ids>)



What does everyone else think ? Does such a scheme sound reasonable ?



+0.02$.  I can't wait to see these annotations in Harmony. :)

I have also found some platform-dependent tests in the NIO module.  Looking
forward to dealing with them when TestNG is integrated. :)


Thanks for reading this far.


Best regards,
George



George Harley wrote:
 Hi,

 Just seen Tim's note on test support classes and it really caught my
 attention as I have been mulling over this issue for a little while
 now. I think that it is a good time for us to return to the topic of
 class library test layouts.

 The current proposal [1] sets out to segment our different types of
 test by placing them in different file locations. After looking at the
 recent changes to the LUNI module tests (where the layout guidelines
 were applied) I have a real concern that there are serious problems
 with this approach. We have started down a track of just continually
 growing the number of test source folders as new categories of test
 are identified and IMHO that is going to bring complexity and
 maintenance issues with these tests.

 Consider the dimensions of tests that we have ...

 API
 Harmony-specific
 Platform-specific
 Run on classpath
 Run on bootclasspath
 Behaves differently between Harmony and the RI
 Stress
 ...and so on...


 If you weigh up all of the different possible permutations and then
 consider that the above list is highly likely to be extended as things
 progress it is obvious that we are eventually heading for large
 amounts of related test code scattered or possibly duplicated across
 numerous hard wired source directories. How maintainable is that
 going to be ?

 If we want to run different tests in different configurations then
 IMHO we need to be thinking a whole lot smarter. We need to be
 thinking about keeping tests for specific areas of functionality
 together (thus easing maintenance); we need something quick and simple
 to re-configure if necessary (pushing whole directories of files
 around the place does not seem a particularly lightweight approach);
 and something that is not going to potentially 

Re: Re: [classlib] Testing conventions - a proposal

2006-07-11 Thread Alexei Zakharov

Hi Alex,

It's a pity that you didn't find common sense in my post. Probably I
was not clear enough. My key points are:
1. JUnit is pretty much the standard for unit testing today
2. We are using JUnit already and have thousands of tests
3. Maybe I was not correct about bugs in TestNG - I assume that it
may turn out to be a marvellous tool. I don't have much TestNG
experience yet. But IMHO we still need strong motivation to start
such a serious migration process from JUnit to TestNG. Are we really
that dissatisfied with JUnit?
4. Healthy conservatism may have common sense sometimes :)

With best regards,

2006/7/11, Alex Blewitt [EMAIL PROTECTED]:

On 10/07/06, Alexei Zakharov [EMAIL PROTECTED] wrote:
 Hi George,

  For the purposes of this discussion it would be fascinating to find out
  why you refer to TestNG as being an unstable test harness. What is
  that statement based on ?

 My exact statement was referring to TestNG as probably unstable
 rather than simply unstable. ;)  This statement was based on posts
 from Richard Liang about the bug in the TestNG migration tool and on
 common sense. If the project has such an obvious bug in one place it
 may probably have other bugs in other places. JUnit is quite famous
 and widely used toolkit that proved to be stable enough. TestNG is
 neither famous nor widely used.

Unfortunately, this isn't terribly correct :-) The purpose of the
JUnit migration tool is to automatically add annotations to an
existing source, and is independent from the test harness itself.
TestNG is also quite famous, and indeed, a lot of what's available in
JUnit4 was based on ideas from TestNG. It's also been around since
August 2004, so it's had almost a couple of years out in the field
(and as the download page [1] shows, it's pretty active). There's also
plugins for any build system you care to name (ant, maven, Eclipse,
IntelliJ) and it's pretty widely adopted in certain areas.

In short, not a terribly common-sense post :-)

Alex.

[1] http://testng.org/doc/download.html


--
Alexei Zakharov,
Intel Middleware Product Division




Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread Oliver Deakin

Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

George Harley wrote:


Hi,

Just seen Tim's note on test support classes and it really caught my
attention as I have been mulling over this issue for a little while
now. I think that it is a good time for us to return to the topic of
class library test layouts.

The current proposal [1] sets out to segment our different types of
test by placing them in different file locations. 
  

ok - on closer reading of this document, I have a few gripes...

First, what happened to the Maven layout we agreed on a while ago?



Maven layout?  We were doing that layout in Jakarta projects long
before maven
  


Interesting - I hadn't realised that was the case. However, it still
doesn't explain the missing java directory ;)


This is a fun thread.  I plan to read it from end to end later today and
comment.

Initial thoughts are that I've been wanting to use TestNG for months
(hence my resistance to any JUnit deps more than we needed to) and
second, annotations won't solve our problems.  More later :)
  


No, annotations will not solve *all* our problems - but, as you probably
already know, they may solve some of those recently discussed on this
list when used in conjunction with TestNG (such as platform specific
tests, test exclusions etc.).

Regards,
Oliver



geir



  


--
Oliver Deakin
IBM United Kingdom Limited






Re: Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread Alexei Zakharov

Actually, there's a very valid benefit for using TestNG markers (=
annotations/JavaDoc) for grouping tests; the directory structure is a
tree, whereas the markers can form any slice of tests, and the sets


Concerning TestNG vs JUnit: I would just like to draw your attention to the
fact that it is possible to achieve the same level of test
grouping/slicing with JUnit TestSuites. You may define any number of
intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
XXXWinSpecificSuite or whatever - without the necessity of migrating to a new
(probably unstable) test harness.
Just my two cents.
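To illustrate, a minimal sketch of what one such intersecting suite could
look like with plain JUnit 3.8 (the test class names here are hypothetical):

import junit.framework.Test;
import junit.framework.TestSuite;

public class XXXWinSpecificSuite {

    public static Test suite() {
        TestSuite suite = new TestSuite("Windows-specific tests");
        // The same test classes could also be listed in other, overlapping
        // suites such as XXXAPIFailingSuite or XXXHYSpecificSuite.
        suite.addTestSuite(FileWinPathTest.class);      // hypothetical class
        suite.addTestSuite(ProcessWinLaunchTest.class); // hypothetical class
        return suite;
    }
}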


2006/7/8, Alex Blewitt [EMAIL PROTECTED]:

On 08/07/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:

 So while I like the annotations, and expect we can use them effectively,
 I have an instinctive skepticism of annotations right now because in
 general (in general in Java), I'm not convinced we've used them enough
 to grok good design patterns.

There's really no reason to get hung up on the annotations. TestNG
works just as well with JavaDoc source comments; annotations are only
another means to that end. (They're probably a better one for the
future, but it's just an implementation detail.)

 Now since I still haven't read the thread fully, I'm jumping to
 conclusions, taking it to the extreme, etc etc, but my thinking in
 writing the above is that if we bury everything about our test
 'parameter space' in annotations, some of the visible organization we
 have now w/ on-disk layout becomes invisible, and the readable
 summaries of aspects of testing that we'd have in an XML metadata
 document (or whatever) also are hard because you need to scan the
 sources to find all instances of annotation X.

I'm hoping that this would be just as applicable to using JavaDoc
variants, and that the problem's not with annotations per se.

In either case, both are grokkable with tools -- either
annotation-savvy readers or a JavaDoc tag processor, and it wouldn't be
hard to configure one of those to periodically scan the codebase to
generate reports. Furthermore, as long as the annotation X is well
defined, *you* don't have to scan it -- you leave it up to TestNG to
figure it out.

Actually, there's a very valid benefit for using TestNG markers (=
annotations/JavaDoc) for grouping tests; the directory structure is a
tree, whereas the markers can form any slice of tests, and the sets
don't need to be strict subsets (with a tree, everything has to be a
strict subset of its parents). That means that it's possible to define
a marker IO to run all the IO tests, or a marker Win32 to run all the
Win32 tests, and both of those will contain IO-specific Win32 tests.
You can't do that in a tree structure without duplicating content
somewhere along the line (e.g. /win/io or /io/win). Neither of these
scale well, and every time you add a new dimension, you're doubling
the structure of the directory, but merely adding a new marker with
TestNG. So if you wanted to have (say) boot classpath tests vs api
tests, then you'd have to have /api/win/io and /boot/win/io (or
various permutations as applicable).

Most of the directory-based arguments seem to be along the lines of
/api/win/io is better! No, /win/io/api is better!. Just have an
'api', 'win', 'io' TestNG marker, and then let TestNG figure out which
ones to run. You can then even get specific, and only run the Windows
IO API tests, if you really want -- but if you don't, you get the
benefit of being able to run all IO tests (both API and boot).

There doesn't seem to be any benefit to having a strict tree-like
structure to the tests when it's possible to have a multi-dimensional
matrix of all possible combinations that's managed by the tool.

Alex.






--
Alexei Zakharov,
Intel Middleware Product Division




Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread George Harley

Alexei Zakharov wrote:

Actually, there's a very valid benefit for using TestNG markers (=
annotations/JavaDoc) for grouping tests; the directory structure is a
tree, whereas the markers can form any slice of tests, and the sets


Concerning TestNG vs JUnit. I just like to pay your attention on the
fact what it is possible to achieve the same level of test
grouping/slicing with JUnit TestSuites. You may define any number of
intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
XXXWinSpecificSuite or whatever. Without necessity of migrating to new
(probably unstable) test harness.
Just my two cents.




Hi Alexei,

You are quite correct that JUnit test suites are another alternative 
here. If I recall correctly, their use was discussed in the very early 
days of this project but it came to nothing and we instead went down the 
route of using exclusion filters in the Ant JUnit task. That approach 
does not offer much in the way of fine grain control and relies on us 
pushing stuff around the repository. Hence the kicking off of this thread.


For the purposes of this discussion it would be fascinating to find out 
why you refer to TestNG as being an unstable test harness. What is 
that statement based on ?


Best regards,
George



2006/7/8, Alex Blewitt [EMAIL PROTECTED]:

On 08/07/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:

 So while I like the annotations, and expect we can use them 
effectively,

 I have an instinctive skepticism of annotations right now because in
 general (in general in Java), I'm not convinced we've used them enough
 to grok good design patterns.

There's really no reason to get hung up on the annotations. TestNG
works just as well with JavaDoc source comments; annotations are only
another means to that end. (They're probably a better one for the
future, but it's just an implementation detail.)

 Now since I still haven't read the thread fully, I'm jumping to
 conclusions, taking it to the extreme, etc etc, but my thinking in
 writing the above is that if we bury everything about our test
 'parameter space' in annotations, some of the visible organization we
 have now w/ on-disk layout becomes invisible, and the readable
 summaries of aspects of testing that we'd have in an XML metadata
 document (or whatever) also are hard because you need to scan the
 sources to find all instances of annotation X.

I'm hoping that this would be just as applicable to using JavaDoc
variants, and that the problem's not with annotations per se.

In either case, both are grokkable with tools -- either
annotation-savvy readers or a JavaDoc tag processor, and it wouldn't be
hard to configure one of those to periodically scan the codebase to
generate reports. Furthermore, as long as the annotation X is well
defined, *you* don't have to scan it -- you leave it up to TestNG to
figure it out.

Actually, there's a very valid benefit for using TestNG markers (=
annotations/JavaDoc) for grouping tests; the directory structure is a
tree, whereas the markers can form any slice of tests, and the sets
don't need to be strict subsets (with a tree, everything has to be a
strict subset of its parents). That means that it's possible to define
a marker IO to run all the IO tests, or a marker Win32 to run all the
Win32 tests, and both of those will contain IO-specific Win32 tests.
You can't do that in a tree structure without duplicating content
somewhere along the line (e.g. /win/io or /io/win). Neither of these
scale well, and every time you add a new dimension, you're doubling
the structure of the directory, but merely adding a new marker with
TestNG. So if you wanted to have (say) boot classpath tests vs api
tests, then you'd have to have /api/win/io and /boot/win/io (or
various permutations as applicable).

Most of the directory-based arguments seem to be along the lines of
/api/win/io is better! No, /win/io/api is better!. Just have an
'api', 'win', 'io' TestNG marker, and then let TestNG figure out which
ones to run. You can then even get specific, and only run the Windows
IO API tests, if you really want -- but if you don't, you get the
benefit of being able to run all IO tests (both API and boot).

There doesn't seem to be any benefit to having a strict tree-like
structure to the tests when it's possible to have a multi-dimensional
matrix of all possible combinations that's managed by the tool.

Alex.












Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread Geir Magnusson Jr


Oliver Deakin wrote:
 Geir Magnusson Jr wrote:
 Oliver Deakin wrote:
  
 George Harley wrote:

 Hi,

 Just seen Tim's note on test support classes and it really caught my
 attention as I have been mulling over this issue for a little while
 now. I think that it is a good time for us to return to the topic of
 class library test layouts.

 The current proposal [1] sets out to segment our different types of
 test by placing them in different file locations.   
 ok - on closer reading of this document, I have a few gripes...

 First, what happened to the Maven layout we agreed on a while ago?
 

 Maven layout?  We were doing that layout in Jakarta projects long
 before maven
   
 
 Interesting - I hadn't realised that was the case. However, it still
 doesn't explain the missing java directory ;)

Oh, agreed.  We definitely should normalize this.

 
 This is a fun thread.  I plan to read it from end to end later today and
 comment.

 Initial thoughts are that I've been wanting to use TestNG for months
 (hence my resistance to any JUnit deps more than we needed to) and
 second, annotations won't solve our problems.  More later :)
   
 
 No, annotations will not solve *all* our problems - but, as you probably
 already know, they may solve some of those recently discussed on this
 list when used in conjunction with TestNG (such as platform specific
 tests, test exclusions etc.).

Maybe :)

geir

 
 Regards,
 Oliver
 
 
 geir



   
 




Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread Geir Magnusson Jr


George Harley wrote:
 Alexei Zakharov wrote:
 Actually, there's a very valid benefit for using TestNG markers (=
 annotations/JavaDoc) for grouping tests; the directory structure is a
 tree, whereas the markers can form any slice of tests, and the sets

 Concerning TestNG vs JUnit. I just like to pay your attention on the
 fact what it is possible to achieve the same level of test
 grouping/slicing with JUnit TestSuites. You may define any number of
 intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
 XXXWinSpecificSuite or whatever. Without necessity of migrating to new
 (probably unstable) test harness.
 Just my two cents.


 
 Hi Alexei,
 
 You are quite correct that JUnit test suites are another alternative
 here. If I recall correctly, their use was discussed in the very early
 days of this project but it came to nothing and we instead went down the
 route of using exclusion filters in the Ant JUnit task. That approach
 does not offer much in the way of fine grain control and relies on us
 pushing stuff around the repository. Hence the kicking off of this thread.
 
 For the purposes of this discussion it would be fascinating to find out
 why you refer to TestNG as being an unstable test harness. What is
 that statement based on ?

Yeah!  What he said!  :)

geir

 
 Best regards,
 George
 
 
 2006/7/8, Alex Blewitt [EMAIL PROTECTED]:
 On 08/07/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
 
  So while I like the annotations, and expect we can use them
 effectively,
  I have an instinctive skepticism of annotations right now because in
  general (in general in Java), I'm not convinced we've used them enough
  to grok good design patterns.

 There's really no reason to get hung up on the annotations. TestNG
 works just as well with JavaDoc source comments; annotations are only
 another means to that end. (They're probably a better one for the
 future, but it's just an implementation detail.)

  Now since I still haven't read the thread fully, I'm jumping to
  conclusions, taking it to the extreme, etc etc, but my thinking in
  writing the above is that if we bury everything about our test
  'parameter space' in annotations, some of the visible organization we
  have now w/ on-disk layout becomes invisible, and the readable
  summaries of aspects of testing that we'd have in an XML metadata
  document (or whatever) also are hard because you need to scan the
  sources to find all instances of annotation X.

 I'm hoping that this would be just as applicable to using JavaDoc
 variants, and that the problem's not with annotations per se.

 In either case, both are grokkable with tools -- either
 annotation-savvy readers or a JavaDoc tag processor, and it wouldn't be
 hard to configure one of those to periodically scan the codebase to
 generate reports. Furthermore, as long as the annotation X is well
 defined, *you* don't have to scan it -- you leave it up to TestNG to
 figure it out.

 Actually, there's a very valid benefit for using TestNG markers (=
 annotations/JavaDoc) for grouping tests; the directory structure is a
 tree, whereas the markers can form any slice of tests, and the sets
 don't need to be strict subsets (with a tree, everything has to be a
 strict subset of its parents). That means that it's possible to define
 a marker IO to run all the IO tests, or a marker Win32 to run all the
 Win32 tests, and both of those will contain IO-specific Win32 tests.
 You can't do that in a tree structure without duplicating content
 somewhere along the line (e.g. /win/io or /io/win). Neither of these
 scale well, and every time you add a new dimension, you're doubling
 the structure of the directory, but merely adding a new marker with
 TestNG. So if you wanted to have (say) boot classpath tests vs api
 tests, then you'd have to have /api/win/io and /boot/win/io (or
 various permutations as applicable).

 Most of the directory-based arguments seem to be along the lines of
 /api/win/io is better! No, /win/io/api is better!. Just have an
 'api', 'win', 'io' TestNG marker, and then let TestNG figure out which
 ones to run. You can then even get specific, and only run the Windows
 IO API tests, if you really want -- but if you don't, you get the
 benefit of being able to run all IO tests (both API and boot).

 There doesn't seem to be any benefit to having a strict tree-like
 structure to the tests when it's possible to have a multi-dimensional
 matrix of all possible combinations that's managed by the tool.

 Alex.





 
 
 
 
 


Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread Alexei Zakharov

Hi George,


For the purposes of this discussion it would be fascinating to find out
why you refer to TestNG as being an unstable test harness. What is
that statement based on ?


My exact statement was referring to TestNG as "probably unstable"
rather than simply "unstable". ;)  This statement was based on posts
from Richard Liang about the bug in the TestNG migration tool and on
common sense. If the project has such an obvious bug in one place it
may well have other bugs in other places. JUnit is a quite famous
and widely used toolkit that has proved to be stable enough. TestNG is
neither famous nor widely used. And IMHO it makes sense to be careful
with new exciting tools until we *really* need their innovative
functionality.


2006/7/10, George Harley [EMAIL PROTECTED]:

Alexei Zakharov wrote:
 Actually, there's a very valid benefit for using TestNG markers (=
 annotations/JavaDoc) for grouping tests; the directory structure is a
 tree, whereas the markers can form any slice of tests, and the sets

 Concerning TestNG vs JUnit. I just like to pay your attention on the
 fact what it is possible to achieve the same level of test
 grouping/slicing with JUnit TestSuites. You may define any number of
 intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
 XXXWinSpecificSuite or whatever. Without necessity of migrating to new
 (probably unstable) test harness.
 Just my two cents.



Hi Alexei,

You are quite correct that JUnit test suites are another alternative
here. If I recall correctly, their use was discussed in the very early
days of this project but it came to nothing and we instead went down the
route of using exclusion filters in the Ant JUnit task. That approach
does not offer much in the way of fine grain control and relies on us
pushing stuff around the repository. Hence the kicking off of this thread.

For the purposes of this discussion it would be fascinating to find out
why you refer to TestNG as being an unstable test harness. What is
that statement based on ?

Best regards,
George


 2006/7/8, Alex Blewitt [EMAIL PROTECTED]:
 On 08/07/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
 
  So while I like the annotations, and expect we can use them
 effectively,
  I have an instinctive skepticism of annotations right now because in
  general (in general in Java), I'm not convinced we've used them enough
  to grok good design patterns.

 There's really no reason to get hung up on the annotations. TestNG
 works just as well with JavaDoc source comments; annotations are only
 another means to that end. (They're probably a better one for the
 future, but it's just an implementation detail.)

  Now since I still haven't read the thread fully, I'm jumping to
  conclusions, taking it to the extreme, etc etc, but my thinking in
  writing the above is that if we bury everything about our test
  'parameter space' in annotations, some of the visible organization we
  have now w/ on-disk layout becomes invisible, and the readable
  summaries of aspects of testing that we'd have in an XML metadata
  document (or whatever) also are hard because you need to scan the
  sources to find all instances of annotation X.

 I'm hoping that this would be just as applicable to using JavaDoc
 variants, and that the problem's not with annotations per se.

 In either case, both are grokkable with tools -- either
 annotation-savvy readers or a JavaDoc tag processor, and it wouldn't be
 hard to configure one of those to periodically scan the codebase to
 generate reports. Furthermore, as long as the annotation X is well
 defined, *you* don't have to scan it -- you leave it up to TestNG to
 figure it out.

 Actually, there's a very valid benefit for using TestNG markers (=
 annotations/JavaDoc) for grouping tests; the directory structure is a
 tree, whereas the markers can form any slice of tests, and the sets
 don't need to be strict subsets (with a tree, everything has to be a
 strict subset of its parents). That means that it's possible to define
 a marker IO to run all the IO tests, or a marker Win32 to run all the
 Win32 tests, and both of those will contain IO-specific Win32 tests.
 You can't do that in a tree structure without duplicating content
 somewhere along the line (e.g. /win/io or /io/win). Neither of these
 scale well, and every time you add a new dimension, you're doubling
 the structure of the directory, but merely adding a new marker with
 TestNG. So if you wanted to have (say) boot classpath tests vs api
 tests, then you'd have to have /api/win/io and /boot/win/io (or
 various permutations as applicable).

 Most of the directory-based arguments seem to be along the lines of
 /api/win/io is better! No, /win/io/api is better!. Just have an
 'api', 'win', 'io' TestNG marker, and then let TestNG figure out which
 ones to run. You can then even get specific, and only run the Windows
 IO API tests, if you really want -- but if you don't, you get the
 benefit of being able to run all IO tests (both API and 

Re: Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread Alex Blewitt

On 10/07/06, Alexei Zakharov [EMAIL PROTECTED] wrote:

Hi George,

 For the purposes of this discussion it would be fascinating to find out
 why you refer to TestNG as being an unstable test harness. What is
 that statement based on ?

My exact statement was referring to TestNG as probably unstable
rather than simply unstable. ;)  This statement was based on posts
from Richard Liang about the bug in the TestNG migration tool and on
common sense. If the project has such an obvious bug in one place it
may probably have other bugs in other places. JUnit is quite famous
and widely used toolkit that proved to be stable enough. TestNG is
neither famous nor widely used.


Unfortunately, this isn't terribly correct :-) The purpose of the
JUnit migration tool is to automatically add annotations to an
existing source, and is independent from the test harness itself.
TestNG is also quite famous, and indeed, a lot of what's available in
JUnit4 was based on ideas from TestNG. It's also been around since
August 2004, so it's had almost a couple of years out in the field
(and as the download page [1] shows, it's pretty active). There's also
plugins for any build system you care to name (ant, maven, Eclipse,
IntelliJ) and it's pretty widely adopted in certain areas.

In short, not a terribly common-sense post :-)

Alex.

[1] http://testng.org/doc/download.html




Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread George Harley

Alexei Zakharov wrote:

Hi George,


For the purposes of this discussion it would be fascinating to find out
why you refer to TestNG as being an unstable test harness. What is
that statement based on ?


My exact statement was referring to TestNG as probably unstable
rather than simply unstable. ;)  This statement was based on posts
from Richard Liang about the bug in the TestNG migration tool and on
common sense. If the project has such an obvious bug in one place it
may probably have other bugs in other places. JUnit is quite famous
and widely used toolkit that proved to be stable enough. TestNG is
neither famous nor widely used. And IMHO it makes sense to be careful
with new exciting tools until we *really* need their innovative
functionality.



Hi Alexei,

Last I heard, Richard posted saying that there was no bug in the 
migration tool [1]. The command line tool is designed to locate JUnit 
tests under a specified location and add the TestNG annotations to them. 
That's what it does.


You are right to say that it makes sense to be careful in this matter. 
Nobody wants to do anything that affects Harmony in an adverse way.


Best regards,
George

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200607.mbox/[EMAIL PROTECTED]





2006/7/10, George Harley [EMAIL PROTECTED]:

Alexei Zakharov wrote:
 Actually, there's a very valid benefit for using TestNG markers (=
 annotations/JavaDoc) for grouping tests; the directory structure is a
 tree, whereas the markers can form any slice of tests, and the sets

 Concerning TestNG vs JUnit. I just like to pay your attention on the
 fact what it is possible to achieve the same level of test
 grouping/slicing with JUnit TestSuites. You may define any number of
 intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
 XXXWinSpecificSuite or whatever. Without necessity of migrating to new
 (probably unstable) test harness.
 Just my two cents.



Hi Alexei,

You are quite correct that JUnit test suites are another alternative
here. If I recall correctly, their use was discussed in the very early
days of this project but it came to nothing and we instead went down the
route of using exclusion filters in the Ant JUnit task. That approach
does not offer much in the way of fine grain control and relies on us
pushing stuff around the repository. Hence the kicking off of this 
thread.


For the purposes of this discussion it would be fascinating to find out
why you refer to TestNG as being an unstable test harness. What is
that statement based on ?

Best regards,
George


 2006/7/8, Alex Blewitt [EMAIL PROTECTED]:
 On 08/07/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
 
  So while I like the annotations, and expect we can use them
 effectively,
  I have an instinctive skepticism of annotations right now 
because in
  general (in general in Java), I'm not convinced we've used them 
enough

  to grok good design patterns.

 There's really no reason to get hung up on the annotations. TestNG
 works just as well with JavaDoc source comments; annotations are only
 another means to that end. (They're probably a better one for the
 future, but it's just an implementation detail.)

  Now since I still haven't read the thread fully, I'm jumping to
  conclusions, taking it to the extreme, etc etc, but my thinking in
  writing the above is that if we bury everything about our test
  'parameter space' in annotations, some of the visible 
organization we

  have now w/ on-disk layout becomes invisible, and the readable
  summaries of aspects of testing that we'd have in an XML metadata
  document (or whatever) also are hard because you need to scan the
  sources to find all instances of annotation X.

 I'm hoping that this would be just as applicable to using JavaDoc
 variants, and that the problem's not with annotations per se.

 In either case, both are grokkable with tools -- either
 annotation-savy readers or a JavaDoc tag processor, and it 
wouldn't be

 hard to configure one of those to periodically scan the codebase to
 generate reports. Furthermore, as long as the annotation X is well
 defined, *you* don't have to scan it -- you leave it up to TestNG to
 figure it out.

 Actually, there's a very valid benefit for using TestNG markers (=
 annotations/JavaDoc) for grouping tests; the directory structure is a
 tree, whereas the markers can form any slice of tests, and the sets
 don't need to be strict subsets (with a tree, everything has to be a
 strict subset of its parents). That means that it's possible to 
define

 a marker IO to run all the IO tests, or a marker Win32 to run all the
 Win32 tests, and both of those will contain IO-specific Win32 tests.
 You can't do that in a tree structure without duplicating content
 somewhere along the line (e.g. /win/io or /io/win). Neither of these
 scale well, and every time you add a new dimension, you're doubling
 the structure of the directory, but merely adding a new marker with
 TestNG. So if you wanted to 

[classlib] TestNG v. JUnit (was: RE: [classlib] Testing conventions - a proposal)

2006-07-10 Thread Nathan Beyer
Not to add another fire to this topic, but with all things being relative,
so far this topic has been a comparison of TestNG and JUnit v3.8. From
what I understand, the latest JUnit v4.1 provides many of the same
annotation features that TestNG does, as well as guaranteed compatibility with
JUnit v3-based tests.
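For what it's worth, a minimal sketch of how a temporarily broken test could
be disabled in place with JUnit 4 (class and method names are invented here;
grouping, as far as I know, has no direct counterpart in JUnit v4.1):

import org.junit.Ignore;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class SampleJUnit4Test {

    // Roughly equivalent to TestNG's @Test(enabled=false): the runner
    // reports the method as ignored rather than failed.
    @Ignore("temporarily disabled pending investigation")
    @Test
    public void brokenTest() {
        assertTrue(true);
    }
}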

If we were to compare moving to TestNG with upgrading to JUnit 4.1, would
there still be as much value in the proposition to move to TestNG?

-Nathan

 -Original Message-
 From: George Harley [mailto:[EMAIL PROTECTED]
 Sent: Monday, July 10, 2006 3:57 PM
 To: harmony-dev@incubator.apache.org
 Subject: Re: [classlib] Testing conventions - a proposal
 
 Alexei Zakharov wrote:
  Hi George,
 
  For the purposes of this discussion it would be fascinating to find out
  why you refer to TestNG as being an unstable test harness. What is
  that statement based on ?
 
  My exact statement was referring to TestNG as probably unstable
  rather than simply unstable. ;)  This statement was based on posts
  from Richard Liang about the bug in the TestNG migration tool and on
  common sense. If the project has such an obvious bug in one place it
  may probably have other bugs in other places. JUnit is quite famous
  and widely used toolkit that proved to be stable enough. TestNG is
  neither famous nor widely used. And IMHO it makes sense to be careful
  with new exciting tools until we *really* need their innovative
  functionality.
 
 
 Hi Alexei,
 
 Last I heard, Richard posted saying that there was no bug in the
 migration tool [1]. The command line tool is designed to locate JUnit
 tests under a specified location and add the TestNG annotations to them.
 That's what it does.
 
 You are right to say that it makes sense to be careful in this matter.
 Nobody wants to do anything that affects Harmony in an adverse way.
 
 Best regards,
 George
 
 [1]
 http://mail-archives.apache.org/mod_mbox/incubator-harmony-
 dev/200607.mbox/[EMAIL PROTECTED]
 
 
 
  2006/7/10, George Harley [EMAIL PROTECTED]:
  Alexei Zakharov wrote:
   Actually, there's a very valid benefit for using TestNG markers (=
   annotations/JavaDoc) for grouping tests; the directory structure is
 a
   tree, whereas the markers can form any slice of tests, and the sets
  
   Concerning TestNG vs JUnit. I just like to pay your attention on the
   fact what it is possible to achieve the same level of test
   grouping/slicing with JUnit TestSuites. You may define any number of
   intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
   XXXWinSpecificSuite or whatever. Without necessity of migrating to
 new
   (probably unstable) test harness.
   Just my two cents.
  
  
 
  Hi Alexei,
 
  You are quite correct that JUnit test suites are another alternative
  here. If I recall correctly, their use was discussed in the very early
  days of this project but it came to nothing and we instead went down
 the
  route of using exclusion filters in the Ant JUnit task. That approach
  does not offer much in the way of fine grain control and relies on us
  pushing stuff around the repository. Hence the kicking off of this
  thread.
 
  For the purposes of this discussion it would be fascinating to find out
  why you refer to TestNG as being an unstable test harness. What is
  that statement based on ?
 
  Best regards,
  George
 
 
   2006/7/8, Alex Blewitt [EMAIL PROTECTED]:
   On 08/07/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
   
So while I like the annotations, and expect we can use them
   effectively,
I have an instinctive skepticism of annotations right now
  because in
general (in general in Java), I'm not convinced we've used them
  enough
to grok good design patterns.
  
   There's really no reason to get hung up on the annotations. TestNG
   works just as well with JavaDoc source comments; annotations are
 only
   another means to that end. (They're probably a better one for the
   future, but it's just an implementation detail.)
  
Now since I still haven't read the thread fully, I'm jumping to
conclusions, taking it to the extreme, etc etc, but my thinking in
writing the above is that if we bury everything about our test
'parameter space' in annotations, some of the visible
  organization we
have now w/ on-disk layout becomes invisible, and the readable
summaries of aspects of testing that we'd have in an XML
 metadata
document (or whatever) also are hard because you need to scan the
sources to find all instances of annotation X.
  
   I'm hoping that this would be just as applicable to using JavaDoc
   variants, and that the problem's not with annotations per se.
  
   In either case, both are grokkable with tools -- either
   annotation-savvy readers or a JavaDoc tag processor, and it
  wouldn't be
   hard to configure one of those to periodically scan the codebase to
   generate reports. Furthermore, as long as the annotation X is well
   defined, *you* don't have to scan

Re: [classlib] Testing conventions - a proposal

2006-07-09 Thread Richard Liang



Richard Liang wrote:



Paulex Yang wrote:

Richard Liang wrote:

Hello All,

After reading through the document recommended by Alex, I think TestNG 
can really meet our requirements. It provides much flexibility for 
test configuration. ;-)


If we decide to transfer to TestNG, we shall:

1. Identify Harmony testing strategy. (It's not easy)
2. Define TestNG suite/groups to reflect Harmony testing strategy
3. Decide to use Java 5 Annotations or Java 1.4 JavaDoc annotations
Is there any difference between using 1.4 doclets or 5.0 annotations? If we 
are using Java 1.4 for now, can we migrate to annotations easily?
Both 1.4 doclets and 5.0 annotations provide the same support for 
test configuration. The retention policy of TestNG's 5.0 annotations 
is RUNTIME, that is, the TestNG tests should be compiled into 5.0 
classes [1]. I don't think it's easy to migrate from doclets to 
annotations; at least, TestNG does not support this.  Correct me if I'm 
wrong.  ;-)


1. http://testng.org/doc/documentation-main.html#jdk-14
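As a rough sketch of the 1.4 flavour - going only by the page referenced in
[1], so the exact doclet syntax is worth double-checking - the same test
would carry the metadata in its JavaDoc comment rather than in an annotation:

public class DocletStyleTest {

    /**
     * JavaDoc equivalent of @Test(groups = {"type.api"}); parsed from the
     * source, so the test sources must be available at run time.
     * @testng.test groups = "type.api"
     */
    public void testMethod1() {
        // ... assertions ...
    }
}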

4. Convert all JUnit tests to TestNG tests (TestNG provides a tool 
org.testng.JUnitConverter for migrating from JUnit, but it seems 
that the tool has a bug  :-P )
I'm sorry, but... what does the bug look like? I think it is important 
because we have so many JUnit tests already; it will be a big concern 
for the TestNG solution if we have no tool to migrate.

I can show an example :-)

For a junit tests:

import junit.framework.TestCase;

public class SampleTest extends TestCase {
    public void testMethod1() {
        assertTrue(true);
    }
}

We suppose the corresponding TestNG test is:

import org.testng.annotations.Test;
import static org.testng.AssertJUnit.*;

public class SampleTest {
    @Test
    public void testMethod1() {
        assertTrue(true);
    }
}

But the tool will only add TestNG annotation to junit test methods:

import org.testng.annotations.*;
import junit.framework.TestCase;

public class SampleTest extends TestCase {
    @Test
    public void testMethod1() {
        assertTrue(true);
    }
}
Sorry Paulex, it sounds like this is not a bug if we decide to use TestNG while still 
keeping the flexibility to use JUnit. Any comments? Thanks a lot.


Richard


The TestNG Eclipse plugin also provides a way to convert JUnit tests to 
TestNG tests [2]; unfortunately, it also has bugs. :-( It should 
statically import all the assert methods of org.testng.AssertJUnit, but 
the converter only uses a normal import.


2. http://testng.org/doc/eclipse.html


5. Choose a module to run a pilot
...

Please correct me if I'm wrong. Thanks a lot.

Best regards,
Richard.

George Harley wrote:

Alex Blewitt wrote:

On 06/07/06, Richard Liang [EMAIL PROTECTED] wrote:


It seems that you're very familiar with TestNG.  ;-) So would you please
identify what we shall do to transfer from JUnit to TestNG? 
Thanks a lot.


Me? I'm just highly opinionated :-)


Hi Alex,

I think we are all pretty much in the TestNG novice category :-)




There's guidelines for migrating from JUnit to TestNG at the home 
page:

http://testng.org/doc/migrating.html

Here is a sample use that will convert all the JUnit tests in the
src/ directory to TestNG:

java org.testng.JUnitConverter -overwrite -annotation -srcdir src

:-)



I have done some private experimentation with this command line 
utility and it seems to work well. In the first instance it would 
be good to preserve the JUnit nature of the tests - i.e. still 
have the test classes extend from JUnit TestCase etc - so that 
there is always a backwards migration path. That's me being 
paranoid. Note that the equivalent migration functionality in the 
latest TestNG plug-in for Eclipse did not allow that but, in 
addition to adding in the annotations, insisted on removing the 
inheritance from TestCase.



There's also instructions about how to set it up with an Ant-based 
build:

http://testng.org/doc/ant.html

I'll see if I can migrate the tests I've got in the Pack200 dir to 
use
TestNG, so that you can see what it looks like. Unfortunately, I 
doubt

that I'm going to be able to get to that much before 2 weeks time due
to other outstanding commitments ...

Alex.


Although we haven't gotten round to discussing specifics yet, it is 
probably timely to mention here that using the TestNG annotations 
approach (as opposed to the pre-1.5 Javadoc comments approach) will 
not work so long as we are compiling Harmony code with the jsr14 
target. It looked like the annotation metadata did not make it into 
the generated class files (at least this is what I saw in my own 
experiments). If we want to use the annotations approach we will 
have to wait until we move up to compiling for a 1.5 target. 
Hopefully that will not be too long now..


In the meantime you could try out using the Javadoc comments 
approach, just to get a feel for how things run. The downside to 
that is that your test source needs to be available at runtime so 
that the comments are available for the framework to examine.


Best regards,
George




Re: [classlib] Testing conventions - a proposal

2006-07-08 Thread Geir Magnusson Jr


Nathan Beyer wrote:
 
 -Original Message-
 From: Geir Magnusson Jr [mailto:[EMAIL PROTECTED]

 
 This is a fun thread.  I plan to read it from end to end later today and
 comment.

 Initial thoughts are that I've been wanting to use TestNG for months
 (hence my resistance to any JUnit deps more than we needed to) and
 second, annotations won't solve our problems.  More later :)

 
 I find this to be an extremely interesting comment. What value do you see
 TestNG offering? Most of the conversations pushing for TestNG as a solution
 have been all about the annotations.

I meant all our problems :)   I've been suggesting being open to
TestNG for a while now, for the reason that it's second generation,
written by people I know and trust who have had to use it in anger -
they were scratching their own itch.  I like a bunch of small things,
like not having to extend a base class, dependent test methods,
pluggability, parameterized tests, and the annotations, but I haven't
used it - it's all academic for me so far.
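For anyone else who has not tried it, a tiny sketch of the dependent test
methods idea, going purely by the TestNG documentation (the class and method
names are invented):

import org.testng.annotations.Test;

public class SocketChannelTest {

    @Test
    public void testOpen() {
        // ... open the channel and assert success ...
    }

    // Runs only if testOpen passed; otherwise it is reported as skipped
    // rather than failed.
    @Test(dependsOnMethods = {"testOpen"})
    public void testReadAfterOpen() {
        // ...
    }
}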

So while I like the annotations, and expect we can use them effectively,
I have an instinctive skepticism of annotations right now because in
general (in general in Java), I'm not convinced we've used them enough
to grok good design patterns.

Now since I still haven't read the thread fully, I'm jumping to
conclusions, taking it to the extreme, etc etc, but my thinking in
writing the above is that if we bury everything about our test
'parameter space' in annotations, some of the visible organization we
have now w/ on-disk layout becomes invisible, and the readable
summaries of aspects of testing that we'd have in an XML metadata
document (or whatever) also are hard because you need to scan the
sources to find all instances of annotation X.

Anyway, I wanted this to be a short note to address your concern, but I
don't seem to be able to do that :)  More later.

geir






Re: Re: [classlib] Testing conventions - a proposal

2006-07-08 Thread Alex Blewitt

On 08/07/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:


So while I like the annotations, and expect we can use them effectively,
I have an instinctive skepticism of annotations right now because in
general (in general in Java), I'm not convinced we've used them enough
to grok good design patterns.


There's really no reason to get hung up on the annotations. TestNG
works just as well with JavaDoc source comments; annotations are only
another means to that end. (They're probably a better one for the
future, but it's just an implementation detail.)


Now since I still haven't read the thread fully, I'm jumping to
conclusions, taking it to the extreme, etc etc, but my thinking in
writing the above is that if we bury everything about our test
'parameter space' in annotations, some of the visible organization we
have now w/ on-disk layout becomes invisible, and the readable
summaries of aspects of testing that we'd have in an XML metadata
document (or whatever) also are hard because you need to scan the
sources to find all instances of annotation X.


I'm hoping that this would be just as applicable to using JavaDoc
variants, and that the problem's not with annotations per se.

In either case, both are grokkable with tools -- either
annotation-savvy readers or a JavaDoc tag processor, and it wouldn't be
hard to configure one of those to periodically scan the codebase to
generate reports. Furthermore, as long as the annotation X is well
defined, *you* don't have to scan it -- you leave it up to TestNG to
figure it out.

Actually, there's a very valid benefit for using TestNG markers (=
annotations/JavaDoc) for grouping tests; the directory structure is a
tree, whereas the markers can form any slice of tests, and the sets
don't need to be strict subsets (with a tree, everything has to be a
strict subset of its parents). That means that it's possible to define
a marker IO to run all the IO tests, or a marker Win32 to run all the
Win32 tests, and both of those will contain IO-specific Win32 tests.
You can't do that in a tree structure without duplicating content
somewhere along the line (e.g. /win/io or /io/win). Neither of these
scale well, and every time you add a new dimension, you're doubling
the structure of the directory, but merely adding a new marker with
TestNG. So if you wanted to have (say) boot classpath tests vs api
tests, then you'd have to have /api/win/io and /boot/win/io (or
various permutations as applicable).
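A rough sketch of what that looks like in practice (class, method and group
names below are only illustrative):

import org.testng.annotations.Test;

public class FileChannelWin32Test {

    // A member of both slices: picked up by a run of the "io" group and by
    // a run of the "win32" group, with no duplication of directories.
    @Test(groups = {"io", "win32"})
    public void testLockingOnWindows() {
        // ...
    }
}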

Most of the directory-based arguments seem to be along the lines of
/api/win/io is better! No, /win/io/api is better!. Just have an
'api', 'win', 'io' TestNG marker, and then let TestNG figure out which
ones to run. You can then even get specific, and only run the Windows
IO API tests, if you really want -- but if you don't, you get the
benefit of being able to run all IO tests (both API and boot).

There doesn't seem to be any benefit to having a strict tree-like
structure to the tests when it's possible to have a multi-dimensional
matrix of all possible combinations that's managed by the tool.

Alex.




Re: [classlib] Testing conventions - a proposal

2006-07-08 Thread Geir Magnusson Jr


Alex Blewitt wrote:
 On 08/07/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:

 So while I like the annotations, and expect we can use them effectively,
 I have an instinctive skepticism of annotations right now because in
 general (in general in Java), I'm not convinced we've used them enough
 to grok good design patterns.
 
 There's really no reason to get hung up on the annotations. TestNG
 works just as well with JavaDoc source comments; annotations are only
 another means to that end. (They're probably a better one for the
 future, but it's just an implementation detail.)

You don't understand what I meant - one of the elements of this subject
is that we can punt on how we're organizing tests in a directory layout,
and use annotations for that purpose, and I'm worried about going fully
to that extreme.

 
 Now since I still haven't read the thread fully, I'm jumping to
 conclusions, taking it to the extreme, etc etc, but my thinking in
 writing the above is that if we bury everything about our test
 'parameter space' in annotations, some of the visible organization we
 have now w/ on-disk layout becomes invisible, and the readable
 summaries of aspects of testing that we'd have in an XML metadata
 document (or whatever) also are hard because you need to scan the
 sources to find all instances of annotation X.
 
 I'm hoping that this would be just as applicable to using JavaDoc
 variants, and that the problem's not with annotations per se.

Right.

 
 In either case, both are grokkable with tools -- either
 annotation-savvy readers or a JavaDoc tag processor, and it wouldn't be
 hard to configure one of those to periodically scan the codebase to
 generate reports. Furthermore, as long as the annotation X is well
 defined, *you* don't have to scan it -- you leave it up to TestNG to
 figure it out.

Maybe.  Tools help, but they have to be universal, lightweight and
pretty transparent, IMO.  Don't force people to boot Eclipse (or
Netbeans or IDEA) to just figure out the general details of a test class...


 
 Actually, there's a very valid benefit for using TestNG markers (=
 annotations/JavaDoc) for grouping tests; the directory structure is a
 tree, whereas the markers can form any slice of tests, and the sets
 don't need to be strict subsets (with a tree, everything has to be a
 strict subset of its parents). That means that it's possible to define
 a marker IO to run all the IO tests, or a marker Win32 to run all the
 Win32 tests, and both of those will contain IO-specific Win32 tests.
 You can't do that in a tree structure without duplicating content
 somewhere along the line (e.g. /win/io or /io/win). Neither of these
 scale well, and every time you add a new dimension, you're doubling
 the structure of the directory, but merely adding a new marker with
 TestNG. So if you wanted to have (say) boot classpath tests vs api
 tests, then you'd have to have /api/win/io and /boot/win/io (or
 various permutations as applicable).

I understand this, which is why I think a general human-readable
metadata system will help, preferably one that has the data in one
place, rather than scattered throughout... of course, centralization has
its downsides too...  it's not an easy problem :)

 
 Most of the directory-based arguments seem to be along the lines of
 /api/win/io is better! No, /win/io/api is better!. Just have an
 'api', 'win', 'io' TestNG marker, and then let TestNG figure out which
 ones to run. You can then even get specific, and only run the Windows
 IO API tests, if you really want -- but if you don't, you get the
 benefit of being able to run all IO tests (both API and boot).

Or a document that has groups/suites that ant then uses...

 
 There doesn't seem to be any benefit to having a strict tree-like
 structure to the tests when it's possible to have a multi-dimensional
 matrix of all possible combinations that's managed by the tool.

Right.  Clearly a directory-only solution won't ever work well, in the
same way an annotation/marker based solution won't either (I'm
guessing).  I think we first have to figure out what we want to achieve
*irrespective* of how it will be done, and then find the right
tools/process/strategy to achieve that, or create them.  This is
important.  Lets not let the directory-tail or the annotation-tail wag
the testing dog.

:D

geir





Re: [classlib] Testing conventions - a proposal

2006-07-07 Thread Mark Hindess

On 6 July 2006 at 21:02, Nathan Beyer [EMAIL PROTECTED] wrote:

 I think Tim has a valid point, or at least the point I'm inferring
 seems valid: the testing technology is not the real issue. This
 problem can be solved by either JUnit or TestNG. More specifically,
 this problem can be solved utilizing the grouping of arbitrary tests.

I'm happy with either JUnit or TestNG.  My only concerns about TestNG
are non-technical.  (Can we use TestNG without adding to the burden for
the developer looking at Harmony for the first time?  I think we can
automate the download as we do for other dependencies.  TestNG is under
the Apache License but I'm not sure what the licenses are like for the
third party code it includes.  This may not be immediately important,
but it might be if we wanted to include tests - ready to run - in the
HDK.)

 I've been playing with reorganizing the 'luni' module using the
 suggested directory layout and it really doesn't seem to provide
 much value.

 Also, I'm a bit partial to the concept of one source directory
 (src/main/java), one test source directory (src/test/java) and
 any number of resource (src/main/resources/*) and test resource
 directories (src/test/resources/*) as defined by the Maven 2 POM.

+1

 The only practical value I saw in the directory layout was that in
 Eclipse I could just select a single folder and run all of the API
 tests against an RI. The same can be said for any of the other test
 folders, but this same feature can also be achieved via TestSuites.

 As such, I'm in alignment with Tim's thoughts on just using TestSuites
 to define the major groupings. I think the proposed naming conventions
 of 'o.a.h.test.module.java.package' are fine. The only addition
 I would make is to add guidelines on class names, so that pure
 API tests, Harmony tests and failing tests can live in the same
 package. Something as trivial as XXXAPITest, XXXImplTest and
 XXXFailingTest would work. Perhaps a similar approach can be used for
 platform-specific tests. These tests would then be grouped, per-module
 into an APITestSuite, an ImplTestSuite, a FailingTestSuite and
 Platform-specificTestSuites.

This is where I think TestNG has the edge.  XXXFailingTest could contain
both failing API and failing Impl tests?  With TestNG these failing
tests would not have to be moved out of the code base, but could simply
be annotated in-place.  I also like the idea of being able to write
tests (API tests) for code that we don't have yet put them in place and
annotate them appropriately until the code is written.  For instance,
when someone raises a JIRA with a test (that passes on RI) but no fix
we can add the test right away.
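
Something like this, purely as a sketch (class and group names invented):

import org.testng.Assert;
import org.testng.annotations.Test;

public class HashMapTest {
    @Test(groups = {"api"})
    public void testPut() {
        Assert.assertTrue(new java.util.HashMap().isEmpty());
    }

    // fails on Harmony today; stays next to the passing tests and the
    // default run simply excludes the "broken" group
    @Test(groups = {"api", "broken"})
    public void testEntrySetOrdering() {
        Assert.assertTrue(true); // placeholder for the real assertion
    }
}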

 In regards to tests that must be on the bootclasspath, I would say
 either just put everything on the bootclasspath (any real harm)

Testing this way means we are doing API testing in a way that is
different from the typical way a user will use the API.  This seems
wrong to me.

 or use pattern sets for bootclasspath tests (80% of the time the
 classes will be java*/*).

That might work.

 In regards to stress tests, performance tests and integration tests, I
 believe these are patently different and should be developed in their
 own projects.

I'm inclined to agree.

Regards,
 Mark.

  -Original Message-
  From: Tim Ellison [mailto:[EMAIL PROTECTED]
  snip/
  
  Considering just the JUnit tests that we have at the moment...
  
  Do I understand you correctly that you agree with the idea of creating
  'suites of tests' using metadata (such as TestNG's annotations or
  whatever) and not by using the file system layout currently being
  proposed?
  
  I know that you are also thinking about integration tests, stress tests,
  performance tests, etc. as well but just leaving those aside at the
  moment.
  
  Regards,
  Tim
  
  
   Thanks
   Mikhail
  
  
   Stress
   ...and so on...
  
  
   If you weigh up all of the different possible permutations and then
   consider that the above list is highly likely to be extended as things
   progress it is obvious that we are eventually heading for large amounts
   of related test code scattered or possibly duplicated across numerous
   hard wired source directories. How maintainable is that going to be ?
  
   If we want to run different tests in different configurations then IMHO
   we need to be thinking a whole lot smarter. We need to be thinking
  about
   keeping tests for specific areas of functionality together (thus easing
   maintenance); we need something quick and simple to re-configure if
   necessary (pushing whole directories of files around the place does not
   seem a particularly lightweight approach); and something that is not
   going to potentially mess up contributed patches when the file they
   patch is found to have been recently pushed from source folder A to B.
  
   To connect into another recent thread, there have been some posts
  lately
   about handling some test methods that fail on Harmony 

Re: [classlib] Testing conventions - a proposal

2006-07-07 Thread Geir Magnusson Jr


Oliver Deakin wrote:
 George Harley wrote:
 Hi,

 Just seen Tim's note on test support classes and it really caught my
 attention as I have been mulling over this issue for a little while
 now. I think that it is a good time for us to return to the topic of
 class library test layouts.

 The current proposal [1] sets out to segment our different types of
 test by placing them in different file locations. 
 
 ok - on closer reading of this document, I have a few gripes...
 
 First, what happened to the Maven layout we agreed on a while ago?

Maven layout?  We were doing that layout in Jakarta projects long
before Maven...

This is a fun thread.  I plan to read it from end to end later today and
comment.

Initial thoughts are that I've been wanting to use TestNG for months
(hence my resistance to any JUnit deps more than we needed to) and
second, annotations won't solve our problems.  More later :)

geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [classlib] Testing conventions - a proposal

2006-07-07 Thread Nathan Beyer


 -Original Message-
 From: Geir Magnusson Jr [mailto:[EMAIL PROTECTED]
 
 Maven layout?  We were doing that layout in Jakarta projects long
 before Maven...
 

And I would guess the Maven designers would agree. Much of their
documentation talks about how the conventions inferred in the super POMs
came from Jakarta projects and others.

 This is a fun thread.  I plan to read it from end to end later today and
 comment.
 
 Initial thoughts are that I've been wanting to use TestNG for months
 (hence my resistance to any JUnit deps more than we needed to) and
 second, annotations won't solve our problems.  More later :)
 

I find this to be an extremely interesting comment. What value do you see
TestNG offering? Most of the conversations pushing for TestNG as a solution
have been all about the annotations.

 geir
 
 -
 Terms of use : http://incubator.apache.org/harmony/mailing.html
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Mikhail Loenko

2006/7/5, George Harley [EMAIL PROTECTED]:

Hi,

Just seen Tim's note on test support classes and it really caught my
attention as I have been mulling over this issue for a little while now.
I think that it is a good time for us to return to the topic of class
library test layouts.

The current proposal [1] sets out to segment our different types of test
by placing them in different file locations. After looking at the recent
changes to the LUNI module tests (where the layout guidelines were
applied) I have a real concern that there are serious problems with this
approach. We have started down a track of just continually growing the
number of test source folders as new categories of test are identified
and IMHO that is going to bring complexity and maintenance issues with
these tests.

Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI


BTW, these are Harmony-specific...


I see your point. And I think we need this directory-level split to
make it possible
to run various kinds of tests with different frameworks. We were already
discussing that JUnit extensions are not very good for performance testing.
In the future we might find out that some new test contribution is
inconsistent with the framework we have chosen.

So I'm for directory-level separation of the tests. (Probably some categories
should have their own build)

Thanks
Mikhail



Stress
...and so on...


If you weigh up all of the different possible permutations and then
consider that the above list is highly likely to be extended as things
progress it is obvious that we are eventually heading for large amounts
of related test code scattered or possibly duplicated across numerous
hard wired source directories. How maintainable is that going to be ?

If we want to run different tests in different configurations then IMHO
we need to be thinking a whole lot smarter. We need to be thinking about
keeping tests for specific areas of functionality together (thus easing
maintenance); we need something quick and simple to re-configure if
necessary (pushing whole directories of files around the place does not
seem a particularly lightweight approach); and something that is not
going to potentially mess up contributed patches when the file they
patch is found to have been recently pushed from source folder A to B.

To connect into another recent thread, there have been some posts lately
about handling some test methods that fail on Harmony and have meant
that entire test case classes have been excluded from our test runs. I
have also been noticing some API test methods that pass fine on Harmony
but fail when run against the RI. Are the different behaviours down to
errors in the Harmony implementation ? An error in the RI implementation
? A bug in the RI Javadoc ? Only after some investigation has been
carried out do we know for sure. That takes time. What do we do with the
test methods in the meantime ? Do we push them round the file system
into yet another new source folder ? IMHO we need a testing strategy
that enables such problem methods to be tracked easily without
disruption to the rest of the other tests.

A couple of weeks ago I mentioned that the TestNG framework [2] seemed
like a reasonably good way of allowing us to both group together
different kinds of tests and permit the exclusion of individual
tests/groups of tests [3]. I would like to strongly propose that we
consider using TestNG as a means of providing the different test
configurations required by Harmony. Using a combination of annotations
and XML to capture the kinds of sophisticated test configurations that
people need, and that allows us to specify down to the individual
method, has got to be more scalable and flexible than where we are
headed now.

Thanks for reading this far.

Best regards,
George


[1]
http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
[2] http://testng.org
[3]
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL
 PROTECTED]


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Alex Blewitt

On 06/07/06, Richard Liang [EMAIL PROTECTED] wrote:


George Harley wrote:
 A couple of weeks ago I mentioned that the TestNG framework [2] seemed
 like a reasonably good way of allowing us to both group together
 different kinds of tests and permit the exclusion of individual
 tests/groups of tests [3]. I would like to strongly propose that we
 consider using TestNG as a means of providing the different test
 configurations required by Harmony.

Will try to study TestNG before I can give comment ;-)


I'd strongly recommend TestNG for this purpose, too. It's possible to
have a limitless set of annotations for TestNG as well as allowing
different (sub)sets of those tests to be run. You can also set up
dependencies between stages (e.g. to test sockets, you've got to test
the IO ones first) as well as allowing re-running of just failed tests
from the command line (a test run can output markers as to which tests
passed/failed, and then on subsequent runs just re-run the failing
tests).

It would also solve a lot of the problems that we've been seeing for
OS-specific issues; you can mark a test only to be run on Windows, or
on Linux etc.

The best thing is that all of these annotations can be combined in
whatever ways you want -- as opposed to a directory-based
approach, which is hierarchical and thus not easy to split based on OS
or environment alone without an exponential explosion in the possible
combinations.
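
As a rough sketch of how the markers and the dependencies combine
(names invented here):

import org.testng.Assert;
import org.testng.annotations.Test;

public class NetTest {
    @Test(groups = {"io"})
    public void testStreams() {
        Assert.assertTrue(true); // placeholder body
    }

    // only scheduled once everything in the "io" group has passed
    @Test(groups = {"net"}, dependsOnGroups = {"io"})
    public void testSocketStreams() {
        Assert.assertTrue(true); // placeholder body
    }
}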

Alex.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Richard Liang



Alex Blewitt wrote:

On 06/07/06, Richard Liang [EMAIL PROTECTED] wrote:


George Harley wrote:
 A couple of weeks ago I mentioned that the TestNG framework [2] seemed
 like a reasonably good way of allowing us to both group together
 different kinds of tests and permit the exclusion of individual
 tests/groups of tests [3]. I would like to strongly propose that we
 consider using TestNG as a means of providing the different test
 configurations required by Harmony.

Will try to study TestNG before I can give comment ;-)


I'd strongly recommend TestNG for this purpose, too. It's possible to
have a limitless set of annotations for TestNG as well as allowing
different (sub)sets of those tests to be run. You can also set up
dependencies between stages (e.g. to test sockets, you've got to test
the IO ones first) as well as allowing re-running of just failed tests
from the command line (a test run can output markers as to which tests
passed/failed, and then on subsequent runs just re-run the failing
tests).

It would also solve a lot of the problems that we've been seeing for
OS-specific issues; you can mark a test only to be run on Windows, or
on Linux etc.

The best thing is that all of these annotations can be combined in
whatever ways you want -- as opposed to a directory-based
approach, which is hierarchical and thus not easy to split based on OS
or environment alone without an exponential explosion in the possible
combinations.

Hello Alex,

It seems that you're very familiar with TestNG.  ;-) So would you please 
identify what we shall do to transfer from JUnit to TestNG? Thanks a lot.




Alex.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Richard Liang
China Software Development Lab, IBM 




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Alex Blewitt

On 06/07/06, Richard Liang [EMAIL PROTECTED] wrote:


It seems that you're very familiar with TestNG.  ;-) So would you please
identify what we shall do to transfer from JUnit to TestNG? Thanks a lot.


Me? I'm just highly opinionated :-)

There's guidelines for migrating from JUnit to TestNG at the home page:
http://testng.org/doc/migrating.html

Here is a sample use that will convert all the JUnit tests in the
src/ directory to TestNG:

java org.testng.JUnitConverter -overwrite -annotation -srcdir src

:-)
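
Roughly speaking, a converted test keeps its JUnit shape and just gains the
annotations -- something like this (sketch only; the exact output may differ):

import org.testng.annotations.Test;

public class FooTest extends junit.framework.TestCase {
    @Test // added by the converter; the method body is untouched
    public void testBar() {
        assertEquals(2, 1 + 1);
    }
}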

There's also instructions about how to set it up with an Ant-based build:
http://testng.org/doc/ant.html

I'll see if I can migrate the tests I've got in the Pack200 dir to use
TestNG, so that you can see what it looks like. Unfortunately, I doubt
that I'm going to be able to get to that much before 2 weeks time due
to other outstanding commitments ...

Alex.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Richard Liang



Alex Blewitt wrote:

On 06/07/06, Richard Liang [EMAIL PROTECTED] wrote:


It seems that you're very familiar with TestNG.  ;-) So would you please
identify what we shall do to transfer from JUnit to TestNG? Thanks a 
lot.


Me? I'm just highly opinionated :-)

There's guidelines for migrating from JUnit to TestNG at the home page:
http://testng.org/doc/migrating.html

Here is a sample use that will convert all the JUnit tests in the
src/ directory to TestNG:

java org.testng.JUnitConverter -overwrite -annotation -srcdir src

:-)

There's also instructions about how to set it up with an Ant-based build:
http://testng.org/doc/ant.html


Will read the materials :-) Thanks a lot.

I'll see if I can migrate the tests I've got in the Pack200 dir to use
TestNG, so that you can see what it looks like. Unfortunately, I doubt
that I'm going to be able to get to that much before 2 weeks time due
to other outstanding commitments ...


If there are no objections, we can start with Pack200 (when you're free). ;-)

Alex.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Richard Liang
China Software Development Lab, IBM 




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread George Harley

Alex Blewitt wrote:

On 06/07/06, Richard Liang [EMAIL PROTECTED] wrote:


It seems that you're very familiar with TestNG.  ;-) So would you please
identify what we shall do to transfer from JUnit to TestNG? Thanks a 
lot.


Me? I'm just highly opinionated :-)


Hi Alex,

I think we are all pretty much in the TestNG novice category :-)




There's guidelines for migrating from JUnit to TestNG at the home page:
http://testng.org/doc/migrating.html

Here is a sample use that will convert all the JUnit tests in the
src/ directory to TestNG:

java org.testng.JUnitConverter -overwrite -annotation -srcdir src

:-)



I have done some private experimentation with this command line utility 
and it seems to work well. In the first instance it would be good to 
preserve the JUnit nature of the tests - i.e. still have the test 
classes extend from JUnit TestCase etc - so that there is always a 
backwards migration path. That's me being paranoid. Note that the 
equivalent migration functionality in the latest TestNG plug-in for 
Eclipse did not allow that but, in addition to adding in the 
annotations, insisted on removing the inheritance from TestCase.




There's also instructions about how to set it up with an Ant-based build:
http://testng.org/doc/ant.html

I'll see if I can migrate the tests I've got in the Pack200 dir to use
TestNG, so that you can see what it looks like. Unfortunately, I doubt
that I'm going to be able to get to that much before 2 weeks time due
to other outstanding commitments ...

Alex.


Although we haven't gotten round to discussing specifics yet, it is 
probably timely to mention here that using the TestNG annotations 
approach (as opposed to the pre-1.5 Javadoc comments approach) will not 
work so long as we are compiling Harmony code with the jsr14 target. 
It looked like the annotation metadata did not make it into the 
generated class files (at least this is what I saw in my own 
experiments). If we want to use the annotations approach we will have to 
wait until we move up to compiling for a 1.5 target. Hopefully that will 
not be too long now..


In the meantime you could try out using the Javadoc comments approach, 
just to get a feel for how things run. The downside to that is that your 
test source needs to be available at runtime so that the comments are 
available for the framework to examine.
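
For anyone who wants to try it, the Javadoc-style markers look roughly like
this (I'm going from memory of the TestNG docs, so treat the exact tag syntax
as approximate):

public class FooTest extends junit.framework.TestCase {
    /**
     * @testng.test groups = "api"
     */
    public void testBar() {
        assertEquals(2, 1 + 1);
    }
}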


Best regards,
George



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Tim Ellison
Mikhail Loenko wrote:
 2006/7/5, George Harley [EMAIL PROTECTED]:
 Hi,

 Just seen Tim's note on test support classes and it really caught my
 attention as I have been mulling over this issue for a little while now.
 I think that it is a good time for us to return to the topic of class
 library test layouts.

 The current proposal [1] sets out to segment our different types of test
 by placing them in different file locations. After looking at the recent
 changes to the LUNI module tests (where the layout guidelines were
 applied) I have a real concern that there are serious problems with this
 approach. We have started down a track of just continually growing the
 number of test source folders as new categories of test are identified
 and IMHO that is going to bring complexity and maintenance issues with
 these tests.

 Consider the dimensions of tests that we have ...

 API
 Harmony-specific
 Platform-specific
 Run on classpath
 Run on bootclasspath
 Behaves different between Harmony and RI
 
 BTW, these are Harmony-specific...
 
 
 I see your point. And I think we need this directory-level split to
 make it possible
 to run various kinds of tests with different frameworks. We were already
 discussing that JUnit extensions are not very good for performance testing.
 In the future we might find out that some new test contribution is
 inconsistent with the framework we have chosen.
 
 So I'm for directory-level separation of the tests. (Probably some
 categories
 should have their own build)

Considering just the JUnit tests that we have at the moment...

Do I understand you correctly that you agree with the idea of creating
'suites of tests' using metadata (such as TestNG's annotations or
whatever) and not by using the file system layout currently being proposed?

I know that you are also thinking about integration tests, stress tests,
performance tests, etc. as well but just leaving those aside at the moment.

Regards,
Tim


 Thanks
 Mikhail
 
 
 Stress
 ...and so on...


 If you weigh up all of the different possible permutations and then
 consider that the above list is highly likely to be extended as things
 progress it is obvious that we are eventually heading for large amounts
 of related test code scattered or possibly duplicated across numerous
 hard wired source directories. How maintainable is that going to be ?

 If we want to run different tests in different configurations then IMHO
 we need to be thinking a whole lot smarter. We need to be thinking about
 keeping tests for specific areas of functionality together (thus easing
 maintenance); we need something quick and simple to re-configure if
 necessary (pushing whole directories of files around the place does not
 seem a particularly lightweight approach); and something that is not
 going to potentially mess up contributed patches when the file they
 patch is found to have been recently pushed from source folder A to B.

 To connect into another recent thread, there have been some posts lately
 about handling some test methods that fail on Harmony and have meant
 that entire test case classes have been excluded from our test runs. I
 have also been noticing some API test methods that pass fine on Harmony
 but fail when run against the RI. Are the different behaviours down to
 errors in the Harmony implementation ? An error in the RI implementation
 ? A bug in the RI Javadoc ? Only after some investigation has been
 carried out do we know for sure. That takes time. What do we do with the
 test methods in the meantime ? Do we push them round the file system
 into yet another new source folder ? IMHO we need a testing strategy
 that enables such problem methods to be tracked easily without
 disruption to the rest of the other tests.

 A couple of weeks ago I mentioned that the TestNG framework [2] seemed
 like a reasonably good way of allowing us to both group together
 different kinds of tests and permit the exclusion of individual
 tests/groups of tests [3]. I would like to strongly propose that we
 consider using TestNG as a means of providing the different test
 configurations required by Harmony. Using a combination of annotations
 and XML to capture the kinds of sophisticated test configurations that
 people need, and that allows us to specify down to the individual
 method, has got to be more scalable and flexible than where we are
 headed now.

 Thanks for reading this far.

 Best regards,
 George


 [1]
 http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html

 [2] http://testng.org
 [3]
 http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL
  PROTECTED]



 -
 Terms of use : http://incubator.apache.org/harmony/mailing.html
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


 
 -
 Terms of use : 

RE: [classlib] Testing conventions - a proposal

2006-07-06 Thread Nathan Beyer
I think Tim has a valid point, or at least the point I'm inferring seems
valid: the testing technology is not the real issue. This problem can be
solved by either JUnit or TestNG. More specifically, this problem can be
solved utilizing the grouping of arbitrary tests.

I've been playing with reorganizing the 'luni' module using the suggested
directory layout and it really doesn't seem to provide much value. Also, I'm
a bit partial to the concept of one source directory (src/main/java), one
test source directory (src/test/java) and any number of resource
(src/main/resources/*) and test resource directories (src/test/resources/*)
as defined by the Maven 2 POM.

The only practical value I saw in the directory layout was that in Eclipse I
could just select a single folder and run all of the API tests against an
RI. The same can be said for any of the other test folders, but this same
feature can also be achieved via TestSuites.

As such, I'm in alignment with Tim's thoughts on just using TestSuites to
define the major groupings. I think the proposed naming conventions of
'o.a.h.test.module.java.package' are fine. The only addition I would make
is to add guidelines on class names, so that pure API tests, Harmony tests and
failing tests can live in the same package. Something as trivial as
XXXAPITest, XXXImplTest and XXXFailingTest would work. Perhaps a similar
approach can be used for platform-specific tests. These tests would then be
grouped, per-module into an APITestSuite, an ImplTestSuite, a
FailingTestSuite and Platform-specificTestSuites.
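
For what it's worth, a minimal sketch of one of those per-module suites in
plain JUnit (the test class names here are hypothetical, following the
XXXAPITest convention):

import junit.framework.Test;
import junit.framework.TestSuite;

public class APITestSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("luni API tests");
        suite.addTestSuite(HashMapAPITest.class);  // hypothetical test classes
        suite.addTestSuite(VectorAPITest.class);
        return suite;
    }
}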

In regards to tests that must be on the bootclasspath, I would say either
just put everything on the bootclasspath (any real harm?) or use pattern sets
for bootclasspath tests (80% of the time the classes will be java*/*).

In regards to stress tests, performance tests and integration tests, I
believe these are patently different and should be developed in their own
projects.

My 2 cents...

-Nathan

 -Original Message-
 From: Tim Ellison [mailto:[EMAIL PROTECTED]
 snip/
 
 Considering just the JUnit tests that we have at the moment...
 
 Do I understand you correctly that you agree with the idea of creating
 'suites of tests' using metadata (such as TestNG's annotations or
 whatever) and not by using the file system layout currently being
 proposed?
 
 I know that you are also thinking about integration tests, stress tests,
 performance tests, etc. as well but just leaving those aside at the
 moment.
 
 Regards,
 Tim
 
 
  Thanks
  Mikhail
 
 
  Stress
  ...and so on...
 
 
  If you weigh up all of the different possible permutations and then
  consider that the above list is highly likely to be extended as things
  progress it is obvious that we are eventually heading for large amounts
  of related test code scattered or possibly duplicated across numerous
  hard wired source directories. How maintainable is that going to be ?
 
  If we want to run different tests in different configurations then IMHO
  we need to be thinking a whole lot smarter. We need to be thinking
 about
  keeping tests for specific areas of functionality together (thus easing
  maintenance); we need something quick and simple to re-configure if
  necessary (pushing whole directories of files around the place does not
  seem a particularly lightweight approach); and something that is not
  going to potentially mess up contributed patches when the file they
  patch is found to have been recently pushed from source folder A to B.
 
  To connect into another recent thread, there have been some posts
 lately
  about handling some test methods that fail on Harmony and have meant
  that entire test case classes have been excluded from our test runs. I
  have also been noticing some API test methods that pass fine on Harmony
  but fail when run against the RI. Are the different behaviours down to
  errors in the Harmony implementation ? An error in the RI
 implementation
  ? A bug in the RI Javadoc ? Only after some investigation has been
  carried out do we know for sure. That takes time. What do we do with
 the
  test methods in the meantime ? Do we push them round the file system
  into yet another new source folder ? IMHO we need a testing strategy
  that enables such problem methods to be tracked easily without
  disruption to the rest of the other tests.
 
  A couple of weeks ago I mentioned that the TestNG framework [2] seemed
  like a reasonably good way of allowing us to both group together
  different kinds of tests and permit the exclusion of individual
  tests/groups of tests [3]. I would like to strongly propose that we
  consider using TestNG as a means of providing the different test
  configurations required by Harmony. Using a combination of annotations
  and XML to capture the kinds of sophisticated test configurations that
  people need, and that allows us to specify down to the individual
  method, has got to be more scalable and flexible than where we are
  headed now.
 
  

Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Andrew Zhang

On 7/5/06, George Harley [EMAIL PROTECTED] wrote:


Hi,

Just seen Tim's note on test support classes and it really caught my
attention as I have been mulling over this issue for a little while now.
I think that it is a good time for us to return to the topic of class
library test layouts.

The current proposal [1] sets out to segment our different types of test
by placing them in different file locations. After looking at the recent
changes to the LUNI module tests (where the layout guidelines were
applied) I have a real concern that there are serious problems with this
approach. We have started down a track of just continually growing the
number of test source folders as new categories of test are identified
and IMHO that is going to bring complexity and maintenance issues with
these tests.

Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI
Stress
...and so on...



At least ... 3 (common/win/linux) * 2 (classpath/bootclasspath) * 2 (api/impl)
* ... = 12 * ...
Of course, for some modules or packages there is only the common folder *
classpath * api test, which is the best case.
The number of folders is really terrific in the worst situation.

If you weigh up all of the different possible permutations and then

consider that the above list is highly likely to be extended as things
progress it is obvious that we are eventually heading for large amounts
of related test code scattered or possibly duplicated across numerous
hard wired source directories. How maintainable is that going to be ?



Putting my $0.02 here: configuration is better than a physical folder layout,
no matter which tool is used (JUnit, TestNG, ...).
In fact, the physical layout is also a configuration, just one controlled by
the directory structure rather than by annotations, XML, ...
I think storing test-specific information (e.g. "Windows only") in code is better
than in the folder path (e.g. **/win/...).
In most cases, annotation by a professional test tool is more flexible and
powerful.
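
For example (names invented), the Windows-only method can sit right next to
the common ones:

import org.testng.Assert;
import org.testng.annotations.Test;

public class FileTest {
    @Test(groups = {"api"})
    public void testGetPath() {
        Assert.assertTrue(new java.io.File("x").getPath().length() > 0);
    }

    // marked as Windows-specific by a group rather than by a **/win/... folder
    @Test(groups = {"api", "windows"})
    public void testDriveLetterParsing() {
        Assert.assertTrue(true); // placeholder for the real assertion
    }
}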

If we want to run different tests in different configurations then IMHO

we need to be thinking a whole lot smarter. We need to be thinking about
keeping tests for specific areas of functionality together (thus easing
maintenance); we need something quick and simple to re-configure if
necessary (pushing whole directories of files around the place does not
seem a particularly lightweight approach); and something that is not
going to potentially mess up contributed patches when the file they
patch is found to have been recently pushed from source folder A to B.

To connect into another recent thread, there have been some posts lately
about handling some test methods that fail on Harmony and have meant
that entire test case classes have been excluded from our test runs. I
have also been noticing some API test methods that pass fine on Harmony
but fail when run against the RI. Are the different behaviours down to
errors in the Harmony implementation ? An error in the RI implementation
? A bug in the RI Javadoc ? Only after some investigation has been
carried out do we know for sure. That takes time. What do we do with the
test methods in the meantime ? Do we push them round the file system
into yet another new source folder ? IMHO we need a testing strategy
that enables such problem methods to be tracked easily without
disruption to the rest of the other tests.

A couple of weeks ago I mentioned that the TestNG framework [2] seemed
like a reasonably good way of allowing us to both group together
different kinds of tests and permit the exclusion of individual
tests/groups of tests [3]. I would like to strongly propose that we
consider using TestNG as a means of providing the different test
configurations required by Harmony. Using a combination of annotations
and XML to capture the kinds of sophisticated test configurations that
people need, and that allows us to specify down to the individual
method, has got to be more scalable and flexible than where we are
headed now.



Maybe another two cents here after learning TestNG. :)


Thanks for reading this far.


Best regards,
George


[1]

http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
[2] http://testng.org
[3]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL
 PROTECTED]


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





--
Andrew Zhang
China Software Development Lab, IBM


Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Richard Liang

Hello All,

After reading through the document recommended by Alex, I think TestNG can
really meet our requirements. It provides a lot of flexibility for test
configuration. ;-)


If we decide to transfer to TestNG, we shall:

1. Identify Harmony testing strategy. (It's not easy)
2. Define TestNG suite/groups to reflect Harmony testing strategy
3. Decide to use Java 5 Annotations or Java 1.4 JavaDoc annotations
4. Convert all JUnit tests to TestNG tests (TestNG provides a tool 
org.testng.JUnitConverter for migrating from JUnit, but it seems that 
the tool has a bug  :-P )

5. Choose a module to run a pilot
...

Please correct me if I'm wrong. Thanks a lot.

Best regards,
Richard.

George Harley wrote:

Alex Blewitt wrote:

On 06/07/06, Richard Liang [EMAIL PROTECTED] wrote:


It seems that you're very familiar with TestNG.  ;-) So would you 
please
identify what we shall do to transfer from JUnit to TestNG? Thanks a 
lot.


Me? I'm just highly opinionated :-)


Hi Alex,

I think we are all pretty much in the TestNG novice category :-)




There's guidelines for migrating from JUnit to TestNG at the home page:
http://testng.org/doc/migrating.html

Here is a sample use that will convert all the JUnit tests in the
src/ directory to TestNG:

java org.testng.JUnitConverter -overwrite -annotation -srcdir src

:-)



I have done some private experimentation with this command line 
utility and it seems to work well. In the first instance it would be 
good to preserve the JUnit nature of the tests - i.e. still have the 
test classes extend from JUnit TestCase etc - so that there is always 
a backwards migration path. That's me being paranoid. Note that the 
equivalent migration functionality in the latest TestNG plug-in for 
Eclipse did not allow that but, in addition to adding in the 
annotations, insisted on removing the inheritance from TestCase.



There's also instructions about how to set it up with an Ant-based 
build:

http://testng.org/doc/ant.html

I'll see if I can migrate the tests I've got in the Pack200 dir to use
TestNG, so that you can see what it looks like. Unfortunately, I doubt
that I'm going to be able to get to that much before 2 weeks time due
to other outstanding commitments ...

Alex.


Although we haven't gotten round to discussing specifics yet, it is 
probably timely to mention here that using the TestNG annotations 
approach (as opposed to the pre-1.5 Javadoc comments approach) will 
not work so long as we are compiling Harmony code with the jsr14 
target. It looked like the annotation metadata did not make it into 
the generated class files (at least this is what I saw in my own 
experiments). If we want to use the annotations approach we will have 
to wait until we move up to compiling for a 1.5 target. Hopefully that 
will not be too long now..


In the meantime you could try out using the Javadoc comments approach, 
just to get a feel for how things run. The downside to that is that 
your test source needs to be available at runtime so that the comments 
are available for the framework to examine.


Best regards,
George



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Richard Liang
China Software Development Lab, IBM 




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing conventions - a proposal

2006-07-06 Thread Paulex Yang

Richard Liang wrote:

Hello All,

After reading through the document recommended by Alex, I think TestNG
can really meet our requirements. It provides a lot of flexibility for test
configuration. ;-)


If we decide to transfer to TestNG, we shall:

1. Identify Harmony testing strategy. (It's not easy)
2. Define TestNG suite/groups to reflect Harmony testing strategy
3. Decide to use Java 5 Annotations or Java 1.4 JavaDoc annotations
Is there any difference between using the 1.4 doclet style and the 5.0
annotations? If we use the Java 1.4 style for now, can we migrate to
annotations easily?
4. Convert all JUnit tests to TestNG tests (TestNG provides a tool 
org.testng.JUnitConverter for migrating from JUnit, but it seems 
that the tool has a bug  :-P )
I'm sorry, but... what does the bug look like? I think it is important,
because we already have so many JUnit tests; it will be a big concern for
the TestNG solution if we have no tool to migrate them.

5. Choose a module to run a pilot
...

Please correct me if I'm wrong. Thanks a lot.

Best regards,
Richard.

George Harley wrote:

Alex Blewitt wrote:

On 06/07/06, Richard Liang [EMAIL PROTECTED] wrote:


It seems that you're very familiar with TestNG.  ;-) So would you 
please
identify what we shall do to transfer from JUnit to TestNG? Thanks 
a lot.


Me? I'm just highly opinionated :-)


Hi Alex,

I think we are all pretty much in the TestNG novice category :-)




There's guidelines for migrating from JUnit to TestNG at the home page:
http://testng.org/doc/migrating.html

Here is a sample use that will convert all the JUnit tests in the
src/ directory to TestNG:

java org.testng.JUnitConverter -overwrite -annotation -srcdir src

:-)



I have done some private experimentation with this command line 
utility and it seems to work well. In the first instance it would be 
good to preserve the JUnit nature of the tests - i.e. still have 
the test classes extend from JUnit TestCase etc - so that there is 
always a backwards migration path. That's me being paranoid. Note 
that the equivalent migration functionality in the latest TestNG 
plug-in for Eclipse did not allow that but, in addition to adding in 
the annotations, insisted on removing the inheritance from TestCase.



There's also instructions about how to set it up with an Ant-based 
build:

http://testng.org/doc/ant.html

I'll see if I can migrate the tests I've got in the Pack200 dir to use
TestNG, so that you can see what it looks like. Unfortunately, I doubt
that I'm going to be able to get to that much before 2 weeks time due
to other outstanding commitments ...

Alex.


Although we haven't gotten round to discussing specifics yet, it is 
probably timely to mention here that using the TestNG annotations 
approach (as opposed to the pre-1.5 Javadoc comments approach) will 
not work so long as we are compiling Harmony code with the jsr14 
target. It looked like the annotation metadata did not make it into 
the generated class files (at least this is what I saw in my own 
experiments). If we want to use the annotations approach we will have 
to wait until we move up to compiling for a 1.5 target. Hopefully 
that will not be too long now..


In the meantime you could try out using the Javadoc comments 
approach, just to get a feel for how things run. The downside to that 
is that your test source needs to be available at runtime so that the 
comments are available for the framework to examine.


Best regards,
George



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]







--
Paulex Yang
China Software Development Lab
IBM



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing conventions - a proposal

2006-07-05 Thread Richard Liang



George Harley wrote:

Hi,

Just seen Tim's note on test support classes and it really caught my 
attention as I have been mulling over this issue for a little while 
now. I think that it is a good time for us to return to the topic of 
class library test layouts.


The current proposal [1] sets out to segment our different types of 
test by placing them in different file locations. After looking at the 
recent changes to the LUNI module tests (where the layout guidelines 
were applied) I have a real concern that there are serious problems 
with this approach. We have started down a track of just continually 
growing the number of test source folders as new categories of test 
are identified and IMHO that is going to bring complexity and 
maintenance issues with these tests.

Yes, you'll see our Ant scripts get more and more complex. :-)



Consider the dimensions of tests that we have ...

API
Harmony-specific
Platform-specific
Run on classpath
Run on bootclasspath
Behaves different between Harmony and RI
Stress
...and so on...


If you weigh up all of the different possible permutations and then 
consider that the above list is highly likely to be extended as things 
progress it is obvious that we are eventually heading for large 
amounts of related test code scattered or possibly duplicated across 
numerous hard wired source directories. How maintainable is that 
going to be ?


If we want to run different tests in different configurations then 
IMHO we need to be thinking a whole lot smarter. We need to be 
thinking about keeping tests for specific areas of functionality 
together (thus easing maintenance); we need something quick and simple 
to re-configure if necessary (pushing whole directories of files 
around the place does not seem a particularly lightweight approach); 
and something that is not going to potentially mess up contributed 
patches when the file they patch is found to have been recently pushed 
from source folder A to B.


To connect into another recent thread, there have been some posts 
lately about handling some test methods that fail on Harmony and have 
meant that entire test case classes have been excluded from our test 
runs. I have also been noticing some API test methods that pass fine 
on Harmony but fail when run against the RI. Are the different 
behaviours down to errors in the Harmony implementation ? An error in 
the RI implementation ? A bug in the RI Javadoc ? Only after some 
investigation has been carried out do we know for sure. That takes 
time. What do we do with the test methods in the meantime ? Do we push 
them round the file system into yet another new source folder ? IMHO 
we need a testing strategy that enables such problem methods to be 
tracked easily without disruption to the rest of the other tests.

It's really worth thinking about our testing strategy...


A couple of weeks ago I mentioned that the TestNG framework [2] seemed 
like a reasonably good way of allowing us to both group together 
different kinds of tests and permit the exclusion of individual 
tests/groups of tests [3]. I would like to strongly propose that we 
consider using TestNG as a means of providing the different test 
configurations required by Harmony. Using a combination of annotations 
and XML to capture the kinds of sophisticated test configurations that 
people need, and that allows us to specify down to the individual 
method, has got to be more scalable and flexible than where we are 
headed now.



Will try to study TestNG before I can give comment ;-)


Thanks for reading this far.

Best regards,
George


[1] 
http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html 


[2] http://testng.org
[3] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL PROTECTED] 




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Richard Liang
China Software Development Lab, IBM 




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]