IIRC/AIUI, jtreg only includes a test's standard output in its report when the test fails.  That cuts down the amount of test output considerably.

To find the failing case, you just have to grep for:

    ^TEST RESULT:.*FAIL

Brad


On 9/21/2018 5:23 PM, Jamil Nimeh wrote:
Thanks Xuelei.

You make a good point about the debug logs losing information when they get large.  We have a lot of tests, more than a couple of them mine, that run multiple test cases in a single execution.  In most cases it works pretty nicely, but admittedly when the logs get really long I have to isolate the specific cases that fail.  Maybe we should revisit those tests in the future and see if we can do something better.

--Jamil


On 09/21/2018 05:18 PM, Xuelei Fan wrote:

On Sep 21, 2018, at 4:45 PM, Jamil Nimeh <jamil.j.ni...@oracle.com> wrote:

Hi Xuelei,

I started getting into the one-test-per-run approach - controlling these through command-line args in the run line gets a little weird after a while.  We have different hello messages that are byte arrays, so you have to map them to strings (easy enough), but some test cases (in the future, not now) might need SSLContexts created with different algorithm names, might throw different exceptions, and we may want to take slightly different actions based on how processClientHello reacts to a given message.  Those things are easier to write into the body of the test.
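For illustration, a rough sketch of what an in-body case might carry (the HelloCase name and its fields are hypothetical, not from the webrev):

    // Hypothetical sketch: an in-body case can bundle the raw hello bytes,
    // the SSLContext algorithm, and the expected outcome directly, none of
    // which map cleanly onto @run command-line strings.
    final class HelloCase {
        final String name;
        final byte[] clientHello;   // raw ClientHello bytes to send
        final String contextAlg;    // SSLContext algorithm, e.g. "TLS"
        final Class<? extends Exception> expected;  // null if none expected

        HelloCase(String name, byte[] clientHello, String contextAlg,
                  Class<? extends Exception> expected) {
            this.name = name;
            this.clientHello = clientHello;
            this.contextAlg = contextAlg;
            this.expected = expected;
        }
    }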

Would you be OK with an approach where the output on stdout clearly indicates PASS/FAIL for each test it performs?  Then if the run fails, one only needs to look at stdout to see which test went haywire and go from there.
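A minimal sketch of that per-case reporting (the case names and failure handling below are made up for illustration):

    // Hypothetical sketch: run every case, print one PASS/FAIL line per
    // case on stdout, and throw only after all cases have run, so one
    // failure does not hide the results of the rest.
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class RunAllCases {
        public static void main(String[] args) {
            Map<String, Runnable> cases = new LinkedHashMap<>();
            cases.put("goodHello", () -> { /* send a well-formed hello */ });
            cases.put("zeroLengthHello", () -> {
                throw new RuntimeException("demo failure");
            });

            int failed = 0;
            for (Map.Entry<String, Runnable> e : cases.entrySet()) {
                try {
                    e.getValue().run();
                    System.out.println("PASS: " + e.getKey());
                } catch (Exception ex) {
                    failed++;
                    System.out.println("FAIL: " + e.getKey() + " - " + ex);
                }
            }
            if (failed > 0) {
                throw new RuntimeException(failed + " case(s) failed");
            }
        }
    }

A grep for "^FAIL:" on the stdout section then points straight at the broken case.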

It would help simplify the failure evaluation.

But there is still a problem: when we run a lot of tests, the debug log may be truncated, for example beyond 5000 lines.  The result is that the failure output may not appear in the debug log.

However, it is a very minor issue.  We can consider making the improvement later when we have more cycles.

I’m fine with the current code.

Thanks,
Xuelei


--Jamil

On 9/21/2018 4:15 PM, Xuelei Fan wrote:
On 9/21/2018 4:00 PM, Jamil Nimeh wrote:
Are you suggesting having multiple run lines or something like that?  I think we could do that.
I would prefer to use the run lines.
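Something like this in the jtreg header, where each @run line executes one case and is reported as its own test result (the test and case names below are hypothetical):

    /*
     * @test
     * @run main/othervm ClientHelloCases goodHello
     * @run main/othervm ClientHelloCases zeroLengthHello
     */

    // main() would then dispatch on args[0] to pick the case to execute.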

I would like to have it run all cases rather than short-circuit on the first failure, since no case depends on the others.
It should be fine to break early, since normally the test should pass.

Let me play around with the run directives and see if we can make it work more along the lines you want.

Thanks!

Xuelei

--Jamil


On 09/21/2018 03:55 PM, Xuelei Fan wrote:
Once a test case fails, it may not be straightforward to identify which one failed.  In particular, the test log may currently be truncated if it is too long.  Would you please consider one case per test?  Or breaking immediately when a test case fails, instead of waiting for all to complete?

Thanks,
Xuelei

On 9/21/2018 2:35 PM, Jamil Nimeh wrote:
Hello all,

This adds a test that lets us send different kinds of client hellos to a JSSE server.  It can be extended in the future to cover corner cases in client hello extension sets as well as fuzzing test cases.  It also provides some extra test coverage for JDK-8210334 and JDK-8209916.

Webrev: http://cr.openjdk.java.net/~jnimeh/reviews/8210918/webrev.01/
JBS: https://bugs.openjdk.java.net/browse/JDK-8210918

Thanks and have a good weekend,
--Jamil
