[ https://issues.apache.org/jira/browse/HADOOP-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12667980#action_12667980 ]

Steve Loughran commented on HADOOP-5069:
----------------------------------------

stdout and stderr here really mean the log4j or commons-logging levels DEBUG, INFO, WARN, ERROR, plus raw stdout/stderr printlns. In the classic JUnit XML, commons-logging or log4j push the text out and Ant's junit runner just captures it.

Now imagine our own back end that grabs every log event and records the timestamp, logger and level. Exceptions that are logged via log.error(String,Throwable) and not stringified could even be extracted as more XML to work on:


{code}
<l l="w" t="345667442" tid="thread-7">no hostname</l>
<ex l="e" t="345667447" tid="thread-7">
 <classname>java.io.IOException</classname>
 <message>Connect: connection refused to 127.0.0.1</message>
 <stack>
  <stackentry>org.apache.hadoop.net.NetUtils.something():45</stackentry>
   ...
 </stack>
</ex>
{code}
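A minimal sketch of the kind of back end meant here: a formatter that writes each log event to the output stream as a self-contained XML element the moment it arrives, flushing per event, so a killed or OOM'd JVM still leaves everything logged so far on disk. Class and method names (StreamingLogFormatter, logEvent, logException) are illustrative, not from Hadoop or Ant.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StreamingLogFormatter {
    private final PrintWriter out;

    public StreamingLogFormatter(PrintWriter out) {
        this.out = out;
    }

    /** Write a plain log line: <l l="w" t="..." tid="...">message</l> */
    public void logEvent(String level, long timestamp, String thread, String message) {
        out.printf("<l l=\"%s\" t=\"%d\" tid=\"%s\">%s</l>%n",
                level, timestamp, thread, escape(message));
        out.flush(); // flush per event: a killed JVM still leaves output behind
    }

    /** Write an exception as structured XML rather than a stringified stack trace. */
    public void logException(String level, long timestamp, String thread, Throwable t) {
        out.printf("<ex l=\"%s\" t=\"%d\" tid=\"%s\">%n", level, timestamp, thread);
        out.printf(" <classname>%s</classname>%n", t.getClass().getName());
        out.printf(" <message>%s</message>%n", escape(String.valueOf(t.getMessage())));
        out.println(" <stack>");
        for (StackTraceElement e : t.getStackTrace()) {
            out.printf("  <stackentry>%s.%s():%d</stackentry>%n",
                    e.getClassName(), e.getMethodName(), e.getLineNumber());
        }
        out.println(" </stack>");
        out.println("</ex>");
        out.flush();
    }

    // Escape the characters that would break the surrounding XML.
    private static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        StringWriter sw = new StringWriter();
        StreamingLogFormatter f = new StreamingLogFormatter(new PrintWriter(sw));
        f.logEvent("w", 345667442L, "thread-7", "no hostname");
        f.logException("e", 345667447L, "thread-7",
                new java.io.IOException("Connect: connection refused to 127.0.0.1"));
        System.out.print(sw);
    }
}
```

In a real listener this would plug in as a log4j appender; the sketch keeps to the JDK so the serialization idea stands alone.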

With the timestamps, threads and log levels as separate things to work on, you could
# display events by thread
# colour-code log levels
# make the exception stacks collapsible
# stick it all somewhere for later analysis
all as well as <xslt>-ing it into the classic JUnit format, which contains only a subset of this information.
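The first item above, display by thread, can be sketched as a post-processing step over the streamed events. One wrinkle: because events are appended one at a time, the file has no single root element, so the reader has to wrap it before parsing. The class and method names here are hypothetical.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LogByThread {
    /** Group the streamed event elements by their tid attribute. */
    public static Map<String, List<String>> groupByThread(String events) throws Exception {
        // The streamed file has no root element; wrap it so a DOM parser accepts it.
        String wrapped = "<log>" + events + "</log>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(wrapped.getBytes("UTF-8")));
        Map<String, List<String>> byThread = new LinkedHashMap<>();
        NodeList children = doc.getDocumentElement().getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node n = children.item(i);
            if (n.getNodeType() != Node.ELEMENT_NODE) continue;
            Element e = (Element) n;
            byThread.computeIfAbsent(e.getAttribute("tid"), k -> new ArrayList<>())
                    .add(e.getTextContent().trim());
        }
        return byThread;
    }
}
```

The same per-event attributes would feed the other items: sort on t for a timeline, switch on l for colour coding, and an <xslt> pass over the wrapped file for the classic JUnit report.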

> add a Hadoop-centric junit test result listener
> -----------------------------------------------
>
>                 Key: HADOOP-5069
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5069
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: test
>    Affects Versions: 0.21.0
>            Reporter: Steve Loughran
>            Priority: Minor
>
> People are encountering different problems with hadoop's unit tests, defects 
> currently being WONTFIX'd
> # HADOOP-5001 : Junit tests that time out don't write any test progress 
> related logs
> # HADOOP-4721 : OOM in .TestSetupAndCleanupFailure
> There is a root cause here: Ant's XmlResultFormatter buffers everything 
> before writing out a DOM. Too much logged: OOM and no output. Timeout: the 
> JVM is killed and no output.
> We could add a new logger class to hadoop and then push it back into Ant once 
> we were happy, or keep it separate if we had specific dependencies (like on 
> hadoop-dfs API) that they lacked. 
> Some ideas
> # stream XML to disk. We would have to put the test summary at the end; could 
> use XSL to generate HTML and the classic XML content
> # stream XHTML to disk. Makes it readable as you go along; makes the XSL work 
> afterwards harder.
> # push out results as records to a DFS. There's a problem here in that this 
> needs to be a different DFS from the one you are testing, yet it needs to be 
> compatible with the client. 
> Item #3 would be interesting, but doing it inside JUnit is too dangerous 
> classpath- and config-wise. Better to have Ant do the copy afterwards. What 
> is needed then is a way to easily append different tests' results to the same 
> DFS file in a way that tools can analyse them all afterwards. The copy is 
> easy (add a new Ant resource for that) but the choice of format is trickier.
> Here's some work I did on this a couple of years back; I've not done much 
> since then: 
> http://people.apache.org/~stevel/slides/distributed_testing_with_smartfrog_slides.pdf
> Is anyone else interested in exploring this? 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
