For those interested in unit testing, 
[unittest2](https://github.com/status-im/nim-unittest2/) just got a facelift!

In particular, the output has been cleaned up to give more space to failures 
and less to successes, which makes room for timing information and other 
niceties. Failures, for example, are repeated at the end of the test run so 
they don't get lost in the immense amount of success spam that the current 
version prints.

We're also experimenting with a two-phase mode where tests are run after a 
separate discovery phase. Apart from enabling progress indicators, nicely 
aligned output and the like, this paves the way for better test scheduling in 
the future, including running tests in separate / isolated processes.
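
To illustrate the idea (a simplified sketch only, not unittest2's actual 
internals - `TestCase`, `register` and `runAll` are made up for this example): 
a two-phase runner first collects tests into a registry and only then executes 
them, so the total count is known up front:

    # Simplified illustration of a discovery/run split - NOT unittest2's
    # actual internals.
    type TestCase = object
      name: string
      body: proc ()
    
    var registry: seq[TestCase]
    
    proc register(name: string, body: proc ()) =
      # Discovery phase: record the test without running it.
      registry.add TestCase(name: name, body: body)
    
    proc runAll() =
      # Execution phase: the total is known up front, enabling
      # "[ n/total ]" progress counters and aligned output.
      for i, tc in registry:
        echo "[", i + 1, "/", registry.len, "] ", tc.name
        tc.body()
    
    register("addition works") do ():
      doAssert 1 + 1 == 2
    
    runAll()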

`unittest2` is "broadly" compatible with std/unittest but works around many of 
its limitations - in particular, each test is instantiated in a separate `proc`, 
meaning that test suites can be arbitrarily large rather than being limited by 
Nim's limit on the number of global symbols. This also helps with stack 
space and a few other technical issues that prevented us from using 
std/unittest at scale.
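
Since the surface API tracks std/unittest, a suite looks exactly like it 
always has - each `test` body below simply ends up compiled into its own 
`proc` behind the scenes:

    import unittest2
    
    suite "Example suite":
      test "addition":
        check 1 + 1 == 2
    
      test "string concatenation":
        check "ab" & "c" == "abc"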

Other than this, it also comes with JUnit integration and some other bells and 
whistles - give it a try if you haven't already!
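
As a hedged sketch of the JUnit side, assuming the formatter API from 
std/unittest (`newJUnitOutputFormatter` / `addOutputFormatter`) carried over 
into unittest2 - check the unittest2 docs for the exact incantation:

    import std/streams
    import unittest2
    
    # Assumption: these formatter procs mirror std/unittest's API;
    # verify against the unittest2 documentation.
    let junit = newJUnitOutputFormatter(newFileStream("results.xml", fmWrite))
    addOutputFormatter(junit)
    
    suite "CI suite":
      test "always passes":
        check true
    
    close(junit)  # flush and finalize the XML report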

Here's what it looks like:

    [  20/25] HTTP client testing suite                               ................s..... (14.2s)
    [  21/25] Asynchronous process management test suite              ....................F. (14.5s)
    ===========================
      /home/arnetheduck/status/nimbus-eth2/vendor/nim-chronos/build/testall 'Asynchronous process management test suite::File descriptors leaks test'
    ---------------------------
        /home/arnetheduck/status/nimbus-eth2/vendor/nim-chronos/tests/testproc.nim(465, 27): Check failed: getCurrentFD() == markFD
        getCurrentFD() was 3
        markFD was 5

      [FAILED ] (  0.00s) File descriptors leaks test

Each `test` gets a `.` unless it failed or was skipped, and each suite carries 
timing info to help identify slow runners.

Failure information is retained and printed again "later", so that in CI logs 
and terminals you can actually find it without scrolling around.

As part of this refresh, the parallel test execution features had to go: 
technical issues in the current implementation made them too unstable for 
practical use. They might reappear in the future, but that would require a 
reimplementation from the ground up, so it was easier to just remove them 
for now.

Give it a go :) 
