#5793: make nofib not suck
--------------------------------------+-------------------------------------
    Reporter:  dterei                 |       Owner:  dterei          
        Type:  task                   |      Status:  new             
    Priority:  normal                 |   Milestone:                  
   Component:  NoFib benchmark suite  |     Version:                  
    Keywords:                         |          Os:  Unknown/Multiple
Architecture:  Unknown/Multiple       |     Failure:  None/Unknown    
  Difficulty:  Unknown                |    Testcase:                  
   Blockedby:                         |    Blocking:  5794            
     Related:                         |  
--------------------------------------+-------------------------------------

Comment(by simonmar):

 I agree working on the benchmarks themselves should be the highest
 priority.

 Some of the benchmarks aren't very amenable to running for longer - we end
 up just repeating the same task many times.  For benchmarks where we can't
 come up with a suitable input that keeps the program busy for long enough,
 I think we should just put them in a separate category and use them for
 regression testing only.  Measuring allocation still works reliably even
 for programs that run for only a tiny amount of time.
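
 As an aside, here is a self-contained sketch of reading the allocation
 counter from inside a program via GHC.Stats.  This is illustrative only -
 nofib gets its numbers from the RTS statistics output rather than from
 code like this, and the names below assume a reasonably recent GHC:

     import GHC.Stats (RTSStats (allocated_bytes), getRTSStats,
                       getRTSStatsEnabled)

     -- A deliberately tiny "benchmark": it finishes in well under a
     -- second, but for a given binary its allocation total is the same
     -- on every run, so it is still usable as a regression check.
     workload :: Integer
     workload = sum [1 .. 100000]

     main :: IO ()
     main = do
       print workload
       enabled <- getRTSStatsEnabled
       if enabled
         then do
           stats <- getRTSStats
           putStrLn ("allocated_bytes = " ++ show (allocated_bytes stats))
         else putStrLn "Run with +RTS -T to enable RTS statistics"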

 I said "retire old benchmarks" but on seconds thoughts a better plan is to
 not throw anything away, just keep them all around as regression tests.
 Make an exception only for programs which are broken beyond repair, or are
 definitely not measuring anything worthwhile (I occasionally come across a
 program that has been failing immediately with an error, and somebody
 accepted the output as correct in 1997...).

 I would still like to keep the microbenchmarks collected together; they
 are very useful for spot testing and debugging.

 Keep an eye out for good candidates for a real-world benchmark suite.  I'm
 thinking of roughly 10 programs with complex behaviour, preferably with
 multiple phases or multiple algorithms.

 I think ~10s per benchmark is perhaps slightly on the high side, but I
 don't feel too strongly about it.  Currently it takes ~20 minutes to run
 the real+spectral+imaginary suites; I think a good target to aim for is
 less than an hour, with the option of a longer run.
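
 (A rough sanity check on that, treating the program count as only an
 approximation: the three suites hold on the order of 100 programs, so at
 ~10s each a single pass is already about 100 x 10s ~= 17min, and since the
 harness runs each program several times for stable timings the total would
 come out well past the one-hour target.)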

-- 
Ticket URL: <http://hackage.haskell.org/trac/ghc/ticket/5793#comment:11>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler
