Hi Robert,

On 19 February 2017 at 23:33, Robert Helling <[email protected]> wrote:
> Hi,
>
> I just pushed a branch to Github: https://github.com/atdotde/subsurface/tree/latest_update
>
> One of the patches is supposed to fix an error that made our desaturation artificially fast. Now, the deco times are slightly more conservative. But it seems they are too conservative for the benchmark.
>
> I guess Rick did the benchmark (I don’t have it here). Rick, could you please look into the source of the benchmark, what the assumptions on the saturation/desaturation rates are there, and why the computed times differ so much?

Yes, I think all the VPM-B benchmark values were taken by me running the same plans through the VPM-B Fortran program. I recall that over email we also compared with other current implementations, but for consistency and repeatability we decided to go with the original as the standard. I don't have the VPM-B input and output files with me at the moment, but they should still be on my laptop at home. In any case, I'm pretty confident I transcribed the runtimes correctly into testplan.cpp, and the ascent/descent rates are the same.

In the tests, we test the Subsurface runtime against the VPM-B benchmark, allowing a +/- (1% of runtime + 1 min) difference from the standard, as well as against the last known Subsurface value with zero tolerance for difference. If a test "fails" against the known Subsurface time, that isn't necessarily a bad thing - it just flags that we've changed something in the code. We don't want the comparison to the standard benchmark to fail beyond the tolerance unless we're really sure we're doing the right thing.

Note, we have a subsurface_conservatism_factor variable in deco.c, which is used to increase the critical radii by 1.2%, which increases the conservatism slightly. With the saturation/desaturation values being set to zero, which appears (from your changes to testplan.cpp on GitHub) to increase the deco times, we could try reducing or eliminating the conservatism factor variable.
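For illustration, the tolerance rule described above could be sketched like this in C++; the function and variable names are mine for this example, not the ones actually used in testplan.cpp:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper mirroring the tolerance described above: a computed
// Subsurface runtime passes if it is within +/- (1% of the benchmark
// runtime + 1 minute) of the VPM-B Fortran benchmark value.
// Both arguments are runtimes in minutes.
static bool withinBenchmarkTolerance(double benchmarkMin, double computedMin)
{
    double toleranceMin = 0.01 * benchmarkMin + 1.0; // 1% of runtime + 1 min
    return std::fabs(computedMin - benchmarkMin) <= toleranceMin;
}
```

So for a 100 min benchmark plan, anything from 98 to 102 min would pass, while the zero-tolerance comparison against the last known Subsurface value would still flag any change at all.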
Unfortunately, I have no time to check this myself at the moment.

For the Buehlmann planner tests, I think our benchmark value was simply the Subsurface value at the time we created the tests, as we were satisfied with how the planner was performing and had nothing "better" to compare it against.

Cheers,
Rick
_______________________________________________
subsurface mailing list
[email protected]
http://lists.subsurface-divelog.org/cgi-bin/mailman/listinfo/subsurface
