Hi,

> On 4. Dec 2021, at 13:09, Berthold Stoeger via subsurface 
> <[email protected]> wrote:
> 
> Yes, in particular Robert's new code reads as
> 
> entry->ead = mbar_to_depth(depth_to_mbar(entry->depth, dive) ...);
> 
> depth_to_mbar() returns an int, and 1 bar is 10 m, thus 1 mbar is 10 mm which 
> is precisely the observed difference, no?

I could easily add a double-valued version of that function, which should solve 
this.
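
To make it concrete, here is a rough sketch of what I mean. The names, units 
and constants are only illustrative, not our actual API; the real functions of 
course take the surface pressure and salinity from the dive:

/* Rough sketch only, not the actual Subsurface functions.  Keeping the
 * pressure as a double instead of truncating to whole mbar means the
 * round trip depth -> pressure -> depth no longer loses up to ~10 mm. */
#include <stdio.h>

/* ~98.1 mbar per metre of fresh water column (g = 9.81 m/s^2) */
static double depth_to_mbar_f(double depth_mm, double salinity_kg_l, double surface_mbar)
{
	return depth_mm / 1000.0 * salinity_kg_l * 98.1 + surface_mbar;
}

static double mbar_to_depth_f(double mbar, double salinity_kg_l, double surface_mbar)
{
	return (mbar - surface_mbar) / (salinity_kg_l * 98.1) * 1000.0;
}

int main(void)
{
	double depth_mm = 30000.0;	/* 30 m */
	double salinity = 1.03;		/* salt water, kg/l */
	double surface = 1013.0;	/* mbar */

	double back = mbar_to_depth_f(depth_to_mbar_f(depth_mm, salinity, surface),
				      salinity, surface);
	printf("%.1f mm -> %.1f mm\n", depth_mm, back);	/* round trip is exact */
	return 0;
}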

But there is another concern that I have and on which I would like to hear 
people’s opinion: In the previous version of the code, this conversion was 
simply done by adding/subtracting 10000, the way everybody does the conversion 
in their head (pressure in bar is depth in meters divided by 10, plus 1). But 
our conversion functions are more sophisticated: they take into account the 
actual surface pressure (when known) and the salinity/density of the water (if 
known). The latter does not matter here, since it cancels out when we convert 
back and forth, but the surface pressure not being exactly 1 bar typically 
matters at the percent level.
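
Just to put a rough number on the size of that effect (all values made up for 
the example):

/* Made-up numbers, only to show the size of the surface-pressure effect. */
#include <stdio.h>

int main(void)
{
	double depth_m = 30.0;
	double surface_mbar = 960.0;	/* e.g. a low-pressure day or a lake at altitude */

	double naive = depth_m * 100.0 + 1000.0;	/* "depth/10 plus 1 bar" */
	double actual = depth_m * 100.0 + surface_mbar;	/* same water column, measured surface pressure */

	printf("naive %.0f mbar vs %.0f mbar, %.1f%% apart\n",
	       naive, actual, 100.0 * (naive - actual) / actual);
	return 0;
}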

My question is: do you think this is the right thing to do? Of course, our 
functions do a more accurate job of converting between depth and ambient 
pressure. But do we really want this? There is a chance that this leads to 
complaints that we are computing wrong values, because our results do not agree 
with what our users might compute with the simpler conversion (just as we 
constantly get complaints that we are getting gas consumption wrong because 
people use the ideal gas law rather than our more correct real gas version). 
But even worse: despite the „D“ in the names of these numbers standing for 
„depth“, is it really depth that we mean, or is it actually ambient pressure 
(since for the physiological effects it is most likely the (partial) pressure 
that matters)? Given that the dive computer also measures pressure, not depth, 
is for example the MOD really the „maximum operating depth“ or should it rather 
be the „maximum operating ambient pressure“?

Of course „doesn’t matter“ is a valid answer, since the differences are on the 
order of a percent, diving is an inexact science, and none of the things that 
happen are known/understood/measured at the 1% level.

A totally different question is whether it really makes sense to output these 
values involving floating point numbers (possibly rounded at some point, with 
the resulting discontinuities) as a text file, or whether the tests should 
rather read the resulting csv files and compare the numbers with some 
tolerance. I remember that a number of years ago we went through the code, 
removed all tests of floating point numbers for equality, and replaced them 
with comparisons allowing a finite margin of error.
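
Something along these lines is what I have in mind, just to illustrate; the 
helper name and the 1% tolerance are made up, nothing Subsurface-specific:

/* Illustration of a tolerance-based comparison instead of a textual diff. */
#include <math.h>
#include <stdio.h>

/* relative comparison with a small absolute floor for values near zero */
static int close_enough(double expected, double actual, double rel_tol)
{
	double scale = fmax(fabs(expected), fabs(actual));
	return fabs(expected - actual) <= rel_tol * scale + 1e-9;
}

int main(void)
{
	/* pretend these were parsed from the reference csv and the exported one */
	double reference = 30.00, exported = 29.99;

	printf("%s\n", close_enough(reference, exported, 0.01) ? "ok" : "mismatch");
	return 0;
}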

Best
Robert
 
