On 5 March 2017 at 09:36, Jérémie Guichard <[email protected]> wrote:
> Good morning/afternoon/evening all,
>
> I've lately been trying to get the Subsurface tests to pass under Windows.
> In the current state, TestParse and TestPreferences are the two that are
> still failing...
>
> In short, the error raised by TestParse is caused by a rounding error when
> converting the dm4 sample depth from float meters to int millimeters. The
> inconsistency showed up running the test on Windows, but the same behavior
> can be seen on Linux for certain float values too. Have a look at the
> changes in the pull request here for the details:
> https://github.com/Subsurface-divelog/subsurface/pull/233
>
> The fix itself is easy: surround the float value * 1000 with a rint,
> i.e. "rint(float_value * 1000)", when it is assigned to the integer
> variable...
>
- cur_sample->depth.mm = profileBlob[i] * 1000;
+ cur_sample->depth.mm = rint(profileBlob[i] * 1000);

profileBlob is of type float, so ideally the constant should be 1000.f, in
case a compiler gets confused, that is.

from there on it depends whether the code wants the sample's depth rounded to
the nearest integer, or rounded towards 0 - i.e. truncated by a cast. that is
perhaps a mailing list question.

without optimizations in mind, the 1000 might first get cast to a float, then
rint() will cast the result of the multiplication to a double, then round that
to the nearest whole double, and finally that double will be cast to the type
of the depth unit, which is int32_t.

if rint() rounding is viable, i would write the above as:

cur_sample->depth.mm = (int32_t)rintf(profileBlob[i] * 1000.f);

- mind the rintf() - while the int32_t cast is something the compiler will be
aware of, one might want to put it there so that readers of the code *know*
about the cast and are reminded of the types involved.

2c
lubomir

--
_______________________________________________
subsurface mailing list
[email protected]
http://lists.subsurface-divelog.org/cgi-bin/mailman/listinfo/subsurface
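
As an illustration of the rounding issue under discussion, here is a minimal,
self-contained sketch; it is not part of the pull request, the 2.1 m depth is
a made-up sample value, and the explicit (double) casts merely stand in for a
compiler that evaluates the float expression in higher precision (e.g. x87
excess precision), which is one plausible way the Windows/Linux difference
can show up:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* hypothetical sample depth; 2.1 is stored as ~2.0999999046 in float */
        float depth_m = 2.1f;

        /* truncation: 2099.9999... is cut down to 2099 mm instead of 2100 mm */
        int32_t truncated = (int32_t)((double)depth_m * 1000);

        /* rounding to nearest, in the spirit of the proposed fix */
        int32_t rounded_d = (int32_t)rint((double)depth_m * 1000);
        int32_t rounded_f = (int32_t)rintf(depth_m * 1000.f);

        printf("truncated: %d\n", (int)truncated);  /* prints 2099 */
        printf("rint:      %d\n", (int)rounded_d);  /* prints 2100 */
        printf("rintf:     %d\n", (int)rounded_f);  /* prints 2100 */
        return 0;
}

Compiled with e.g. "gcc rint_demo.c -lm && ./a.out", the truncating version
loses a millimeter while both rint() and rintf() round to the expected 2100.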
