Dirk: fine, I agree that these commits aren't of the highest quality. One reason is that this work is based in large part on the existing implementation, which I had to discover piece by piece and which has a lot of strange solutions (including units and constants being scaled all the time). The other reason is probably my own negligence. I hope to have all of this fixed by Friday.
2015-06-30 21:43 GMT+02:00 Robert C. Helling <[email protected]>:

> Now for less formal aspects: I computed a few schedules with your code and
> they seem to be not totally off (I did not compare them to the output of
> other programs that claim to compute vpm-b, yet). I guess you did. Could you
> please post those tests (the dives you did your benchmarking on and what the
> other programs said)?

I will send them this evening. I used the implementation from GitHub, with the Boyle's law compensation removed; I haven't implemented it yet, and it changes the results a lot. The tests were pretty simple, so I can't say it has been tested on all cases and proved correct.

> Something that applied to the old Buehlmann code as well: We should define
> the constants in useful units, so we don't have to convert them before we
> can use them. In particular I am thinking of all half-times (including the
> regeneration time) which, currently we state in minutes but use (as all
> times) in seconds. We could simply add „* 60“ in all the initialisations so
> the multiplication is done at compile time rather than at runtime.

Agreed.

> The plan() function is by now a huge monolith. We should break it into
> digestible pieces. The problem is of course all the state held in its local
> variables. Maybe those should be put in a struct and a pointer to that
> passed around.

Yes, something should be done about this. I've been thinking about something similar to the saturation state cache.

> There is one thing that I did not expect: I printed the final allowable
> gradients and the deeper and longer the dives get, the gradients increase. I
> must say I expected the opposite. But maybe that is my flawed understanding
> of the model.

Without thinking about it too much: it's fine for deeper dives to have bigger gradients. The smaller the nucleus radius at the start of deco, the bigger the gradient (a bigger pressure difference is needed to turn the nucleus into a bubble).
Gas saturation in the tissue increases slowly during the descent, so if we descend quickly, the nucleus shrinks a lot (the gas pressure inside the bubble becomes much bigger than the saturation pressure); hence a quick, deep descent -> a big gradient. A longer time at the bottom should let the saturation increase, making the nucleus bigger and decreasing the gradient... But apart from nuclear regeneration, the algorithm seems to completely ignore this part of the dive (the saturation increases, so the first deco stop is deeper, which increases deco time, but that's not connected to the gradient). I will test this, maybe I'm missing something.

--
Jan Darowski

_______________________________________________
subsurface mailing list
[email protected]
http://lists.subsurface-divelog.org/cgi-bin/mailman/listinfo/subsurface
