Hi. People taking part in this discussion[1] seem to have a hard time being explicit about what they are trying to achieve.
(1) From information gathered so far, the issue raised seems to have been solved by taking advantage of the fact that the JVM loads classes at first use (i.e. methods will not be delayed by the loading of tables that they don't use). At least, this is what I conclude from my tests comparing code with and without preset tables (which differ by less than 50 ms). [This has yet to be confirmed by the initial poster, who reported an unexpected difference of 30 microseconds!]

(2) A _second_ issue has been bundled into the commits related to the initial problem described above: instead of computing the table contents at runtime, they are now set from literal arrays. In addition to lacking consensus, it is still not clear that this change is a necessary step to fix the reported problem.

(3) On a PC, comparing the old "FastMath" code (no IOD, no preset) with the latest version, I get the following timings for a single call to "pow" (i.e. a function that _uses_ the tables):
  130 ms (no preset)
   80 ms (preset)
So, indeed, using preset tables does make the first call run faster. [On subsequent calls, the difference is less than 1 microsecond (cf. "FastMathLoadCheck").]
The issue is: when do we say that initialization time is too long? On this machine:
  Intel(R) Core(TM)2 Quad CPU @ 2.40GHz
the difference is around 50 ms. Is that too long? It will most probably be swamped by the execution time of any useful application and, in my opinion, does not justify the workaround currently in trunk.
The slowness reported initially (9 seconds to ~1 minute on a "low-end" device) is indeed excessive. But can we please draw the line at some meaningful value instead of prematurely over-optimizing for a one-shot gain?

(4) Can we also lay out rules about what constitutes an acceptable request for a workaround? I don't think it is OK to just say that "FastMath" is too slow. The usual argument here has been that one should provide a (realistic) use case. I see that a faster startup time would benefit an application required to be restarted several times per second. But how realistic would that be? And would "FastMath" be the single bottleneck in such a case?
Moreover, if there really were a need to restart the JVM several times per second, then I'd draw attention to the fact that "FastMath" is not the right tool: indeed, for the first call to "pow", it is still about 150 times slower than "Math" or "StrictMath". Does that suggest that we must implement some way for users to select whether CM will use "Math" or "FastMath"?

(5) On Sun, Sep 11, 2011 at 02:51:31PM +0100, sebb wrote:
> [...]
>
> I don't think minimising the class source file size is nearly as
> important as the startup time.
>
First, it's not only about source size, but also about code versus tables: the former is self-descriptive. Second, not only is the source file larger, but so is the bytecode. Without the preset tables, the ".class" file for "FastMath" was 38229 bytes long. With all the changes to accommodate preset tables, there are now 5 ".class" files with the following sizes:
   8172 FastMathCalc.class
  34671 FastMath.class
  35252 FastMath$ExpFracTable.class
  49944 FastMath$ExpIntTable.class
  39328 FastMath$lnMant.class
For the same functionality, this results in more than a four-fold increase in bytecode size.[2]

> [...]

Regards,
Gilles

[1] https://issues.apache.org/jira/browse/MATH-650
[2] In CM, the average size of a ".class" file is about 5885 bytes.
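P.S. Since the "JVM loads classes at first use" point in (1) keeps coming up, here is a minimal, self-contained sketch of that behaviour. All class and method names below are made up for illustration; this is not the CM code, just the standard initialization-on-demand semantics (JLS 12.4.1).

    // Illustration only: hypothetical classes, not the actual FastMath code.
    public class LazyInitDemo {

        // Nested class whose static initializer simulates an expensive table computation.
        static class Tables {
            static final double[] TABLE = new double[1 << 20];
            static {
                long t0 = System.nanoTime();
                for (int i = 0; i < TABLE.length; i++) {
                    TABLE[i] = Math.sqrt(i); // stand-in for the real table computation
                }
                System.out.printf("Tables initialized in %.1f ms%n",
                                  (System.nanoTime() - t0) / 1e6);
            }
        }

        // Does not reference "Tables": calling it never triggers the initializer.
        static double cheap(double x) {
            return x + 1;
        }

        // First call triggers initialization of "Tables", on demand.
        static double usesTables(int i) {
            return Tables.TABLE[i];
        }

        public static void main(String[] args) {
            System.out.println(cheap(2));       // no "Tables initialized" message yet
            System.out.println(usesTables(42)); // initializer runs here, on first use
        }
    }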
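P.P.S. And a rough sketch of how the first-call timings in (3) can be reproduced. This is not the actual "FastMathLoadCheck" class, just a one-shot measurement; the package name is assumed to be the current location of "FastMath", and each variant should be run in a fresh JVM since class initialization happens only once.

    // Illustration only: crude one-shot measurement of the first call to "pow".
    import org.apache.commons.math.util.FastMath;

    public class FirstCallTiming {
        public static void main(String[] args) {
            // Only the very first call pays the class-loading/table-initialization cost.
            long t0 = System.nanoTime();
            double r = FastMath.pow(2.0, 10.5);
            long t1 = System.nanoTime();
            System.out.printf("first FastMath.pow call: %.2f ms (result=%f)%n",
                              (t1 - t0) / 1e6, r);

            t0 = System.nanoTime();
            r = Math.pow(2.0, 10.5);
            t1 = System.nanoTime();
            System.out.printf("first Math.pow call:     %.3f ms (result=%f)%n",
                              (t1 - t0) / 1e6, r);
        }
    }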