On Sat, 17 Oct 2015 16:27:06 +0000, Sean Kelly <[email protected]> wrote:
> On Saturday, 17 October 2015 at 16:14:01 UTC, Andrei Alexandrescu wrote:
> > On 10/17/15 6:43 PM, Sean Kelly wrote:
> >> If this is the benchmark I'm remembering, the bulk of the time is
> >> spent parsing the floating point numbers. So it isn't a test of JSON
> >> parsing in general so much as the speed of scanf.
> >
> > In many cases the use of scanf can be replaced with drastically faster
> > methods, as I discuss in my talks on optimization (including Brasov
> > recently). I hope they'll release the videos soon. -- Andrei
>
> Oh absolutely. My issue with the benchmark is just that it claims to be
> a JSON parser benchmark, but the bulk of CPU time is actually spent
> parsing floats. I'm on my phone though, so perhaps this is a different
> benchmark--I can't easily check. The one I recall came up a year or so
> ago and was discussed on D.general.

1/4 to 1/3 of the time is spent parsing numbers in highly optimized code.
You see that in a profiler: the number parsing shows up on top, but the
benchmark also exercises the structural parsing a lot. It is not a very
broad benchmark though, lacking serialization, UTF-8 decoding, validation
of results, etc. I believe the author didn't realize that over time it
would become the go-to performance test.

The author of RapidJSON has a very in-depth benchmark suite, but it would
be a bit of work to get something non-C++ integrated:
https://github.com/miloyip/nativejson-benchmark
It includes conformance tests as well.

-- 
Marco
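
[Editorial sketch, for context on the scanf point above: a hand-rolled
number scanner walks the digit characters directly, skipping scanf's
format-string interpretation and locale machinery, which is where much of
the speedup comes from. The D code below is purely illustrative and is
not from the thread or from any parser mentioned here; it accepts only
plain digits[.digits] input and ignores signs, exponents, overflow, and
correct rounding, all of which a real JSON number parser has to handle.]

import std.ascii : isDigit;

// Illustrative only: parses "digits[.digits]" from the front of `s`
// and advances the slice past what it consumed. The naive
// multiply-and-add accumulation is fast but is not correctly rounded
// for all inputs.
double parseSimpleDouble(ref const(char)[] s)
{
    double value = 0;
    while (s.length && isDigit(s[0]))
    {
        value = value * 10 + (s[0] - '0'); // integer part
        s = s[1 .. $];
    }
    if (s.length && s[0] == '.')
    {
        s = s[1 .. $];
        double scale = 0.1;
        while (s.length && isDigit(s[0]))
        {
            value += (s[0] - '0') * scale; // fractional digits
            scale *= 0.1;
            s = s[1 .. $];
        }
    }
    return value;
}

unittest
{
    const(char)[] input = "3.5,";
    assert(parseSimpleDouble(input) == 3.5);
    assert(input == ","); // stopped at the delimiter, as a parser would
}

[A production version would instead accumulate the significand as an
integer and apply the decimal exponent once at the end, which is both
faster and more accurate than per-digit floating-point arithmetic.]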
