On Mar 5, 11:49 am, "Jon Skeet <[email protected]>" <[email protected]> wrote:
> > Before any other settings are tried, it would be worth benchmarking it
> > with -server as it can make a large difference when compared to
> > -client. The default varies based on OS and machine specification so it
> > makes sense to use an explicit setting to make it clear what JIT was
> > used.
>
> Right. Somewhat embarrassingly, this laptop doesn't actually *have*
> the server JIT installed. I'm mostly working on the C# code at the
> moment, but I'll come back and rerun the test with the server JVM when
> I've got a bit more time.
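As an aside (not from the thread itself): a quick way to confirm which HotSpot JIT a given `java` invocation actually picked up is to print the `java.vm.name` system property. The class name `VmCheck` here is just illustrative:

```java
public class VmCheck {
    public static void main(String[] args) {
        // On HotSpot this typically reports something like
        // "Java HotSpot(TM) Server VM" under -server,
        // or "... Client VM" under -client.
        System.out.println(System.getProperty("java.vm.name"));
        System.out.println(System.getProperty("java.version"));
    }
}
```

Running it as `java -server VmCheck` versus `java -client VmCheck` makes it obvious whether the flag took effect (assuming both VMs are installed on the machine).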
I've just installed JDK 1.6.0u12, and after tweaking some path
settings I've got the server VM running. Wow - what a difference! It
more than *doubles* the performance:

Benchmarking benchmarks.GoogleSize$SizeMessage1 with file google_message1.dat
Serialize to byte string: 2581851 iterations in 30.16s; 18.613789MB/s
Serialize to byte array: 2583547 iterations in 29.842s; 18.824497MB/s
Serialize to memory stream: 2210320 iterations in 30.125s; 15.953759MB/s
Deserialize from byte string: 3356517 iterations in 30.088s; 24.256632MB/s
Deserialize from byte array: 3356517 iterations in 29.958s; 24.361889MB/s
Deserialize from memory stream: 2618821 iterations in 29.821s; 19.094952MB/s

Benchmarking benchmarks.GoogleSpeed$SpeedMessage1 with file google_message1.dat
Serialize to byte string: 17068518 iterations in 29.978s; 123.802124MB/s
Serialize to byte array: 17520066 iterations in 30.043s; 126.802376MB/s
Serialize to memory stream: 7736665 iterations in 30.076s; 55.93307MB/s
Deserialize from byte string: 16123669 iterations in 30.073s; 116.57947MB/s
Deserialize from byte array: 16082453 iterations in 30.109s; 116.14243MB/s
Deserialize from memory stream: 7496968 iterations in 30.03s; 54.283176MB/s

Benchmarking benchmarks.GoogleSize$SizeMessage2 with file google_message2.dat
Serialize to byte string: 6266 iterations in 30.034s; 16.826494MB/s
Serialize to byte array: 6246 iterations in 30.027s; 16.776697MB/s
Serialize to memory stream: 6042 iterations in 29.916s; 16.288969MB/s
Deserialize from byte string: 4675 iterations in 29.819s; 12.644595MB/s
Deserialize from byte array: 4694 iterations in 30.093s; 12.580387MB/s
Deserialize from memory stream: 4544 iterations in 29.579s; 12.389998MB/s

Benchmarking benchmarks.GoogleSpeed$SpeedMessage2 with file google_message2.dat
Serialize to byte string: 39562 iterations in 30.055s; 106.16416MB/s
Serialize to byte array: 39715 iterations in 30.178s; 106.14035MB/s
Serialize to memory stream: 34161 iterations in 30.032s; 91.74085MB/s
Deserialize from byte string: 36934 iterations in 29.794s; 99.98019MB/s
Deserialize from byte array: 37191 iterations in 29.915s; 100.26867MB/s
Deserialize from memory stream: 36237 iterations in 29.846s; 97.92251MB/s

There's still an optimisation I want to make in terms of unknown field
sets, which made a difference to the C# code, but this result is
really amazing...

Jon

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Protocol Buffers" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/protobuf?hl=en
-~----------~----~----~----~------~----~------~--~---
