On 06/11/2015 01:46 PM, Mike Bayer wrote:
> I am firmly in the "let's use items()" camp. A 100 ms difference for
> a totally not-real-world case of a dictionary 1M items in size is no
> kind of rationale for the OpenStack project - if someone has a
> dictionary that's 1M objects in size, or even 100K, that's a bug in
> and of itself.
>
> The real benchmark we should be using, if we are to even bother at
> all (which we shouldn't), is to observe whether items() vs. iteritems()
> has *any* difference that is at all measurable in the overall
> execution of real-world OpenStack use cases. These nano-differences
> in speed are immediately dwarfed by all the operations surrounding
> them long before we even get to the level of RPC overhead.
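For what it's worth, the kind of micro-benchmark being argued over is easy to run with the standard library's timeit; a minimal sketch (the dict size and workload here are arbitrary illustrations, and on Python 3 the question is moot since items() already returns a lazy view):

```python
import timeit

# Illustrative dict - nowhere near the pathological 1M-entry case,
# which, as noted above, would be a bug in itself.
d = {i: i for i in range(100_000)}

def iterate_items():
    """Walk every (key, value) pair once via items()."""
    total = 0
    for k, v in d.items():  # Python 3: items() is a lazy view, no copy
        total += v
    return total

# Time ten full passes; in any real service this cost disappears
# behind I/O and RPC overhead.
elapsed = timeit.timeit(iterate_items, number=10)
print(f"10 passes over {len(d)} items: {elapsed:.3f}s")
```

The point is not the absolute number but that a figure this small only matters if nothing else (RPC, I/O, serialization) is happening around it.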
Lessons learned in the trenches:
* The best code is the simplest [1] and easiest to read.
* Code is written once and read many times; clarity is a vital part of the read-many side.
* Do not optimize until functionality is complete.
* Optimize only after profiling real-world use cases.
* Prior assumptions about what needs optimizing are almost always proven wrong by a profiler.
* I/O latency vastly overwhelms most code optimization, making obtuse optimizations pointless and detrimental to long-term robustness.
* The amount of optimization needed is usually minimal, confined to just a few code locations; 80% of the speed-up comes from the first few tweaks made after analyzing the profile data.
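The "profile before optimizing" advice above can be put into practice with the standard library's cProfile and pstats; a minimal sketch (the workload and function names are illustrative, not OpenStack code):

```python
import cProfile
import io
import pstats

def build_report(records):
    """Illustrative workload: formatting dominates, not dict iteration."""
    return "\n".join(f"{k}={v}" for k, v in records.items())

records = {i: str(i) for i in range(10_000)}

# Profile only the code under suspicion.
profiler = cProfile.Profile()
profiler.enable()
build_report(records)
profiler.disable()

# Print the top five entries by cumulative time - in practice
# this is where the assumptions about "what's slow" get corrected.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

Reading that output usually shows the hot spots sitting somewhere nobody guessed, which is exactly why the tweaks come after the profile data, not before.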
[1] Compilers optimize simple code best: simple code is easy to write and easier to read, while at the same time giving the tool chain the best chance of turning it into efficient code. (Not sure how much this applies to Python, but it's certainly true of other compiled languages.)
John
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev