I'm interested in the specs of the machine these benchmarks were run on. The 
CPU architecture can have a huge impact on this kind of thing.

**@erikenglund**: Perhaps you would be willing to do a full write-up for this 
proposal? Something as significant as changing a language's default `int` size 
is not to be taken lightly. My recommendation is to write a benchmark comparing 
the performance and memory efficiency of 32-bit vs 64-bit integers in Nim, then 
run it on the following architectures:

  * AMD 32 bit
  * Intel 32 bit
  * AMD 64 bit
  * Intel 64 bit

You will also want to ensure that each CPU runs at the same clock rate and has 
the same number of cores. You may need to enlist some help unless you happen to 
have multiple machines lying around.

I believe you would be surprised by the results. Modern CPUs are complex 
creatures. I'm no expert, but modern CPUs perform all sorts of caching and 
micro-architectural optimizations that aren't transparent even at the assembly 
level.

My opinion on the subject is that the experiment I proposed above is completely 
unnecessary. If you are writing something in Nim and find that 32-bit integers 
perform better for your use case, use 32-bit integers. In the vast majority of 
use cases the performance difference is not a concern, and in many of them the 
larger range matters more than the performance. On top of this, as explained 
several times earlier in the thread, the relative performance of 32-bit and 
64-bit integers is highly dependent on the CPU architecture. For instance, on a 
32-bit CPU, 64-bit integers are very likely to perform much worse than 32-bit 
integers. The reverse may or may not be true (actually, if someone could 
explain how 32-bit integers work on a 64-bit chip, that would be awesome). This 
is why Nim uses the CPU's native word size for `int`.
