This string test uses s.add('x') instead of s = s & 'x' for Nim, and s += 'x' for 
Python (Python 2 here, hence the xrange below). 
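
For reference, the variant not used would look like the sketch below 
(str1_concat.nim is a hypothetical name; each iteration of s = s & 'x' copies 
the whole string into a fresh allocation, so it would likely be far slower 
than add, which appends in place):

    # str1_concat.nim -- hypothetical variant using & instead of add
    var
      s: string
    
    for i in 0..100_000_000:
      s = s & 'x'   # allocates a new string every iteration
    echo len(s)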
    
    
    ms:nim jim$ cat str1.nim
    var
      s: string
    
    for i in 0..100_000_000:
      s.add('x')
    echo len(s)
    
    ms:nim jim$ nim c -d:danger str1
    Hint: 14210 LOC; 0.275 sec; 15.977MiB peakmem; Dangerous Release build; proj: /Users/jim/nim/str1; out: /Users/jim/nim/str1 [SuccessX]
    
    ms:nim jim$ /usr/bin/time -l ./str1
    100000001
            0.68 real         0.56 user         0.10 sys
     326627328  maximum resident set size
         79753  page reclaims
             8  page faults
             1  voluntary context switches
             6  involuntary context switches
    
    ms:nim jim$ cat str1.py
    s = ''
    for i in xrange(100000000):
      s += 'x'
    print len(s)
    
    ms:nim jim$ /usr/bin/time -l py str1.py
    100000000
           20.74 real        20.67 user         0.06 sys
     105099264  maximum resident set size
         25834  page reclaims
             9  involuntary context switches
    
    
Nim blows Python out of the water on this, though it uses 326M of RAM to build 
a 100M-character string. (Nim's 0..100_000_000 range is inclusive, which is why 
the Nim version reports 100,000,001 characters.)

Python's memory use is good, only 105M for a 100M string (likely thanks to 
CPython's in-place resize optimization for str += when the string has a single 
reference), but at 20.74s against Nim's 0.68s it's about 30x slower.
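
Nim's 326M peak is presumably an artifact of how it grows the string's backing 
buffer as it reallocates during the appends. If peak memory mattered, one could 
reserve the capacity up front with newStringOfCap from the system module; this 
is a sketch (str1_prealloc.nim is a hypothetical name), not something the test 
above measures:

    # str1_prealloc.nim -- hypothetical variant reserving capacity up front
    var
      # inclusive 0..100_000_000 appends 100,000,001 chars
      s = newStringOfCap(100_000_001)
    
    for i in 0..100_000_000:
      s.add('x')   # with capacity reserved, this should never reallocate
    echo len(s)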

For these tests, I'm not so much looking to find the best way to create a 100M 
string in Nim or Python. I'm comparing the two to find out where there may be 
large performance differences, hopefully in Nim's favor, and to get a better 
understanding of how Nim works.
