My experience so far with Nim is that there are a lot of "sharp and unexpected 
edges". Some examples:

  * 100M iterations of s = s & 'x' is about 8100x slower than s.add('x'). The 
response I got was "nobody writes it that way". If something is 8100x slower, 
IMO it shouldn't be allowed at all (sketch 1 after this list).
  * 100M iterations of s.add('x') uses 325 MB of RAM.
  * Nim stdlib code is not compiled in release mode unless my own code is also 
built with -d:release, so even simple timing results, like hashing data in a 
loop, make no sense.
  * tables are documented to require a power-of-two size _in the current 
implementation_. The documentation for initTable says to use nextPowerOfTwo or 
rightSize to compute the initial size. What if the next implementation of 
tables requires the size to be a prime number? The only correct, portable way 
to specify a table size is rightSize, and IMO that call should happen inside 
initTable rather than expecting users to know implementation details. Users 
only know how many items they want to put in a table (sketch 2 after this 
list).
  * if you use a table size that is not a power of two, it asserts - fine. But 
if you compile with -d:danger, it loops forever. Another reason initTable 
should always apply rightSize to its initialSize argument.
  * everything having value semantics "for consistency" is a little on the 
strange side IMO. My guess is that when someone reaches for arrays, tables, 
and sequences, it is because they have a largish collection of data. If that 
data gets passed around, it is copied every time. I don't think most people 
would know or expect this, and since the program still works, their conclusion 
might be "Nim is slow" (sketch 3 after this list).
  * a related problem: I wrote an array-sort test with a small 100-element 
array. Very simple, worked great. Then I changed the array size to 10M and it 
failed with "Illegal instruction: 4", I'm guessing because of a stack 
overflow. I changed it to a ref array, added a new stmt, then the high 
function no longer worked (changed that to a constant), then the sort function 
no longer worked (had to add a cmp proc), etc. It feels like a bit much just 
because an array size was increased (sketch 4 after this list).
  * the standard db_sqlite module is 30% slower than Python on a small test 
script because it doesn't cache prepared SQL statements. Another developer 
added that recently in tiny_sqlite, and his library runs 3x faster than Python 
on the same test (sketch 5 after this list). Users of a language need to feel 
that the tools the language provides are "best in class" rather than minimal 
required facilities.
  * the SomeInteger type class is compatible with all integer types, but an 
int64 can't be passed where a plain int is expected, even on a 64-bit system 
(sketch 6 after this list).
  * requiring 10'i64 or 8'i32 suffixes on integer constants is unwieldy and 
ugly, and they show up in Nim's own system code too.
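
Sketch 1: the string-append comparison I mean, as a standalone benchmark. The 
8100x figure came from my 100M-iteration run; here the naive version gets far 
fewer iterations because it is quadratic and would otherwise run for hours:

    import std/[monotimes, times]

    proc naive(n: int): string =
      for _ in 1..n:
        result = result & 'x'   # allocates a fresh string each time: O(n^2) total

    proc idiomatic(n: int): string =
      for _ in 1..n:
        result.add('x')         # in-place append: amortized O(1) per call

    template bench(label: string, body: untyped) =
      let start = getMonoTime()
      body
      echo label, " ", getMonoTime() - start

    bench "add, 100M iterations:": discard idiomatic(100_000_000)
    bench "&,   100K iterations:": discard naive(100_000)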
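
Sketch 2: what the initTable docs ask callers to do today, versus what I think 
the library should do internally. This assumes the stdlib version I tested, 
where tables exports rightSize:

    import std/tables

    # what the docs ask for today: the caller pre-rounds the size
    var t = initTable[string, int](rightSize(1000))

    # what I'd prefer: pass the expected element count and let initTable
    # apply rightSize itself, so callers never see implementation details:
    #   var t = initTable[string, int](1000)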
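
Sketch 3: the value-semantics surprise in four lines. Plain assignment copies 
the whole collection:

    var a = newSeq[int](10_000_000)
    var b = a      # copies all 10M elements; b is fully independent
    b[0] = 1
    echo a[0]      # prints 0: the assignment above was a deep copy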
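
Sketch 4: the workaround I eventually landed on for the 10M-element sort. A 
seq is heap-allocated, so there's no stack overflow, and (in recent stdlib 
versions) high and sort keep working without a ref array or a custom cmp:

    import std/[algorithm, random]

    var data = newSeq[int](10_000_000)   # heap, not stack
    for i in 0 ..< data.len:
      data[i] = rand(1_000_000)          # deterministic without randomize()
    data.sort()                          # overload using the default cmp
    echo data[data.high]                 # high works as before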
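
Sketch 5: the caching idea in miniature. PreparedStmt and compile below are 
stand-ins I made up for the real driver's statement handle and its prepare 
step; the point is only that each distinct SQL string gets compiled once:

    import std/tables

    type PreparedStmt = object    # hypothetical stand-in for a driver handle
      sql: string

    proc compile(sql: string): PreparedStmt =
      # stand-in for the expensive prepare call a real driver makes
      PreparedStmt(sql: sql)

    var cache = initTable[string, PreparedStmt]()

    proc getStmt(sql: string): PreparedStmt =
      # compile each distinct SQL string once and reuse it afterwards
      if sql notin cache:
        cache[sql] = compile(sql)
      cache[sql]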
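
Sketch 6: the int64/int mismatch. The generic proc accepts any integer type, 
but the concrete one forces an explicit conversion even though int is 64 bits 
on the machine:

    proc takesInt(x: int): int = x * 2
    proc takesAnyInt(x: SomeInteger): auto = x * 2

    let big: int64 = 10
    echo takesAnyInt(big)       # fine: SomeInteger matches int64
    # echo takesInt(big)        # error: type mismatch: got <int64>
    echo takesInt(int(big))     # explicit conversion required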



I'll admit people (including myself) are not great at writing benchmarks or 
interpreting their results. But when the language environment has rough edges 
like these, every small test becomes a head-scratching exercise. After a few 
weeks of that, it leaves one with the feeling: "If I can't even get simple 
10-20 line programs to do what I want, does it make sense to use this for a 
200K-line program?" It makes me feel a bit like, to drive this car, I first 
have to become a mechanic who can rebuild it, or I'll crash.

When Nim works, is fast, and uses reasonable amounts of memory, it's fantastic 
and I love it. But getting there seems more difficult than it should be.
