I'm comparing Python, D, Nim and Go. Here's one small Nim test program (it
works):
```nim
import tables

var
  map = initTable[int, int]()

for i in 0 ..< 10_000_000:
  map[i] = i
for i in 0 ..< 3:
  for j in 0 ..< 10_000_000:
    map[j] = map[j] + 1
```
It runs really fast: 1.2 seconds using 805MB (Python: 8.76s/1098MB, D: 7.17s/881MB, Go: 4.0s/627MB). Since I don't need 64-bit integers in the table, I tried this to get the space down:
```nim
import tables

var
  map = initTable[int32, int32]()

for i in 0 ..< 10_000_000:
  map[i] = i
for i in 0 ..< 3:
  for j in 0 ..< 10_000_000:
    map[j] = map[j] + 1
```
This fails to compile:

```
p3int32fail.nim(7, 6) Error: type mismatch: got <Table[system.int32, system.int32], int, int>
but expected one of: (list of type matches)
```
I'm guessing this is because the range type of `0 ..< 10_000_000` is int..int. It works if I add the 'i32 suffix to the 10M constants. It also works if i and j are 32-bit but the map keeps 64-bit keys and values like the original, since Nim will automatically promote int32 to int without a type error. Confusingly, though, it gives a type error if I add the 'u32 suffix to the 10M constants instead.
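For reference, this is my understanding of the 'i32 fix described above: putting the suffix on the loop bounds makes the loop variables int32, so they match the table's key type. A sketch (untested beyond my own setup):

```nim
import tables

var
  map = initTable[int32, int32]()

# 'i32 suffixes make the Slice (and thus i and j) int32,
# matching the Table[int32, int32] key type.
for i in 0'i32 ..< 10_000_000'i32:
  map[i] = i
for i in 0 ..< 3:
  for j in 0'i32 ..< 10_000_000'i32:
    map[j] = map[j] + 1   # the literal 1 adapts to int32
```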
My suggestion is that instead of always having int..int types, subranges should get the smallest type that will contain the subrange, so a subrange of 0..10 would be of type uint8. To make that work, Nim would also have to promote uintX to int the same way it promotes intX to int. I can't see a reason why it promotes int32 to int but not uint32 to int.
I'm just getting started with Nim so may be totally confused about how things
work.
Test results: using an int32 map reduces Nim memory usage to 537MB (349MB for Go; no change for D). Specifying the initial table size as 10_000_000 brings Nim memory usage down to 269MB (188MB for Go). But Go takes 3 seconds while Nim takes only 0.89s.
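The pre-sizing mentioned above can be done via the initialSize parameter of initTable, which avoids rehashing as the table grows. A minimal sketch (note: in older Nim versions the initial size had to be a power of two, obtained via tables.rightSize; recent versions accept any size):

```nim
import tables

# Reserve room for 10M entries up front so insertion never rehashes.
var
  map = initTable[int32, int32](10_000_000)
```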