A firm operational boundary in this space is: what you know when. @mratsim's 
example is the one I always give in relation to 
[nio](https://github.com/c-blake/nio): file sizes / contents (e.g. headers) are 
intrinsically something you only know at run-time. Full stop. `nio` has to do 
that whole run-time type system, like NumPy does (by which @elcritch usually 
means realizing some dynamic dispatch to big-vector/matrix SIMD handling 
routines).
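To make the run-time constraint concrete, here is a minimal sketch of what such a run-time element-type tag and dispatch could look like. All names (`IoKind`, `sumColumn`) are hypothetical and illustrative only, not `nio`'s actual API:

```nim
type
  IoKind = enum     # element type discovered from a file header at run-time
    ikI32, ikF32, ikF64

proc sumColumn(raw: seq[byte]; kind: IoKind): float =
  ## Dispatch on the run-time tag to reinterpret raw file bytes.
  ## This branch is decided at run-time, NumPy-style, because `kind`
  ## cannot be known until the header has been read.
  case kind
  of ikI32:
    for i in countup(0, raw.len - 4, 4):
      result += float(cast[ptr int32](unsafeAddr raw[i])[])
  of ikF32:
    for i in countup(0, raw.len - 4, 4):
      result += float(cast[ptr float32](unsafeAddr raw[i])[])
  of ikF64:
    for i in countup(0, raw.len - 8, 8):
      result += cast[ptr float64](unsafeAddr raw[i])[]
```

The `case kind` is exactly the dynamic dispatch point a NumPy-like system multiplies out over every kernel.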

The moment you decide to go compile-time, you limit that, as well as inducing 
re-compiles which can really bother REPL-head data-science folk. For very 
small scales, like 2-vectors to 4-vectors, one can imagine having shared 
libraries with the basic Cartesian product of "types & sizes" expanded, but as 
Araq already observed, different regimes need different approaches. This 
Cartesian product "argument" is one reason why.
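For the small-scale regime, the Cartesian product is tame enough to expand mechanically. A sketch of what the generic core might look like (illustrative only; a shared library would pre-instantiate each combination of `N` in {2, 3, 4} and `T` in {`float32`, `float64`}):

```nim
proc dot[N: static int; T](a, b: array[N, T]): T =
  ## One generic definition; each (N, T) pair the compiler sees becomes
  ## a distinct instantiation -- the "types x sizes" Cartesian product.
  for i in 0 ..< N:
    result += a[i] * b[i]

# Two points of the product, instantiated explicitly:
let d2 = dot([1.0'f32, 2.0], [3.0'f32, 4.0])    # N=2, T=float32
let d3 = dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # N=3, T=float64
```

With only a handful of sizes and element types this stays manageable; at `nio` scale, where the element type comes from a file header, it cannot even be enumerated ahead of time.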

I can maybe add more in this comment by saying there is an even more general 
problem, unaddressed as far as I know, related to memory allocation. Besides 
the CPU & GPU split, which you are forced to confront when doing GPU 
programming, there is also file-based memory allocation, as in `nio`. When I 
was doing the `adix/btree` module, I was contemplating how best to model 
allocator-agnostic coding styles in Nim. I wanted to be able to let people use 
GC'd types like `string` as keys, but then also allocate on-disk under one 
umbrella interface.
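One way such an umbrella interface could be modeled is with Nim generics: any type providing an alloc/free pair satisfies the implicit "concept", whether it is heap-backed or file-backed. This is purely a sketch of the idea, not `adix`'s actual design; all names here are hypothetical:

```nim
type
  HeapAlloc = object
    ## Heap-backed strategy; a hypothetical FileAlloc handing out offsets
    ## into an mmap'd file could satisfy the same two-proc interface.

proc allocMem(a: var HeapAlloc; size: int): pointer = alloc0(size)
proc freeMem(a: var HeapAlloc; p: pointer) = dealloc(p)

proc withScratch[A](a: var A; size: int; body: proc (p: pointer)) =
  ## Generic code that assumes only the allocMem/freeMem pair of `A`.
  let p = a.allocMem(size)
  try:
    body(p)
  finally:
    a.freeMem(p)

var h: HeapAlloc
h.withScratch(128, proc (p: pointer) =
  zeroMem(p, 128))   # use the scratch space via whichever backend
```

The attraction is that `withScratch` (or a B-tree node allocator) compiles once per strategy, with no global choice baked in.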

Zig has this pattern where allocators are received & passed around explicitly 
in every call. That language doesn't even have nice named parameters with 
default values like Nim does. So, Nim's PL features always struck me as ripe 
for some nice modeling of this stuff. To be useful to the most people it would 
probably have to be some kind of stdlib entity. A good solution might well 
offer a different way to manage multiple memory management strategies in the 
same program - better than the almost-`--define:thisKindaGC` global setting we 
have now. And in that comparison probably lie gotchas &| solutions Araq has 
long thought about. :-) { Also, for all I know, Arraymancer solved this in its 
GPU-CPU work. Pardon my ignorance. }
