* more willingness to make breaking changes in the stdlib, for sure, as others 
have said, and to hunt down inconsistencies and reduce the number of "alternate 
approaches" that exist. consider moving certain libraries over to fusion and 
taking fusion more seriously. for instance, just off the top of my head: if i 
need a higher-order function, should i choose the sequtils functions (which 
copy needlessly anyway when chaining something like x.filterIt().mapIt()...) or 
should i use sugar.collect? as another example, there are a lot of parser 
libraries in the stdlib, and the relationship between them can be confusing to 
newcomers.
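a quick sketch of that overlap (nothing new here, just the two existing 
approaches side by side; the dummy seq is only for illustration):

    import std/[sequtils, sugar]
    
    let xs = @[1, 2, 3, 4, 5, 6]
    
    # sequtils chain: filterIt builds an intermediate seq, then mapIt builds another
    let doubledEvens = xs.filterIt(it mod 2 == 0).mapIt(it * 2)
    
    # sugar.collect: one pass, one resulting seq
    let doubledEvens2 = collect(newSeq):
      for x in xs:
        if x mod 2 == 0:
          x * 2
    
    assert doubledEvens == doubledEvens2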
  * LLVM backend, maybe even made the default. afaik, last i heard, this is a 
work in progress (there was the unofficial one, nlvm, but i think there is 
supposed to be work on an official one too? may be misremembering). many people 
i know find nim's choice to compile to C/C++ dubious at best. for my part, i 
recognize that it has real advantages, esp. when compiling to niche platforms, 
but i know there are a lot of people out there who would prefer direct LLVM 
compilation.
  * integrating LLVM into the compiler would open up certain feature 
possibilities. for instance, we could integrate clang in a manner similar to 
`zig cc` (and also potentially keep `{.emit.}`, etc. working in certain 
circumstances), but what i'm really looking for here is runtime code 
generation. nimscript never seemed to gain much traction as an embeddable 
product: the machinery for calling between nimscript and native AOT code seems 
largely unmaintained. what i'd like to see is
    1. a more stable and easy-to-use interface for embedding the language, and
    2. a JIT runtime (probably based on LLVM MCJIT). obviously this would not 
be available on every platform, but what is Nim if not flexible?
    3. the wildcard (but would be cool) option: go in the direction of 
[Scopes'](https://scopes.readthedocs.io/en/latest/about/) "on-demand JIT system"
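to make points 1 and 2 concrete, here is a purely hypothetical sketch of what 
an embedding + JIT surface could feel like; every name below (`nimjit`, 
`newJitContext`, `compile`, `symAddr`) is invented for illustration, nothing 
like it exists today:

    # hypothetical API: all of these names are invented purely for illustration
    import nimjit
    
    let jit = newJitContext()        # would wrap an LLVM ORC/MCJIT session
    let module = jit.compile("""
      proc damage*(base, armor: float): float =
        base * (1.0 - armor)
    """)
    
    # look up the jitted symbol and call it from AOT-compiled code
    let damage = cast[proc (base, armor: float): float {.cdecl.}](
      module.symAddr("damage"))
    echo damage(100.0, 0.25)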
  * less flashy, and it could probably be implemented without a dedicated 
language feature, but it's a want i've had for a while and haven't had time to 
poke around to ascertain its feasibility: some way of profiling memory usage by 
declared "subsystems". in game engines, for instance, it is popular to tag heap 
allocations with the "general discipline" to which they belong: a dynamic array 
might be allocated by the "audio system," a hashmap by the "rendering system," 
something else by "networking," etc. the application can then query this data, 
which is genuinely valuable for finding the "hogs" in a more general, 
view-from-above manner. in some apps i've seen, this is done by augmenting the 
heap collection types with an enum that ties each allocation to its subsystem, 
but that requires taking a wrecking ball to all your existing heap-backed 
types, which is messy and annoying. i'm rambling here, but what i figure could 
be done is an interface that kind of resembles the old deprecated stack-based 
region allocator (just not as an allocation strategy), something like:

    # hypothetical sketch: MemoryStats and the memstat block are the feature
    # being wished for here; neither exists today
    var core, audio, rendering: MemoryStats
    memstat core:            # allocations in this scope are tallied into `core`
      while true:
        memstat audio:       # nested scopes attribute to the innermost tag
          processAudio()
        memstat rendering:
          render()
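and elsewhere the app could pull the tallies back out, something along the 
lines of (field names are, again, invented):

    # field names invented; the point is only that per-subsystem data is queryable
    echo "audio:     ", audio.currentBytes, " bytes live, peak ", audio.peakBytes
    echo "rendering: ", rendering.currentBytes, " bytes live, peak ", rendering.peakBytes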
