This message is mostly taken from a conversation in a Telegram group, edited for 
better appearance/structure, with some grammatical errors fixed. No new 
arguments (compared to the Telegram discussion) are introduced, although I 
might have edited the wording.

* * *

I don't have the expertise to comment on "Control over memory ordering." in 
Nim, but I commented on the things I'm relatively experienced in.

Again, I'm really not qualified to comment on things related to memory 
management, move semantics and other related topics, so I mostly skipped those 
since I'm afraid to write something wrong/misleading.

# Comments on article

> Have fundamental design with shared concepts and minimum exclusive cases to 
> keep in mind.

Due to the easy creation of DSLs it is not necessary to introduce custom 
keywords/features into the language. The core development team mostly works on 
fundamental features like memory management, concepts etc. Everything else - 
async, custom trait derivation, pattern matching, support for whatever 
programming paradigm you find interesting this month - can be implemented as a 
DSL with minimal friction.
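
For example, a new control-flow construct is just a library macro. A minimal 
sketch (the `unless` name is hypothetical, not from any stdlib):

    import macros
    
    # `unless` - an inverted `if`, added as a library instead of a keyword
    macro unless(cond: untyped, body: untyped): untyped =
      result = quote do:
        if not `cond`:
          `body`
    
    unless 2 > 3:
      echo "2 is not greater than 3"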

UFCS and style-insensitivity also help here, since they completely eliminate 
the syntactic difference between OOP-style and procedural code. Support for 
custom operators is built into the language and is as easy as 
``proc `+`(a, b: string): string = a & b`` - if you want to use `+` for string 
concatenation, for example (a lot of people complain about it for some reason). 
Remember the drama about the `:=` operator in Python? The `<=>` operator in C++?
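
A minimal sketch of both points - the string `+` from above, plus UFCS making 
the "method call" and "function call" spellings interchangeable:

    proc `+`(a, b: string): string = a & b
    
    proc double(s: string): string = s & s
    
    echo "foo" + "bar"   # custom operator, defined like any other proc
    
    # UFCS: the same call written three ways
    echo double("ab")
    echo "ab".double()
    echo "ab".double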

I think macros and UFCS actually are _the_ thing in Nim. You don't hardcode 
features into the language; instead you just write a macro.

All other languages have features hardcoded. When you hardcode something into 
your code, everyone thinks it is a 'bad practice' and that you should not do 
it. When something is hardcoded into the compiler, everyone is perfectly fine 
with spending hundreds of hours talking about it, writing RFCs and writing 
articles on it. I mean, of course some things just cannot be generalized (just 
as you can't write generic code that does everything at once at runtime), but 
still, the fewer features you have to manually add & maintain, the better it 
will be in the long run.

Why are Lisp ideas still useful today despite being so old? Because there is 
no enforcement of a particular coding style (well, assuming you want to deal 
with `(((((((((())))))))))`). I don't particularly like Lisp itself (not a fan 
of dynamic typing etc.) but that's just the part that is hardcoded in a 
concrete implementation.

> Guarantee safe, defined behavior by default, but still provide tools to write 
> and abstract away unsafe code when needed.

Nim has raw pointers for interfacing with C-level code
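
A minimal sketch of that untraced layer:

    var x = 42
    let p: ptr int = addr x    # raw, untraced pointer
    p[] = 13                   # manual dereference
    echo x                     # 13
    
    # untyped `pointer` and casts, as you would use at a C boundary
    let raw: pointer = p
    echo cast[ptr int](raw)[]  # 13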

> Long-term maintainability of programs written in the language. This includes 
> problems of inheritance, function overloading and so on.

Has single inheritance and function overloading
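
For example:

    type
      Animal = ref object of RootObj
        name: string
      Dog = ref object of Animal    # single inheritance
    
    # overloading by argument type
    proc describe(a: Animal): string = a.name & " is an animal"
    proc describe(n: int): string = "just the number " & $n
    
    echo describe(Dog(name: "Rex"))  # a Dog is an Animal
    echo describe(42)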

> Package management.

Has standard centralized package management

> API documentation format.

Documentation comments are part of the AST, and the compiler comes with a 
built-in documentation generator.
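
For example, `##` comments attach to the documented symbol, and 
`nim doc file.nim` renders them to HTML:

    proc square*(x: int): int =
      ## Returns the square of `x`.
      runnableExamples:
        assert square(3) == 9
      x * x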

> Onyx introduces powerful macros written in Lua. It allows to re-use existing 
> Lua code and have full access to the compilation context thanks to easy 
> debugging with Lua.

This might be easier from the implementation perspective, but I think writing 
the whole program in one language is the superior solution due to the lack of 
context switching. Nim macros are written in a subset of Nim (quite a large 
one, btw: you have pointer semantics, callbacks, almost all OOP features 
(can't do casting though)).

> Classes may have a finalizer defined and thus have automatic resource control.

Nim has a destructor hook for objects - ``proc `=destroy`(obj: var T) = ...``
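
A minimal sketch, assuming a compiler where destructors are enabled (e.g. 
built with `--gc:arc`):

    type Resource = object
      handle: int
    
    proc `=destroy`(r: var Resource) =
      # runs automatically when `r` goes out of scope
      if r.handle != 0:
        echo "releasing handle ", r.handle
    
    proc use() =
      let r = Resource(handle: 42)
      echo "using handle ", r.handle
    
    use()  # prints "using handle 42", then "releasing handle 42"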

> Onyx implements real traits as composable units of behaviour thanks to 
> powerful function management tools like aliasing, implementation 
> transferring, un-declaring and renaming.

There is no automatic `#[derive()]` like in Rust (although it can be 
implemented as a library using macros - my proof-of-concept: 
<https://github.com/haxscramper/nimtraits>), and the language has concepts 
<https://nim-lang.org/docs/manual_experimental.html#concepts> that are 
currently being reworked to make them easier to use - 
<https://github.com/nim-lang/RFCs/issues/168> (currently == literally right now)
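
A small sketch with the current concepts syntax:

    type
      Comparable = concept a, b
        (a < b) is bool    # any type with `<` returning bool matches
    
    proc smallest[T: Comparable](xs: seq[T]): T =
      result = xs[0]
      for x in xs:
        if x < result:
          result = x
    
    echo smallest(@[3, 1, 2])        # 1
    echo smallest(@["b", "a", "c"])  # a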

> Classes and traits together impose object-oriented capabilities of the 
> language.

> Onyx has a concept of generics. Specializations of generic types may have 
> different members and evaluate delayed macros. Specializations of functions 
> with generic arguments may return different values and also evaluate delayed 
> macros.

Nim has support for generics - typeclasses, concepts and type inference. Nim 
macros can be invoked _after_ generic instantiation and have access to the AST 
of the type definition. You can, for example, get the implementation of the 
type passed as a generic parameter and iterate over all its fields.
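
A sketch of the field-iteration part, using `getTypeImpl` from the `macros` 
module (the `echo` runs at compile time):

    import macros
    
    macro listFields(T: typedesc): untyped =
      # typedesc -> symbol of the type -> its object AST
      let impl = T.getTypeImpl[1].getTypeImpl
      impl.expectKind(nnkObjectTy)
      for field in impl[2]:    # [2] is the record list of fields
        echo field[0].repr, ": ", field[1].repr
      result = newEmptyNode()
    
    type Point = object
      x: float
      y: float
    
    listFields(Point)  # prints "x: float" and "y: float" during compilation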

> Functions may be overloaded by arguments and return values.

Nim can only overload by arguments, not return values, but I would argue that 
overloading on return types is actually counter-intuitive and does not fit 
into the 'mostly imperative' programming paradigm of Nim.

> Onyx has a concept of annotations, which may be applied to variables and 
> functions.

You can use custom pragma annotations in the form of `{.yourAnnotation.}` that 
can be read by other macros. A pragma itself might be a macro - for example 
`{.async.}` (support for asynchronous programming in Nim) is implemented as a 
macro <https://nim-lang.org/docs/asyncdispatch.html#async.m%2Cuntyped> and 
allows writing almost the same code for both synchronous and asynchronous 
variants.
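
Declaring and reading back a custom annotation (following the pattern from the 
`macros` module documentation; the `serialKey` name is just an example):

    import macros
    
    template serialKey(key: string) {.pragma.}   # declare the annotation
    
    type User = object
      name {.serialKey: "user_name".}: string
    
    var u = User(name: "ee")
    # other macros (or plain code) can query the annotation back:
    assert u.name.hasCustomPragma(serialKey)
    assert u.name.getCustomPragmaVal(serialKey) == "user_name"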

> The language defines a set of now-commonly used arithmetic types, including 
> SIMD vectors, matrices and tensors, floating and fixed binary and decimal 
> numbers, brain and tensor floats, ranges and ratios.

Nim does not have built-in support for these, but in my opinion this is a 
library feature. If we try to drag not-commonly-used things into the language, 
we just end up with feature creep. Thanks to UFCS (yes, again) and macros you 
can integrate absolutely anything into the language and make it feel 
completely native.

> Onyx contains several utility types, such as unions, variants, tuples, 
> anonymous structs, lambdas and runnable blocks of code.

Nim has tagged unions (case objects), which also serve as sum types. Or you 
can have typeclasses in the form of `type Hello = float | int | string`. Nim 
has regular tuples `(int, char)` and anonymous structs (also called tuples) - 
`tuple[fld1: int, fld2: char]`.
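
A case object in action - the `kind` tag selects which fields exist:

    type
      ShapeKind = enum skCircle, skRect
      Shape = object
        case kind: ShapeKind       # the tag field
        of skCircle:
          radius: float
        of skRect:
          width, height: float
    
    let s = Shape(kind: skRect, width: 3.0, height: 4.0)
    case s.kind
    of skCircle: echo "area: ", 3.14 * s.radius * s.radius
    of skRect: echo "area: ", s.width * s.height   # area: 12.0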

More on lambdas/maps:

Lambdas - yes - there is actually little to no difference between the 
semantics of regular procs and lambdas. If you want to write callbacks, for 
example:
    
    
    import sequtils, sugar
    
    # using a regular proc
    echo @[2, 3, 4].map(proc(a: int): string = $a & "ee")
    
    # using `sugar.=>`
    echo @[2, 3, 4].map(a => $a & "ee")

Output:

    @["2ee", "3ee", "4ee"]
    @["2ee", "3ee", "4ee"]

<https://nim-lang.org/docs/sugar.html#%3D%3E.m%2Cuntyped%2Cuntyped>

Another possible alternative to function calls in this case is a simple 
template, `mapIt`. I adapted the implementation from the stdlib but made it 
simpler for this example.
    
    
    template mapIt*(s: typed, op: untyped): untyped =
      # infer the result element type by type-checking `op` against a dummy `it`
      type Out = type((var it{.inject.}: type(items(s)); op))
      
      var result: seq[Out] = @[]
      for it {.inject.} in s:   # `{.inject.}` makes `it` visible inside `op`
        result.add(op)
      result
    
    echo @[2, 3, 4].mapIt($it & "ee")

You can easily write your own template for filtering/mapping/folding etc., 
requiring the user to write only an expression like `$it & "ee"` in the body.
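
For example, a minimal `filterIt` in the same style (a sketch; the stdlib 
`sequtils` already ships a full version):

    template filterIt*(s: typed, pred: untyped): untyped =
      var result: seq[type(items(s))] = @[]
      for it {.inject.} in s:
        if pred:               # `pred` sees the injected `it`
          result.add(it)
      result
    
    echo @[1, 2, 3, 4].filterIt(it mod 2 == 0)  # @[2, 4]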

`mapIt` expands into a regular for loop to avoid function calls (the expansion 
does not look really pretty though, as it uses hygienic variable names to 
avoid collisions). `mapIt` is a template - the simplest form of codegen, close 
to the C preprocessor (but without its drawbacks, like the inability to take 
`(1, 2)` as a single argument, the lack of hygienic variables and so on):
    
    
    echo [
      type
        Out`gensym3646038 = type(
          var it: type(items(@[2, 3, 4]))
          $it & "ee")
      var result`gensym3646039: seq[Out`gensym3646038] = @[]
      for it in items(@[2, 3, 4]):
        add(result`gensym3646039, $it & "ee")
      result`gensym3646039]
    

# Comment on macro implementation

It is not clear what the macro actually operates on: tokens, AST, or something 
else - since there are no type annotations on the macro signature. And the 
whole thing is just _dynamically typed_ Lua embedded in a program written in a 
_statically typed_ language.

I don't think it is much cleaner than simply using the AST. Since you accept 
tokens, you basically have to either use very primitive ones (like integers, 
for example) or just reparse the whole argument body (if you want to have some 
kind of DSL for a macro).

And now the programmer has to use two languages at once - one untyped, the 
second one typed; they have different semantics, different manuals. For a 
large project you have to maintain a codebase in two languages. What if a 
macro needs to be more complex and its implementation is split into several 
modules? Now you have two different module systems in the same project. I 
think the concept of get-AST-return-AST is much, _much_ cleaner. You write 
everything in one language. If you know how to split a string in compiled Nim, 
then you know how to do it in a macro. If you need to declare a type, you do 
it the same way. Same syntax, same semantics, same results.

A reimplementation of the compile-time Fibonacci example:
    
    
    import macros
    
    macro fib(a: typed): untyped =
      a.expectKind(nnkIntLit)    # only accept integer literals
      
      # ordinary proc, executed at compile time inside the macro
      proc aux(arg: int64): int64 =
        if arg < 2:
          arg
        else:
          aux(arg - 2) + aux(arg - 1)
      
      # put the computed value back into the AST as a literal
      return newLit(aux(a.intVal))
    
    echo fib(20)
    
Output:

    6765

# Notes on syntax/grammar

**NOTE** this is my personal opinion for the most part.

I actually think it is a good decision to make a language easy to parse & work 
with. Except for style insensitivity, Nim has a pretty simple grammar. This 
makes writing your own tools much easier - if you wanted, you could roll your 
own parser. The grammar does not require type resolution like in C++, for 
example. This is also why it is not a good thing (IMO) to allow multiple 
different languages in the same file.
