@andrea I 100% agree with your first point. It is one of the main reasons I 
can't use Nim as my main language, but use it as my main hobby language. The 
community just needs to grow, in all senses. We need more projects, more 
libraries, more users, and more marketing. These are hard things, and trying to 
grow too fast can be very detrimental, so I respect the situation.

This is your personal opinion, of course, but I have to disagree with your other 
points about the new runtime, as I feel they misrepresent Nim.

First, you seem to be implying that the GC is going to be completely removed 
for 1.0. AFAIK, this is definitely **not the case**. The new runtime and the GC 
are going to co-exist for a long time. (Core maintainers, correct me if I am 
wrong here.)

You don't like the new lifetime annotations. That is a personal preference, and 
I respect it. I personally have been burned by Rust, and I find this syntax 
easier to reason about than Rust's (by a large margin) while still getting many 
of the same benefits. But that is just my opinion.
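For context, here is a minimal sketch of the kind of annotation being debated, 
assuming the semantics of the experimental `--newruntime` mode (which follows 
Bacon and Dingle's ownership model). The `Node` type is made up for 
illustration, and the exact syntax may still change before anything stabilizes:

```nim
# Sketch only: assumes the experimental newruntime semantics,
# compiled with something like `nim c --newruntime example.nim`.
type
  Node = ref object
    data: int

proc main =
  var a = Node(data: 42)  # constructing an object yields an *owned* ref
  let b: Node = a         # a plain `Node` is an unowned, dangle-checked alias
  echo b.data
  # `b` dies before `a` here, so this is fine. If an unowned ref
  # outlived its owner, the runtime would abort rather than let it dangle.

main()
```

The point is that ownership lives in the types themselves rather than in 
Rust-style lifetime parameters, which is exactly the trade-off being argued 
about here.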

> Finally, the new runtime does not seem to be based on sound research on formal 
> type systems... I think if one wants to really follow such an approach, it 
> must be tried on paper and proved correct with an actual demonstration before 
> jumping to the implementation.

I think this is a completely unreasonable thing to say. By that reasoning, the 
only languages you should use are ones that have been formally verified, like 
Ada SPARK, Idris, or APL.

There are many counterexamples to your argument:

You talk about Bacon and Dingle's work like it is a bad thing. Rust took a very 
similar approach, and it is highly successful.

The Rust borrow checker was based on Cyclone, an _extremely_ dead academic 
language. Rust took that idea and improved it by fixing many edge cases, a 
project that took years. People still see Rust as a "safe language based on 
sound research".

The Midori project at Microsoft is a "dead" project that produced important 
research on async systems that inspired many mainstream async concepts.

Haskell was implementing state-of-the-art lazy evaluation and garbage 
collection algorithms based on research paper drafts (not final publications) 
back in the '80s. Haskell is explicitly a research language that incorporates 
unfinished research even today, something the Haskell community is very proud 
of. Yet Haskell is still very popular in some industries and is considered a 
fundamentally safe language based on "sound research".

The Nim GC (deferred reference counting), as well as the allocator beneath it 
(based on TLSF), is based on research papers that were considered cutting edge 
when Araq implemented them. The Nim GC is not just some generic collector 
copied from '90s-era Java; it is actually quite advanced.

**After several years of real-world use** of the GC algorithm, serious flaws 
were discovered in it with regard to multi-threaded programs. Instead of trying 
to "hack" the GC to fix the problem, Araq searched the latest peer-reviewed 
research for a better solution. This is an extremely principled approach!

Academia is not some "magic place" where researchers go to commune with the 
computer science gods and come back down with the perfect answer to a problem. 
That would be the worst kind of "waterfall" software development, a known 
anti-pattern that does not work well.

From what I have seen, out of the "new languages" craze of the last few years, 
Nim is one of the most principled when it comes to making decisions based on 
academic papers. Araq spends much of his time keeping up with the latest 
research. Compared to many of these other "new languages" that base their 
design decisions purely on hype, by committee, or on uneducated personal 
hunches, I personally find this very comforting.
