I said this earlier in the thread, but I'll reiterate: I think this idea is a waste of time and resources.
Hear me out.

Code is read far more often than it is written. That is an objective fact, so a programming language should be optimized for easy reading. A user with a beginner-level understanding of Nim should be able to look at any piece of Nim source and grasp at least a tenuous understanding of what it does and why. Fracturing the ecosystem into multiple, wildly different syntaxes would make that very difficult for beginners.

What the code does is far more important than how it looks. At first glance, this may seem to contradict my previous point, but it does not. Any syntax can be translated into any executable given the correct compiler. The entire reason source code exists is to abstract away things like bit flipping and reading from the CPU cache explicitly. Imagine writing a piece of software in pure binary. How do I tell an x86 CPU to read from cache and compute the sum of two integers in binary? Does the CPU actually have any concept of what an integer is? An 8-bit sequence can be interpreted as many different things. How do you enforce type safety with something as ambiguous as binary?

Everything above the binary level is an abstraction. Hell, even binary itself is a sort of abstraction over the physical switching of transistors. We need abstraction to bring various operations closer to the realm of simple human conception, and that becomes much easier when you have a strict set of rules that explicitly defines the exact character sequences required to perform a specific operation. Having multiple syntactic structures that can call the same operation muddies the waters: it puts another abstraction on top of an already deep stack of abstractions. In some cases that can be good (programming languages in general); in others it is bad (syntax on top of syntax). An abstraction just for abstraction's sake is never a good idea.
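To make the ambiguity concrete, here's a small sketch (in Python rather than Nim, purely because the `struct` module makes the point compactly): the same four bytes decode to wildly different values depending on which type you assert they are. The byte values are my own arbitrary choice for illustration.

```python
import struct

raw = b"\x00\x00\x80\xbf"  # four bytes with no inherent meaning

# The same bit pattern, read under three different type assumptions:
as_uint32 = struct.unpack("<I", raw)[0]  # unsigned 32-bit integer
as_int32 = struct.unpack("<i", raw)[0]   # signed 32-bit integer
as_float = struct.unpack("<f", raw)[0]   # IEEE 754 single-precision float

print(as_uint32)  # 3212836864
print(as_int32)   # -1082130432
print(as_float)   # -1.0
```

Nothing in the bytes themselves says which reading is correct; the type information lives entirely in the abstraction layered on top, which is exactly why source-level type systems exist.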
Clearly defining an explicit syntax for each desired operation is the way to go. The entire "skins" idea seems predicated on the supposed truth that people choose programming languages for projects based on syntax. In some situations that is true. A good example is C# vs Java: some people find Java too verbose and opt for C#; some people find C# not verbose enough and choose Java. The thing is, none of that really matters once you consider real-world constraints. If your software will only run on Windows systems, C# is probably an acceptable choice. Java would be as well, but then your users would need to install a Java runtime, so C# might be the better choice in that situation regardless of syntax concerns. If you are building a multi-platform piece of software, Java is the obvious victor, since it is much more portable than C#.

Another good example is Python vs Ruby. Both have relatively similar runtimes and portability, and both are roughly equal in execution speed. However, Python has a much wider and more mature ecosystem of third-party libraries and support. For a real-world application, Python is most likely the wiser choice regardless of whether or not your devs like the end keyword. Developers who argue the merits of a language based on petty syntax concerns like the use of braces are usually not very good developers.

I will say that syntax is important: it can improve readability and make a language easier to learn, which in turn makes it easier to onboard new hires and to conform to style guidelines. However, multiple different syntaxes for what is essentially the same language makes this point null and void. It introduces the problem of [overchoice](https://en.wikipedia.org/wiki/Overchoice), and increasing cognitive load on developers is always bad.
Programming by its very nature already puts a heavy cognitive load on its practitioners; there is no need to add to that weight.

Lastly, Nim already has a very pleasing syntax, and there really is no need to change it. The only reason someone would need to change Nim's syntax is to suit their own personal preferences, and that is not a good enough reason to change how a language handles, say, scope blocks. "Skins", while a cool idea, are not needed, and working on this functionality for Nim would be a waste of time. What the Nim community needs to focus on in order to gain wider adoption is the following:

* Stabilization
* Larger third-party package ecosystem
* Sponsorship
* Web technologies (frameworks, database software, etc.)
* Mobile support
* More tutorials
* Better documentation (it's very thorough, but could be organized better)
* Open source evangelism
* Illustrating that garbage collection can be a good thing

Look at Python for a great example of how a grassroots programming language can become successful. Python didn't really do anything too crazy; it's a very straightforward language, and it held a very firm stance on what Python syntax is. Python now has multiple implementations that all share almost exactly the same syntax. I would even argue that Python sticking to whitespace delimiting, even in the face of great dissent, is part of what made it so popular.

In short, if people don't like that Nim uses whitespace delimiting and uses proc instead of func or def, that's totally fine. They can go find another language with fantastic metaprogramming support, fantastic C interoperability, great performance, great type safety, ease of development, built-in templating, and multi-paradigm capability. Oh, wait, they can't.