> i've only been learning since about a year plus, and started with c#, and 
> have also recently picked up c++, even though it has been highly discouraged 
> to do so. i still want to learn it anyway because so much stuff is built with 
> c++ and also because i just want to. i like both languages and i think im 
> somewhat aware of the controversy around these languages.

The risk is being frustrated and giving up. But doing something you're 
intrinsically curious and motivated about also gives you the strength to 
overcome the obstacles. Don't hesitate to take a break if you're stuck, though: 
you can try a new language or a small project, or do something else before 
coming back.

> ive a bit of an obsession with performance and cleanliness. especially the 
> type of software im interested in working is performance-critical. i have a 
> interest in writing low level stuff. nim seems ideal. but i wonder how c# 
> stacks up to nim

In my experience, performance reachable in C is easily reachable in Nim, for 
example by running c2nim on the code. Performance reachable in assembly is 
reachable in Nim as well, via inline assembly. I have done this multiple times 
in high-performance computing and in high-speed cryptography:

  * OpenBLAS is a reference implementation with 90% assembly code: 
<https://github.com/xianyi/OpenBLAS>; see this example float32 matrix 
multiplication, i.e. doing `C[i, j] += A[i, k] * B[k, j]`: 
<https://github.com/xianyi/OpenBLAS/blob/develop/kernel/x86_64/sgemm_kernel_16x4_skylakex.S>.
 The code generator here 
<https://github.com/mratsim/weave/blob/7682784/benchmarks/matmul_gemm_blas/gemm_pure_nim/common/gemm_ukernel_avx512.nim#L12-L76>
 generates something faster, and supports integers as well. The actual code 
generator implementation is here: 
<https://github.com/mratsim/weave/blob/7682784/benchmarks/matmul_gemm_blas/gemm_pure_nim/common/gemm_ukernel_generator.nim#L149-L159>.
 As you can see, I use prefetch, restrict and alignment to squeeze out the 
utmost performance I can.
  * For high-speed cryptography, I use Nim macros to create an assembler that 
improves performance by 70% over pure Nim/pure C by using specific instructions 
that compilers cannot generate: MULX, ADCX and ADOX: 
<https://github.com/mratsim/constantine/blob/1cb6c3d/constantine/math/arithmetic/assembly/limbs_asm_mul_x86_adx_bmi2.nim#L90-L111>.
 For example, my SHA256 code is faster than OpenSSL's and significantly more 
readable; see the OpenSSL assembly codegen 
<https://github.com/openssl/openssl/blob/12ad22d/crypto/sha/asm/sha256-586.pl#L516-L653>
 versus my code: 
<https://github.com/mratsim/constantine/blob/1cb6c3d/constantine/hashes/sha256/sha256_x86_shaext.nim#L38-L130>
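For reference, the operation those GEMM kernels optimize is just the textbook 
triple loop. Here is a minimal C++ sketch (the name `gemm_naive` and the 
row-major layout are my illustration, not the linked Weave/OpenBLAS code, which 
adds tiling, prefetching and AVX-512 vectorization on top of this):

```cpp
#include <cstddef>
#include <vector>

// Naive float32 GEMM: C[i, j] += A[i, k] * B[k, j], all matrices row-major.
// A is M x K, B is K x N, C is M x N (C accumulates, so zero it first).
void gemm_naive(std::size_t M, std::size_t N, std::size_t K,
                const std::vector<float>& A,
                const std::vector<float>& B,
                std::vector<float>& C) {
  for (std::size_t i = 0; i < M; ++i)
    for (std::size_t k = 0; k < K; ++k) {  // k outside j: streams rows of B
      const float a = A[i * K + k];
      for (std::size_t j = 0; j < N; ++j)
        C[i * N + j] += a * B[k * N + j];
    }
}
```

The optimized kernels compute exactly this contraction; the 10x-and-more 
speedups come purely from how the loads, stores and FMAs are scheduled.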

> but some test programs ive recently written, particularly string heavy ones,

Strings are very particular beasts. First of all, the slowest language for 
string processing is C, because it doesn't even store the length, so all string 
processing starts by scanning the whole string to get its length. Besides, 
naively using strings will allocate memory over and over for intermediate 
buffers. If performance is critical, you instead want to mutate the same 
buffer. The problem is the same in C++; Nim strings are the same as those in 
C++: a buffer, the length, and the buffer's max capacity. In garbage-collected 
languages, though, the garbage collector can be highly tuned for string 
allocations and deallocations, since they're pretty common _cough Perl_
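To make the buffer-reuse point concrete, here is a C++ sketch (C++ as a 
stand-in, since Nim strings share the same buffer/length/capacity layout; the 
`join_*` names are mine, purely for illustration):

```cpp
#include <string>
#include <vector>

// Naive: each `out = out + p` builds a fresh temporary string,
// so every iteration can allocate and copy an intermediate buffer.
std::string join_naive(const std::vector<std::string>& pieces) {
  std::string out;
  for (const auto& p : pieces)
    out = out + p;
  return out;
}

// Buffer reuse: size the (buffer, length, capacity) triple once up front,
// then append in place -- no intermediate allocations.
std::string join_reserved(const std::vector<std::string>& pieces) {
  std::size_t total = 0;
  for (const auto& p : pieces) total += p.size();
  std::string out;
  out.reserve(total);  // single allocation
  for (const auto& p : pieces)
    out += p;          // mutates the same buffer
  return out;
}
```

Both return the same result; the second just touches the allocator once, which 
is exactly the difference that shows up in string-heavy benchmarks.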

> anyway long story short i was wondering why should a c# programmer, with all 
> the recent performance improvements that come with .net 7, consider more 
> seriously committing to nim?

If you're a game developer, Unity, one of the major engines, uses C# for 
scripting, so it's pretty compelling to stay there. If performance is so 
critical that you would consider assembly, Nim is a viable alternative that 
lets you code just the selected hot paths in inline assembly. But if you use 
Windows with MSVC, there is no x64 inline assembly, so _shrug_. Now, if you 
want to have fun, I personally find Nim enjoyable to code in.
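For a flavor of what an inline-assembly hot path looks like (a toy C++ sketch 
with GCC/Clang extended asm, nothing from Constantine; the `add_with_asm` name 
is mine), and why the Windows caveat exists, since MSVC dropped inline assembly 
for x64 targets:

```cpp
#include <cstdint>

// Toy hot path: 64-bit add via GCC/Clang extended inline assembly on x86-64.
// MSVC on x64 has no inline asm, so there you would reach for intrinsics
// or a separate .asm file instead.
inline std::uint64_t add_with_asm(std::uint64_t a, std::uint64_t b) {
#if defined(__GNUC__) && defined(__x86_64__)
  std::uint64_t out = a;
  asm("addq %[b], %[out]"
      : [out] "+r"(out)   // read-write output in a register
      : [b] "r"(b)        // input in a register
      : "cc");            // clobbers the flags
  return out;
#else
  return a + b;  // portable fallback
#endif
}
```

Real kernels do the same thing at scale: carry chains, prefetches and register 
scheduling the compiler won't produce on its own, wrapped behind a plain 
function call.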
