Re: Casting basics (a few questions)

2020-03-12 Thread yglukhov
Alternative:

    let mybuf: ptr uint8 = ...
    let mybufLenInBytes: int = ...
    var s = newSeq[uint8](mybufLenInBytes)
    if mybufLenInBytes != 0:
      copyMem(addr s[0], mybuf, mybufLenInBytes)

Re: Unknown performance pitfall in tables?

2020-03-12 Thread adnan
https://raw.githubusercontent.com/dolph/dictionary/master/enable1.txt

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Well, maybe, and via the `let` it could be. I'm not sure the slice assignment is optimized down to a `copyMem`, though, which would work fine for your string data. A slice compare with the `copyMem` inside the if clause may be best. Those Table usage and how-to-compile modifications may well pack much more of a

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Do you have an example input file somewhere you could link to?

Re: Unknown performance pitfall in tables?

2020-03-12 Thread adnan
I was under the impression that slices are CoW.

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Oh, and one other number - your original code with PGO on that same input - 72 ms. So, the double/triple table lookup/slice stuff is really only costing around 9% (72 ms vs. 66 ms).

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
So, with your original version doing only `-d:release` I get 1030 ms (i.e. 1.03 sec), while on the same machine with my version doing gcc PGO and `--gc:arc` I get 66 ms, about a 15x speed-up.

Re: Unknown performance pitfall in tables?

2020-03-12 Thread adnan
Sorry if I sound incoherent, but what is PGO, how can I enable it, and why is it not on by default?

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Besides @Stefan_Salewski's comments, it also bears mentioning that A) you need to be careful to compile the Nim code with `-d:danger` before drawing Nim performance conclusions, B) `--gc:arc` may also help, but may or may not be available in your particular version of Nim, and C) the algorithm in your Nim case
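For concreteness, one way to apply A) and B) without retyping flags is a per-project `config.nims`. A minimal sketch, assuming a Nim recent enough to ship `--gc:arc`:

    # config.nims -- same effect as passing -d:danger --gc:arc on every
    # `nim c` invocation for this project
    --define:danger
    --gc:arc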

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
BTW, I haven't tried "every version of the code in between" to isolate all the various causes & effects, which might be instructive, but my version with only `-d:release` takes 630 ms, `-d:danger` takes 446 ms, `-d:danger --gc:arc` takes 255 ms, and then that 66 ms with PGO. So, in

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Anyway, my code with no PGO and `-d:danger --gc:arc` only takes 90 ms. My conclusion from this is that the Nim code can be 211/90 or 211/66 =~ 2.3x or 3.2x faster than the Go, depending on what you think is fair to compare. BUT you need to manually guide inlining, and if you want that last

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Actually, just taking your original code, adding `{.inline.}` to the `func`, and compiling with `-d:danger --gc:arc` makes it only 118 ms. So, that inlining must be the biggest boost the PGO is discovering.
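For anyone following along, the change is literally just a pragma on the key-building `func`. A minimal sketch with a made-up helper, not the thread's exact code:

    # {.inline.} is the only change that matters here; the body just builds
    # a letter-count key (anagrams share the same counts)
    func letterKey(word: string): array[26, int] {.inline.} =
      for c in word:
        inc result[ord(c) - ord('a')]

Two words are anagrams exactly when their `letterKey` values compare equal, so the array can serve directly as a `Table` key.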

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Anyway, once you reproduce my times you should maybe tell those Reddit people how fast Nim code can be. :-) Perusing the thread, I didn't see people doing PGO on the C/C++ solutions. (It may well make less of a difference for those solutions, too.)

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
PGO = profile-guided optimization. I mentioned how to use it already: `-fprofile-generate ...`. It's not on by default because of how it works: gcc instruments your code, counts what happens during your sample run, then uses this information to make good judgements (unlike usual

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
It also wouldn't be crazy for the `nim c` / `nim cpp` compiler drivers to accept some sample program invocation and do the compile / sample-run / re-compile three-stage process more automatically, but that isn't in place either. My `nim-pgo` script just assumes the usual `~/.cache/nim/r`
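For the curious, until something like that lands the three stages can be scripted by hand in NimScript. A minimal sketch, assuming gcc and hypothetical file names (`anagrams.nim`, the `enable1.txt` word list); the `-f` on the second compile forces the cached C files to be rebuilt against the collected profile:

    # pgo.nims -- run with `nim e pgo.nims`; gcc-only, file names hypothetical
    exec "nim c -d:danger --gc:arc --passC:-fprofile-generate --passL:-fprofile-generate -o:anagrams anagrams.nim"
    exec "./anagrams enable1.txt"   # sample run: gcc writes .gcda counts into the nimcache
    exec "nim c -f -d:danger --gc:arc --passC:-fprofile-use --passL:-fprofile-use -o:anagrams anagrams.nim"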

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Oh, and just as a sanity check that I'm using the right data/algo: my output is `@["estop", "pesto", "stope", "topes"]`, which I think is right after reading a little more of that original reddit thread you posted.

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
Oh, and just for reproducibility's sake:

    myshell$ nim --version
    Nim Compiler Version 1.1.1 [Linux: amd64]
    Compiled at 2020-03-12
    Copyright (c) 2006-2019 by Andreas Rumpf
    git hash: bbc231f8e06c18fe640fa56a927df476c2426ccb
    active boot switches: -d:release

Re: Unknown performance pitfall in tables?

2020-03-12 Thread Stefan_Salewski
> let slice = repeated[i .. i + arg.high()]

My first guess is that this is not really optimal for performance, as it may allocate a new string for each loop iteration. You may make `slice` a string variable outside of the loop, and in the loop first set its len to zero and then append the chars one
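In code, that suggestion looks roughly like the following. A minimal sketch reusing the `repeated`/`arg` names from the quoted line, with the loop bounds assumed:

    # reuse one buffer instead of allocating a fresh slice every iteration
    var slice = newStringOfCap(arg.len)
    for i in 0 .. repeated.len - arg.len:
      slice.setLen 0                    # keeps the capacity, drops the contents
      for j in i .. i + arg.high:
        slice.add repeated[j]
      # ... use `slice` here instead of `repeated[i .. i + arg.high()]`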

Re: Multithreading: still .running after joinThreads()

2020-03-12 Thread mratsim
If your function is `execProcess`, you can just use `execProcesses` instead to get multi-processing. Your `Thread[(string, string, string, uint16, uint16)]` seems dangerous since the strings are not GC-safe.
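A minimal sketch of that suggestion, assuming the per-thread work really is just launching an external command (the command strings below are made up):

    import std/osproc

    let cmds = @[
      "mytool --host a.example --port 8080",
      "mytool --host b.example --port 8081",
    ]
    # runs up to countProcessors() commands at a time and returns the
    # highest exit code among them
    let worst = execProcesses(cmds, options = {poStdErrToStdOut, poUsePath})
    echo "worst exit code: ", worst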

Re: Unknown performance pitfall in tables?

2020-03-12 Thread mratsim
Nim strings have value semantics, so your slice always triggers an allocation. In the future this RFC should help tremendously: [https://github.com/nim-lang/RFCs/issues/178](https://github.com/nim-lang/RFCs/issues/178) It would allow assigning slices (openArrays) to variables or storing them in
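Until then, `toOpenArray` at least lets you pass an unallocated view into a call (it still cannot be stored in a variable, which is what the RFC adds). A minimal sketch with made-up names, not the thread's code:

    # compare a window of `repeated` against `arg` without allocating a slice
    proc matchesAt(view: openArray[char]; target: string): bool =
      if view.len != target.len: return false
      for k in 0 ..< view.len:
        if view[k] != target[k]: return false
      result = true

    let repeated = "pestopesto"
    let arg = "pesto"
    for i in 0 .. repeated.len - arg.len:
      if matchesAt(repeated.toOpenArray(i, i + arg.high), arg):
        echo "match at ", i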

Re: Unknown performance pitfall in tables?

2020-03-12 Thread Araq
A factor of 2 difference between programming languages for a tiny program doesn't mean there are "performance pitfalls"; what a silly headline...

VapourSynth - now we do video ;oP

2020-03-12 Thread mantielero
I have managed to wrap VapourSynth. Just the bare minimum to get something working. It has been a good exercise to learn Nim and to start loving the language. Thank you all for your support. The code is crappy (my code I mean), but the result really surprised me. VapourSynth aims to have

Template retrieving variading type of argument (compatible with C vaarg)

2020-03-12 Thread Lachu
I have a problem with this code:

    type
      va_list* {.pure.} = ptr object

    template va_arg*[T](list: var va_list): T =
      result = (cast[ptr T](va_list))[]
      inc[ptr T](list)

    var args: va_list
    var c =

Re: Introducing Norm: a Nim ORM

2020-03-12 Thread moigagoo
I'm proud to announce the first release of **Norman** , the migration manager for Norm: [https://moigagoo.github.io/norman/norman.html](https://moigagoo.github.io/norman/norman.html) With Norman, you can keep your DB config separate from the models in a project config and generate, apply and

Re: Interlanguage communication

2020-03-12 Thread bobalus
Take a look at solutions like nanomsg, redis pubsub, zeromq.

Re: Casting basics (a few questions)

2020-03-12 Thread mantielero
Thank you both. I got it working.

Re: Interlanguage communication

2020-03-12 Thread mratsim
Sockets are the default way to do interprocess communication, use them or any abstraction on top like those @bobalus mentioned.
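A minimal sketch of plain `std/net` sockets used for local IPC, assuming one process is built with `-d:server` and the other as the client; the port is arbitrary and real code needs error handling and framing:

    import std/net

    when defined(server):
      var server = newSocket()
      server.setSockOpt(OptReuseAddr, true)
      server.bindAddr(Port(6001), "127.0.0.1")
      server.listen()
      var client: Socket
      server.accept(client)          # blocks until the other process connects
      echo "got: ", client.recvLine()
      client.close()
      server.close()
    else:
      var sock = newSocket()
      sock.connect("127.0.0.1", Port(6001))
      sock.send("hello from another process\r\L")
      sock.close()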

Re: Unknown performance pitfall in tables?

2020-03-12 Thread cblake
It was a tad melodramatic, but at least it had a '?'. And it even turned out to be 2-3x in the other direction (consistent with other Go table benchmarks I've tried).

Re: Nim for Beginners Video Series

2020-03-12 Thread Kiloneie
#20 is live at: [https://youtu.be/vfWxJH8Futo](https://youtu.be/vfWxJH8Futo) Module Except Keyword.

Re: Unknown performance pitfall in tables?

2020-03-12 Thread Lecale
It's interesting to see how to improve performance

Re: IntelliJ / Netbeans plugin for Nim

2020-03-12 Thread machineko
I would love some JetBrains IDE support / a good plugin, because I'm just a VS Code hater and I can't effectively write any Nim program in that IDE :(