Re: Help binding the nghttp2 to something consumable by Nim!
Here's a working wrapper for nghttp2. It will download the code from GitHub, build it with `cmake/make` or `configure/make` and link the lib. You can pick static or dynamic linking.

To compile and link dynamically: `nim c -d:nghttp2Git nghttp2.nim`

To statically link, add: `-d:nghttp2Static`

To pick a specific git tag: `-d:nghttp2SetVer=v1.40.0`

```nim
import os
import nimterop/[build, cimport]

const
  baseDir = currentSourcePath.parentDir() / "nghttp2Lib"

getHeader(
  header = "nghttp2.h",
  giturl = "https://github.com/nghttp2/nghttp2",
  outdir = baseDir,
  cmakeFlags = "-DENABLE_LIB_ONLY=ON -DENABLE_STATIC_LIB=ON",
  conFlags = "--enable-lib-only"
)

static:
  cDebug()
  cSkipSymbol(@["NGHTTP2_PROTO_ALPN"])

cIncludeDir(baseDir / "buildcache" / "lib" / "includes")

when isDefined(nghttp2Static):
  cImport(nghttp2Path)
else:
  cImport(nghttp2Path, dynlib = "nghttp2LPath")
```

I have only tested on Linux but it should work cross-platform if nghttp2 compiles smoothly on those platforms.
Re: Newbie - trying to compile for macos from windows
I have never cross-compiled from Windows/Linux => OSX but given my experience going the other way, I will recommend first finding an existing C/C++ compiler setup that does that for you before trying within Nim. [https://github.com/tpoechtrager/osxcross](https://github.com/tpoechtrager/osxcross) is the only real link I have found and it is only for Linux => OSX. Source: [https://stackoverflow.com/questions/693952/how-to-compile-for-os-x-in-linux-or-windows/19891283#19891283](https://stackoverflow.com/questions/693952/how-to-compile-for-os-x-in-linux-or-windows/19891283#19891283)
Re: Does --gc:arc remove dependency on NimRtl.dll?
All the code lives in src/plugin.nim. Plugins use src/pluginapi.nim and the plugin folder has many examples. Some stuff is in src/globals.nim that will need to get pulled out. src/sci.nim shows how to start the plugin system. It should be easy to extract and use it for some simple tests. Feud is Windows only today but I have tested the plugin system on Linux and it works fine, so it should be cross-platform. Also, it is only functional and tested on `--gc:boehm` since it manages memory correctly across threads and DLLs. `--gc:arc` will do the same, per Araq, but it has not been tested. If there is real interest, I am willing to pull out the plugin system from feud and make it an independent package that can be installed with nimble.
Re: Comparison Rust vs Nim binary sizes for IOT applications (just an FYI if you're interested)
My impression is that after doing a release build, optimizing for size, and stripping, you get diminishing returns regarding binary size in rust. [https://jamesmunns.com/blog/tinyrocket](https://jamesmunns.com/blog/tinyrocket)/ and [https://github.com/johnthagen/min-sized-rust](https://github.com/johnthagen/min-sized-rust) describe some additional techniques you could try. I expect the nim binary would still be smaller, but it could be worth attempting to give rust its best showing.
Re: Does --gc:arc remove dependency on NimRtl.dll?
Hm, FEUD seems a big project, so at first glance, looks like it may be tough to crunch...? also, if the main app is already in Nim, I'd assume nimrtl.dll probably isn't much of an issue in this case? But I'll try to keep in mind to ask @shashlick if I have time to drop by IRC at some point in future. Thanks for the heads up!
Re: Can I use IOCP / async on startProcess?
Awesome, huge thanks to both of you! I'll try to put the pieces of the puzzle together over the next few days, really hope it'll work! :)
Re: Can I use IOCP / async on startProcess?
> how can I simultaneously wait over 2 AsyncPipes?

It's a bit low level, but you can use `or` from asyncfutures: [https://nim-lang.org/docs/asyncfutures.html#or%2CFuture%5BT%5D%2CFuture%5BY%5D](https://nim-lang.org/docs/asyncfutures.html#or%2CFuture%5BT%5D%2CFuture%5BY%5D)

This is rough pseudo code, but you would use it something like this:

```nim
import asyncdispatch

proc doingAsyncStuff {.async.} =
  # You would replace these with your async pipes.
  let pipeFuture1 = someOtherAsyncProc()
  let pipeFuture2 = someOtherAsyncProc()

  # This will await until one of the futures is ready.
  # (You can also use `and` if you want to wait for both.)
  await (pipeFuture1 or pipeFuture2)

  # This is the low level stuff: we don't know which future
  # actually finished, so we have to check manually.
  if pipeFuture1.finished():
    let val = pipeFuture1.read()
    # do stuff
  elif pipeFuture2.finished():
    let val = pipeFuture2.read()
    # do other stuff
```
Re: Does --gc:arc remove dependency on NimRtl.dll?
Maybe of interest to you: FEUD did exactly this. It implements a successful plugin system using DLLs. [https://github.com/genotrance/feud](https://github.com/genotrance/feud)/ It's not super actively developed anymore, but the code might provide some good inspiration. Also, the author is active here and on IRC (he goes by @shashlick there). He might have some good tips.
Re: Can I use IOCP / async on startProcess?
Oh wow, looks super interesting, thanks! One thing I'm still missing (assuming I get those to work), is, how can I simultaneously wait over 2 AsyncPipes? I.e. wait until one of them has some data ready, when I don't know which one will it be? Is there some kind of feature/macro I could use for this?
Re: Does --gc:arc remove dependency on NimRtl.dll?
Hm; so, assuming that the master app always calls "destructors" correctly from the DLL that created the objects, I might already not need nimrtl.dll even with the regular (non-arc) gc? (I definitely want more than one DLL - I'm generally interested in writing plugins for some "master app".) Or do I misunderstand?
Re: Can I use IOCP / async on startProcess?
Not with the stdlib. You might have some luck using [https://github.com/cheatfate/asynctools/blob/master/asynctools/asyncproc.nim](https://github.com/cheatfate/asynctools/blob/master/asynctools/asyncproc.nim) though.
Re: Nython: Seamless Nim Extension Modules for Python
godot-nim uses nimGC_setStackBottom too, which I also ran into when trying to run a game with godot-nim. But I have no real knowledge about stuff like that (yet) :(.
Re: Nython: Seamless Nim Extension Modules for Python
`--gc:arc` has no nimGC_setStackBottom operation, which nimpy is expecting. There's a long weekend coming up; perhaps I'll finally peek behind the curtain of nim and nimpy.
Re: Nython: Seamless Nim Extension Modules for Python
I just tried to compile my example code with `-d:danger --gc:arc` and it fails with an error from nimpy. So definitely no, no `--gc:arc` yet.
Re: Nython: Seamless Nim Extension Modules for Python
Currently no, it's using mark and sweep for no other reason than that is what faster-than-requests had in its build process, which is what nython is largely based on. But it's planned to add more configurable options to pass to the nim compiler! Looking at nimpy specifically, it handles all gc-related things on its own, but it looks like it runs into some issues when there is more than one nimpy module in a project at a time. I imagine `gc:arc` will mitigate that, but the nimpy author will have to chime in with more details. nython is really just the last-mile package to make nimpy deployable to the broader Python community. nimpy itself is where the rubber meets the road and @yglukhov is the master of that.
Re: Nython: Seamless Nim Extension Modules for Python
Does this use `gc:arc`? Better interop with Python was one of its design goals. :-)
Re: Does --gc:arc remove dependency on NimRtl.dll?
1. If you use `nim --gc:arc -d:useMalloc` it's all delegated to your malloc implementation, which might be ready for DLLs; it depends on your libc implementation.
2. In general, nimrtl.dll is not required when you write a DLL. It is, however, a good idea to use nimrtl.dll if you want to produce more than one DLL.
3. The real issue with DLLs is the heap and how to share it. If DLL A calls `alloc`, can DLL B call `dealloc` with the same pointer? If both DLLs use nimrtl.dll then the answer is yes.
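As a hypothetical illustration of point 3 (the names `Widget`, `newWidget` and `freeWidget` are made up, not from any real project): when you can't share a heap, the safest convention is to let the DLL that allocates an object also be the one that frees it, by exporting a matching dealloc routine:

```nim
# plugin.nim -- compile with: nim c --app:lib plugin.nim
type
  Widget* = object
    id*: int

proc newWidget*(id: int): ptr Widget {.exportc, dynlib, cdecl.} =
  ## Allocated on THIS dll's heap.
  result = cast[ptr Widget](alloc0(sizeof(Widget)))
  result.id = id

proc freeWidget*(w: ptr Widget) {.exportc, dynlib, cdecl.} =
  ## The host must call this, rather than its own dealloc, so the
  ## pointer is returned to the same heap it was allocated from.
  dealloc(w)
```

With this convention, DLL B never calls `dealloc` on DLL A's pointers; it calls A's exported `freeWidget` instead, sidestepping the shared-heap problem entirely.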
Re: Newbie - trying to compile for macos from windows
Hi, I'm still new to Nim and this is my first post. Although Zig is really nice, I'm not sure why you would need it. I was able to somewhat cross-compile from Mac to Windows, but I can't remember off-hand the problem I had when compiling from Linux. I ended up finding a solution that I was probably going to use later, but I just recently installed a new system and haven't restored my backups yet. I had found some other compiler options too, but I can only think of one of them, so I'm posting a couple of links below. The cross-compile in Zig is nice, but I really liked the way Red/Rebol cross-compiles. You should be able to look at and/or use these for Nim cross-compilation. [https://github.com/arnetheduck/nlvm](https://github.com/arnetheduck/nlvm) [https://github.com/miyabisun/docker-nim-cross](https://github.com/miyabisun/docker-nim-cross)
Does --gc:arc remove dependency on NimRtl.dll?
If I write a DLL with --gc:arc, does it no longer require NimRtl.dll?
Can I use IOCP / async on startProcess?
Can I use IOCP support from Nim's asyncdispatch module together with startProcess (from osproc), to simultaneously listen on stderr and stdout of the subprocess? such that I can detect immediately when it is writing on one of the streams, and distinguish on which?
Re: Newbie - trying to compile for macos from windows
If you're up to some experimenting, you could try using Nim with Zig as your C cross-compiler - see: * [https://ziglang.org/#Zig-is-also-a-C-compiler](https://ziglang.org/#Zig-is-also-a-C-compiler) * [https://ziglang.org/#Cross-compiling-is-a-first-class-use-case](https://ziglang.org/#Cross-compiling-is-a-first-class-use-case) * [https://ziglang.org/#Zig-ships-with-libc](https://ziglang.org/#Zig-ships-with-libc) I don't guarantee it will work, but there's a chance. I believe I managed to get it to cross-compile from Windows to Android once.
Re: Newbie - trying to compile for macos from windows
Hi, thank you so much for the reply. I have since installed VirtualBox and the latest Ubuntu desktop ISO... trying to fire up Visual Studio Code, Nim and the LLVM binaries... struggling to set the path to clang. If it IS in fact possible to cross-compile to macOS, Android etc. FROM Windows 10, I would like to stick to Windows 10 - purely due to my decades of working primarily on Windows... This is 'greek' to me at this stage: "compile toolchain for GCC setup to make that configuration work." Is it possible to compile for macOS on Windows 10 when compiling a Nim program - or must I install Ubuntu?
Nython: Seamless Nim Extension Modules for Python
[https://github.com/sstadick/nython](https://github.com/sstadick/nython) I would love some people to try this out and see what breaks! Basically this runs compileToC, and then wraps that C code up in a way setuptools likes, so you can create Python wheels that can be installed with no Nim deps. This is hugely reliant on nimpy and wouldn't exist without it. The goal is to make it really easy to fit Nim code into Python projects, hopefully slowly displacing more and more Python code till everything is Nim all the way down.
Re: Suggestions for optimization?
To clarify how @cdome's comment differs from @SolitudeSF's and @lagerratrobe's own observation: the intent was always for `-d:danger` to imply `-d:release`, but there are several versions of Nim out there for which this implication does not hold. On those versions you have to define both to disable all checking. Going forward (from `devel` and probably from version 1.2) `-d:danger` alone will suffice. Another build-level thing you can do, beyond flags like `-d:release -d:danger`, is to use GCC's profile-guided optimization in the C backend. I've seen up to 2X speed boosts for Nim-generated code with that. I suspect there is much more improvement to be had from the various algorithmic improvements I mentioned, though, and some are pretty easy. They do start to seem more like "cheating" in cross-programming-language comparisons, especially moving all the hard work to compile-time. So, it depends on whether you're actually doing anything real with this code.
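For the curious, a PGO build via the GCC backend looks roughly like the sketch below. This is only an outline: `prog.nim` and `typical_input.txt` are placeholder names, and it assumes GCC is your C backend compiler (the `-fprofile-generate`/`-fprofile-use` flags are GCC's).

```shell
# 1. Build with instrumentation so GCC collects an execution profile.
nim c -d:danger --passC:-fprofile-generate --passL:-fprofile-generate prog.nim

# 2. Run the instrumented binary on a representative workload;
#    this writes .gcda profile data files.
./prog typical_input.txt

# 3. Force a rebuild, letting GCC optimize using the collected profile.
nim c -f -d:danger --passC:-fprofile-use --passL:-fprofile-use prog.nim
```

The key is that the training run in step 2 should resemble your real workload, since GCC optimizes hot paths based on that profile.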
Re: Suggestions for optimization?
-d:release -d:danger might give extra speed
Re: Suggestions for optimization?
BTW, the Unicode version of this with 65536 to 2e6 "letters" (depending upon if you can restrict to "one Unicode plane") is definitely trickier. You might think a 2 pass radix sort would help, but since most words are short most of your sorts are _very_ small. Indeed, even insertion sort might well beat the `algorithm.sort` merge sort in some average sense for this problem. Insertion sort tends to beat almost everything for N < 16..32 on modern CPUs due to cache effects. Almost any dictionary will have average, median, and mode word lengths below 16 just because long words are unpopular. So, just having a "wrapper sort" that switches to insertion for N < 20 and falls back to `algorithm.sort` for bigger is probably best. It would also be possibly valuable to the community if you looked into the stdlib `algorithm.sort` and had it switch to insertion at small N. It doesn't right now. The trouble you run into with most array/table based approaches is that the alphabet is so much larger than the word length. So, a Fenwick Tree or anything with "implicit order" from the indices is just too big to iterate over effectively. Without implicit order you wind up still having to sort "used letters" and letter repeats are not usually very dramatic. About the only thing I can think of that _might_ help Unicode (beyond just in-place/d-danger tricks) in that counting-sort-style approach is another structure from Preston Briggs in a 1993 Rice University tech report/thesis. That sparse-dense thing can sometimes be useful to manage iteration over very sparse subsets of small-ish universes like this Unicode code point space. It's conceivable you could use that as a Unicode histogram, but you would still have to sort a small array (number of _unique_ letters). So, it's hard to say it'd be faster than the insertion-or-merge idea, but it's obscure enough to be worth mentioning as also neglected. 
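A minimal sketch of the "wrapper sort" idea from above (the cutoff of 20 is the rough guess mentioned there, not a tuned value, and `hybridSort` is just an illustrative name):

```nim
import algorithm

proc insertionSort[T](a: var openArray[T]) =
  ## Classic insertion sort; tends to win for small N on modern
  ## CPUs due to cache effects.
  for i in 1 .. a.high:
    let x = a[i]
    var j = i - 1
    while j >= 0 and a[j] > x:
      a[j + 1] = a[j]
      dec j
    a[j + 1] = x

proc hybridSort[T](a: var seq[T]) =
  ## Switch to insertion sort below a small-N cutoff, otherwise
  ## fall back to the stdlib merge sort.
  if a.len < 20:
    insertionSort(a)
  else:
    a.sort(system.cmp)

var w = @['n', 'i', 'm', 'r', 'o', 'c', 'k', 's']
hybridSort(w)
echo w  # the letters in sorted order
```

Since most dictionary words are short, the insertion branch would handle the vast majority of calls in the anagram problem.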
Of course, **if you find yourself running this calculation a lot on a small-ish fixed set of basically static dictionaries**, the single biggest optimization you might do is to simply **save the signature index** to a file. You could extend @juancarlospaco's idea and have a `const Table` built into the binary executable on a per-dictionary basis (e.g. 1 program file per dict). You would absolutely have to time several hundred or even thousand anagram queries to get a reading. Or if you wanted a generic program to work with many dictionary files, then instead of a `Table` you could sort the signatures themselves paired with their words. Then you could just save that sorted list to a file and do lookup via binary search on the file with at most O(lg(Nwords)) disk probes per anagram query (plus time to open/mmap the file). You could also hash-structure that file to get that down to 1 probe at substantial code baggage. ([https://github.com/c-blake/suggest](https://github.com/c-blake/suggest)/ has a fully worked out example of a much more complex persistent hash-structure store along those lines.) You _could_ try using Nim's `marshal` module to save your `Table` to disk and load it back whenever you want it, but in this case I think the loading of the table would be no faster than just building it from scratch. You could also "save in memory" by having a long-running server that processes each dictionary just once and then answers anagram queries over a local network or pipe or something. Personally, I think _all_ of the above are good coding exercises.
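To make the sorted signature-word index concrete, here is one possible in-memory sketch using a sorted `seq` of (signature, word) pairs and `algorithm.lowerBound` for the binary search (the tiny dictionary is made up for illustration; a real program would read it from a file, and persisting the sorted seq to disk is left out):

```nim
import algorithm

proc signature(word: string): string =
  ## The word's letters in sorted order act as its anagram signature.
  for ch in sorted(word, system.cmp):
    result.add ch

let dict = @["pots", "stop", "tops", "nim", "min", "opts"]

# Build the (signature, word) index once and sort it.
var index: seq[(string, string)]
for w in dict:
  index.add (signature(w), w)
index.sort(system.cmp)

proc anagrams(query: string): seq[string] =
  ## Binary search for the first entry with the query's signature,
  ## then scan forward while the signature still matches.
  let sig = signature(query)
  var i = index.lowerBound((sig, ""), system.cmp)
  while i < index.len and index[i][0] == sig:
    result.add index[i][1]
    inc i

echo anagrams("spot")  # all dictionary words sharing signature "opst"
```

Saving `index` to a file and binary-searching the file directly would give the O(lg(Nwords))-probe lookup described above without rebuilding anything at startup.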
Re: Suggestions for optimization?
```nim
import strutils  # for strip and splitLines

const file = staticRead("dict.txt").strip.splitLines
```
Re: Suggestions for optimization?
Besides @SolitudeSF's important suggestion, an algorithmic improvement you can use is to replace `algorithm.sorted(lower, system.cmp)` with a hand-written "counting sort". For ASCII (i.e. 1-byte textual characters) this is (very?) easy and should be quite a bit faster than `algorithm.sorted`. Something like this:

```nim
proc sortByLetter(word: string): string =
  var cnt: array[26, int8]
  for ch in word:                 # simple counting sort
    cnt[ord(ch) - ord('A')].inc
  for i, c in cnt:
    for n in 0 ..< c:
      result.add chr(ord('A') + i)
```

You probably need some `toUpper` in there if your dictionary is stored in lowercase (although that could also be done prior to entry to the above `proc`, or you could just change `A` to `a`). That `int8` type covers up to 127 repeats of a single letter, but I think the longest real word in any spoken language with dictionaries is shorter than that. You could always count and print a warning, or switch to `int16` or `int32`, which could help if chemical elements or really crazy stuff is possibly in play. The same idea could be adapted to Unicode, but then you would probably want a `Table`, not an `array`, which would slow it down more than the above minor modifications. I actually think this is a good problem context in which to introduce the oft-neglected counting sort, which can also be used with (unlike here) non-empty satellite data (at some slightly increased code baggage).
Re: Suggestions for optimization?
use -d:danger if you need performance