ok to merge "overload resolution" label into "Symbol Resolution" label in github issues?
[https://github.com/nim-lang/Nim/issues?q=label%3A%22symbol+resolution%22](https://github.com/nim-lang/Nim/issues?q=label%3A%22symbol+resolution%22) and [https://github.com/nim-lang/Nim/issues?q=label%3A%22overload+resolution%22](https://github.com/nim-lang/Nim/issues?q=label%3A%22overload+resolution%22)
Re: Change Nim colour on GitHub
Thanks, I changed the color to the one you found for [https://github.com/github/linguist/pull/4900](https://github.com/github/linguist/pull/4900) :)
Re: Mysterious compile error "system module needs: nimDestroyAndDispose" with --gc:orc
I don't think `--gc:orc` on 1.2.2 is usable, you need to either use devel or wait for 1.4
Re: Copy-on-write container
Did you watch Andreas's talk on ARC and ORC from Saturday's NimConf? He has a running example where he builds a toy `seq` from scratch, including the ref-counting. Basically you can implement your own ref-counting, if your object is a non-`ref`. You give it a `ptr` to the shared state, and put a refcount in the shared state, and then implement the ARC hooks like `=destroy` and `=sink`.
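As a rough illustration of what that talk describes, here is a minimal sketch of manual ref-counting for a non-`ref` object: a `ptr` to shared state carrying its own refcount, plus the ARC hooks. All names here are made up for illustration, and note that recent compilers spell the copy hook `=copy`:

```nim
type
  SharedBuf = object
    refcount: int
    data: string

  CowString = object
    p: ptr SharedBuf      # non-ref object holding a ptr to shared state

proc `=destroy`(s: var CowString) =
  if s.p != nil:
    dec s.p.refcount
    if s.p.refcount == 0:
      `=destroy`(s.p.data)   # free the string payload
      dealloc(s.p)

proc `=copy`(dst: var CowString, src: CowString) =
  if dst.p == src.p: return
  `=destroy`(dst)
  dst.p = src.p
  if dst.p != nil:
    inc dst.p.refcount       # share, don't deep-copy

proc initCow(data: string): CowString =
  result.p = create(SharedBuf)
  result.p.refcount = 1
  result.p.data = data

# copy-on-write: only clone when the state is actually shared
proc isUniquelyOwned(s: CowString): bool =
  s.p != nil and s.p.refcount == 1
```

A `mutate` proc would then check `isUniquelyOwned` and clone the shared buffer first if it returns false.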
Re: New garbage collector --gc:orc is a joy to use.
Cool! I'm about to embark on multithreading, now that I've gotten my async networking code working on a single thread. Trying to switch over to gc:orc but [having a few problems](https://forum.nim-lang.org/t/6485).

> Now you can just pass deeply nested ref objects between threads and it all works.

Is it really that simple? Because as @araq has stated, ARC's retain/release are _not_ atomic. That implies to me that a `ref` object can never be used concurrently on multiple threads. So I think by "pass" you mean "move" - the way you've described your code, it sounds like the work queues need to use move semantics, so the "push" operation takes an object as a `sink` parameter. Is that accurate?
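To make the "pass means move" idea concrete, here is a sketch of a push that takes its element as a `sink` parameter, so the last reference is moved into the queue rather than retained on the sending side. `Msg`, `Inbox` and `push` are made-up names for illustration:

```nim
type
  Msg = ref object
    payload: string
  Inbox = object
    msgs: seq[Msg]

proc push(inbox: var Inbox, m: sink Msg) =
  inbox.msgs.add m     # `m` is moved into the queue, not copied

var inbox: Inbox
var m = Msg(payload: "work item")
inbox.push(move m)     # explicit move; `m` is nil afterwards
```

With `--gc:arc` the compiler also performs the move implicitly when `m` is not used again after the call.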
Mysterious compile error "system module needs: nimDestroyAndDispose" with --gc:orc
My code is working well, and I'm trying out `--gc:orc` now. So I added that flag to my `nim.cfg` and ran `nimble build`. OK, it compiles. But then `nimble test` fails as soon as it compiles anything. In fact, if I run `nim c tests/.nim`, where "" is anything, _whether or not such a file exists_, I get this error:

```
/Users/snej/.choosenim/toolchains/nim-1.2.2/lib/system/assertions.nim(22, 11) template/generic instantiation of `sysFatal` from here
/Users/snej/.choosenim/toolchains/nim-1.2.2/lib/system/fatal.nim(49, 5) Error: system module needs: nimDestroyAndDispose
```

My other source directories don't have this problem. The only thing special about `tests/` is that it has a `config.nims` file, containing the line `switch("path", "$projectDir/../src")`. I believe this was created automatically by `nimble init`. What's going on here??
Re: Change Nim colour on GitHub
Well, the PR has already been submitted: [https://github.com/github/linguist/pull/4900](https://github.com/github/linguist/pull/4900), and it passes the tests. This is the new proposed color: [https://www.color-hex.com/color/deb012](https://www.color-hex.com/color/deb012)
Re: Copy-on-write container
So ... it's been a while, and an awful lot has happened on the arc/orc front, so hopefully it's OK to bump this up:

* What's a good strategy for implementing copy-on-write with --gc:arc and/or --gc:orc?
* Is it possible to inspect/access the refcounter, implicit or explicit, from --gc:arc and/or --gc:orc?

@mratsim switched away from CoW, but it is put to great use in APL / J / K implementations. In general, they simulate a "value only" system by using only references and reference counts; anything that is only referenced once is just modified in place when you want to modify it. So you get easy-to-debug value semantics, and (most of the time, with very little care required) reference performance. This is in contrast with e.g. Clojure's persistent vector implementation, which clones a limb on _every_ modification and thus generates a lot of garbage, or with R (v3, haven't looked at the v4 changes) using inaccurate refcounts, which requires some care to not generate too much garbage. As --gc:arc and --gc:orc already maintain refcounts (some in memory, some in the AST), sane access to them would greatly simplify such CoW schemes - which otherwise have to duplicate all the refcounts (or use ptrs instead ...). Any ideas / docs / pointers?
Re: NvP: s = s & 'x'
Thanks, I tried your fusedAppend template but it didn't work (it compiled, but didn't change anything). I think for whatever reason it isn't getting used, because I added `echo a` as a 2nd line and nothing was displayed. It may be true that no one writes `s = s & t` (I doubt that), but if a 2-line template can change this into `s &= t`, I'd suggest it's worth adding to the compiler for a 250x speed increase. This was a very unexpected speed bump to me. Here's a test comparing `s.add('x')` with `s &= 'x'`. It seems like these should have identical performance, but:

```
ms:nim jim$ cat str1.nim
var s: string
for i in 0..100_000_000:
  s.add('x')
echo len(s)
ms:nim jim$ /usr/bin/time -l ./str1
10001
        0.88 real         0.73 user         0.14 sys
 440184832  maximum resident set size
    107485  page reclaims
         1  block output operations
         3  involuntary context switches
ms:nim jim$ cat str1c.nim
proc main() =
  var s: string
  for i in 0..100_000_000:
    s &= 'x'
  echo len(s)
main()
ms:nim jim$ nim c -d:danger str1c
Hint: 14213 LOC; 0.587 sec; 16.016MiB peakmem; Dangerous Release build; proj: /Users/jim/nim/str1c; out: /Users/jim/nim/str1c [SuccessX]
ms:nim jim$ /usr/bin/time -l ./str1c
10001
        0.54 real         0.42 user         0.11 sys
 326619136  maximum resident set size
     79751  page reclaims
         8  page faults
         1  voluntary context switches
         5  involuntary context switches
```
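For reference, the kind of rewrite being discussed can be expressed as a term-rewriting template (see the manual's "Term rewriting macros" section). This is a hedged sketch, not necessarily the template that was tried above; whether the compiler actually applies the pattern can be checked with `--hint[Pattern]:on`:

```nim
# Rewrites occurrences of `s = s & c` into an in-place append.
template fusedAppend*{s = s & c}(s: string, c: char) =
  s.add(c)
```

Term-rewriting templates only fire on code compiled after the template is in scope, which could explain a template that compiles but "doesn't change anything".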
Re: NvP: s.add('x') 100M times
@HashBackupJim - `newSeqOfCap[T](someLen)` also exists and, yes, pre-sizing can help a lot in Nim (and almost any lang that supports it). Profile-guided optimization at the gcc level can also help Nim run timings a lot. In this case 1.6x to 1.9x for various `gc` modes. [https://forum.nim-lang.org/t/6295](https://forum.nim-lang.org/t/6295) explains how. LTO also helps, since most of the boost of PGO is probably from well-chosen inlining. @sschwartzer - not only string benchmarks... interpreter start-up, etc. Anyway, this isn't a Python forum, and benchmarking "always depends". :-) :-) Someone else should reproduce my result that `--gc:arc` uses more memory than `gc:none` for the original str1.nim or one with a `main()` (or both). I think this kicking of the tires has probably uncovered a real problem.
Re: NvP: s.add('x') 100M times
Thanks for the tip. I knew about this sizing trick for tables, and it did save a lot of RAM in a small test (large table) because it avoided resizes, but I wasn't aware strings had a similar thing. I read the Nim manual, but stuff only sticks when doing a lot of coding in a new language, and I'm not there yet.
Re: NvP: s.add('x') 100M times
The reason I suggest comparing against Python 3 is that Python 2 is no longer supported by the CPython project. Also, by far most of the people who start with Python will use Python 3. If Python 2 is faster in many string benchmarks that's most likely because the default string type in Python 2 is simpler (just bytes) vs. Python 3 (code points). If you see your data as just bytes and want to compare on these grounds, compare with Python 3's `bytes` type. Now, when benchmarking Nim vs. Python, should you use a Python version and/or code style because it's more similar in implementation to Nim or should you use a Python version and/or code style because that's how most people would use Python? :-) By the way, I think it's similar to the question: When benchmarking Nim, should you use the fastest implementation or the most idiomatic/straightforward implementation? I guess it depends.
wbt_nim: A Nim API for the WhiteboxTools geospatial data analysis library
I just finished writing and publishing [wbt_nim](https://github.com/jblindsay/wbt_nim), a Nim-based API for the [WhiteboxTools](https://jblindsay.github.io/wbt_book/preface.html) geospatial data analysis library, which we develop in my research lab. WhiteboxTools has more than 440 tools for analyzing raster, vector, and LiDAR data. This includes common GIS, image processing, hydrological, geomorphometric (DEM), geostatistical, and LiDAR analysis routines. This API is really just a convenience wrapper around the WhiteboxTools CLI. We're really starting to use and love Nim in my lab, the Geomorphometry and Hydrogeomatics Research Group, and this library allows us to interact with WBT through nice Nim scripts to automate complex workflows. I hope you also find a use for it. As always, feedback is welcome.
Re: Change Nim colour on GitHub
I tested to see how #ffc200 would look on GitHub (changing the color manually using inspect element). You can see it on the left above. It might be a bit too light against GitHub's white (I kind of see it out of focus). A slightly darker option (on the right) could be [https://www.color-hex.com/color/ebb800](https://www.color-hex.com/color/ebb800). Another advantage of the #e**bb8**00 option is that it contains the name of [BB-8](https://en.wikipedia.org/wiki/BB-8).
Re: NvP: s.add('x') 100M times
I also usually find Py2 much faster than Py3. Pypy usually helps. Cython more so, but with much more work. Anyway, the obviously better way to do it in Nim (which I always assumed was "never the point") is

```nim
var s = newStringOfCap(100_000_001)  # or whatever
for i in 0..100_000_000:
  s.add('x')
echo len(s)
```

which runs 2x as fast as otherwise and uses exactly the right amount of memory. I mention it just in case @HashBackupJim was unaware.
Re: NvP: s.add('x') 100M times
Thanks for the tips. As I mentioned, I am kicking the tires with Nim to see how it behaves. My goal isn't to find the fastest way to create a 100M string containing all x's in either Python or Nim, but rather for me to get a feel for how one behaves vs the other. If I run stupid tests like these and they all come out great with Nim, fantastic! That gives me a lot of confidence in it. If I get unexpected results, I'd like to understand why. I have a 200K line Python 2.7 app. When I have run small tests comparing Python 2 vs 3, Python 2 is often 50% faster (I don't need or want Unicode). Maybe if the whole app was running in Py3 it would overall be faster, but based on what I've seen, that seems uncertain. So for my particular situation, I don't care how Nim compares to Py3. For grins, I ran the s = s + 'x' string test on Python2 and Python3.6.8 (all I have), and Py3 was 44% slower.
Re: New garbage collector --gc:orc is a joy to use.
There is this: * [https://nim-lang.org/docs/gc.html](https://nim-lang.org/docs/gc.html) * [https://www.youtube.com/watch?v=aUJcYTnPWCg](https://www.youtube.com/watch?v=aUJcYTnPWCg) * [https://www.youtube.com/watch?v=yA32Wxl59wo](https://www.youtube.com/watch?v=yA32Wxl59wo) * [https://nim-lang.org/araq/destructors.html](https://nim-lang.org/araq/destructors.html) * [https://nim-lang.org/araq/ownedrefs.html](https://nim-lang.org/araq/ownedrefs.html)
Re: Norm 2.0.0
> I can imagine migrations that don't change the schema. Data migrations can be done with `select` and `update`. Although data migrations are generally unwanted.
Re: NvP: s.add('x') 100M times
For the record: In Python 3, "some string" is a unicode string where the items are code points. The model more similar in semantics to the Nim version is the `bytes` type. That said, I get the same time for multiplying `b"x"` (`bytes`) as for `"x"` (`str`).
Re: Norm 2.0.0
I can imagine migrations that don't change the schema. For example, you have a `varchar` field and change the format from `part1 - part2` to `part2 - part1`, i.e. swap two strings in the same string attribute.
Re: NvP: s.add('x') 100M times
Two things about the Python version:

* Using `xrange` tells me you're on Python 2. I suggest you use a current/recent Python 3 version for your benchmarks.
* The recommended way to concatenate a big number of strings is with `separator.join(iterable)`. So you could use `s = "".join("x" for _ in range(100_000_000))` - but the Pythonic version would actually be `s = 100_000_000 * "x"`.
Re: NvP: s.add('x') 100M times
I don't see how your linked algo explains deltas across gcs if that `3 div 2` growth happens for all of them. The memory `gc:arc` uses here seems more like the sum of all prior allocations, not "up to 1.5x what's needed". Actually, `gc:arc` uses 1.25x more mem than `gc:none` (312MB) in a test I just tried.
Re: Norm 2.0.0
> I'm not sure about the removal of migrations though, that will make it very hard to create any libraries/"plugins" as there won't be a standardized way of moving between versions.

You still can run migrations, you just write them in pure SQL. Migrations are hard. Even if we take the most trivial case, adding a column, how do you populate its value for existing records? Should there be a default value? What if that value should be calculated for each record? Should it be NULL? Should it be the SQL representation of the default value of the respective Nim type? If we want to cover that with a proc, inevitably, calling this proc will look something like this:

```nim
dbConn.addColumn(newUser(), "newColumn", default = "0")
```

which, I believe, is just a harder-to-read version of this:

```nim
dbConn.exec("ALTER TABLE User ADD COLUMN newColumn DEFAULT 0")
```

> My experience is only really with Django's ORM though, so I might be missing something

Django ORM generates migrations automatically, which is really awesome. However, this is not the only way. In RoR, for example, you just write migrations in SQL. Generating migrations automatically is possible, but it's also a lot of work. I don't think I can do that.
Re: Naming conventions - need leading underscore
I'm interested in the following use case:

```nim
type
  MyType* = object
    x: int

...

proc x*(mt: MyType): int =
  # This is also the user API for the type.
  # Have some side effect, for example update a cache.
  ...
  mt.x

var mt = MyType(x: 7)
# Is there a way to call the proc `x` instead of accessing the field `x`?
echo mt.x
```

For me the question is not so much about the compiler enforcing anything (I suppose we have the asterisk to declare a field "private" :) ), but about a naming convention to distinguish between the direct field access and the call to the accessor proc. As I said earlier in this thread, you can write `mt.x()` to denote that you want the `proc` call, but that's rather subtle and probably error-prone, as you may forget to write the brackets. Also, it's confusing that `mt.x` inside the above module will access the field whereas the same `mt.x` in code that imports the above module will access the proc. I understand why that is, but it still may cause confusion during maintenance.

> And further, when a leading underscore should indicate private symbols, then the compiler had to enforce it, so that the visible appearance is always correct.

I don't see a need for this. For comparison, [NEP 1](https://nim-lang.org/docs/nep1.html) suggests a lot of naming conventions, but none of them is enforced by the compiler. Still, I think the conventions are useful. For clarification, I don't ask specifically for making a leading underscore significant. It just would be nice to have _some_ "non-cumbersome" way to distinguish the two uses of `x` in the above example. I don't mind that much _how_ this would be achieved, as long as it doesn't clutter the code too much. I'm open to suggestions. If we have a way that already works with the current compiler, for example a good naming convention, that would be especially nice. Actually I'd prefer if we did _not_ need a compiler extension for the distinction. :-)
Re: New garbage collector --gc:orc is a joy to use.
Are there detailed documents about the gc options in Nim, specifically `arc` and `orc`?
Re: Norm 2.0.0
> it is helpful to have a tracking of which schema is in the database and which > one is expected by the code. [Norman](https://moigagoo.github.io/norman/norman.html) is a migration manager for Norm. It lets you apply and undo migrations, preserving order. I have to update it for Norm 2, possibly rewriting it since Norm has changed a lot. As of now, it just stores the name of the last applied migration in a file called `.last`. I'd rather store that information in the DB, but that requires more work. Also, I find your idea with hashes really interesting.
Re: Norm 2.0.0
> Doesn't this just waste performance creating a duplicate object that's going to be discarded?

To insert a row, you must first instantiate a model to hold the data for that row. So, either way, you're creating an object; you can't avoid that. The problem is that sometimes you don't care about that object. You need it just to do this one DB operation, like insert. And if you create those objects with `var`, they persist after your operation is done. This code sample demonstrates how you can get the object without introducing throwaway variables. Also, I don't think it has any negative effect on performance. It does allocate memory for the object but, as I said, there's no way around that.
Re: How to convert openarray[byte] to string?
> @oswjk solution is correct.

It would be nice to have this in the standard library, so that one doesn't have to unleash `unsafeAddr` just to do a simple conversion.

> openarray[byte] are not nul-terminated unlike strings and that would cause issues if you are interfacing with C code that expect nul-terminated cstring.

I'm not. The problem is in pure Nim - the cast returns a garbage `string` object, as shown in the above example. It appears to be misinterpreting the raw bytes in the `openarray` as if they were a `string` object, so e.g. the string's length is the first few bytes of the array interpreted as a little-endian int. Again, I don't know the exact semantics of Nim's `cast[]`, so this might just be misuse of it. But it's dangerous that it works with one type (`seq`) but fails with a conceptually similar type.
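A copying conversion along the lines discussed (a candidate for the stdlib helper mentioned above) can be sketched like this; `toString` is a name chosen here for illustration. It copies the bytes instead of casting, so the resulting string has a correct length field and NUL terminator:

```nim
proc toString(bytes: openArray[byte]): string =
  ## Copy the raw bytes into a fresh, properly formed Nim string.
  result = newString(bytes.len)
  if bytes.len > 0:
    copyMem(addr result[0], unsafeAddr bytes[0], bytes.len)
```

Unlike `cast[string]`, this never reinterprets the array's data as a string header, so it is safe for any input.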
Re: NvP: s.add('x') 100M times
I don't know much about Python, but it seems strings are immutable, meaning each time you add, it allocates a new string with `len+1`, which explains why memory usage is about 100MB and why it's slow. In Nim, on the other hand, strings are mutable. They are [resized](https://github.com/nim-lang/Nim/blob/devel/lib/system/seqs_v2.nim#L103) only when len+1 becomes bigger than the capacity, and the new capacity follows this [algo](https://github.com/nim-lang/Nim/blob/devel/lib/system/sysstr.nim#L25). That explains the extra space and why it's faster.
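A rough sketch of the geometric growth rule being linked to (illustrative, not the actual stdlib code): capacity grows by about 3/2, so repeated adds are amortized O(1), at the cost of the buffer being up to roughly 1.5x the final length.

```nim
# nextCap is a made-up name; the real rule lives in lib/system/sysstr.nim
proc nextCap(oldCap, needed: int): int =
  result = max(needed, oldCap + oldCap div 2)

# e.g. a buffer at capacity 100 that needs one more byte
# grows to capacity 150 rather than 101
```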
Re: NvP: s.add('x') 100M times
Yup. Just what I was seeing, @b3liever. No `main()` difference to the RSS delta, and a very noticeable delta in a non-intuitive direction. So, either our intuitions are wrong in a way which should be clarified or there's a problem which should be fixed. Maybe a GitHub issue?
Re: NvP: s.add('x') 100M times
nim -v:

```
Compiled at 2020-06-23
git hash: c3459c7b14
```

With `nim c -d:danger --panics:on --gc:arc`:

```
Maximum resident set size (kbytes): 395352
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 99524
Voluntary context switches: 1
Involuntary context switches: 20
```

With `nim c -d:danger --panics:on` (default gc):

```
Maximum resident set size (kbytes): 282964
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 70471
Voluntary context switches: 1
Involuntary context switches: 10
```

Both tests are with a main function and it makes no difference.
Re: NvP: s.add('x') 100M times
@cumulonimbus - I tried that. It didn't alter the behavior I was seeing. If this behavior was not always there, then my guess is that some arc bug was causing a crash, got fixed, and now the fix causes this. Regardless of whether it was always there or appeared by bug-jello-squishing accident as I theorize, we should probably have a little suite of "memory use regression" tests to prevent stuff like the scenario I described. Such a suite would be a kind of "correctness testing" for deterministic memory management. It could have a "fuzzy/ballpark compare". Maybe we have such already, perhaps informally? If so, we should add this `str1` to it. If not, it can be the first test. :-)
Re: NvP: s.add('x') 100M times
Possibly something to do with this being at module scope and not inside a function? Can't think of a reason why for this one, but many benchmarks change significantly (for the better) when put inside a function.
Re: Norm 2.0.0
Congrats on the release, very interesting to hear about the changes. I'm not sure about the removal of migrations though, that will make it very hard to create any libraries/"plugins" as there won't be a standardized way of moving between versions. My experience is only really with Django's ORM though, so I might be missing something
Re: Change Nim colour on GitHub
Nice! I must say, I do like that colour.
New garbage collector --gc:orc is a joy to use.
Nim has a new garbage collector called Orc (enabled with --gc:orc). It's a reference counting mechanism with cycle detection. The most important feature of --gc:orc is much better support for threads, by sharing the heap between them. Now you can just pass deeply nested ref objects between threads and it all works.

My threading needs are pretty pedestrian. I basically have a work queue with several worker threads, and I need work done. I need to pass large nested objects to the workers, and the workers produce large nested data back. The old way to do that is with channels, but channels copy their data. Copying data can actually be better and faster with "share nothing" concurrency, but it's really bad for my use case of passing around large nested structures. Another way was to use pointers, but then I was basically writing C with manual allocations and deallocations, not Nim! This is why the new --gc:orc works so much better for me.

You still need to use and understand locks, but it's not that bad. I just use two locks, for the input queue and the output queue. The threads try to acquire and release - hold the locks - for as little time as possible. No thread holds more than 1 lock at a time. See my threaded work example here: [https://gist.github.com/treeform/3e8c3be53b2999d709dadc2bc2b4e097](https://gist.github.com/treeform/3e8c3be53b2999d709dadc2bc2b4e097) (Feedback on how to make it better is welcome.)

Before, creating objects and passing them between threads was a big issue. The default garbage collector (--gc:refc) gives each thread its own heap. With the old model, objects allocated on one thread had to be deallocated on the same thread. This restriction is gone now!

Another big difference is that it's more deterministic and supports destructors. The compiler can also infer where the frees will happen and optimize many allocations and deallocations away with move semantics (similar to Rust). Sadly it can't optimize all of them away; that is why reference counting exists. Also the cycle detector will try to find garbage cycles and free them as well. This means I do not have to change the way I write code. I don't have to mark my code in any special way and I don't really have to worry about cycles. The new Orc GC is simply better. This makes the new garbage collector --gc:orc a joy to use. (If there are any factual errors about the GC, let me know.)
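The two-lock queue pattern described above can be sketched roughly like this (the linked gist is the real example; types and names here are illustrative, and `WorkQueue` would be instantiated once each for input and output):

```nim
import locks

type
  Work = ref object
    data: string
  WorkQueue = object
    lock: Lock
    items: seq[Work]

proc init(q: var WorkQueue) =
  initLock q.lock

proc push(q: var WorkQueue, w: Work) =
  # hold the lock only for the duration of the push
  acquire q.lock
  q.items.add w
  release q.lock

proc tryPop(q: var WorkQueue, w: var Work): bool =
  # hold the lock only long enough to pop one item
  acquire q.lock
  if q.items.len > 0:
    w = q.items.pop()
    result = true
  release q.lock
```

Since no proc ever touches more than one queue, no thread holds more than one lock at a time, which rules out lock-ordering deadlocks.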
Re: NvP: s.add('x') 100M times
I don't disagree. Might need delving into the generated C to figure out, but I'm guessing my results are not hard to reproduce. If they are let me know how I can best help.
Re: NvP: s.add('x') 100M times
Just did a non-PGO regular `-d:danger` run. Times went up 1.9x but memory usage patterns were the same with `gc:arc` using much more RSS than `gc:boehm` or `gc:markAndSweep`. It's a pretty tiny program.
Re: NvP: s.add('x') 100M times
For this particular benchmark `--gc:boehm` uses the least memory and time for me on nim 28510a9da9bf2a6b02590ba27b64e951a208b23d with gcc-10.1 and PGO but that least is still 2.5x the RSS of python-2.7.18. Not sure why, but yeah it is 35x faster than Python.
Re: NvP: s.add('x') 100M times
Huh? Tracing GCs should never win this. Something strange is going on... :-)
Re: NvP: s.add('x') 100M times
Thanks. I tried that just now:

```
ms:nim jim$ nim c -d:danger --gc:arc str1
Hint: 11937 LOC; 0.390 sec; 12.988MiB peakmem; Dangerous Release build; proj: /Users/jim/nim/str1; out: /Users/jim/nim/str1 [SuccessX]
ms:nim jim$ /usr/bin/time -l ./str1
10001
        0.90 real         0.73 user         0.15 sys
 440176640  maximum resident set size
    107478  page reclaims
         5  page faults
         1  voluntary context switches
         4  involuntary context switches
```

Does this need 1.3x?
Re: NvP: s = s & 'x'
Use a `main()` for bench.
Re: NvP: s = s & 'x'
Or `s &= x`
NvP: s.add('x') 100M times
This string test uses s.add('x') instead of s = s & x for Nim, and s += 'x' for Python.

```
ms:nim jim$ cat str1.nim
var s: string
for i in 0..100_000_000:
  s.add('x')
echo len(s)
ms:nim jim$ nim c -d:danger str1
Hint: 14210 LOC; 0.275 sec; 15.977MiB peakmem; Dangerous Release build; proj: /Users/jim/nim/str1; out: /Users/jim/nim/str1 [SuccessX]
ms:nim jim$ /usr/bin/time -l ./str1
100000001
        0.68 real         0.56 user         0.10 sys
 326627328  maximum resident set size
     79753  page reclaims
         8  page faults
         1  voluntary context switches
         6  involuntary context switches
ms:nim jim$ cat str1.py
s = ''
for i in xrange(100000000):
    s += 'x'
print len(s)
ms:nim jim$ /usr/bin/time -l py str1.py
100000000
       20.74 real        20.67 user         0.06 sys
 105099264  maximum resident set size
     25834  page reclaims
         9  involuntary context switches
```

Nim blows Python out of the water on this, though it uses 326M of RAM to create a 100M string. Python's memory use is good, only 105M for a 100M string, but it's slow. For these tests, I'm not so much looking to find the best way to create a 100M string in Nim or Python. I'm comparing the two to find out where there may be large performance differences, hopefully in Nim's favor, and to get a better understanding of how Nim works.
Re: NvP: s.add('x') 100M times
Memory consumption is usually _much_ better with `--gc:arc`.
Re: Change Nim colour on GitHub
> I am not sure why since the rst parser seems to support it, maybe it would be > fixed by a forum update or maybe I just did something wrong. Security. You are not allowed to destroy the site's layout. ;-)
NvP: s = s & 'x'
Today I'm comparing string operations with Nim 1.2.1 vs Python. This test concatenates a letter to a string 1M times.

```
ms:nim jim$ cat str1a.nim
var s: string
for i in 0..1_000_000:
  s = s & 'x'
echo len(s)
ms:nim jim$ nim c -d:danger str1a
Hint: 14210 LOC; 0.565 sec; 16.02MiB peakmem; Dangerous Release build; proj: /Users/jim/nim/str1a; out: /Users/jim/nim/str1a [SuccessX]
ms:nim jim$ /usr/bin/time -l ./str1a
1000001
       45.02 real        44.98 user         0.03 sys
  48394240  maximum resident set size
     11825  page reclaims
         8  page faults
         1  voluntary context switches
        16  involuntary context switches
ms:nim jim$ cat str1a.py
s = ''
for i in xrange(1000000):
    s = s + 'x'
print len(s)
ms:nim jim$ /usr/bin/time -l py str1a.py
1000000
        0.22 real         0.21 user         0.00 sys
   6078464  maximum resident set size
      1686  page reclaims
        11  involuntary context switches
```

I tried enclosing the Nim test in a proc. That did reduce RAM from 48.4M to 46M, but the runtime was still 45s.
Re: Change Nim colour on GitHub
Thanks to @PMunch we now have a nice image from which to pick (currently) valid colors for GitHub: [https://github.com/PMunch/colourfinder](https://github.com/PMunch/colourfinder). In the image, the blacked-out areas are those that the color-proximity test in linguist would declare invalid. This is based on the CIEDE2000 color distance, see [https://github.com/pietroppeter/color_distance](https://github.com/pietroppeter/color_distance). My proposal would be to use #ffc200: [https://www.color-hex.com/color/ffc200](https://www.color-hex.com/color/ffc200)

* it is in the middle of the spectrum (full saturation).
* it is between Yellow (color of the crown) and Orange (Araq's favorite color).

**Note 1**: to pick a color I used [https://pinetools.com/image-color-picker](https://pinetools.com/image-color-picker)

**Note 2**: I was not able to use the `.. raw::html` rst directive in the forum to add ``. I am not sure why, since the rst parser seems to support it; maybe it would be fixed by a forum update or maybe I just did something wrong.
Re: NvP: s = s & 'x'
Use `.add` instead. We don't optimize `` s = s & 'x'`` because nobody writes it this way.
Omni - DSL for low level audio programming
Hello everyone! For those who didn't attend NimConf, I just wanted to announce here a project that I have been working on for the past 8 months: [Omni](https://vitreo12.github.io/omni). Omni is a new DSL to program audio algorithms in. It's been entirely written in Nim, leveraging the power of metaprogramming: the whole syntax, in fact, is built using Nim macros and templates. For a more in-depth look at how it works, feel free to check the repo [here](https://github.com/vitreo12/omni). Just note that the code is very much still an alpha, so errors are expected for some corner cases. If you find any, it would be great if you could report them on GitHub :) As a sneak peek, a simple sinusoidal oscillator would look like this in Omni code:

```
ins: 1
outs: 1

init:
    phase = 0.0

sample:
    freq_incr = in1 / samplerate
    out1 = sin(phase * TWOPI)
    phase = (phase + freq_incr) % 1.0
```

If this sounds interesting to you, I'd suggest you check out the talk from NimConf at this [link](https://youtu.be/ruT7sbs5O-Q). Let me know what you think! Francesco
Re: Norm 2.0.0
I don't quite understand this:
Re: Dictionary syntax
In other words, it seems "excessive" to you because you cannot imagine ever using anything but the stdlib `Table` "behind" `{}`. Never switching out something like that is "famous last words" in some circles. :-) So, the verbosity guards against the regret of hard-coding.
Re: Dictionary syntax
The `{}` syntax is what the Lisp world calls an "association list" and is more general than what the Python world calls a dictionary literal/constructor syntax. In Nim, `{a: b, c: d, ...}` is just syntactic sugar for `[(a, b), (c, d), ...]` (or `@[]` if you prefer a seq analogy, but it's at compile time, when either seq or array lengths are "known"). This is good in that you can use the same syntax for multiple back-end associative array implementations (like a simple sorted array, trees, alternate hash tables, etc.). In a sense the syntax is only loosely coupled to the semantics here. The cost is the slight extra verbosity of tacking on a `.toTable`, `.toSortedArray`, `.toBST`, `.toBTree` or whatever, which is deemed an acceptable trade-off for the generality. Nim is not Python, and you do have to put types/type-related things in various places.
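Concretely, the desugaring looks like this (`toOrderedTable` is just one example of an alternate back-end from the stdlib):

```nim
import tables

# `{}` builds a compile-time array of (key, value) tuples...
let pairs = {1: "one", 2: "two"}     # type: array[2, (int, string)]

# ...and any container constructor can consume it:
let t = pairs.toTable                # Table[int, string]
let ot = {1: "one", 2: "two"}.toOrderedTable
assert t[2] == "two"
```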
Dictionary syntax
what does this syntax `var a = {1: "one", 2: "two"}` exist for if it doesn't directly create a dictionary (table)? why is it not possible to make a dictionary without calling the `toTable` method (`var a = {1: "one", 2: "two"}.toTable`)? it seems a bit excessive to me.
Re: Name of nim file at compile time
What I mean is: I tried it and it didn't work, as it would return the module where currentSourcePath is called, and not the compiled nim file.
Re: Naming conventions - need leading underscore
> If your type has an x property, and you want a x= setter for it, you'll have to rename x to something else, to allow proc x=() to exist

You don't have to rename x. As mentioned in the [manual](https://nim-lang.org/docs/manual.html#procedures-properties), the resolution order of the dot is well defined:

> This accesses the 'host' field and is not a recursive call to `host=` because the builtin dot access is preferred if it is available

If the underscore makes you feel safer, IMO it is an illusion of safety.
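Spelled out with the `x` names used earlier in this thread (illustrative code, assuming the getter/setter live in the type's defining module), the manual's rule looks like this:

```nim
type
  MyType* = object
    x: int                        # private field, no export marker

proc x*(mt: MyType): int = mt.x   # getter: builtin dot access wins here

proc `x=`*(mt: var MyType, value: int) =
  # inside the defining module, builtin dot access is preferred,
  # so this writes the field and is NOT a recursive call to `x=`
  mt.x = value
```

Code in other modules sees only the procs, so `mt.x = 5` there resolves to `x=` while the field itself stays hidden.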
Re: Naming conventions - need leading underscore
> How do you solve this problem

First, I cannot remember ever caring whether a symbol in source code I was reading or working on was local/private or public. There may exist cases when one cares, but I just cannot remember a concrete case. Second, we can use unicode for symbol names. I never did, but maybe there is a nice unicode symbol available? And further, if a leading underscore should indicate private symbols, then the compiler would have to enforce it, so that the visible appearance is always correct. And if we use a leading underscore as a private mark, then we can also use a question mark to indicate a query, as `sorted?()` in Ruby and Crystal returns a bool instead of a sorted copy. And then we can ask for more such visible markers. And finally we may have IDE/editor support to differentiate between private and public symbols.
Re: Naming conventions - need leading underscore
Sorry to revive this, but I've been away... @mratsim

> Python has no control over visibility, all fields in a class are visible by default.
>
> **The leading underscore convention is a social workaround to a missing technical feature.**
>
> In Nim this is not a problem, and even less so because it's a static language, so visibility issues are resolved at compile-time and not in the middle of a multiple-hours run.

Well... visibility issues are resolved as far as access is concerned, but not as far as reading the code is. Personally I always found it quite a bit more comfortable to read code where private functions and variables have a leading underscore (even in C/C++, etc). Sometimes it's also useful for other denotations as well. But I'm also not sure the issue is resolved as far as naming conventions go. I personally keep fighting the language over this: if your type has an `x` property, and you want an `x=` setter for it (when `set_x()` doesn't make sense), you'll have to rename `x` to something else, to allow ``proc `x=`()`` to exist (to be safer, as you rightly pointed out). But what do you rename it to? I've been experimenting with naming my hidden properties like `p_x` or `pvt_x`, etc, but... it all just makes the code more cluttered. As far as I can tell, `_x` (or in some cases even `__x`) would be the cleanest and clearest convention... which is probably why everyone converged on it in Python and Lua for whatever denotations they needed. (Occam's Razor comes to mind, too.) How do you solve this problem without making the code more cluttered?
Re: How to get & set text in clipboard ?
Author is here. Yeah, you can just fetch library sources from GitHub and use them directly.