Re: null == "" is true?
Hi @Steven Schveighoffer, yes, the solution looks useful, and I'm sure it will work. Thanks.
Re: Unable to use map() and array() inside a class-field's initializer.
On Thursday, 14 July 2022 at 14:41:53 UTC, Paul Backus wrote: Explicit type annotation: vvv Thank you! I will remember that: in case of weird errors, I can try to help the compiler with type inference.
Re: Unable to use map() and array() inside a class-field's initializer.
On Thursday, 14 July 2022 at 13:57:24 UTC, realhet wrote: Hello, Somehow it can't reach map and array inside a class field initializer. If I put that small expression inside a function, it works. If I encapsulate the initializer expression into a lambda and evaluate it right away, it also works. Only the nice form fails. Why is that?

```d
import std;

enum E { a, b, c }

static struct S {
    const E e;
    string otherProperties;
}

// trying to initialize an array inside
static if (1) class D {
    // this fails: Error: function `onlineapp.D.map!(E[]).map` need `this` to access member `map`
    auto x = [EnumMembers!E].map!(e => S(e)).array;
}
```

Simpler workaround:

```d
// Explicit type annotation:           vvv
auto x = [EnumMembers!E].map!((E e) => S(e)).array;
```

This turns the lambda from a template into a normal function, which apparently is enough to un-confuse the compiler. Still unclear why it's getting confused in the first place, though.
Unable to use map() and array() inside a class-field's initializer.
Hello, Somehow it can't reach map and array inside a class field initializer. If I put that small expression inside a function, it works. If I encapsulate the initializer expression into a lambda and evaluate it right away, it also works. Only the nice form fails. Why is that?

```d
import std;

enum E { a, b, c }

static struct S {
    const E e;
    string otherProperties;
}

// trying to initialize an array inside
static if (1) class D {
    // this fails: Error: function `onlineapp.D.map!(E[]).map` need `this` to access member `map`
    auto x = [EnumMembers!E].map!(e => S(e)).array;
}

auto initialS() {
    return [EnumMembers!E].map!(e => S(e)).array;
}

class C {
    auto x = initialS; // this way it works
}

void main() {
    writeln((new C).x);
}
```
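For completeness, the "encapsulate the initializer into a lambda and evaluate it right away" workaround mentioned above can be sketched like this (a minimal, self-contained variant of the same example; the immediately-called lambda moves the expression into function scope, where the UFCS lookup succeeds):

```d
import std;

enum E { a, b, c }

static struct S {
    const E e;
    string otherProperties;
}

class D {
    // Wrapping the initializer in a lambda and calling it immediately
    // works, because inside a function body `map`/`array` resolve normally.
    auto x = () { return [EnumMembers!E].map!(e => S(e)).array; }();
}

void main() {
    writeln((new D).x); // one S per enum member
}
```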
Re: vectorization of a simple loop -- not in DMD?
On Thursday, 14 July 2022 at 13:00:24 UTC, ryuukk_ wrote: On Thursday, 14 July 2022 at 05:30:58 UTC, Siarhei Siamashka wrote: On Tuesday, 12 July 2022 at 13:23:36 UTC, ryuukk_ wrote: I wonder if DMD/LDC/GDC have built-in tools to profile and track performance. Linux has a decent system-wide profiler: https://perf.wiki.kernel.org/index.php/Main_Page And there are other useful tools, such as callgrind. To take advantage of all these tools, DMD/LDC/GDC only need to provide debugging symbols in the generated binaries, which they already do. Profiling applications to identify performance bottlenecks is very easy nowadays.

I am not talking about Linux, and I am not talking about third-party tools. I am talking about the developers of DMD/LDC/GDC: do they profile the compilers? Do they provide ways to monitor/track performance? Do they benchmark specific parts of the compilers? I am not talking about the output of valgrind. Zig also has: https://ziglang.org/perf/ (very slow to load). Having such a thing is more useful than plugging valgrind god knows how into the compiler and trying to decipher what does what and which results correspond to what internally. And what about a graph over time to catch regressions? DMD is very fast at compiling code, so I guess Walter is doing enough work to monitor all of that. LDC, on the other hand, would benefit a lot from having such a thing in place.

Running valgrind on the compiler is completely trivial. Built-in profilers are often terrible; LDC, GDC, and DMD all have instrumenting profilers built in, of varying quality, and gprof in particular is somewhat infamous. DMD isn't particularly fast; it does a lot of unnecessary work. LDC is slow because LLVM is slow. We do need a graph over time, yes.
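For reference, the "running valgrind on the compiler is trivial" claim looks roughly like this in practice (a sketch, assuming `perf`, `valgrind`, and `kcachegrind` are installed; `app.d` stands in for whatever source file you want to measure the compiler against):

```shell
# Sample the compiler itself with perf while it builds a test file,
# then browse the hotspots interactively.
perf record -g dmd -O -release -c app.d
perf report

# Or collect a full call graph with callgrind and inspect it in KCachegrind.
valgrind --tool=callgrind dmd -O -release -c app.d
kcachegrind callgrind.out.*
```

This answers the "where does the compiler spend its time" question; the regression-graph-over-time part the post asks for is a separate piece of infrastructure that would have to run these measurements continuously.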
Re: vectorization of a simple loop -- not in DMD?
On Thursday, 14 July 2022 at 05:30:58 UTC, Siarhei Siamashka wrote: On Tuesday, 12 July 2022 at 13:23:36 UTC, ryuukk_ wrote: I wonder if DMD/LDC/GDC have built-in tools to profile and track performance. Linux has a decent system-wide profiler: https://perf.wiki.kernel.org/index.php/Main_Page And there are other useful tools, such as callgrind. To take advantage of all these tools, DMD/LDC/GDC only need to provide debugging symbols in the generated binaries, which they already do. Profiling applications to identify performance bottlenecks is very easy nowadays.

I am not talking about Linux, and I am not talking about third-party tools. I am talking about the developers of DMD/LDC/GDC: do they profile the compilers? Do they provide ways to monitor/track performance? Do they benchmark specific parts of the compilers? I am not talking about the output of valgrind. Zig also has: https://ziglang.org/perf/ (very slow to load). Having such a thing is more useful than plugging valgrind god knows how into the compiler and trying to decipher what does what and which results correspond to what internally. And what about a graph over time to catch regressions? DMD is very fast at compiling code, so I guess Walter is doing enough work to monitor all of that. LDC, on the other hand, would benefit a lot from having such a thing in place.
Re: vectorization of a simple loop -- not in DMD?
On Monday, 11 July 2022 at 18:15:16 UTC, Ivan Kazmenko wrote: Hi. I'm looking at the compiler output of DMD (-O -release), LDC (-O -release), and GDC (-O3) for a simple array operation:

```d
void add1(int[] a) {
    foreach (i; 0 .. a.length)
        a[i] += 1;
}
```

Here are the outputs: https://godbolt.org/z/GcznbjEaf From what I gather at the view linked above, DMD does not use XMM registers for speedup, and does not unroll the loop either. Switching between 32-bit and 64-bit doesn't help either. However, I recall that in the past it was capable of at least some of these optimizations. So, how do I enable them for such a function? Ivan Kazmenko.

No, not in DMD. DMD generates what looks like 32-bit code adapted to x86_64. LDC may optimize this kind of loop with a three-way branch depending on how many array elements remain, but it can generate both very good loop code (particularly when AVX-512 is available) and very questionable code (particularly when the struct/data arrangement in memory is unfavorable for SIMD). You may be losing performance for obscure reasons that look like gnomes decided to steal your precious CPU cycles, and when that happens there is no way to fix it other than manually going in with a disassembler/debugger, changing defective optimizations in hot code paths to something faster, and then saving that back to the executable file (yikes, I know).
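If you want to reproduce the Godbolt comparison locally, each compiler can dump its output for inspection (a rough sketch; compiler names and availability are assumptions, and `add1.d` is a placeholder containing the function above):

```shell
# LDC: optimize for the host CPU and write assembly (add1.s) next to the source.
ldc2 -O -release -mcpu=native -output-s add1.d

# GDC: -S stops after code generation and emits add1.s.
gdc -O3 -S add1.d

# DMD has no assembly-output switch; compile to an object file and disassemble it.
dmd -O -release -c add1.d
objdump -d add1.o | less
```

Comparing the three `.s` files (or the objdump listing) makes the XMM/unrolling differences described in the post directly visible.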