Re: Bug?
On Tuesday, 5 May 2020 at 04:02:06 UTC, RazvanN wrote:

> struct K { ~this() nothrow {} }
>
> void main()
> {
>     static class C { this(K, int) {} }
>
>     static int foo(bool flag)
>     {
>         if (flag) throw new Exception("hello");
>         return 1;
>     }
>
>     try { new C(K(), foo(true)); }
>     catch (Exception) {}
> }
>
> Result:
> object.Exception@test.d(18): hello
>
> If the destructor of K is not marked nothrow, the code does not throw an exception. Is this a bug or am I missing something?

Surely the above code, which silently discards the exception, does not print "hello"? Regardless, I ran your code with writeln inside the catch(), and without the try-catch entirely, with and without nothrow on K's destructor. I am unable to replicate the issue on my computer with DMD 2.091.0, as well as on run.dlang.io. Is something missing in your code here?

--
  Simen
Bug?
```
struct K { ~this() nothrow {} }

void main()
{
    static class C { this(K, int) {} }

    static int foo(bool flag)
    {
        if (flag) throw new Exception("hello");
        return 1;
    }

    try { new C(K(), foo(true)); }
    catch (Exception) {}
}
```

Result:

```
object.Exception@test.d(18): hello
```

If the destructor of K is not marked nothrow, the code does not throw an exception. Is this a bug or am I missing something?
Re: Compilation memory use
On Monday, 4 May 2020 at 17:00:21 UTC, Anonymouse wrote:

> TL;DR: Is there a way to tell what module or other section of a codebase is eating memory when compiling?
>
> I'm keeping track of compilation memory use using zsh `time` with some environment variables. It typically looks like this.
>
> ```
> $ time dub build -c dev
> Performing "debug" build using /usr/bin/dmd for x86_64.
> [...]
> Linking...
> To force a rebuild of up-to-date targets, run again with --force.
> dub build -c dev  9.47s user 1.53s system 105% cpu 10.438 total
> avg shared (code):         0 KB
> avg unshared (data/stack): 0 KB
> total (sum):               0 KB
> max memory:                4533 MB
> page faults from disk:     1
> other page faults:         1237356
> ```
>
> So it tells me the maximum memory that was required to compile it all. However, it only tells me just that; there's no way to know what part of the code is expensive and what part isn't. I can copy dub's dmd command and run it with `-v` and try to infer that the modules that are slow to pass semantic3 are also the hungry ones. But are they? Is there a better metric?

I do have a custom dmd build with tracing functionality, but the profiles are not very user-friendly and woefully under-documented.

https://github.com/UplinkCoder/dmd/tree/tracing_dmd

You can use the source of the file `src/printTraceHeader.d` to see how the profile is written, and by extension how to read it. The actual trace file format is in `src/dmd/trace_file.di`. You have to pass the `-trace=$yourfilename` switch when compiling.

I am happy to assist with interpreting the results, though for big projects it's usually too much of a mess to really figure out.
Re: countUntil with negated pre-defined predicate?
On 2020-05-03 21:59:54 +, Harry Gillanders said:

> I'm unsure as to which part is unclear,

Well, I was trapped by this formatting/syntax:

```
size_t drawableCharacterCount (CodePoints) (auto ref CodePoints codePoints) if (isInputRange!CodePoints && is(ElementType!CodePoints : dchar)) {
```

and was wondering what the first line contributes... now understanding that it's the function signature and the following "if" line is a template constraint (IIRC). So it's actually:

```
size_t drawableCharacterCount (CodePoints) (auto ref CodePoints codePoints)
    if (isInputRange!CodePoints && is(ElementType!CodePoints : dchar))
{
```

However, I find this syntax a bit unfortunate because I can't quickly spot that this "if" is a template constraint... but maybe I'll get more used to it.

--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
Compilation memory use
TL;DR: Is there a way to tell what module or other section of a codebase is eating memory when compiling?

I'm keeping track of compilation memory use using zsh `time` with some environment variables. It typically looks like this.

```
$ time dub build -c dev
Performing "debug" build using /usr/bin/dmd for x86_64.
[...]
Linking...
To force a rebuild of up-to-date targets, run again with --force.
dub build -c dev  9.47s user 1.53s system 105% cpu 10.438 total
avg shared (code):         0 KB
avg unshared (data/stack): 0 KB
total (sum):               0 KB
max memory:                4533 MB
page faults from disk:     1
other page faults:         1237356
```

So it tells me the maximum memory that was required to compile it all. However, it only tells me just that; there's no way to know what part of the code is expensive and what part isn't. I can copy dub's dmd command and run it with `-v` and try to infer that the modules that are slow to pass semantic3 are also the hungry ones. But are they? Is there a better metric?
Re: Idomatic way to guarantee to run destructor?
On Monday, 4 May 2020 at 11:50:49 UTC, Steven Schveighoffer wrote:

> I'm not sure if Ali is referring to this, but the usage of scope to allocate on the stack was at one time disfavored by the maintainers. This is why std.typecons.scoped was added (to hopefully remove that feature). Though, if dip1000 ever becomes the default, allocating on the stack could be a valid optimization.

It's not an optimization, it's been the status quo for years, although apparently not properly spec'd. And unlike the `scoped` library solution, `scope` is lightweight and works with -betterC too.
Re: Idomatic way to guarantee to run destructor?
On Mon, May 04, 2020 at 09:33:27AM -0700, Ali Çehreli via Digitalmars-d-learn wrote:
[...]
> Now it's news to me that 'new' does not allocate on the heap when
> 'scope' is used. I'm not sure I'm comfortable with it but that's true.
> Is this unique? Otherwise, 'new' always allocates on the heap, no?

IIRC this is by design. The idea is that if an object is 'scope', then it will not escape the current scope and therefore it's safe to allocate it on the stack instead of the heap.

IIRC historically this was a semi-hack to allow people to specify that the object should be allocated on the stack instead of the heap, back when the semantics of 'scope' weren't clearly defined yet.

T

--
Knowledge is that area of ignorance that we arrange and classify. -- Ambrose Bierce
Re: Idomatic way to guarantee to run destructor?
On 5/4/20 12:33 PM, Ali Çehreli wrote:

> Now it's news to me that 'new' does not allocate on the heap when 'scope' is used. I'm not sure I'm comfortable with it but that's true. Is this unique? Otherwise, 'new' always allocates on the heap, no?

This feature was in D1 AFAIK. So it's really old news ;)

-Steve
Re: Idomatic way to guarantee to run destructor?
On 5/4/20 2:47 AM, Olivier Pisano wrote:

> On Monday, 4 May 2020 at 09:20:06 UTC, Ali Çehreli wrote:
>
>> On 4/30/20 10:04 AM, Ben Jones wrote:
>>
>>> On Thursday, 30 April 2020 at 16:55:36 UTC, Robert M. Münch wrote:
>>>
>>> I think you want to use scope rather than auto which will put the class on the stack and call its destructor: https://dlang.org/spec/attribute.html#scope
>>
>> That is correct about calling the destructor but the object would still be allocated with 'new', hence be on the heap.
>>
>> There is also library feature 'scoped', which places the object on the stack: https://dlang.org/phobos/std_typecons.html#scoped
>
> https://godbolt.org/z/SEVsp5
>
> My ASM skills are pretty limited, but it seems to me that the call to _d_allocclass is omitted when using scope. At least with LDC and GDC. Am I missing something?

I stand corrected. 'scope's running the destructor was news to me, so I tested with a writeln example where the messages changed order depending on whether 'scope' was replaced with 'auto' or not:

```
import std.stdio;

class C {
    ~this() {
        writeln("~this");
    }
}

void foo() {
    scope c = new C(); // <-- here
}

void main() {
    foo();
    writeln("foo returned");
}
```

Now it's news to me that 'new' does not allocate on the heap when 'scope' is used. I'm not sure I'm comfortable with it but that's true. Is this unique? Otherwise, 'new' always allocates on the heap, no?

I added the following three lines to the end of foo() to see (without needing to look at assembly) that the 'scope' and 'auto' class objects are allocated in different memory regions (by taking object addresses as proof :) ):

```
writeln(cast(void*)c);
auto c2 = new C();
writeln(cast(void*)c2);
```

Ali
Re: Idomatic way to guarantee to run destructor?
On 5/4/20 5:47 AM, Olivier Pisano wrote:

> On Monday, 4 May 2020 at 09:20:06 UTC, Ali Çehreli wrote:
>
>> On 4/30/20 10:04 AM, Ben Jones wrote:
>>
>>> On Thursday, 30 April 2020 at 16:55:36 UTC, Robert M. Münch wrote:
>>>
>>> I think you want to use scope rather than auto which will put the class on the stack and call its destructor: https://dlang.org/spec/attribute.html#scope
>>
>> That is correct about calling the destructor but the object would still be allocated with 'new', hence be on the heap.
>>
>> There is also library feature 'scoped', which places the object on the stack: https://dlang.org/phobos/std_typecons.html#scoped
>
> https://godbolt.org/z/SEVsp5
>
> My ASM skills are pretty limited, but it seems to me that the call to _d_allocclass is omitted when using scope. At least with LDC and GDC. Am I missing something?

I'm not sure if Ali is referring to this, but the usage of scope to allocate on the stack was at one time disfavored by the maintainers. This is why std.typecons.scoped was added (to hopefully remove that feature). Though, if dip1000 ever becomes the default, allocating on the stack could be a valid optimization.

-Steve
Re: How can I check if an element is iterable?
On Monday, 4 May 2020 at 01:49:28 UTC, Ali Çehreli wrote:

> On 5/3/20 1:44 PM, Marcone wrote: [...]
>
> Still, the type of a variable would determine whether it's iterable. As an improvement, the following program can be changed to call use() recursively to visit all members of e.g. structs (which can be determined by 'is (T == struct)').
>
> ```
> import std.stdio;
> import std.traits;
>
> void use(T)(T var, size_t indent = 0) {
>     static if (isIterable!T && !isSomeString!T) {
>         foreach (i, e; var) {
>             writefln!"%*s: %s"(indent, i, e);
>         }
>     } else {
>         writefln!"%*s"(indent, var);
>     }
> }
>
> void main() {
>     int i = 42;
>     string s = "hello";
>     double[] arr = [ 1.5, 2.5, 3.5 ];
>
>     use(i);
>     use(s);
>     use(arr);
> }
> ```
>
> Ali

Very good! Thank you!
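As a rough sketch of the recursive extension described above (the struct `S` and the exact branching are illustrative assumptions, not Ali's code), one can branch on `is(T == struct)` and recurse over the members via `.tupleof`:

```d
import std.stdio;
import std.traits;
import std.array : replicate;

// Visit a value; recurse into structs (via .tupleof) and into
// iterables, printing leaf values with increasing indentation.
void use(T)(T var, size_t indent = 0)
{
    static if (is(T == struct))
    {
        // Compile-time foreach over the struct's members.
        foreach (member; var.tupleof)
        {
            use(member, indent + 2);
        }
    }
    else static if (isIterable!T && !isSomeString!T)
    {
        foreach (e; var)
        {
            use(e, indent + 2);
        }
    }
    else
    {
        writeln(" ".replicate(indent), var);
    }
}

struct S { int n; string name; double[] values; }

void main()
{
    use(S(42, "hello", [1.5, 2.5]));
}
```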
Re: Idomatic way to guarantee to run destructor?
On Monday, 4 May 2020 at 09:20:06 UTC, Ali Çehreli wrote:

> On 4/30/20 10:04 AM, Ben Jones wrote:
>
>> On Thursday, 30 April 2020 at 16:55:36 UTC, Robert M. Münch wrote:
>>
>> I think you want to use scope rather than auto which will put the class on the stack and call its destructor: https://dlang.org/spec/attribute.html#scope
>
> That is correct about calling the destructor but the object would still be allocated with 'new', hence be on the heap.
>
> There is also library feature 'scoped', which places the object on the stack: https://dlang.org/phobos/std_typecons.html#scoped

https://godbolt.org/z/SEVsp5

My ASM skills are pretty limited, but it seems to me that the call to _d_allocclass is omitted when using scope. At least with LDC and GDC. Am I missing something?
Re: Idomatic way to guarantee to run destructor?
On 4/30/20 10:04 AM, Ben Jones wrote:

> On Thursday, 30 April 2020 at 16:55:36 UTC, Robert M. Münch wrote:
>
> I think you want to use scope rather than auto which will put the class on the stack and call its destructor: https://dlang.org/spec/attribute.html#scope

That is correct about calling the destructor but the object would still be allocated with 'new', hence be on the heap.

There is also the library feature 'scoped', which places the object on the stack: https://dlang.org/phobos/std_typecons.html#scoped

Ali
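For completeness, here is a minimal sketch of the `std.typecons.scoped` approach mentioned above (the class name and members are made up for illustration): the object lives on the stack and its destructor runs deterministically at scope exit.

```d
import std.stdio;
import std.typecons : scoped;

class Resource
{
    ~this() { writeln("~this"); }
    void use() { writeln("using"); }
}

void foo()
{
    // scoped!Resource places the instance in stack storage instead
    // of the GC heap; member calls forward via alias this.
    auto r = scoped!Resource();
    r.use();
}   // <-- destructor runs here, before foo() returns

void main()
{
    foo();
    writeln("foo returned");  // printed after "~this"
}
```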
Re: countUntil with negated pre-defined predicate?
On 5/3/20 2:59 PM, Harry Gillanders wrote:

> On Sunday, 3 May 2020 at 12:19:30 UTC, Robert M. Münch wrote:
>
> an `auto ref` parameter[1] in a function template is essentially a parameter that receives the argument by reference if the templated type is a value-type,

Please replace "a value-type" above with "an lvalue".

> whereas if the templated type is a reference-type, it receives the argument by value.

And please replace "a reference-type" above with "an rvalue." :)

Ali

> [1]: https://dlang.org/spec/template.html#auto-ref-parameters
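A small sketch of the corrected wording (the function name `binding` is made up for illustration): `__traits(isRef)` lets us observe that `auto ref` binds lvalue arguments by reference and rvalue arguments by value.

```d
import std.stdio;

// Reports how auto ref bound its argument for this instantiation.
string binding(T)(auto ref T x)
{
    static if (__traits(isRef, x))
        return "by reference (lvalue)";
    else
        return "by value (rvalue)";
}

void main()
{
    int n = 1;
    writeln(binding(n));   // n is an lvalue: "by reference (lvalue)"
    writeln(binding(42));  // 42 is an rvalue: "by value (rvalue)"
}
```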