Re: Debugging compile time memory usage
On Tuesday, 26 June 2018 at 00:59:24 UTC, Nicholas Wilson wrote: On Sunday, 24 June 2018 at 14:16:26 UTC, Kamil Koczurek wrote: [...] If you were able to compile it with LDC and not DMD, then that is most likely due to the LDC executable being 64-bit (limited to available system RAM+swap) and the DMD executable being 32-bit (limited to 4GB). If you want to use DMD, build a 64-bit version yourself and complain on General that the releases are not 64-bit. Is there a specific reason why DMD isn't shipped as 64-bit by default? Virtually all machines are 64-bit these days, so why do we limit ourselves to 32-bit?
Re: 'static foreach' chapter and more
On Tuesday, 26 June 2018 at 01:52:42 UTC, Ali Çehreli wrote: I've made some online improvements to "Programming in D" since September 2017. [...] Great work on the book and keeping it up to date!
Re: D hash table comparison benchmark
On Tuesday, 26 June 2018 at 03:45:27 UTC, Seb wrote: Did you by chance also benchmark it with other languages like C++, Go or Rust? I didn't since I was evaluating hashtable implementations for use in a D application. BTW I'm not sure what your plans are, but are you aware of this recent article? https://probablydance.com/2018/05/28/a-new-fast-hash-table-in-response-to-googles-new-fast-hash-table I wasn't, thanks.
Re: opDispatch and alias this
On Tuesday, June 26, 2018 03:17:37 aliak via Digitalmars-d wrote: > On Tuesday, 26 June 2018 at 03:07:08 UTC, aliak wrote: > > Also, what would the work around be for code that relies on > > opDispatch and alias this? And shouldn't the PR take in to > > account the "Deprecation process" and deprecate it first before > > banning it? > > > > Cheers, > > - Ali > > s/shouldn't/will > > Rephrased: will the usage of opDispatch with alias this be > deprecated first :) Any such change would have to be done via deprecation, because it's a breaking change. Sometimes, breaking changes occur immediately due to bug fixes (though it's sometimes been argued that even breakage such as that should be minimized), but they should not occur due to a feature change unless the nature of the feature change absolutely could not be done via a deprecation, and it was deemed so critical that that breakage was worth it. There's no way that this falls in that camp. So, if the change ends up being made, and it immediately breaks your code without there being any deprecation warning first, then someone wasn't doing their job properly, and you should report it as a regression. - Jonathan M Davis
[Issue 19027] New: iota(int.min, int.max).length is incorrect
https://issues.dlang.org/show_bug.cgi?id=19027 Issue ID: 19027 Summary: iota(int.min, int.max).length is incorrect Product: D Version: D2 Hardware: x86_64 OS: Linux Status: NEW Severity: enhancement Priority: P1 Component: phobos Assignee: nob...@puremagic.com Reporter: davidbenn...@bravevision.com --- void main() { import std.range : iota; import std.conv : to; auto length = iota(int.min, int.max).length; assert(length == uint.max, "opps... length is " ~ length.to!string); } --- fails with: core.exception.AssertError@[snip]: opps... length is 18446744073709551615 ??:? _d_assert_msg [0x18578d52] ??:? _Dmain [0x1855e6cc] iota seems to work fine with byte, short and long; it's just int that has issues. Thanks, David --
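The reported value 18446744073709551615 is ulong.max, which suggests the length is being computed with 32-bit wraparound before being widened. A minimal sketch of the likely failure mode (a hypothetical reconstruction for illustration, not iota's actual source):

```d
void main()
{
    int first = int.min;
    int last  = int.max;

    // 32-bit subtraction wraps: 2147483647 - (-2147483648) == -1 (mod 2^32)
    int wrapped = last - first;
    assert(wrapped == -1);

    // Sign-extending the wrapped result to 64 bits gives ulong.max,
    // matching the reported length 18446744073709551615.
    assert(cast(ulong) wrapped == ulong.max);

    // Computing in a wider type first gives the expected answer, uint.max.
    ulong correct = cast(ulong)(cast(long) last - cast(long) first);
    assert(correct == uint.max);
}
```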
Re: Disappointing performance from DMD/Phobos
On Tuesday, 26 June 2018 at 02:10:17 UTC, Manu wrote: Some code: - struct Entity { enum NumSystems = 4; struct SystemData { uint start, length; } SystemData[NumSystems] systemData; @property uint systemBits() const { return systemData[].map!(e => e.length).sum; } } Entity e; e.systemBits(); // <- call the function, notice the codegen - This property sums 4 ints... that should be insanely fast. It should also be something like 5-8 lines of asm. Turns out, that call to sum() is eating 2.5% of my total perf (significant among a substantial workload), and the call tree is quite deep. Basically, the inliner tried, but failed to seal the deal, and leaves a call stack 7 levels deep. Pipeline programming is hip and also *recommended* D usage. The optimiser must do a good job. This is such a trivial work loop, and with constant length (4). I expect 3 integer adds to unroll and inline. A call-tree 7 levels deep is quite a ways from the mark. Maybe this is another instance of Walter's "phobos begat madness" observation? The unoptimised callstack is mental. Compiling with -O trims most of the noise in the call tree, but it fails to inline the remaining work, which ends up 7 levels down a redundant call-tree. Then use LDC! ;) But seriously, DMD's inliner is a) in the wrong spot in the compilation pipeline (at the AST level) and b) timid, to say the least.
Re: D hash table comparison benchmark
On Tuesday, 26 June 2018 at 02:53:22 UTC, Nathan S. wrote: With LDC2 the times for vibe.utils.hashmap and memutils.hashmap are suspiciously low, leading me to suspect that the optimizer might be omitting most of the work. Here are the figures without optimizations enabled. == Speed Ranking using DMD (no optimizations) == 95 msecs built-in AA 168 msecs vibe.utils.hashmap 182 msecs jive.map 224 msecs memutils.hashmap 663 msecs containers.hashmap w/GCAllocator 686 msecs containers.hashmap w/Mallocator == Speed Ranking using LDC2 (no optimizations) == 68 msecs built-in AA 143 msecs vibe.utils.hashmap 155 msecs jive.map 164 msecs memutils.hashmap 515 msecs containers.hashmap w/GCAllocator 537 msecs containers.hashmap w/Mallocator Did you by chance also benchmark it with other languages like C++, Go or Rust? BTW I'm not sure what your plans are, but are you aware of this recent article? https://probablydance.com/2018/05/28/a-new-fast-hash-table-in-response-to-googles-new-fast-hash-table There were also plans to lower the AA implementation entirely into D runtime/user space, s.t. specialization can be done easier, but sadly these plans stagnated so far: https://github.com/dlang/druntime/pull/1282 https://github.com/dlang/druntime/pull/1985
Re: template sequence parameters treats member functions differently?
On Monday, 25 June 2018 at 18:59:37 UTC, Steven Schveighoffer wrote: On 6/25/18 2:51 PM, aliak wrote: On Monday, 25 June 2018 at 15:06:42 UTC, Steven Schveighoffer wrote: I don't see any reason why the alias is to the function and not the contexted function. I don't see how it's any different from the ones which use inner functions. I can only agree - me no see either. And having no clue as to how the compiler is implemented, I cannot even conjecture :) Well, it's worth an enhancement request in any case. -Steve Done: https://issues.dlang.org/show_bug.cgi?id=19026
[Issue 19026] New: Aliasing an inner function gets context but aliasing a member function through instance does not
https://issues.dlang.org/show_bug.cgi?id=19026 Issue ID: 19026 Summary: Aliasing an inner function gets context but aliasing a member function through instance does not Product: D Version: D2 Hardware: All URL: http://dlang.org/ OS: All Status: NEW Severity: enhancement Priority: P3 Component: dmd Assignee: nob...@puremagic.com Reporter: ali.akhtarz...@gmail.com Based on this forum thread: https://forum.dlang.org/thread/hyforteterjqhgghp...@forum.dlang.org Aliasing an inner function seems inconsistent with aliasing a member function in that the former aliases the contexted function whereas the latter does not: void main() { struct S { void f() {} } S s; void f() {} alias inner = f; alias member = s.f; pragma(msg, typeof(inner)); // delegate pragma(msg, typeof(member)); // function } so inner() can be called but not member(), as you get: Error: need this for f of type void() --
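As a workaround sketch (an assumption on my part, not part of the bug report): taking the address of the member function through the instance with `&s.f` does bind the context and yields a callable delegate, unlike the alias.

```d
void main()
{
    struct S
    {
        int x = 1;
        void f() { x++; }
    }
    S s;

    // Unlike `alias member = s.f`, taking the address through the
    // instance captures `s` as the context, producing a delegate.
    auto member = &s.f;
    static assert(is(typeof(member) == delegate));

    member();          // callable: invokes s.f with s as context
    assert(s.x == 2);
}
```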
Re: Disappointing performance from DMD/Phobos
On Tuesday, 26 June 2018 at 02:20:37 UTC, Manu wrote: I optimised another major gotcha eating perf, and now this issue is taking 13% of my entire work time... bummer. Without disagreeing with you, ldc2 optimizes this fine. https://run.dlang.io/is/NJct6U const @property uint onlineapp.Entity.systemBits(): .cfi_startproc movl4(%rdi), %eax addl12(%rdi), %eax addl20(%rdi), %eax addl28(%rdi), %eax retq
Re: Disappointing performance from DMD/Phobos
On Tuesday, 26 June 2018 at 02:10:17 UTC, Manu wrote: [snip] @property uint systemBits() const { return systemData[].map!(e => e.length).sum; } [snip] This property sums 4 ints... that should be insanely fast. It should also be something like 5-8 lines of asm. Turns out, that call to sum() is eating 2.5% of my total perf (significant among a substantial workload), and the call tree is quite deep. Basically, inliner tried, but failed to seal the deal, and leaves a call stack 7 levels deep. Last time I checked, dmd's inliner would give up as soon as it sees any type of loop, even the simplest while loop... then the unroller and optimiser later on have less to work with. So I would expect it's the loop in `sumPairwise()` [0] or `sumKahan()` [1] that's the main source of your problems. If we could get that to inline better using `dmd -inline`, it would probably speed up quite a lot of code. [0] https://github.com/dlang/phobos/blob/master/std/algorithm/iteration.d#L5483 [1] https://github.com/dlang/phobos/blob/master/std/algorithm/iteration.d#L5578
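For readers unfamiliar with why `sum` has a loop the inliner chokes on: on random-access ranges it dispatches to pairwise summation, whose recursion is what shows up in the call tree. A rough sketch of the idea (simplified for illustration; not Phobos's actual implementation, which switches to a straight loop below a threshold and takes extra care for floating point):

```d
// Simplified pairwise summation: recursively split the range in half
// and add the two partial sums.
uint pairwiseSum(const(uint)[] r)
{
    if (r.length == 0) return 0;
    if (r.length == 1) return r[0];
    const mid = r.length / 2;
    return pairwiseSum(r[0 .. mid]) + pairwiseSum(r[mid .. $]);
}

void main()
{
    assert(pairwiseSum([1u, 2, 3, 4]) == 10);
}
```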
Re: opDispatch and alias this
On Tuesday, 26 June 2018 at 03:07:08 UTC, aliak wrote: Also, what would the workaround be for code that relies on opDispatch and alias this? And shouldn't the PR take into account the "Deprecation process" and deprecate it first before banning it? Cheers, - Ali s/shouldn't/will Rephrased: will the usage of opDispatch with alias this be deprecated first :)
Re: Disappointing performance from DMD/Phobos
On Monday, June 25, 2018 19:10:17 Manu via Digitalmars-d wrote: > Some code: > - > struct Entity > { > enum NumSystems = 4; > struct SystemData > { > uint start, length; > } > SystemData[NumSystems] systemData; > @property uint systemBits() const { return systemData[].map!(e => > e.length).sum; } > } > Entity e; > e.systemBits(); // <- call the function, notice the codegen > - > > This property sum's 4 ints... that should be insanely fast. It should > also be something like 5-8 lines of asm. > Turns out, that call to sum() is eating 2.5% of my total perf > (significant among a substantial workload), and the call tree is quite > deep. > > Basically, inliner tried, but failed to seal the deal, and leaves a > call stack 7 levels deep. > > Pipeline programming is hip and also *recommended* D usage. The > optimiser must do a good job. This is such a trivial workloop, and > with constant length (4). > I expect 3 integer adds to unroll and inline. A call-tree 7 levels > deep is quite a ways from the mark. > > Maybe this is another instance of Walter's "phobos begat madness" > observation? The unoptimised callstack is mental. Compiling with -O trims > most of the noise in the call tree, but it fails to inline the remaining > work which ends up 7-levels down a redundant call-tree. dmd's inliner is notoriously poor, but I don't know how much effort has really been put into fixing the problem. I do recall it being argued several times that it should only be in the backend and that there shouldn't be one in the frontend, but either way, the typical solution seems to be to use ldc instead of dmd if you really care about the performance of the generated binary. I don't follow dmd PRs closely, but I get the impression that far more effort gets put into feature-related stuff and bug fixes than performance improvements.
Walter at least occasionally does performance improvements, but when he talks about them, it seems like a number of folks react negatively, thinking that his time would be better spent on features and the like and that folks just use ldc for performance. So, all in all, the result is not great for dmd's performance. I don't know what the solution is, though I agree that we're better off if dmd generates fast code in general even if it's not as good as what ldc does. Regardless, if you can give simple test cases that clearly should be generating far better code than they are, then at least there's a clear target for improvement rather than just "dmd should generate faster code," so there's something clearly actionable. - Jonathan M Davis
Re: opDispatch and alias this
On Monday, 25 June 2018 at 23:13:12 UTC, Seb wrote: Apparently three years ago it was decided to ban alias this and opDispatch in the same class. What are your thoughts on this now? Is anyone depending on using alias this + opDispatch together like e.g. in https://github.com/dlang/phobos/pull/6596? Yes! I at least depend on it: https://github.com/aliak00/optional/blob/master/source/optional/dispatcher.d And it's very deliberate as well, and I'm not sure there's a workaround if this is banned. Or is there? I went through the DIP and I'm not sure I could exactly understand where the problems were - the first two-thirds seem to describe how alias this is used? Is that correct? The Resolution Algorithm section first describes how it "should" be resolved, right? I didn't understand whether that is different from the current resolution (is it?) and how, though. Is the recursive alias this a problem, or is that just a "btw, you can do this" kinda thing? The Limitation section also says that opDispatch and alias this shouldn't be allowed, but why? I think the DIP mixes "will" and "should", which makes it hard to see what the current behavior is and what the behavior should be. There is also no link to any discussions either - I see there was a forum discussion 4 years ago though, was that part of the DIP review process? - and going through it there were some questions for e.g. Walter, Timon, among others that I don't _think_ are answered in the DIP. Also, what would the workaround be for code that relies on opDispatch and alias this? And shouldn't the PR take into account the "Deprecation process" and deprecate it first before banning it? Cheers, - Ali
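For anyone following along, the ambiguity in question arises because both features act as fallbacks for unresolved member lookups. A minimal sketch of a type combining them (a hypothetical illustration of the conflict, not the code from the PR or the linked package):

```d
struct Inner
{
    int fromAliasThis() { return 1; }
}

struct Both
{
    Inner inner;
    alias inner this;

    // opDispatch catches any member not found on Both itself --
    // but so does alias this, so the language must pick a winner
    // for a lookup like b.fromAliasThis().
    auto opDispatch(string name)() { return 2; }
}

void main()
{
    Both b;
    // Resolved via either opDispatch or alias this -- exactly the
    // ambiguity that the proposed ban targets.
    auto r = b.fromAliasThis();
}
```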
Re: D hash table comparison benchmark
With LDC2 the times for vibe.utils.hashmap and memutils.hashmap are suspiciously low, leading me to suspect that the optimizer might be omitting most of the work. Here are the figures without optimizations enabled. == Speed Ranking using DMD (no optimizations) == 95 msecs built-in AA 168 msecs vibe.utils.hashmap 182 msecs jive.map 224 msecs memutils.hashmap 663 msecs containers.hashmap w/GCAllocator 686 msecs containers.hashmap w/Mallocator == Speed Ranking using LDC2 (no optimizations) == 68 msecs built-in AA 143 msecs vibe.utils.hashmap 155 msecs jive.map 164 msecs memutils.hashmap 515 msecs containers.hashmap w/GCAllocator 537 msecs containers.hashmap w/Mallocator
Re: Disappointing performance from DMD/Phobos
On Mon, 25 Jun 2018 at 19:10, Manu wrote: > > Some code: > - > struct Entity > { > enum NumSystems = 4; > struct SystemData > { > uint start, length; > } > SystemData[NumSystems] systemData; > @property uint systemBits() const { return systemData[].map!(e => > e.length).sum; } > } > Entity e; > e.systemBits(); // <- call the function, notice the codegen > - > > This property sum's 4 ints... that should be insanely fast. It should > also be something like 5-8 lines of asm. > Turns out, that call to sum() is eating 2.5% of my total perf > (significant among a substantial workload), and the call tree is quite > deep. > > Basically, inliner tried, but failed to seal the deal, and leaves a > call stack 7 levels deep. > > Pipeline programming is hip and also *recommended* D usage. The > optimiser must do a good job. This is such a trivial workloop, and > with constant length (4). > I expect 3 integer adds to unroll and inline. A call-tree 7 levels > deep is quite a ways from the mark. > > Maybe this is another instance of Walter's "phobos begat madness" observation? > The unoptimised callstack is mental. Compiling with -O trims most of > the noise in the call tree, but it fails to inline the remaining work > which ends up 7-levels down a redundant call-tree. I optimised another major gotcha eating perf, and now this issue is taking 13% of my entire work time... bummer.
D hash table comparison benchmark
The below benchmarks come from writing 100 int-to-int mappings to a new hashtable then reading them back, repeated 10_000 times. The built-in AA doesn't deallocate memory when it falls out of scope but the other maps do. Benchmark code in next post. == Speed Ranking using LDC2 (optimized) == 21 msecs vibe.utils.hashmap 37 msecs memutils.hashmap 57 msecs built-in AA 102 msecs jive.map 185 msecs containers.hashmap w/GCAllocator 240 msecs containers.hashmap w/Mallocator == Speed Ranking using DMD (optimized) == 55 msecs memutils.hashmap 64 msecs vibe.utils.hashmap 80 msecs built-in AA 131 msecs jive.map 315 msecs containers.hashmap w/GCAllocator 361 msecs containers.hashmap w/Mallocator ** What if the array size is smaller or larger? The ordering didn't change so I won't post the results. ** ** What if we reuse the hashtable? ** == Speed Ranking using LDC2 (optimized) == 10.45 msecs vibe.utils.hashmap 11.85 msecs memutils.hashmap 12.61 msecs containers.hashmap w/GCAllocator 12.91 msecs containers.hashmap w/Mallocator 14.30 msecs built-in AA 19.21 msecs jive.map == Speed Ranking using DMD (optimized) == 18.05 msecs memutils.hashmap 21.03 msecs jive.map 24.99 msecs built-in AA 25.22 msecs containers.hashmap w/Mallocator 25.75 msecs containers.hashmap w/GCAllocator 29.93 msecs vibe.utils.hashmap == Not benchmarked == stdx.collections.hashtable (dlang-stdx/collections): compilation error kontainer.orderedAssocArray (alphaKAI/kontainer): doesn't accept int keys tanya.container.hashtable (caraus-ecms/tanya): either has a bug or is very slow
Re: D hash table comparison benchmark
Benchmark code: dub.sdl ``` name "hashbench" description "D hashtable comparison." dependency "emsi_containers" version="~>0.7.0" dependency "memutils" version="~>0.4.11" dependency "vibe-d:utils" version="~>0.8.4" dependency "jive" version="~>0.2.0" //dependency "collections" version="~>0.1.0" //dependency "tanya" version="~>0.10.0" //dependency "kontainer" version="~>0.0.2" ``` app.d ```d int nthKey(in uint n) @nogc nothrow pure @safe { // Can be any invertible function. // The goal is to map [0 .. N] to a sequence not in ascending order. int h = cast(int) (n + 1); h = (h ^ (h >>> 16)) * 0x85ebca6b; h = (h ^ (n >>> 13)) * 0xc2b2ae35; return h ^ (h >>> 16); } pragma(inline, false) uint hashBench(HashTable, Args...)(in uint N, in uint seed, Args initArgs) { static if (initArgs.length) HashTable hashtable = HashTable(initArgs); else // Separate branch needed for builtin AA. HashTable hashtable; foreach (uint n; 0 .. N) hashtable[nthKey(n)] = n + seed; uint sum; foreach_reverse (uint n; 0 .. N/2) sum += hashtable[nthKey(n)]; foreach_reverse(uint n; N/2 .. N) sum += hashtable[nthKey(n)]; return sum; } pragma(inline, false) uint hashBenchReuse(HashTable)(in uint N, in uint seed, ref HashTable hashtable) { foreach (uint n; 0 .. N) hashtable[nthKey(n)] = n + seed; uint sum; foreach_reverse (uint n; 0 .. N/2) sum += hashtable[nthKey(n)]; foreach_reverse(uint n; N/2 .. N) sum += hashtable[nthKey(n)]; return sum; } enum benchmarkCode(string name, string signature = name) = ` { sw.reset(); result = 0; sw.start(); foreach (_; 0 .. M) { result += hashBench!(`~signature~`)(N, result); } sw.stop(); string s = "`~name~`"; printf("[checksum %d] %3d msecs %s\n", result, sw.peek.total!"msecs", &s[0]); } `; enum benchmarkCodeReuse(string name, string signature = name) = ` { sw.reset(); result = 0; sw.start(); `~signature~` hashtable; foreach (_; 0 ..
M) { result += hashBenchReuse!(`~signature~`)(N, result, hashtable); } sw.stop(); string s = "`~name~`"; printf("(checksum %d) %3.2f msecs %s\n", result, sw.peek.total!"usecs" / 1000.0, &s[0]); } `; void main(string[] args) { import std.datetime.stopwatch : AutoStart, StopWatch; import core.stdc.stdio : printf, puts; import std.experimental.allocator.gc_allocator : GCAllocator; import std.experimental.allocator.mallocator : Mallocator; alias BuiltinAA(K,V) = V[K]; import containers.hashmap : EMSI_HashMap = HashMap; import memutils.hashmap : Memutils_HashMap = HashMap; import vibe.utils.hashmap : Vibe_HashMap = HashMap; import jive.map : Jive_Map = Map; //import stdx.collections.hashtable : Stdx_Hashtable = Hashtable; //import tanya.container.hashtable : Tanya_HashTable = HashTable; //import kontainer.orderedAssocArray.orderedAssocArray : Kontainer_OrderedAssocArray = OrderedAssocArray; immutable uint N = args.length < 2 ? 100 : () { import std.conv : to; auto result = to!uint(args[1]); return (result == 0 ? 100 : result); }(); immutable M = N <= 500_000 ? (1000_000 / N) : 2; enum topLevelRepetitions = 3; printf("Hashtable benchmark N (size) = %d (repetitions) = %d\n", N, M); StopWatch sw = StopWatch(AutoStart.no); uint result; version(all) { puts("\n=Results (new hashtables)="); foreach (_repetition; 0 ..
topLevelRepetitions) { printf("*Trial #%d*\n", _repetition+1); mixin(benchmarkCode!("built-in AA", "BuiltinAA!(int, int)")); mixin(benchmarkCode!("containers.hashmap w/Mallocator", "EMSI_HashMap!(int, int, Mallocator)")); mixin(benchmarkCode!("containers.hashmap w/GCAllocator", "EMSI_HashMap!(int, int, GCAllocator)")); mixin(benchmarkCode!("memutils.hashmap", "Memutils_HashMap!(int,int)")); mixin(benchmarkCode!("vibe.utils.hashmap", "Vibe_HashMap!(int,int)")); mixin(benchmarkCode!("jive.map", "Jive_Map!(int,int)")); //mixin(benchmarkCode!("stdx.collections.hashtable", "Stdx_Hashtable!(int,int)")); //mixin(benchmarkCode!("tanya.container.hashtable", "Tanya_HashTable!(int,int)")); //mixin(benchmarkCode!("kontainer.orderedAssocArray.orderedAssocArray", "Kontainer_OrderedAssocArray!(int,int)")); } } version(all) { puts("\n=Results (reusing hashtables)=\n"); foreach (_repetition; 0 .. topLevelRepetitions) { printf("*Trial #%d*\n", _repetition+1);
Disappointing performance from DMD/Phobos
Some code: - struct Entity { enum NumSystems = 4; struct SystemData { uint start, length; } SystemData[NumSystems] systemData; @property uint systemBits() const { return systemData[].map!(e => e.length).sum; } } Entity e; e.systemBits(); // <- call the function, notice the codegen - This property sums 4 ints... that should be insanely fast. It should also be something like 5-8 lines of asm. Turns out, that call to sum() is eating 2.5% of my total perf (significant among a substantial workload), and the call tree is quite deep. Basically, the inliner tried, but failed to seal the deal, and leaves a call stack 7 levels deep. Pipeline programming is hip and also *recommended* D usage. The optimiser must do a good job. This is such a trivial work loop, and with constant length (4). I expect 3 integer adds to unroll and inline. A call-tree 7 levels deep is quite a ways from the mark. Maybe this is another instance of Walter's "phobos begat madness" observation? The unoptimised callstack is mental. Compiling with -O trims most of the noise in the call tree, but it fails to inline the remaining work, which ends up 7 levels down a redundant call-tree.
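Until the inliner improves, a hand-rolled loop over the fixed-size array sidesteps the call tree entirely. This is a workaround sketch on my part, not a claim that the range version is wrong:

```d
struct Entity
{
    enum NumSystems = 4;
    struct SystemData { uint start, length; }
    SystemData[NumSystems] systemData;

    // Same result as systemData[].map!(e => e.length).sum, but a plain
    // loop over a static array, which compilers unroll and inline easily.
    @property uint systemBits() const
    {
        uint total = 0;
        foreach (ref sd; systemData)
            total += sd.length;
        return total;
    }
}

void main()
{
    Entity e;
    e.systemData[0].length = 1;
    e.systemData[3].length = 2;
    assert(e.systemBits == 3);
}
```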
Re: opDispatch and alias this
On Tuesday, 26 June 2018 at 00:56:13 UTC, Jonathan M Davis wrote: On Monday, June 25, 2018 23:13:12 Seb via Digitalmars-d wrote: - Jonathan M Davis I've tried to fix this issue. In brief, there is one collision: with Final, alias this should yield an r-value: static struct S { int i; } void test(ref S s) {} auto val = Final!S(42); assert (!__traits(compiles, test(val))); // val works as an r-value However, we should still be able to mutate fields of val: val.i = 24; assert(val.i == 24); // should work We may discard this case: structs are value types, and head-const should disable changing of fields. For example, in C++ you can't declare an instance of a struct such that the instance cannot be reassigned but its fields can.
Re: Phobos and the Tools repo are now on DUB
On Monday, 25 June 2018 at 21:34:43 UTC, Seb wrote: Phobos ... I forgot the links to the DUB registry: https://phobos.dub.pm https://dtools.dub.pm
'static foreach' chapter and more
I've made some online improvements to "Programming in D" since September 2017. http://ddili.org/ders/d.en/index.html NOTE: The copies of the book at hard copy printers are not updated yet. If you order from Amazon etc. it will still be the OLD version. I need some more time to work on that... Also, only the PDF electronic format is up-to-date; other ebook formats are NOT. * The code samples are now up-to-date with 2.080.1 * Digit separator (%,) format specifier: http://ddili.org/ders/d.en/formatted_output.html#ix_formatted_output.separator * Stopwatch is moved to module std.datetime.stopwatch * Replace 'body' with 'do' * Text file imports (string imports): http://ddili.org/ders/d.en/mixin.html#ix_mixin.file%20import * First assignment to a member is construction (search for that text on the page): http://ddili.org/ders/d.en/special_functions.html#ix_special_functions.this,%20constructor * static foreach: http://ddili.org/ders/d.en/static_foreach.html#ix_static_foreach.static%20foreach Ali
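As a quick taste of the new chapter's topic: static foreach expands its body once per element at compile time, so there is no runtime loop. A minimal illustration (my own example, not an excerpt from the book):

```d
void main()
{
    int total = 0;

    // The body is copied once per element during compilation;
    // the three additions exist as separate statements in the code.
    static foreach (value; [10, 20, 30])
    {
        total += value;
    }
    assert(total == 60);
}
```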
Re: Debugging compile time memory usage
On Sunday, 24 June 2018 at 14:16:26 UTC, Kamil Koczurek wrote: I recently wrote a brainfuck compiler in D, which loads the BF source at compile time, performs some (simple) optimizations, translates everything to D and puts it into the source code with a mixin. I did manage to get some pretty good performance, but for some programs in brainfuck I have to use LDC instead of DMD because the latter runs out of memory. Is there a way for me to optimize my code in such a way that DMD will be able to compile it? D code: https://pastebin.com/fg1bqwnd BF program that works: https://github.com/erikdubbelboer/brainfuck-jit/blob/master/mandelbrot.bf BF program that makes DMD crash: https://github.com/fabianishere/brainfuck/blob/master/examples/hanoi.bf After putting BF code in code.bf and D in main.d, I compile it with the following command: dmd main.d -J./ Error msg: unable to fork: Cannot allocate memory DMD version: DMD64 D Compiler v2.080.0-dirty If you were able to compile it with LDC and not DMD, then that is most likely due to the LDC executable being 64-bit (limited to available system RAM+swaps) and the DMD executable being 32-bit (limited to 4GB). If you want to use DMD, build a 64-bit version yourself and complain on general that the releases are not 64-bit.
Re: opDispatch and alias this
On Monday, June 25, 2018 23:13:12 Seb via Digitalmars-d wrote: > Apparently three years ago it was we decided to ban alias this > and opDispatch in the same class. > What are your thoughts on this now? > Is anyone depending on using alias this + opDispatch together > like e.g. in https://github.com/dlang/phobos/pull/6596? I think that that choice made a lot of sense, but regardless, I think that the only reason that the issue of whether having alias this and opDispatch on the same type has anything to do with Final is because if we make it illegal, then Final has to be fixed so that it only does one, whereas if we don't make it illegal, then presumably, we can leave Final as-is (assuming that having both opDispatch and alias this on Final doesn't result in subtle bugs, which it may). Either way, conceptually, what Final is doing should use alias this. It's trying to implement head-const, and it really doesn't make sense to require an explicit cast to convert from head-const to fully const. That should work implicitly - which means using alias this. As far as I can tell, the only reason that opDispatch comes into the picture here at all is because Proxy was used in order to simplify Final's implementation. What Proxy is trying to do with sub-typing is fundamentally opposite of what Final is trying to do, since Proxy specifically does _not_ want implicit conversions, whereas Final does want them. If Final is fixed so that it doesn't reuse Proxy, then this whole problem goes away. Granted, that's a lot more work than what that PR currently does, but it's the right solution unless we're going to drop the idea of disallowing opDispatch and alias this on the same type, and given how alias this and opDispatch work, it seems highly risky in general to have them both be on the same type. It might actually work in some cases, but it sure seems like it's begging for subtle bugs. - Jonathan M Davis
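To make the alias this direction concrete: a head-const wrapper can expose a const view implicitly while forbidding rebinding. A rough sketch under the assumption that implicit conversion to const is the goal (hypothetical names; this is not Phobos's actual Final):

```d
struct MyFinal(T)
{
    private T payload;

    this(T value) { payload = value; }

    // Implicit conversion to a const view -- no explicit cast needed,
    // which is the behavior argued for in the thread.
    @property ref const(T) get() const { return payload; }
    alias get this;

    // Head-const: the wrapper itself cannot be reassigned.
    @disable void opAssign(MyFinal);
}

void useConst(const int x) {}

void main()
{
    auto f = MyFinal!int(42);
    useConst(f);    // implicit conversion via alias this
    assert(f == 42);
    static assert(!__traits(compiles, f = MyFinal!int(1)));
}
```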
Re: Can I parse this kind of HTML with arsd.dom module?
On Sunday, 24 June 2018 at 03:46:09 UTC, Dr.No wrote: to make it work. But if anyone else knows how to fix this, that will be very welcome too! Try it now. Thanks to Sandman83 on GitHub.
opDispatch and alias this
Apparently three years ago it was decided to ban alias this and opDispatch in the same class. What are your thoughts on this now? Is anyone depending on using alias this + opDispatch together like e.g. in https://github.com/dlang/phobos/pull/6596?
Re: Nullable!T with T of class type
On Monday, June 25, 2018 19:40:30 kdevel via Digitalmars-d-learn wrote: > Just stumbled over the following design: > > class S {...} > > class R { > >Nullable!S s; > > } > > s was checked in code like > > R r; > > if (r.s is null) >throw new Exception ("some error message"); > > At runtime the following was caught: > > fatal error: caught Throwable: Called `get' on null Nullable!S > > Why can't this programming error be detected at compile time? If you have a function that accepts Nullable!T, when that function is called, how on earth is it going to know whether the argument it received was null at compile time? That depends entirely on the argument, which could have come from anywhere. So, in the general case, the compiler can't possibly determine whether a variable is null or not at compile time. And as far as just within a function goes, Nullable is a library type. There's nothing special about it. The compiler has no magic knowledge about what it does or how it functions. So, how would it know that R r; r.foo(); was bad code? And honestly, this sort of problem actually gets surprisingly thorny even if you tried to bake this into the compiler for a built-in type. Sure, it would be straightforward for the compiler to see that int* i; *i = 42; is dereferencing null, but as soon as you start adding if statements, function calls, etc. it quickly becomes impossible for the compiler to accurately determine whether the pointer is null or not. Any such attempt will inevitably end up with false positives, forcing you to do stuff like assign variables values when you know that it's unnecessary - it would complicate the compiler considerably to even make the attempt. It's far simpler to just treat it as a runtime error. Even more restricted languages such as Java do that. 
Java does try to force you to initialize stuff (resulting in annoying false positives at times), but in general, it still can't guarantee when a variable is null or not and is forced to insert runtime null checks. > Is it possible > to "lower" the Nullable operations if T is a class type such that > there > is only one level of nullification? It's been discussed before, but doing so would make the behavior of Nullable depend subtly on what type it contained and risks bugs in generic code. Also, there _is_ code out there which depends on the fact that you can have a Nullable!MyClass where the variable has a value and that value is a null reference. If you really don't want the extra bool, then just don't use Nullable. Class references are already naturally nullable. - Jonathan M Davis
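The distinction the post draws -- a Nullable that holds a value which is itself a null class reference -- can be shown directly (a minimal illustration of the stated behavior):

```d
import std.typecons : Nullable;

class C {}

void main()
{
    Nullable!C n;
    assert(n.isNull);     // no value at all

    n = null;             // now holds a value: a null class reference
    assert(!n.isNull);
    assert(n.get is null);

    // A bare class reference is already naturally nullable:
    C c;                  // default-initialized to null, no wrapper needed
    assert(c is null);
}
```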
Associative Array that Supports upper/lower Ranges
I was in need of an associative array / dictionary object that could also support getting ranges of entries with keys below or above a given value. I couldn't find anything that would do this, and ended up using the RedBlackTree to store key/value pairs, and then wrapping the relevant functions with key lookups. I feel there was probably an easier way to do this, but I didn't find one. Regardless, if anyone else has this kind of problem, you can get around it like this: ```
module rbtree_map;

import std.container.rbtree;
import std.algorithm : map;
import std.functional : binaryFun;
import std.meta : allSatisfy;
import std.range : ElementType, isInputRange;
import std.traits : isDynamicArray, isImplicitlyConvertible;

/**
 * A dictionary or associative array backed by a Red-Black tree.
 */
unittest
{
    auto rbTreeMap = new RBTreeMap!(string, int)();
    rbTreeMap["a"] = 4;
    rbTreeMap["b"] = 2;
    rbTreeMap["c"] = 3;
    rbTreeMap["d"] = 1;
    rbTreeMap["e"] = 5;
    assert(rbTreeMap.length() == 5);
    assert("c" in rbTreeMap);
    rbTreeMap.removeKey("c");
    assert("c" !in rbTreeMap);
    rbTreeMap.lowerBound("c"); // Range of ("a", 4), ("b", 2)
    rbTreeMap.upperBound("c"); // Range of ("d", 1), ("e", 5)
}

final class RBTreeMap(KeyT, ValueT, alias KeyLessF = "a < b", bool allowDuplicates = false)
{
public:
    static struct Pair
    {
        KeyT key;
        ValueT value;
    }

    alias keyLess = binaryFun!KeyLessF;
    alias RedBlackTreeT = RedBlackTree!(Pair,
        (pair1, pair2) => keyLess(pair1.key, pair2.key), allowDuplicates);

    RedBlackTreeT rbTree;

    // Forward compatible methods like: empty(), length(), opSlice(), etc.
    alias rbTree this;

    this()
    {
        rbTree = new RedBlackTreeT();
    }

    this(Pair[] elems...)
    {
        rbTree = new RedBlackTreeT(elems);
    }

    this(PairRange)(PairRange pairRange)
    if (isInputRange!PairRange && isImplicitlyConvertible!(ElementType!PairRange, Pair))
    {
        rbTree = new RedBlackTreeT(pairRange);
    }

    override bool opEquals(Object rhs)
    {
        RBTreeMap that = cast(RBTreeMap) rhs;
        if (that is null)
            return false;
        return rbTree == that.rbTree;
    }

    /// Insertion
    size_t stableInsert(K, V)(K key, V value)
    if (isImplicitlyConvertible!(K, KeyT) && isImplicitlyConvertible!(V, ValueT))
    {
        return rbTree.stableInsert(Pair(key, value));
    }

    alias insert = stableInsert;

    ValueT opIndexAssign(ValueT value, KeyT key)
    {
        rbTree.stableInsert(Pair(key, value));
        return value;
    }

    /// Membership
    bool opBinaryRight(string op)(KeyT key) const
    if (op == "in")
    {
        return Pair(key) in rbTree;
    }

    /// Removal
    size_t removeKey(K...)(K keys)
    if (allSatisfy!(isImplicitlyConvertibleToKey, K))
    {
        KeyT[K.length] toRemove = [keys];
        return removeKey(toRemove[]);
    }

    // Helper for removeKey.
    private template isImplicitlyConvertibleToKey(K)
    {
        enum isImplicitlyConvertibleToKey = isImplicitlyConvertible!(K, KeyT);
    }

    size_t removeKey(K)(K[] keys)
    if (isImplicitlyConvertible!(K, KeyT))
    {
        auto keyPairs = keys.map!(key => Pair(key));
        return rbTree.removeKey(keyPairs);
    }

    size_t removeKey(KeyRange)(KeyRange keyRange)
    if (isInputRange!KeyRange && isImplicitlyConvertible!(ElementType!KeyRange, KeyT)
        && !isDynamicArray!KeyRange)
    {
        auto keyPairs = keyRange.map!(key => Pair(key));
        return rbTree.removeKey(keyPairs);
    }

    /// Ranges
    RedBlackTreeT.Range upperBound(KeyT key)
    {
        return rbTree.upperBound(Pair(key));
    }

    RedBlackTreeT.ConstRange upperBound(KeyT key) const
    {
        return rbTree.upperBound(Pair(key));
    }

    RedBlackTreeT.ImmutableRange upperBound(KeyT key) immutable
    {
        return rbTree.upperBound(Pair(key));
    }

    RedBlackTreeT.Range lowerBound(KeyT key)
    {
        return rbTree.lowerBound(Pair(key));
    }

    RedBlackTreeT.ConstRange lowerBound(KeyT key) const
    {
        return rbTree.lowerBound(Pair(key));
    }

    RedBlackTreeT.ImmutableRange lowerBound(KeyT key) immutable
    {
        return rbTree.lowerBound(Pair(key));
    }

    auto equalRange(KeyT key)
    {
        return rbTree.equalRange(Pair(key));
    }
}
```
Phobos and the Tools repo are now on DUB
Phobos -- It's now possible to access the latest version of Phobos's experimental packages through dub:

```
#!/usr/bin/env dub
/++ dub.sdl:
dependency "phobos:checkedint" version="~master"
+/
void main(string[] args)
{
    import stdx.checkedint; // From latest Phobos
    import std.stdio;       // DMD's Phobos
    writeln("checkedint: ", 2.checked + 3);
}
```

For now, only checkedint and allocator are exposed as subpackages.

FAQ
---

1) Why stdx and not std.experimental? Otherwise the linker would run into symbol conflicts.

2) Why ~master? D's versioning doesn't conform to SemVer yet. There are plans to change this soon, though.

3) What's the purpose? To allow usage of experimental packages like std.experimental.allocator in Dub packages. For example, at the moment the upcoming collections library (https://github.com/dlang-stdx/collections) doesn't compile with stable dmd or ldc, as it depends on very recent changes in std.experimental.allocator.

4) What are the next steps? At the moment the "allocator" subpackage doesn't fully work, as some parts of the allocator module depend on package(std) parts of Phobos (e.g. std.conv.emplaceRef). So fixing this is of course the next step. However, for now, there are no plans to expose the more stable parts of Phobos on dub.

5) Wait. Aren't Phobos and DMD supposed to be built in sync? It's certainly true that Druntime has a lot of dependencies on DMD, but most modules in Phobos (except e.g. std.traits or std.math) don't have such a strong dependency. This especially applies to the experimental modules, which were originally developed as dub packages (and probably should have stayed on dub).

6) What are your guarantees for stability? None. This is experimental.

Tools
-----

Almost all tools in the tools repo (https://github.com/dlang/tools) can be built & run with dub now. 
For example, this converts your Windows files to Posix line endings:

```
dub fetch dtools
dub run dtools:tolf -- mywindowsfile.d
```

The long-term goal here is to replace the win32.mak with a pure D/Dub build file.
[Issue 19025] Better definition of deallocateAll in ContiguousFreeList
https://issues.dlang.org/show_bug.cgi?id=19025 mmcoma...@gmail.com changed: What|Removed |Added CC||mmcoma...@gmail.com --
[Issue 18034] SIMD optimization issues
https://issues.dlang.org/show_bug.cgi?id=18034 mmcoma...@gmail.com changed: What|Removed |Added CC||mmcoma...@gmail.com --
Re: Visual D 0.47.0 released
On Sun, 24 Jun 2018 at 06:10, Rainer Schuetze via Digitalmars-d-announce wrote: > > Hi, > > a new release of Visual D has just been uploaded. Major changes are > > * improved Visual C++ project integration: better dependencies, >automatic libraries, name demangling > * new project wizard > * mago debugger: show vtable, dynamic type of interfaces, >symbol names of pointer address > > See http://rainers.github.io/visuald/visuald/VersionHistory.html for the > full version history. > > Visual D is a Visual Studio extension that adds D language support to > VS2008-2017. It is written in D, its source code can be found on github: > https://github.com/D-Programming-Language/visuald, pull requests welcome. > > An installer can be found at > http://rainers.github.io/visuald/visuald/StartPage.html > > Happy coding, > Rainer Thanks again for all the work on this one! This is a great release with loads of important improvements.
[Issue 18953] Win32: extern(C++) struct destructor not called correctly through runtime
https://issues.dlang.org/show_bug.cgi?id=18953 --- Comment #5 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/3a4707b4b15743ff4a068fe5ac6b983b6ce042c4 fix issue 18953 - extern(C++) struct destructor not called correctly through runtime Win32 already fixed by d08e0fb3bff6767bd28516f53393083d86f63045 other 32-bit platforms have the same issue https://github.com/dlang/dmd/commit/f95b301a78a4124904e85622f63215b8ce248e67 Merge pull request #8359 from rainers/issue18953 fix issue 18953 - 32-bit: extern(C++) struct destructor not called cor… --
[Issue 18984] Debugging stack struct's which are returned causes incorrect debuginfo.
https://issues.dlang.org/show_bug.cgi?id=18984 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
[Issue 18953] Win32: extern(C++) struct destructor not called correctly through runtime
https://issues.dlang.org/show_bug.cgi?id=18953 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
[Issue 18916] ICE using Typedef and __LINE__
https://issues.dlang.org/show_bug.cgi?id=18916 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
[Issue 18266] ICE: should allow reusing identifier in declarations in disjoint scopes in a function
https://issues.dlang.org/show_bug.cgi?id=18266 --- Comment #2 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/5e18ed610d8d946c1718f591806342e933788559 Fix Issue 18266 - ICE: should allow reusing identifier in declarations in disjoint scopes in a function https://github.com/dlang/dmd/commit/884fa1dcb056582de68bd76a13bd508f70d31ec0 Merge pull request #8370 from RazvanN7/Issue_18266 Fix Issue 18266 - ICE: should allow reusing identifier in declaration… --
[Issue 18984] Debugging stack struct's which are returned causes incorrect debuginfo.
https://issues.dlang.org/show_bug.cgi?id=18984 --- Comment #7 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/b3c9c561ce8a52b9997a0fef8912eecd296f8c32 fix issue 18984 - Debugging stack struct's which are returned causes incorrect debuginfo. change the names of the hidden return value and the NRVO variable https://github.com/dlang/dmd/commit/b9cf7458c2c6521bf78cd97822d1905e2e43da4e Merge pull request #8368 from rainers/issue18984 fix issue 18984 - stack struct's which are returned causes incorrect debuginfo. merged-on-behalf-of: Petar Kirov --
[Issue 18966] extern(C++) constructor should match C++ semantics assigning vtable
https://issues.dlang.org/show_bug.cgi?id=18966 --- Comment #3 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/403d5f0148abe60ac1836da9038c18cebd8b8644 fix Issue 18966 - extern(C++) constructor should match C++ semantics assigning vtable set vtbl after exlicit or implicit call to super() in C++ constructors https://github.com/dlang/dmd/commit/297844f035be1753cbbf07f9338c6383b2b62708 Merge pull request #8362 from rainers/issue18966 fix Issue 18966 - extern(C++) constructor should match C++ semantics … merged-on-behalf-of: David Nadlinger --
[Issue 18266] ICE: should allow reusing identifier in declarations in disjoint scopes in a function
https://issues.dlang.org/show_bug.cgi?id=18266 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
[Issue 18916] ICE using Typedef and __LINE__
https://issues.dlang.org/show_bug.cgi?id=18916 --- Comment #4 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/3b3e97904dd844d2c7546c528a4cb6fec19fe106 Fix Issue 18916 - ICE using Typedef and __LINE__ https://github.com/dlang/dmd/commit/663625bce0f0f392e1ddb7078d216604c2b69f61 Merge pull request #8310 from JinShil/fix_18916 Fix Issue 18916 - ICE using Typedef and __LINE__ --
Re: foreach / mutating iterator - How to do this?
On Monday, June 25, 2018 17:29:23 Robert M. Münch via Digitalmars-d-learn wrote:
> I have two foreach loops where the inner should change the iterator
> (append new entries) of the outer.
>
> foreach(a; candidates) {
>     foreach(b; a) {
>         if(...) candidates ~= additionalCandidate;
>     }
> }
>
> The foreach docs state that the collection must not change during
> iteration.
>
> So, how to best handle such a situation then? Using a plain for loop?

Either that or create a separate array containing the elements you're adding and then append that to candidates after the loop has terminated. Or, if all you're really trying to do is run an operation on a list of items, and in the process you get more items that you want to operate on but don't need to keep them around afterwards, you could just wrap the operation in a function and use recursion. e.g.

foreach(a; candidates)
{
    func(a);
}

void func(T)(T a)
{
    foreach(b; a)
    {
        if(...) func(additionalCandidate);
    }
}

But regardless, you can't mutate something while you're iterating over it with foreach, so you're either going to have to manually control the iteration yourself so that you can do it in a way that guarantees that it's safe to add elements while iterating, or you're going to have to adjust what you're doing so that it doesn't need to add to the list of items while iterating over it. The big issue with foreach is that if what it's iterating over is a range, then it copies it, and if it's not a range, it slices it (or if it defines opApply, that gets used). So,

foreach(e; range)

gets lowered to

for (auto __c = range; !__c.empty; __c.popFront())
{
    auto e = __c.front;
    ...
}

which means that range is copied, and it's then unspecified behavior as to what happens if you try to use the range after passing it to foreach (the exact behavior depends on how the range is implemented), meaning that you really shouldn't be passing a range to foreach and then still do anything with it. 
If foreach is given a container, then it slices it, e.g.

foreach(e; container)

becomes

for (auto __c = container[]; !__c.empty; __c.popFront())
{
    auto e = __c.front;
    ...
}

so it doesn't run into the copying problem, but it's still not a good idea to mutate the container while iterating. What happens when you try to mutate the container while iterating over a range from that container depends on the container, and foreach in general isn't supposed to iterate over something while it's being mutated. Dynamic and associative arrays get different lowerings than generic ranges or containers, but they're also likely to run into problems if you try to mutate them while iterating over them. So, if using a normal for loop instead of foreach fixes your problem, then there you go. Otherwise, rearrange what you're doing so that it doesn't need to add anything to the original list of items in the loop. Either way, trying to mutate what you're iterating over is going to cause bugs, albeit slightly different bugs depending on what you're iterating over. - Jonathan M Davis
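Jonathan's first suggestion, collecting additions in a separate array and appending after the loop, might look like this minimal sketch (the element types and the condition are made up for illustration):

```d
import std.stdio;

void main()
{
    int[][] candidates = [[1, 2], [3]];

    // Collect additions separately instead of mutating candidates mid-iteration.
    int[][] additions;
    foreach (a; candidates)
    {
        foreach (b; a)
        {
            if (b > 2) // hypothetical condition
                additions ~= [b * 10];
        }
    }

    candidates ~= additions; // safe: the foreach has already finished

    writeln(candidates); // [[1, 2], [3], [30]]
}
```

If the newly added candidates must themselves be processed, either repeat the pass until additions comes back empty, or use the recursive variant from the reply above.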
[Issue 19025] New: Better definition of deallocateAll in ContiguousFreeList
https://issues.dlang.org/show_bug.cgi?id=19025 Issue ID: 19025 Summary: Better definition of deallocateAll in ContiguousFreeList Product: D Version: D2 Hardware: All OS: All Status: NEW Severity: enhancement Priority: P1 Component: phobos Assignee: nob...@puremagic.com Reporter: mmcoma...@gmail.com Deallocation of all memory in ContiguousFreeList can be done without using parent.deallocateAll. The buffer used by ContiguousFreeList is always allocated by the parent, so deallocating it should be enough to implement deallocateAll. If I understand everything properly, an implementation like this would be enough:

bool deallocateAll()
{
    bool result = parent.deallocate(support);
    allocated = 0;
    return result;
}

I am not sure, but the current implementation might be wrong because it calls parent.deallocateAll() and the parent might have been used to allocate some other data, not only the buffer for ContiguousFreeList. --
Nullable!T with T of class type
Just stumbled over the following design:

class S { ... }

class R
{
    Nullable!S s;
}

s was checked in code like

R r;
...
if (r.s is null)
    throw new Exception("some error message");

At runtime the following was caught:

fatal error: caught Throwable: Called `get' on null Nullable!S

Why can't this programming error be detected at compile time? Is it possible to "lower" the Nullable operations if T is a class type such that there is only one level of nullification? BTW: https://dlang.org/library/std/typecons/nullable.html contains duplicate sections: Function nullable, Function nullable, Struct Nullable, Struct Nullable
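For what it's worth, the runtime error happens because `r.s is null` goes through Nullable's alias this, which calls get on an empty Nullable. The check that was probably intended uses isNull instead (a sketch assuming the S/R shape from the post, with r actually constructed):

```d
import std.stdio : writeln;
import std.typecons : Nullable;

class S {}

class R
{
    Nullable!S s;
}

void main()
{
    auto r = new R;

    // `r.s is null` would call get on the empty Nullable and throw.
    // isNull inspects the Nullable itself without touching the payload:
    if (r.s.isNull)
        writeln("s holds no value");
}
```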
Re: template sequence parameters treats member functions differently?
On 6/25/18 2:51 PM, aliak wrote: On Monday, 25 June 2018 at 15:06:42 UTC, Steven Schveighoffer wrote: I don't see any reason why the alias is to the function and not the contexted function. I don't see how it's any different from the ones which use inner functions. I can only agree - me no see either. And having no clue as to how the compiler is implemented, I cannot even conjecture :) Well, it's worth an enhancement request in any case. -Steve
Re: template sequence parameters treats member functions differently?
On Monday, 25 June 2018 at 15:06:42 UTC, Steven Schveighoffer wrote: On 6/24/18 5:19 PM, aliak wrote: [...] No, because the alias is an alias to the function, not the delegate. The act of taking the address creates the delegate, where the delegate's ptr is the context pointer (i.e. s), and the funcptr is the function that accepts the pointer (i.e. S.f). When you pass in s.f to an alias, you are actually passing in S.f. It's the fact that you are looking in the *namespace* of s when you do the alias. This is special for the compiler, and can't be deferred to later. Ahh, I see. Ah well. So not really much I can do here with this then I guess. Thanks for explaining though! BUT, I'm thinking this may be fixable, as it's inconsistent with inner functions:

auto foo(alias x)() { return x(); }

struct S
{
    int bar() { return 42; }
    // int baz() { return foo!bar; } // nope
}

void main()
{
    S s;
    int bar() { return 42; }
    assert(foo!bar() == 42); // ok
    // assert(foo!(s.bar) == 42); // nope
    int baz() { return s.bar; }
    assert(foo!baz() == 42); // ok!
}

I don't see any reason why the alias is to the function and not the contexted function. I don't see how it's any different from the ones which use inner functions. -Steve I can only agree - me no see either. And having no clue as to how the compiler is implemented, I cannot even conjecture :)
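Until that inconsistency is addressed, one workaround is to wrap the member call in a local lambda: the lambda itself carries the context, so the alias parameter receives a callable that can reach s (a sketch reusing foo from the post):

```d
auto foo(alias x)() { return x(); }

struct S
{
    int bar() { return 42; }
}

void main()
{
    S s;
    // foo!(s.bar)() fails: the alias binds to S.bar without s's context.
    // A lambda capturing s works, because the lambda has the context:
    assert(foo!(() => s.bar)() == 42);
}
```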
Futures
Hi, I am sure std.parallelism and vibe.d both have futures/channels/executors systems, but there appears not to be anything in Phobos as a futures system. Or am I wrong here? What is needed is a futures system with a single-threaded executor that can be integrated with GtkD, so as to make something for D akin to what is happening in gtk-rs. -- Russel. === Dr Russel Winder t: +44 20 7585 2200 41 Buckmaster Road m: +44 7770 465 077 London SW11 1EN, UK w: www.russel.org.uk
[Issue 16987] ABI error wrt. COM interfaces returning structs
https://issues.dlang.org/show_bug.cgi?id=16987 ki...@gmx.net changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --- Comment #1 from ki...@gmx.net --- This was fixed by https://github.com/dlang/dmd/pull/8330, for Win32 and Win64, and apparently wasn't COM-specific, but a Visual C++ ABI quirk. --
Re: overload .
On Monday, 25 June 2018 at 15:39:09 UTC, Mr.Bingo wrote: On Monday, 25 June 2018 at 13:58:54 UTC, aliak wrote: A.x is translated into A.opDispatch!"x" with no args. So I guess you can overload, or you can static if on a template parameter sequence:

import std.stdio;

struct S
{
    auto opDispatch(string name, Args...)(Args args)
    {
        static if (!Args.length)
        {
            return 3;
        }
        else
        {
            // set something
        }
    }
}

void main()
{
    S s;
    s.x = 3;
    writeln(s.x);
}

Cheers, - Ali Ok, for some reason using two different templates failed, but combining them into one passes: auto opDispatch(string name, T)(T a) auto opDispatch(string name)() Maybe it is a bug in the compiler that it only checks one opDispatch? Two opDispatches as in:

import std.stdio: writeln;

struct S
{
    void opDispatch(string name, T)(T t) { writeln(t); }
    auto opDispatch(string name)() { writeln("ret"); return 4; }
}

void main()
{
    S s;
    s.x;
    s.x = 4;
}

?? The above seems to work fine. Or maybe you meant something else?
[Issue 18978] Cannot create new projects in 0.47.0-beta1
https://issues.dlang.org/show_bug.cgi?id=18978 Thomas changed: What|Removed |Added Status|RESOLVED|VERIFIED --
Re: Visual D 0.47.0 released
On Sunday, 24 June 2018 at 13:08:53 UTC, Rainer Schuetze wrote: a new release of Visual D has just been uploaded Thanks Rainer, much appreciated. Looking forward to the many debugging improvements.
[Issue 18996] Inserting a type containing indirections into an std.container Array causes SIGILL(4). Illegal Instruction.
https://issues.dlang.org/show_bug.cgi?id=18996 --- Comment #9 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/druntime https://github.com/dlang/druntime/commit/1b0af1ee6030b233eba7d29629c5ea995d5df944 Fix issue 18996 - ProtoGC should support removing roots and ranges that were not originally added. Behavior is now consistent with conservative GC. https://github.com/dlang/druntime/commit/90c140f439e3ee719d8a01455dee5dc976e1f9d1 Merge pull request #2220 from schveiguy/fix18996 Fix issue 18996 - ProtoGC should support removing roots and ranges that were not originally added --
[Issue 19005] [REG2.081-b1] object.hashOf no longer works for std.datetime.date.Date
https://issues.dlang.org/show_bug.cgi?id=19005 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
[Issue 18996] Inserting a type containing indirections into an std.container Array causes SIGILL(4). Illegal Instruction.
https://issues.dlang.org/show_bug.cgi?id=18996 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
[Issue 19008] core.internal.convert.toUbyte doesn't work on enums
https://issues.dlang.org/show_bug.cgi?id=19008 --- Comment #1 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/druntime https://github.com/dlang/druntime/commit/3e233e7e88bbf84d27cf87c33d94ab85f6f6fc12 Fix Issue 19008 - core.internal.convert.toUbyte doesn't work on enums https://github.com/dlang/druntime/commit/709270989ec493e18d82b5bb62b81208b570317f Merge pull request #2226 from n8sh/convert-19008 Fix Issue 19008 - core.internal.convert.toUbyte doesn't work on enums --
[Issue 19005] [REG2.081-b1] object.hashOf no longer works for std.datetime.date.Date
https://issues.dlang.org/show_bug.cgi?id=19005 --- Comment #3 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/druntime https://github.com/dlang/druntime/commit/416d1dd1afc43ff423becca63801f5807e8efdb9 Fix Issue 19005 - [REG2.081-b1] object.hashOf no longer works for std.datetime.date.Date https://github.com/dlang/druntime/commit/c2b856c9d6b37d54a41bf10dc85dad761777a0ff Merge pull request #2227 from n8sh/hash-19005 Fix Issue 19005 - [REG2.081-b1] object.hashOf no longer works for std.datetime.date.Date --
[Issue 19008] core.internal.convert.toUbyte doesn't work on enums
https://issues.dlang.org/show_bug.cgi?id=19008 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
Re: Making sense of recursion
On 06/25/2018 07:45 PM, zbr wrote:

void mergeSort(int[] arr, int l, int r)
{
    if (l < r)                  // 1
    {
        int m = l + (r-l)/2;    // 2
        mergeSort(arr, l, m);   // 3
        mergeSort(arr, m+1, r); // 4
        merge(arr, l, m, r);    // 5
    }                           // 6
}                               // 7

mergeSort(arr, 0, 4);

When I see this, I visualize the recursion to perform this way: mergeSort(arr, 0, 4): 0 < 4 ? true: mergeSort(0, 2): 0 < 2 ? true: mergeSort(0, 1): 0 < 1 ? true: mergeSort(0, 0): 0 < 0 ? false: //reach the end of mergeSort / reach line 6 and then 7 I don't see the computer ever reaching line 4 and 5? Obviously I'm wrong but where is my mistake?

You seem to think that a recursive call takes over completely, and that the caller ceases to exist. That's not so. mergeSort does call "itself", but that means there are two active calls now. And when it calls "itself" again, there are three. And so on. When an inner call returns, the outer one resumes with the next line as usual. It's not just a list of recursive calls, it's a tree:

mergeSort(0, 3)
    mergeSort(0, 1)     // line 3
        mergeSort(0, 0) // line 3
        mergeSort(1, 1) // line 4
        merge           // line 5
    mergeSort(2, 3)     // line 4
        mergeSort(2, 2) // line 3
        mergeSort(3, 3) // line 4
        merge           // line 5
    merge               // line 5
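One way to see that call tree for yourself is to instrument the recursion with an indented trace. In this sketch, merge is replaced by a trace line, since only the order of the calls matters here, not the actual sorting:

```d
import std.array : replicate;
import std.stdio : writefln;

int depth; // current recursion depth, used for indentation

void mergeSort(int[] arr, int l, int r)
{
    writefln("%smergeSort(%s, %s)", "    ".replicate(depth), l, r);
    if (l < r)
    {
        int m = l + (r - l) / 2;
        ++depth;
        mergeSort(arr, l, m);     // line 3 runs to completion first...
        mergeSort(arr, m + 1, r); // ...and only then does line 4 start
        writefln("%smerge(%s, %s, %s)", "    ".replicate(depth), l, m, r);
        --depth;
    }
}

void main()
{
    int[] arr = [3, 1, 2, 0];
    mergeSort(arr, 0, 3); // prints a tree matching the visualization in the reply
}
```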
Re: Making sense of recursion
On Monday, 25 June 2018 at 17:45:01 UTC, zbr wrote: Hi, this question is not specifically D related but I'll just ask anyway. Consider the following snippet: [...] Your mistake is in your visualization :-) But... more like: 0 < 4 ? true : mergeSort(0,2) && mergeSort(3, 4) And so on. I.e., it's not either-or whether to run the second mergeSort; they both happen.
Making sense of recursion
Hi, this question is not specifically D related but I'll just ask anyway. Consider the following snippet:

void mergeSort(int[] arr, int l, int r)
{
    if (l < r)                  // 1
    {
        int m = l + (r-l)/2;    // 2
        mergeSort(arr, l, m);   // 3
        mergeSort(arr, m+1, r); // 4
        merge(arr, l, m, r);    // 5
    }                           // 6
}                               // 7

mergeSort(arr, 0, 4);

When I see this, I visualize the recursion to perform this way:

mergeSort(arr, 0, 4): 0 < 4 ? true:
    mergeSort(0, 2): 0 < 2 ? true:
        mergeSort(0, 1): 0 < 1 ? true:
            mergeSort(0, 0): 0 < 0 ? false: // reach the end of mergeSort / reach line 6 and then 7

I don't see the computer ever reaching line 4 and 5? Obviously I'm wrong but where is my mistake? Thanks.
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Monday, 25 June 2018 at 00:35:40 UTC, Jonathan M Davis wrote: On Sunday, June 24, 2018 23:53:09 Timoses via Digitalmars-d wrote: On Sunday, 24 June 2018 at 23:34:49 UTC, Per Nordlöw wrote: > Provided that > > __traits(allMembers, E) > > is a cheap operation as it's called once for every enumerator. I could get it out of the loop; if I do as > > @property string toString() @safe pure nothrow @nogc > { > > final switch (_enum) > { > > enum members = __traits(allMembers, E); enum members = [__traits(allMembers, E)]; seems to work Or if you want it to stay an AliasSeq, then just use Alias or AliasSeq on it. e.g. alias members = AliasSeq!(__traits(allMembers, E)); Given that the result of __traits in this case is an AliasSeq, I have no idea why the grammar doesn't allow you to just use the result of __traits directly as an AliasSeq with something like alias members = __traits(allMembers, E); but it doesn't. So, you have to wrap it, dumb as that may be. There's a bug report about it in bugzilla somewhere. https://issues.dlang.org/show_bug.cgi?id=16390 I guess? First search result :D
Re: Bug on website code.dlang.org
On Monday, 25 June 2018 at 14:45:31 UTC, Miguel L wrote: Sorry, I don't know where is the right place to report a bug on the web site. I just noticed searching for "code" in code.dlang.org generates this crash: I suppose it's related to: https://forum.dlang.org/post/tbdxlunlactuypmep...@forum.dlang.org https://forum.dlang.org/post/qnityxrrcqpvhfqvj...@forum.dlang.org -> ? https://github.com/dlang/dub-registry/issues/296
Re: foreach / mutating iterator - How to do this?
On Mon, Jun 25, 2018 at 05:29:23PM +0200, Robert M. Münch via Digitalmars-d-learn wrote: > I have two foreach loops where the inner should change the iterator > (append new entries) of the outer. > > foreach(a, candidates) { > foreach(b, a) { > if(...) candidates ~= additionalCandidate; > } > } > > The foreach docs state that the collection must not change during > iteration. > > So, how to best handle such a situation then? Using a plain for loop? [...] Yes. T -- The fact that anyone still uses AOL shows that even the presence of options doesn't stop some people from picking the pessimal one. - Mike Ellis
Re: Code failing unknown reason out of memory, also recursive types
On Monday, 25 June 2018 at 14:41:28 UTC, rikki cattermole wrote: Let me get this straight, you decided to max out your memory address space /twice over/ before you hit run time, and think that this would be a good idea? Well, that clause was supposed to allocate a dynamic array instead of a tuple. Somehow it got reverted. It works when allocating the dynamic array. How about the compiler predicting how big a variable is going to be and, if it exceeds memory, giving a proper error instead of an out-of-memory error? If it had given me a line number I would have seen the problem immediately.
Boston D presentation Wednesday 6/27
Hi all, We are going to meet on 6/27 at 6 pm at the Capital One cafe in the Back Bay in Boston (https://www.capitalone.com/local/boston-backbay). Sameer Pradhan, a regular at the Boston D meetup group, will show us his NLP tool that he is planning to port from Python to D, called OntoNotes. I'm sure we will also find some place for drinks/dinner afterward, and I'd love to see some new D-velopers in our area, come and join us! Note: I'm planning to make the Boston D meetings have a more regular schedule: at least once a month, with or without any presentations. I want to make sure the group stays together, and I think we have some good resources in our area to talk about how D can be used to help anyone with any project. Kind of like an AMA in person :) On that note, if anyone has any ideas for good places to hang out and talk D, I'm open to suggestions. The cafe is nice, but I also think it would be nice to have places like the hotels where we hang out and code at DConf. -Steve
Re: overload .
On Monday, 25 June 2018 at 13:58:54 UTC, aliak wrote: On Monday, 25 June 2018 at 13:37:01 UTC, Mr.Bingo wrote: One can overload assignment and dispatch so that something like A.x = ... is valid when x is not a typical member but gets resolved by the above functions. Therefore, I can create a member for assignment. How can I create a member for getting the value? A.x = 3; // Seems to get translated into A.opDispatch!("x")(3) works but foo(A.x); // fails and the compiler says x does not exist I need something consistent with opDot. I am trying to create "virtual" (not as in function) fields and I can only get assignment but not an accessor. A.x is translated into A.opDispatch!"x" with no args. So I guess you can overload, or you can static if on a template parameter sequence:

import std.stdio;

struct S
{
    auto opDispatch(string name, Args...)(Args args)
    {
        static if (!Args.length)
        {
            return 3;
        }
        else
        {
            // set something
        }
    }
}

void main()
{
    S s;
    s.x = 3;
    writeln(s.x);
}

Cheers, - Ali Ok, for some reason using two different templates failed, but combining them into one passes: auto opDispatch(string name, T)(T a) auto opDispatch(string name)() Maybe it is a bug in the compiler that it only checks one opDispatch?
Re: Expanding tool (written in D) use, want advice
On Friday, 22 June 2018 at 14:45:46 UTC, Jesse Phillips wrote: Should I be looking more at the benefits of having D as a tool? It was a good choice for me since I know D so well (and other reasons at the time), but C# is a reasonable language in this space. I'm thinking, like should I go into how learning D wouldn't be too hard for new hire since it has similar syntax to C# and so on. One strong argument to make is based on performance. Give them numbers about how fast your tool runs and make it efficient. The idea is that because the linting tool will be run for every incremental build a developer makes, slower running times are a barrier to productivity. But once performance targets are defined, and if the company thinks that C# can also meet those targets, then really it's their call. Ultimately it is their company and their assets. In such a case, I would generalize your tool for use outside of the specific context of your company, and make it the basis of an open source project.
foreach / mutating iterator - How to do this?
I have two foreach loops where the inner should change the iterator (append new entries) of the outer.

foreach(a; candidates) {
    foreach(b; a) {
        if(...) candidates ~= additionalCandidate;
    }
}

The foreach docs state that the collection must not change during iteration. So, how to best handle such a situation then? Using a plain for loop? -- Robert M. Münch http://www.saphirion.com smarter | better | faster
Re: template sequence parameters treats member functions differently?
On 6/24/18 5:19 PM, aliak wrote: Hi, I'm having some issues with template sequence parameters, it seems they are not typed as delegates inside a template, but are outside. I.e.

template T(V...) { alias T = typeof(V[0]); }
struct S { void f() {} }
S s;
pragma(msg, T!(s.f));      // void function()
pragma(msg, typeof(&s.f)); // void delegate()

How come the output is different? Is it supposed to be the same? No, because the alias is an alias to the function, not the delegate. The act of taking the address creates the delegate, where the delegate's ptr is the context pointer (i.e. s), and the funcptr is the function that accepts the pointer (i.e. S.f). When you pass in s.f to an alias, you are actually passing in S.f. It's the fact that you are looking in the *namespace* of s when you do the alias. This is special for the compiler, and can't be deferred to later. BUT, I'm thinking this may be fixable, as it's inconsistent with inner functions:

auto foo(alias x)() { return x(); }

struct S
{
    int bar() { return 42; }
    // int baz() { return foo!bar; } // nope
}

void main()
{
    S s;
    int bar() { return 42; }
    assert(foo!bar() == 42); // ok
    // assert(foo!(s.bar) == 42); // nope
    int baz() { return s.bar; }
    assert(foo!baz() == 42); // ok!
}

I don't see any reason why the alias is to the function and not the contexted function. I don't see how it's any different from the ones which use inner functions. -Steve
Bug on website code.dlang.org
Sorry, I don't know where is the right place to report a bug on the web site. I just noticed searching for "code" in code.dlang.org generates this crash: https://code.dlang.org/search?q=code 500 - Internal Server Error Internal Server Error Internal error information: vibe.db.mongo.connection.MongoDriverException@../../.dub/packages/vibe-d-0.8.4/vibe-d/mongodb/vibe/db/mongo/cursor.d(304): Query failed. Does the database exist? ??:? [0xa7d1ee] ??:? [0xa9dd0a] ??:? [0xa88ad2] exception.d:421 [0x50ea03] exception.d:388 [0x5050dd] cursor.d:304 [0x51ba4c] dbcontroller.d:393 [0x51c5b3] dbcontroller.d:253 [0x51c349] dbcontroller.d:367 [0x539cb4] cursor.d:233 [0x51b538] cursor.d:60 [0x47fdd9] iteration.d:587 [0x64c415] array.d:134 [0x47a42e] dbcontroller.d:325 [0x44b3e3] registry.d:103 [0x418162] web.d:474 [0x4180a1] web.d:1024 [0x5c116a] web.d:194 [0x5c0d5b] router.d:218 [0x91635e] router.d:674 [0x91906e] router.d:607 [0x91602a] router.d:211 [0x915d94] server.d:2247 [0x91ed67] server.d:241 [0x91d396] server.d:233 [0x91cfd7] server.d:2006 [0x926653] libevent2_tcp.d:612 [0x9fd22c] core.d:632 [0x49afcd] core.d:1241 [0x9e3f7f] ??:? [0xa80c81]
Re: Code failing unknown reason out of memory, also recursive types
Let me get this straight, you decided to max out your memory address space /twice over/ before you hit run time, and think that this would be a good idea?
Code failing unknown reason out of memory, also recursive types
import std.stdio; union Vector(T, size_t N = size_t.max) { import std.range, std.typecons, std.meta, std.algorithm, std.conv, std.math; static if (N == size_t.max) // For size_t.max sets N to be infinite/dynamic; { mixin("Tuple!("~"T,".repeat(N).join()~") data;"); @property size_t Length() { return rect.length; } @property double norm(size_t n = 2)() { return (iota(0,data.length).map!(a => data[a].pow(n))).pow(1/cast(double)n); } } else { mixin("Tuple!("~"T,".repeat(N).join()~") data;"); @property size_t Length() { return N; } @property double norm(size_t n = 2)() { mixin("return ("~(iota(0,N).map!(a => "data["~to!string(a)~"].pow(n)").join("+"))~").pow(1/cast(double)n);"); } } auto opDispatch(string s, Args...)(Args v) if (s.length > 1 && s[0] == 'x') { static if (N == size_t.max) if (data.length < to!int(s[1..$])) for(int i = 0; i < to!int(s[1..$]) - data.length; i++) data ~= 0; static if (Args.length == 0) mixin(`return data[`~s[1..$]~`];`); else static if (Args.length == 1) mixin(`data[`~s[1..$]~`] = v[0]; `); } alias data this; } void main() { import std.math, std.variant; Vector!(Algebraic!(Vector!int, int)) v; //v.x1 = 3; //v.x2 = 4; //v.x3 = 5; //writeln(v.x3); //writeln(v.norm); } Trying to create a vector of vectors where any entry can be another vector of vectors or an int.
Re: overload .
On Monday, 25 June 2018 at 13:37:01 UTC, Mr.Bingo wrote: One can overload assignment and dispatch so that something like A.x = ... is valid when x is not a typical member but gets resolved by the above functions. Therefore, I can create a member for assignment. How can I create a member for getting the value? A.x = 3; // Seems to get translated into A.opDispatch!("x")(3) works but foo(A.x); // fails and the compiler says x does not exist I need something consistent with opDot. I am trying to create "virtual" (not as in function) fields and I can only get assignment but not accessor. A.x is translated into A.opDispatch!"x" with no args. So I guess you can either overload, or you can static if on a template parameter sequence: import std.stdio; struct S { auto opDispatch(string name, Args...)(Args args) { static if (!Args.length) { return 3; } else { // set something } } } void main() { S s; s.x = 3; writeln(s.x); } Cheers, - Ali
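Ali's sketch always returns 3; a slightly fuller, hypothetical version backs the "virtual" fields with an associative array (`store_` is a made-up name), so the value set through `opDispatch` is the one read back:

```d
import std.stdio;

struct S
{
    private int[string] store_; // hypothetical backing storage

    auto opDispatch(string name, Args...)(Args args)
    {
        static if (Args.length == 0)
            return store_.get(name, 0); // getter: s.x -> s.opDispatch!"x"()
        else
            store_[name] = args[0];     // setter: s.x = 3 -> s.opDispatch!"x"(3)
    }
}

void main()
{
    S s;
    s.x = 3;
    writeln(s.x); // prints 3
}
```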
Re: VisualD / fatal error C1905: Front-End and Back-End are not compatible (have to use the same processor)
With the latest release I still have the same problem. I really don't have any idea what the cause could be or how to fix it... Anyone? Viele Grüsse. Robert M. Münch On 2018-05-21 17:46:45 +, Robert M. Münch said: A project I can compile via the command line and dub gives an error in VisualD. I created the VisualD configuration through dub: fatal error C1905: Front-End und Back-End sind nicht kompatibel (müssen den gleichen Prozessor verwenden). This translates to: "Front-End and Back-End are not compatible (have to use the same processor)". Well, I don't have a clue what this should mean, nor how this could happen. It sounds a bit as if the compiler & linker are not using the same architecture (I want to use x64), but I didn't find any options to check/change this. Any ideas? -- Robert M. Münch http://www.saphirion.com smarter | better | faster
overload .
One can overload assignment and dispatch so that something like A.x = ... is valid when x is not a typical member but gets resolved by the above functions. Therefore, I can create a member for assignment. How can I create a member for getting the value? A.x = 3; // Seems to get translated into A.opDispatch!("x")(3) works but foo(A.x); // fails and the compiler says x does not exist I need something consistent with opDot. I am trying to create "virtual" (not as in function) fields and I can only get assignment but not accessor.
Re: Wrapping a forward range in another forward range
On Sunday, 24 June 2018 at 21:28:06 UTC, aliak wrote: On Sunday, 24 June 2018 at 20:33:32 UTC, Rudy Raab wrote: So I have an XLSX (MS Excel 2007+ file format) library that I wrote (https://github.com/TransientResponse/dlang-xlsx) that I recently converted from std.xml to dxml. That went well and it still works (much faster too). [...] I think it's the isSomeChar!(ElementType!R), not the isRandomAccessRange (because string isSomeString and !isSomeChar)? Cheers, - Ali Changing it to isSomeString!(ElementType!R) moves the error to my empty() function: ``` source\xlsx.d(205,22): Error: template std.range.primitives.empty cannot deduce function from argument types !()(XLSheet!(string[])), candidates are: C:\D\dmd2\windows\bin\..\..\src\phobos\std\range\primitives.d(2090,16): std.range.primitives.empty(T)(auto ref scope const(T) a) if (is(typeof(a.length) : size_t) || isNarrowString!T) ``` I tried implementing a length() function (the number of rows remaining in the range, which is known at runtime), but the error remains.
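For what it's worth, the usual cure in this situation is to give the wrapper its own range members instead of leaning on the free-function `empty` (which only works for types with a `length` and for narrow strings). A minimal sketch of the shape std.range expects, with hypothetical names:

```d
import std.range.primitives : isForwardRange;

// Hypothetical wrapper: once empty/front/popFront/save are members,
// range algorithms find them directly and no free-function overload
// of empty is needed.
struct Wrap(R) if (isForwardRange!R)
{
    R inner;

    @property bool empty() { return inner.empty; }
    @property auto front() { return inner.front; }
    void popFront() { inner.popFront(); }
    @property auto save() { return Wrap(inner.save); }
}

void main()
{
    import std.range : iota, walkLength;
    auto w = Wrap!(typeof(iota(3)))(iota(3));
    assert(walkLength(w) == 3);
}
```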
Re: Determine if CTFE or RT
On Monday, 25 June 2018 at 10:49:26 UTC, Simen Kjærås wrote: On Monday, 25 June 2018 at 09:36:45 UTC, Martin Tschierschke wrote: I am not sure that I understood it right, but there is a way to detect the status of a parameter: My question was different, but I wished to get a ctRegex! or regex used depending on the expression: import std.regex:replaceAll,ctRegex,regex; auto reg(alias var)(){ static if (__traits(compiles, {enum ctfeFmt = var;}) ){ // "Promotion" to compile time value enum ctfeReg = var ; pragma(msg, "ctRegex used"); return(ctRegex!ctfeReg); }else{ return(regex(var)); pragma(msg,"regex used"); } } So now I can always use reg!("") and let the compiler decide. To speed up compilation I made an additional switch, so that when using DMD (for development) the runtime version is always used. The trick is to use the alias var in the declaration and check if it can be assigned to an enum. The only thing is that you now always use the !() compile-time parameter to call the function, even when in the end it is translated to a runtime call: reg!("") and not reg("..."). Now try reg!("prefix" ~ var) or reg!(func(var)). This works in some limited cases, but falls apart when you try something more involved. It can sorta be coerced into working by passing lambdas: template ctfe(T...) if (T.length == 1) { import std.traits : isCallable; static if (isCallable!(T[0])) { static if (is(typeof({enum a = T[0]();}))) { enum ctfe = T[0](); } else { alias ctfe = T[0]; } } else { static if (is(typeof({enum a = T[0];}))) { enum ctfe = T[0]; } else { alias ctfe = T[0]; } } } string fun(string s) { return s; } unittest { auto a = ctfe!"a"; string b = "a"; auto c = ctfe!"b"; auto d = ctfe!("a" ~ b); // Error: variable b cannot be read at compile time auto e = ctfe!(() => "a" ~ b); auto f = ctfe!(fun(b)); // Error: variable b cannot be read at compile time auto g = ctfe!(() => fun(b)); } -- Simen This doesn't work, the delegate only hides the error until you call it.
auto also does not detect enums. Ideally, the result should be a manifest constant if it was precomputed... this allows chaining of optimizations: auto x = 3; auto y = foo(x); the compiler realizes x is effectively an enum int, and then it can also precompute foo(x). Since auto converts to a runtime type immediately, it prevents any such optimizations and template tricks.
Re: Parenthesis around if/for/while condition is not necessary
On Monday, 25 June 2018 at 10:38:49 UTC, Basile B. wrote: On Monday, 25 June 2018 at 10:36:46 UTC, Basile B. wrote: On Sunday, 24 June 2018 at 23:08:15 UTC, aliak wrote: Wow nice, that was quick, would it be much more to make it so that braces are required with if statements that do not start with an open paren? It's in the commit. https://github.com/BBasile/dmd/commit/5455a65c8fdee5a6d198782d1f168906b59e6d3d#diff-cd066d37445cac534313c0137c2d4bbeR5599 Indeed! Totally missed that! PR I!!! :D
[Issue 19018] Lexer allows invalid integer literals, like `0x`
https://issues.dlang.org/show_bug.cgi?id=19018 github-bugzi...@puremagic.com changed: Status: NEW → RESOLVED; Resolution: --- → FIXED --
[Issue 19018] Lexer allows invalid integer literals, like `0x`
https://issues.dlang.org/show_bug.cgi?id=19018 --- Comment #1 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/37ff0dc4376b1617066be6435ac3cedd86f7cc7f Fix Issue 19018 - deprecate invalid integer literal https://github.com/dlang/dmd/commit/362f6c4f3c9c73a8e40a93bde1cbf792cd55dfdf Merge pull request #8396 from kubo39/invalid-integer-literal Fix Issue 19018 - lexer do not allow invalid integer literal merged-on-behalf-of: Jacob Carlborg --
Re: Determine if CTFE or RT
On Monday, 25 June 2018 at 09:36:45 UTC, Martin Tschierschke wrote: I am not sure that I understood it right, but there is a way to detect the status of a parameter: My question was different, but I wished to get a ctRegex! or regex used depending on the expression: import std.regex:replaceAll,ctRegex,regex; auto reg(alias var)(){ static if (__traits(compiles, {enum ctfeFmt = var;}) ){ // "Promotion" to compile time value enum ctfeReg = var ; pragma(msg, "ctRegex used"); return(ctRegex!ctfeReg); }else{ return(regex(var)); pragma(msg,"regex used"); } } So now I can always use reg!("") and let the compiler decide. To speed up compilation I made an additional switch, so that when using DMD (for development) the runtime version is always used. The trick is to use the alias var in the declaration and check if it can be assigned to an enum. The only thing is that you now always use the !() compile-time parameter to call the function, even when in the end it is translated to a runtime call: reg!("") and not reg("..."). Now try reg!("prefix" ~ var) or reg!(func(var)). This works in some limited cases, but falls apart when you try something more involved. It can sorta be coerced into working by passing lambdas: template ctfe(T...) if (T.length == 1) { import std.traits : isCallable; static if (isCallable!(T[0])) { static if (is(typeof({enum a = T[0]();}))) { enum ctfe = T[0](); } else { alias ctfe = T[0]; } } else { static if (is(typeof({enum a = T[0];}))) { enum ctfe = T[0]; } else { alias ctfe = T[0]; } } } string fun(string s) { return s; } unittest { auto a = ctfe!"a"; string b = "a"; auto c = ctfe!"b"; auto d = ctfe!("a" ~ b); // Error: variable b cannot be read at compile time auto e = ctfe!(() => "a" ~ b); auto f = ctfe!(fun(b)); // Error: variable b cannot be read at compile time auto g = ctfe!(() => fun(b)); } -- Simen
Re: Determine if CTFE or RT
On 06/25/2018 07:47 AM, Mr.Bingo wrote: The docs say that CTFE is used only when explicit, I was under the impression that it would attempt to optimize functions if they could be computed at compile time. The halting problem has nothing to do with this. The ctfe engine already complains when one recurses too deep, it is not difficult to have a time-out function that cancels the computation within some user-definable time limit... and since failure can simply fall through and use the rtfe, it is not a big deal. The problem then, if D can't arbitrarily use ctfe, means that there should be a way to force ctfe optionally! A D compiler is free to precompute whatever it sees fit, as an optimization. It's just not called "CTFE" then, and `__ctfe` will be false during that kind of precomputation. For example, let's try compiling this code based on an earlier example of yours: int main() { return foo(3) + foo(8); } int foo(int i) { return __ctfe && i == 3 ? 1 : 2; } `dmd -O -inline` compiles that to: <_Dmain>: 0: 55 push rbp 1: 48 8b ec mov rbp,rsp 4: b8 04 00 00 00 mov eax,0x4 9: 5d pop rbp a: c3 ret As expected, `ldc2 -O` is even smarter: <_Dmain>: 0: b8 04 00 00 00 mov eax,0x4 5: c3 ret Both compilers manage to eliminate the calls to `foo`. They have been precomputed. `__ctfe` was false, though, because the term "CTFE" only covers the forced/guaranteed kind of precomputation, not the optimization.
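The distinction above shows up directly in code: `__ctfe` is true only under forced CTFE (e.g. an `enum` initializer), never during optimizer precomputation:

```d
int foo(int i) { return __ctfe ? 1 : 2; }

// enum forces CTFE, so __ctfe is true during this evaluation
enum atCompileTime = foo(0);
static assert(atCompileTime == 1);

void main()
{
    // a plain runtime call: __ctfe is false here, even if the
    // optimizer later folds the call away
    assert(foo(0) == 2);
}
```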
Re: Parenthesis around if/for/while condition is not necessary
On Sunday, 24 June 2018 at 23:08:15 UTC, aliak wrote: On Sunday, 24 June 2018 at 11:27:12 UTC, Basile B. wrote: On Saturday, 23 June 2018 at 06:24:29 UTC, Basile B. wrote: On Saturday, 23 June 2018 at 06:18:53 UTC, user1234 wrote: On Saturday, 23 June 2018 at 05:09:13 UTC, aedt wrote: On Saturday, 23 June 2018 at 04:45:07 UTC, user1234 wrote: [...] Same thing as the following: return a && b; I'm not saying to drop parens completely, I'm saying why is it not optional. D seems to have no problem with x.to!string or std.lines I agree that this would be consistent with certain parts of the D syntax, such as the parens-less single template parameter. Someone has to make a DIP for this Maybe, but this is a simple parser thing. For example, after reading the discussion here I have tested the idea in my toy programming language (https://github.com/BBasile/styx/commit/83c96d8a789aa82f9bed254ab342ffc4aed4fd88) and I believe that for D this would be just as simple (< 20 SLOC, w/o the tests). Otherwise we're good for one of these sterile NG discussions leading to nothing, i.e. intellectual mast... well, guess the word. I'm tempted to try this in DMDFE. The change is simple enough that if it gets rejected, not much time is lost. FYI this works fine; as expected it's just some small parser changes. I didn't touch for and foreach for now. I think that SwitchStatement is a candidate too. https://github.com/BBasile/dmd/commit/5455a65c8fdee5a6d198782d1f168906b59e6d3d However note that there's a nice thing with the phobos style that won't be that nice anymore: if (condition) action(); ^ if condition) action(); ---^ ^ It's not nicely aligned anymore! Wow nice, that was quick, would it be much more to make it so that braces are required with if statements that do not start with an open paren? It's in the commit.
Re: Parenthesis around if/for/while condition is not necessary
On Monday, 25 June 2018 at 10:36:46 UTC, Basile B. wrote: On Sunday, 24 June 2018 at 23:08:15 UTC, aliak wrote: On Sunday, 24 June 2018 at 11:27:12 UTC, Basile B. wrote: On Saturday, 23 June 2018 at 06:24:29 UTC, Basile B. wrote: On Saturday, 23 June 2018 at 06:18:53 UTC, user1234 wrote: On Saturday, 23 June 2018 at 05:09:13 UTC, aedt wrote: On Saturday, 23 June 2018 at 04:45:07 UTC, user1234 wrote: [...] Same thing as the following: return a && b; I'm not saying to drop parens completely, I'm saying why is it not optional. D seems to have no problem with x.to!string or std.lines I agree that this would be consistent with certain parts of the D syntax, such as the parens-less single template parameter. Someone has to make a DIP for this Maybe, but this is a simple parser thing. For example, after reading the discussion here I have tested the idea in my toy programming language (https://github.com/BBasile/styx/commit/83c96d8a789aa82f9bed254ab342ffc4aed4fd88) and I believe that for D this would be just as simple (< 20 SLOC, w/o the tests). Otherwise we're good for one of these sterile NG discussions leading to nothing, i.e. intellectual mast... well, guess the word. I'm tempted to try this in DMDFE. The change is simple enough that if it gets rejected, not much time is lost. FYI this works fine; as expected it's just some small parser changes. I didn't touch for and foreach for now. I think that SwitchStatement is a candidate too. https://github.com/BBasile/dmd/commit/5455a65c8fdee5a6d198782d1f168906b59e6d3d However note that there's a nice thing with the phobos style that won't be that nice anymore: if (condition) action(); ^ if condition) action(); ---^ ^ It's not nicely aligned anymore! Wow nice, that was quick, would it be much more to make it so that braces are required with if statements that do not start with an open paren? It's in the commit. https://github.com/BBasile/dmd/commit/5455a65c8fdee5a6d198782d1f168906b59e6d3d#diff-cd066d37445cac534313c0137c2d4bbeR5599
Re: Determine if CTFE or RT
On Monday, 25 June 2018 at 08:05:53 UTC, Mr.Bingo wrote: On Monday, 25 June 2018 at 07:02:24 UTC, Jonathan M Davis wrote: On Monday, June 25, 2018 05:47:30 Mr.Bingo via Digitalmars-d-learn wrote: The problem then, if D can't arbitrarily use ctfe, means that there should be a way to force ctfe optionally! If you want to use CTFE, then give an enum the value of the expression you want calculated. If you want to do it in place, then use a template such as template ctfe(alias exp) { enum ctfe = exp; } so that you get stuff like func(ctfe!(foo(42))). I would be extremely surprised if the compiler is ever changed to just try CTFE just in case it will work as an optimization. That would make it harder for the programmer to understand what's going on, and it would balloon compilation times. If you want to write up a DIP on the topic and argue for rules on how CTFE could and should function with the compiler deciding to try CTFE on some basis rather than it only being done when it must be done, then you're free to do so. https://github.com/dlang/DIPs But I expect that you will be sorely disappointed if you ever expect the compiler to start doing CTFE as an optimization. It's trivial to trigger it explicitly on your own, and compilation time is valued far too much to waste it on attempting CTFE when in the vast majority of cases, it's going to fail. And it's worked quite well thus far to have it work only cases when it's actually needed - especially with how easy it is to make arbitrary code run during CTFE simply by doing something like using an enum. - Jonathan M Davis You still don't get it! It is not trivial! It is impossible to trigger it! You are focused far too much on the optimization side when it is only an application that takes advantage of the ability for rtfe to become ctfe when told, if it is possible. I don't know how to make this any simpler, sorry... I guess we'll end it here. 
I am not sure that I understood it right, but there is a way to detect the status of a parameter: My question was different, but I wished to get a ctRegex! or regex used depending on the expression: import std.regex:replaceAll,ctRegex,regex; auto reg(alias var)(){ static if (__traits(compiles, {enum ctfeFmt = var;}) ){ // "Promotion" to compile time value enum ctfeReg = var ; pragma(msg, "ctRegex used"); return(ctRegex!ctfeReg); }else{ return(regex(var)); pragma(msg,"regex used"); } } So now I can always use reg!("") and let the compiler decide. To speed up compilation I made an additional switch, so that when using DMD (for development) the runtime version is always used. The trick is to use the alias var in the declaration and check if it can be assigned to an enum. The only thing is that you now always use the !() compile-time parameter to call the function, even when in the end it is translated to a runtime call: reg!("") and not reg("...").
[Issue 18374] Add range functions to Nullable
https://issues.dlang.org/show_bug.cgi?id=18374 --- Comment #2 from Mitu --- (In reply to Seb from comment #1) > Are you aware of the new `apply`? > > https://dlang.org/changelog/2.080.0.html#std-typecons-nullable-apply > > It still would be great to have Nullable and ranges working nicely together, > but at least apply is a start. I am, but it wasn't there the day I created this issue. apply() definitely does the trick. It still lacks one case though - when we want to call a void function unless the value is null: --- Nullable!int something; something.apply!writeln; --- I still think that Nullable as a range might be more powerful, and its integration with range behavior might save some LoC in some places, but I cannot come up with an example now. --
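The non-void case that `apply` (available since 2.080) does handle can be sketched with a lambda:

```d
import std.typecons : Nullable, apply;

void main()
{
    Nullable!int something;

    // apply runs the callable only when a value is present;
    // on a null Nullable it just yields another null Nullable
    assert(something.apply!(x => x * 2).isNull);

    something = 21;
    assert(something.apply!(x => x * 2).get == 42);
}
```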
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Monday, June 25, 2018 07:43:53 Per Nordlöw via Digitalmars-d wrote: > On Monday, 25 June 2018 at 00:35:40 UTC, Jonathan M Davis wrote: > > Or if you want it to stay an AliasSeq, then just use Alias or > > AliasSeq on it. e.g. > > > > alias members = AliasSeq!(__traits(allMembers, E)); > > Thanks! Should we prefer this over > > enum members = [__traits(allMembers, E)]; Which is better depends on what you're doing. alias members = AliasSeq!(__traits(allMembers, E)); gives you an AliasSeq, whereas enum members = [__traits(allMembers, E)]; gives you a dynamic array. The AliasSeq means needing to use templates to operate on it, whereas using the dynamic array means using CTFE to operate on it. If you're just going to use a static foreach on it, I wouldn't expect it to matter much which you use, but ultimately, you'll have to see what works best with your exact use case. In the long run, once newCTFE is complete, I suspect that using CTFE where possible instead of templates is going to be better, but I think that it tends to be less of a clear win with how inefficient CTFE currently is. But again, it depends on what exactly the code is doing. - Jonathan M Davis
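The two forms discussed here, side by side on a toy enum (a minimal sketch):

```d
import std.meta : AliasSeq;

enum E { a, b, c }

// template world: an AliasSeq of member names, iterated with static foreach
alias members = AliasSeq!(__traits(allMembers, E));
static assert(members.length == 3);

// CTFE world: an ordinary string[], usable by regular functions at compile time
enum memberNames = [__traits(allMembers, E)];
static assert(memberNames == ["a", "b", "c"]);

void main() {}
```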
Re: Determine if CTFE or RT
On Monday, 25 June 2018 at 07:02:24 UTC, Jonathan M Davis wrote: On Monday, June 25, 2018 05:47:30 Mr.Bingo via Digitalmars-d-learn wrote: The problem then, if D can't arbitrarily use ctfe, means that there should be a way to force ctfe optionally! If you want to use CTFE, then give an enum the value of the expression you want calculated. If you want to do it in place, then use a template such as template ctfe(alias exp) { enum ctfe = exp; } so that you get stuff like func(ctfe!(foo(42))). I would be extremely surprised if the compiler is ever changed to just try CTFE just in case it will work as an optimization. That would make it harder for the programmer to understand what's going on, and it would balloon compilation times. If you want to write up a DIP on the topic and argue for rules on how CTFE could and should function with the compiler deciding to try CTFE on some basis rather than it only being done when it must be done, then you're free to do so. https://github.com/dlang/DIPs But I expect that you will be sorely disappointed if you ever expect the compiler to start doing CTFE as an optimization. It's trivial to trigger it explicitly on your own, and compilation time is valued far too much to waste it on attempting CTFE when in the vast majority of cases, it's going to fail. And it's worked quite well thus far to have it work only cases when it's actually needed - especially with how easy it is to make arbitrary code run during CTFE simply by doing something like using an enum. - Jonathan M Davis You still don't get it! It is not trivial! It is impossible to trigger it! You are focused far too much on the optimization side when it is only an application that takes advantage of the ability for rtfe to become ctfe when told, if it is possible. I don't know how to make this any simpler, sorry... I guess we'll end it here.
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Monday, 25 June 2018 at 07:43:53 UTC, Per Nordlöw wrote: On Monday, 25 June 2018 at 00:35:40 UTC, Jonathan M Davis wrote: Or if you want it to stay an AliasSeq, then just use Alias or AliasSeq on it. e.g. alias members = AliasSeq!(__traits(allMembers, E)); Thanks! Should we prefer this over enum members = [__traits(allMembers, E)]; ? I tested on a really big enum: alias members = AliasSeq!(__traits(allMembers, E)); is faster. :)
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Monday, 25 June 2018 at 00:35:40 UTC, Jonathan M Davis wrote: Or if you want it to stay an AliasSeq, then just use Alias or AliasSeq on it. e.g. alias members = AliasSeq!(__traits(allMembers, E)); Thanks! Should we prefer this over enum members = [__traits(allMembers, E)]; ?
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Sunday, 24 June 2018 at 23:53:09 UTC, Timoses wrote: enum members = [__traits(allMembers, E)]; seems to work Great! Now becomes: @safe: /** Enumeration wrapper that uses optimized conversion to string (via `toString` * member). */ struct Enum(E) if (is(E == enum)) { @property string toString() @safe pure nothrow @nogc { enum members = [__traits(allMembers, E)]; final switch (_enum) { static foreach (index, member; members) { static if (index == 0 || (__traits(getMember, E, members[index - 1]) != __traits(getMember, E, member))) { case __traits(getMember, E, member): return member; } } } } E _enum; // the wrapped enum alias _enum this; }
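The core trick in isolation, stripped of the wrapper (a minimal sketch; it assumes no two enumerators share a value, which is what the `static if` guard in the wrapper handles):

```d
// Generate one case per enumerator inside a final switch,
// instead of the linear search std.conv.to does.
string name(E)(E e) if (is(E == enum))
{
    final switch (e)
    {
        static foreach (member; __traits(allMembers, E))
        {
            case __traits(getMember, E, member):
                return member;
        }
    }
}

enum Color { red, green, blue }

void main()
{
    assert(name(Color.green) == "green");
}
```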
Re: Determine if CTFE or RT
On Monday, June 25, 2018 05:47:30 Mr.Bingo via Digitalmars-d-learn wrote: > The problem then, if D can't arbitrarily use ctfe, means that > there should be a way to force ctfe optionally! If you want to use CTFE, then give an enum the value of the expression you want calculated. If you want to do it in place, then use a template such as template ctfe(alias exp) { enum ctfe = exp; } so that you get stuff like func(ctfe!(foo(42))). I would be extremely surprised if the compiler is ever changed to just try CTFE just in case it will work as an optimization. That would make it harder for the programmer to understand what's going on, and it would balloon compilation times. If you want to write up a DIP on the topic and argue for rules on how CTFE could and should function with the compiler deciding to try CTFE on some basis rather than it only being done when it must be done, then you're free to do so. https://github.com/dlang/DIPs But I expect that you will be sorely disappointed if you ever expect the compiler to start doing CTFE as an optimization. It's trivial to trigger it explicitly on your own, and compilation time is valued far too much to waste it on attempting CTFE when in the vast majority of cases, it's going to fail. And it's worked quite well thus far to have it work only in cases when it's actually needed - especially with how easy it is to make arbitrary code run during CTFE simply by doing something like using an enum. - Jonathan M Davis
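The in-place trigger described here, spelled out (a minimal sketch):

```d
int foo(int x) { return x * x; }

// force CTFE of an arbitrary expression at the call site
template ctfe(alias exp)
{
    enum ctfe = exp;
}

void main()
{
    // foo(42) is evaluated during compilation; only the
    // constant 1764 exists at run time
    static assert(ctfe!(foo(42)) == 1764);
    int r = ctfe!(foo(42));
    assert(r == 1764);
}
```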