Re: GC memory fragmentation
On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote: Hi, we're using vibe-d (on Linux) for a long running REST API server and have a problem with constantly growing memory until the system kills it with the OOM killer. One thing that comes to mind: is your application compiled as 32-bit? The garbage collector is much more likely to leak memory with a 32-bit address space, since it is much more likely for a random int to appear to be a pointer to the interior of a block of GC-allocated memory.
Re: How do I check if a type is assignable to null at compile time?
On Friday, 26 February 2021 at 05:34:26 UTC, Paul Backus wrote: On Friday, 26 February 2021 at 05:25:14 UTC, Jack wrote: I started with: enum isAssignableNull(T) = is(T : Object) || isPointer(T); but how do I cover all cases? Something like this should work: enum isAssignableNull(T) = __traits(compiles, (T t) => t = null); `isAssignableNull!(immutable void*)` is true with his definition but false with yours. Of course you are correct that you cannot assign to an immutable pointer.
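A compilable sketch of the `__traits(compiles, ...)` definition and the corner case being discussed:

```d
// Paul Backus's suggested definition: true only if assignment of null compiles.
enum isAssignableNull(T) = __traits(compiles, (T t) => t = null);

void main()
{
    static assert(isAssignableNull!(Object));
    static assert(isAssignableNull!(int*));
    static assert(isAssignableNull!(int[]));
    // The corner case from the discussion: an immutable pointer can hold
    // null, but cannot be *assigned* null, so this definition rejects it.
    static assert(!isAssignableNull!(immutable void*));
    // Value types are rejected as expected.
    static assert(!isAssignableNull!(int));
}
```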
Re: How do I check if a type is assignable to null at compile time?
On Friday, 26 February 2021 at 05:25:14 UTC, Jack wrote: I started with: enum isAssignableNull(T) = is(T : Object) || isPointer!T; but how do I cover all cases? If I understand what you mean by "is assignable to null", this should do it:
---
enum bool isAssignableNull(T) = is(typeof(null) : T);

// Tests:
immutable string arrayslice = null;
static assert(isAssignableNull!(immutable string));
void function(int) funptr = null;
static assert(isAssignableNull!(typeof(funptr)));
int[int] aa = null;
static assert(isAssignableNull!(int[int]));
---
Re: Is this a compiler error? "recursive template expansion"
On Tuesday, 8 December 2020 at 22:01:52 UTC, Basile B. wrote: On Tuesday, 8 December 2020 at 20:11:40 UTC, Nathan S. wrote: The following code fails to compile. Is this a compiler error or if not what is wrong with the code? What is wrong is that the partial specialization is not correct. The correct partial specialization is:
---
struct Template2(T)
{
    enum tString = T.stringof;
    static if (is(T == class))
        enum tLinkage = __traits(getLinkage, T);
}
struct Template1(Param1, Param2 = Template2!Param1) {}
alias AliasTemplate1S(SecondParam) = Template1!(S, SecondParam); //^here
class S { Template1!int x; }
---
Now that being said, the compiler could complain about the incorrect partial specialization instead of falling into the nonsensical error message. There is a gap because the second template parameter looks optional, so you don't put it in the partial specialization, but it is still required. Anyway, this case is either a "bad diagnostic" or an "enhancement" request. Or it should be specified. Thanks a lot! In my case what I was intending was: alias AliasTemplate1S = Template1!(S, Template2!S) which as you suggest works fine. It's a bit odd that the non-optional second parameter becomes optional again if declaration order is shuffled, but my motivation to look into this has temporarily abated since it's no longer stopping me from doing something else. For the program where I ran into this problem the most convenient fix turns out to be to get rid of the default template parameter and instead use a pattern like this:
---
struct Template1(Param1, Param2) {}
alias Template1(Param1) = Template1!(Param1, Template2!Param1);
---
Is this a compiler error? "recursive template expansion"
The following code fails to compile. Is this a compiler error or if not what is wrong with the code?
---
struct Template2(T)
{
    // If both of the following are removed compilation succeeds
    // without any other changes:
    enum tString = T.stringof;
    static if (is(T == class))
        enum tLinkage = __traits(getLinkage, T);
}

struct Template1(Param1, Param2 = Template2!Param1) {}

// Moving the definition of AliasTemplate1S after the definition of S
// causes compilation to succeed without any other changes.
alias AliasTemplate1S = Template1!S;

class S
{
    // If the following line is removed compilation succeeds
    // without any other changes.
    Template1!int x;
}

void main() { }
---
Failure message:
main.d(20): Error: struct main.Template1(Param1, Param2 = Template2!Param1) recursive template expansion
main.d(20):        while looking for match for Template1!int
main.d(7): Error: class S is forward referenced
main.d(10): Error: template instance main.Template2!(S) error instantiating
main.d(14):        instantiated from here: Template1!(S)
Re: std.conv.ConvException from double to uint64_t, but only locally in a large project
On Tuesday, 4 August 2020 at 17:49:56 UTC, drathier wrote: Replaced all mentions of uint64_t with ulong, and now it works. Must have an enum called uint64_t defined somewhere in a library I depend on or something? Really wish this was clearer. BTW I believe the reason that `uint64_t` is an enum (which is pulled in by "import std" but doesn't convert) solely on macOS/iOS is for compatibility with C++ name mangling.
Re: Why is this allowed
On Tuesday, 30 June 2020 at 16:22:57 UTC, JN wrote: Spent some time debugging because I didn't notice it at first, essentially something like this: int[3] foo = [1, 2, 3]; foo = 5; writeln(foo); // 5, 5, 5 Why does such code compile? I don't think this should be permitted, because it's easy to make a mistake (when you wanted foo[index] but forgot the []). If someone wants to assign a value to every element they could do foo[] = 5; instead which is explicit. What's your opinion on using that syntax in the initial declaration, like `float[16] foo = 0`?
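For reference, a small sketch of the assignment forms being discussed (variable names are illustrative):

```d
import std.stdio : writeln;

void main()
{
    int[3] foo = [1, 2, 3];

    foo = 5;      // assigns 5 to every element -- the surprising behavior
    writeln(foo); // [5, 5, 5]

    int[3] bar = [1, 2, 3];
    bar[] = 7;    // explicit slice assignment: also fills every element
    bar[1] = 9;   // indexed assignment: only one element
    writeln(bar); // [7, 9, 7]

    float[16] baz = 0; // the initialization form asked about: fills with 0
    writeln(baz[0], " ", baz[15]);
}
```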
Re: Distinguish between a null array and an empty array
On Sunday, 24 May 2020 at 12:12:31 UTC, bauss wrote: Is there a way to do that? Since the following are both true: int[] a = null; int[] b = []; assert(a is null); assert(!a.length); assert(b is null); assert(!b.length); What I would like is to tell that b is an empty array and a is a null array. Yes you can tell: your `is null` check actually works; the part that doesn't work is that `int[] b = []` initializes b to null. Here's an example:
---
int[0] emptyArray;
int[] a = null;          // a.ptr is null, a.length is 0
int[] b = emptyArray[];  // b.ptr is non-null, b.length is 0
assert(a is null);
assert(!a.length);
assert(b !is null);
assert(!b.length);
---
Re: link error on Windows
On Tuesday, 19 May 2020 at 04:54:38 UTC, Joel wrote: I tried with DMD32 D Compiler v2.088.1-dirty, and it compiled and created an exe file, but it wouldn't run (msvcr100.dll not found - I tried to find it on the net without success). DMD 2.089 changed the default linking options. I bet an up-to-date DMD will also work if you invoke it as "dmd -m32mscoff". It should also work if you build in 64-bit mode.
Re: AA code 50x slower
On Sunday, 16 February 2020 at 12:57:43 UTC, AlphaPurned wrote:
---
template AA(string[] S)
{
    auto _do()
    {
        int[string] d;
        foreach (s; S)
            d[s] = 0;
        return d;
    }
    enum AA = _do();
}

if (t in AA!(["a", "and", "mp4", "mp3", "the", "with", "live", "no", "&", "of", "band"])) continue;
---
The if statement literally causes a 50x slowdown of the code. (LDC debug; dmd debug takes about 1000 times longer.)
---
template AA(string[] S)
{
    __gshared int[string] AA;
    shared static this()
    {
        int[string] d;
        foreach (s; S)
            d[s] = 0;
        AA = d;
    }
}

if (t in AA!(["a", "and", "mp4", "mp3", "the", "with", "live", "no", "&", "of", "band"])) continue;
---
This will have the performance you want if you don't care whether it works in CTFE.
How to get the name of an object's class at compile time?
What I want is something like this: string className(in Object obj) { return obj is null ? "null" : typeid(obj).name; } ...except I want it to work in CTFE. What is the way to do this in D?
Re: Is there a way to slice non-array type in @safe?
On Thursday, 11 July 2019 at 16:31:58 UTC, Stefanos Baziotis wrote: I searched the forum but did not find anything. I want to do this:
---
int foo(T)(ref T s1, ref T s2)
{
    const byte[] s1b = (cast(const(byte)*) &s1)[0 .. T.sizeof];
    const byte[] s2b = (cast(const(byte)*) &s2)[0 .. T.sizeof];
}
---
Which is to create a byte array from the bytes of the value given, no matter the type. The above works, but it's not @safe. Thanks, Stefanos If you know that what you're doing cannot result in memory corruption but the compiler cannot automatically infer @safe, it is appropriate to use @trusted. (For this case make sure you're not returning the byte slices, since if the arguments were allocated on the stack you could end up with a pointer to an invalid stack frame. If it's the caller's responsibility to ensure the slice doesn't outlive the struct, then it is the caller that should be @trusted or not.)
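A minimal sketch of what that @trusted approach might look like; the helper name `sameBytes` is chosen here for illustration, and the slices deliberately never escape the function, per the advice above:

```d
// Hypothetical helper: compare two values byte-by-byte.
// @trusted: the slices cover exactly the two live objects and do not
// escape this function, so no memory corruption is possible.
bool sameBytes(T)(ref const T s1, ref const T s2) @trusted
{
    const(ubyte)[] s1b = (cast(const(ubyte)*) &s1)[0 .. T.sizeof];
    const(ubyte)[] s2b = (cast(const(ubyte)*) &s2)[0 .. T.sizeof];
    return s1b == s2b;
}

@safe void main()
{
    int a = 42, b = 42, c = 7;
    assert(sameBytes(a, b));  // callable from @safe code
    assert(!sameBytes(a, c));
}
```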
Is there any way to define an interface that can implicitly convert to Object?
I want to be able to do things like: --- bool isSame(Object a, Object b) { return a is b; } interface SomeInterface { int whatever(); } bool failsToCompile(SomeInterface a, SomeInterface b) { return isSame(a, b); } --- Error: function isSame(Object a, Object b) is not callable using argument types (SomeInterface, SomeInterface) Is there a way to declare an interface as explicitly not a COM interface or a C++ interface? Having to add "cast(Object)" everywhere is annoying.
Re: Casting to interface not allowed in @safe code?
On Sunday, 23 June 2019 at 21:24:14 UTC, Nathan S. wrote: https://issues.dlang.org/show_bug.cgi?id=2. The fix for this has been accepted and is set for inclusion in DMD 2.080.
Re: Casting to interface not allowed in @safe code?
On Tuesday, 21 May 2019 at 07:59:13 UTC, Jim wrote: On Tuesday, 21 May 2019 at 07:33:17 UTC, rumbu wrote: On Tuesday, 21 May 2019 at 07:16:49 UTC, Jim wrote: On Tuesday, 21 May 2019 at 07:04:27 UTC, rumbu wrote: On Tuesday, 21 May 2019 at 05:51:30 UTC, Jim wrote: That's because foo is of type Base, not implementing FeatureX. Right, Base isn't implementing FeatureX, but foo is really a Foo That's your knowledge, for the compiler foo is really a Base, as written in your own code. Yes, thinking about it again it makes sense. It doesn't even slightly make sense. I just ran into this today myself. Unlike Java and C#, casting from Foo to FeatureX is not an assertion that the Foo implements FeatureX. Instead it's how you test at runtime if the class of a specific object derived from Foo implements FeatureX: if it doesn't then the result of the cast is null. I've opened a bug report at https://issues.dlang.org/show_bug.cgi?id=2.
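A small sketch of the runtime-test behavior described above, using hypothetical class names:

```d
interface FeatureX { int feature(); }

class Base { }

class Foo : Base, FeatureX
{
    int feature() { return 1; }
}

void main()
{
    Base asFoo = new Foo;
    Base plain = new Base;

    // Unlike Java/C#, this cast is a runtime test, not an assertion:
    FeatureX x = cast(FeatureX) asFoo;
    assert(x !is null);  // the object's class really implements FeatureX

    FeatureX y = cast(FeatureX) plain;
    assert(y is null);   // it doesn't: the cast yields null
}
```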
Re: Blog post: What D got wrong
On Saturday, 15 December 2018 at 19:53:06 UTC, Atila Neves wrote: Not the case in Rust, not the case in how I write D. TBH it's not such a big deal because something has to be typed, I just default to const now anyway instead of auto. @safe and pure though... I'd be interested in seeing some of that Rust code. My impression from Clojure is that an all-immutable style requires leaning heavily on the garbage collector and as far as I know Rust has none.
Re: Blog post: What D got wrong
On Thursday, 13 December 2018 at 10:14:45 UTC, Atila Neves wrote: My impression is that it's a consensus that it _should_, but it's not going to happen due to breaking existing code. I think it would be a bad idea for `immutable` because more often than not it would need to be turned off. I've heard Java called a "BDSM language" because it forces the programmer to type reams of unnecessary characters to accomplish ordinary tasks. This reminds me of that.
Re: usable @nogc Exceptions with Mir Runtime
On Wednesday, 24 October 2018 at 10:57:27 UTC, 9il wrote: Release v0.0.5 comes with - mir.exception - @nogc MirException - mir.format - @nogc formatting Fantastic!
Re: More zero-initialization optimizations pending in std.experimental.allocator?
On Friday, 19 October 2018 at 21:29:42 UTC, Per Nordlöw wrote: Now that https://github.com/dlang/phobos/pull/6411 has been merged and DMD stable soon has the new __traits(isZeroInit, T) found here https://dlang.org/changelog/2.083.0.html#isZeroInit are there more zero-initializations that can be optimized in std.experimental.allocator? I looked and identified low-hanging fruit in std.algorithm.mutation.initializeAll & moveEmplace and in std.typecons.RefCounted (PR #6698), and in std.conv.emplaceInitializer (PR #6461). Other opportunities would rely on being able to identify if it's ever more efficient to write `memset(&x, 0, typeof(x).sizeof)` instead of `x = typeof(x).init`, which seems like the kind of optimization that belongs in the compiler instead.
Re: Function parameter type inference: is this example working as intended?
On Wednesday, 5 September 2018 at 02:33:47 UTC, Nathan S. wrote: The below writes "uint". Is this working as intended? https://run.dlang.io/is/Dx2e7f Note that it's called not with a `ulong` literal but with a `long` literal.
Function parameter type inference: is this example working as intended?
The below writes "uint". Is this working as intended? https://run.dlang.io/is/Dx2e7f
---
import std.stdio;

auto foo(T = uint)(uint x) { return T.stringof; }
auto foo(T = ulong)(ulong x) { return T.stringof; }

void main() { writeln(foo(10L)); }
---
Re: Weird bugs in DMD 2.81.0
On Monday, 9 July 2018 at 19:31:52 UTC, Steven Schveighoffer wrote: FYI there have been a lot of recent changes on druntime that have to do with hashing. It's possible something changed that affects you. None of the recent changes ought to affect `ushort[string]` since it hashes its keys using `rt.typeinfo.ti_Ag.TypeInfo_Aa.getHash` which hasn't been touched. If https://github.com/dlang/druntime/pull/2243 is accepted the `TypeInfo.getHash` hashes of various strings would change but this shouldn't cause inability to lookup keys in a builtin AA.
Re: Is there a way to get the address of the function that would be used in Implicit Function Template Instantiation?
On Wednesday, 27 June 2018 at 22:39:26 UTC, Jonathan M Davis wrote: You could explicitly instantiate the function template and then take its address. Explicitly instantiating the template can result in a function that may be behaviorally identical but have a different address. https://run.dlang.io/is/E9WroB
---
auto foo(T)(const T x) { return x; }

void main()
{
    const int a;
    assert(&foo!int !is &foo!(const int)); // The addresses are different.
    foo(a); // If I look in the object file I can see this uses foo!int.
    assert(&foo!(typeof(a)) !is &foo!int);
}
---
Is there a way to get the address of the function that would be used in Implicit Function Template Instantiation?
Let's say there's a function template `doImpl` and `doImpl(x)` compiles thanks to IFTI. Is there any way to get the address of the function that would be called in `doImpl(x)`?
Re: D hash table comparison benchmark
On Tuesday, 26 June 2018 at 03:45:27 UTC, Seb wrote: Did you by chance also benchmark it with other languages like C++, Go or Rust?
= Not reusing hashtables, optimizations enabled =
 79 msecs Rust std::collections::HashMap
 90 msecs Go built-in map
177 msecs C++ std::unordered_map (whichever implementation comes with Xcode)
= Reusing hashtables, optimizations enabled =
 24 msecs C++ std::unordered_map (whichever implementation comes with Xcode)
 26 msecs Go built-in map
 36 msecs Rust std::collections::HashMap
Re: template recursion
On Tuesday, 26 June 2018 at 20:47:27 UTC, Steven Schveighoffer wrote: Naming the hook for `put` the same thing as the global function was one of the biggest mistakes in the range library. I almost think we would be better off to deprecate that and pick another hook name. If you ever do that it would also be nice to use separate names for "put a single X into Y" and "put everything from container X into Y".
Re: D hash table comparison benchmark
On Tuesday, 26 June 2018 at 14:33:25 UTC, Eugene Wissner wrote: Tanya hashes any value, also integral types; other hashtables probably not. Your intuition is correct here. Most of the tables use `typeid(key).getHash()`, which for `int` just returns `key`.
= Built-in AA =
General case: `typeid.getHash` with additional scrambling.
Customizable: no.
= memutils.hashmap =
General case: `typeid.getHash`.
Customizable: no.
Other notes:
* Special case for `toHash` member (bypasses `typeid`).
* Interesting special cases for ref-counted data types, with further special cases for ref-counted strings and arrays.
= vibe.utils.hashmap =
General case: `typeid.getHash`.
Customizable: yes, through optional `Traits` template parameter.
Other notes:
* Special case for `toHash` member (bypasses `typeid`).
* Tries to implement a special case for objects with the default `Object.toHash` function, but it seems like it can never work.
= jive.map =
General case: `typeid.getHash`.
Customizable: no.
= containers.hashmap =
General case: `typeid.getHash`.
Customizable: yes, through optional `hashFunction` template parameter.
Other notes:
* Special case for strings on 64-bit builds. Uses FNV-1a instead of the default hash function.
Re: Nullable!T with T of class type
On Monday, 25 June 2018 at 22:58:41 UTC, Jonathan M Davis wrote: Java does try to force you to initialize stuff (resulting in annoying false positives at times), but in general, it still can't guarantee when a variable is null or not and is forced to insert runtime null checks. Java can be somewhat clever about this though. Often it just needs to perform a single null check in a method body and thereafter knows the pointer can't be null and elides the check.
Re: Nullable!T with T of class type
On Monday, 25 June 2018 at 19:40:30 UTC, kdevel wrote: Is it possible to "lower" the Nullable operations if T is a class type such that there is only one level of nullification? Yes: https://run.dlang.io/is/hPxbyf
---
template Nullable(S)
{
    import std.traits : isPointer, isDynamicArray;
    static if (is(S == class) || is(S == interface)
        || is(S == function) || is(S == delegate)
        || isPointer!S || isDynamicArray!S)
    {
        alias Nullable = S;
    }
    else
    {
        static import std.typecons;
        alias Nullable = std.typecons.Nullable!S;
    }
}
---
Re: D hash table comparison benchmark
BTW the output is formatted so you can get a sorted list of times across all trials by piping the output through `sort -n`. That's also why the tests reusing maps start with ( instead of [, so they will be grouped separately.
Re: D hash table comparison benchmark
On Tuesday, 26 June 2018 at 03:45:27 UTC, Seb wrote: Did you by chance also benchmark it with other languages like C++, Go or Rust? I didn't since I was evaluating hashtable implementations for use in a D application. BTW I'm not sure what your plans are, but are you aware of this recent article? https://probablydance.com/2018/05/28/a-new-fast-hash-table-in-response-to-googles-new-fast-hash-table I wasn't, thanks.
Re: Disappointing performance from DMD/Phobos
On Tuesday, 26 June 2018 at 02:20:37 UTC, Manu wrote: I optimised another major gotcha eating perf, and now this issue is taking 13% of my entire work time... bummer. Without disagreeing with you, ldc2 optimizes this fine. https://run.dlang.io/is/NJct6U
---
const @property uint onlineapp.Entity.systemBits():
        .cfi_startproc
        movl    4(%rdi), %eax
        addl    12(%rdi), %eax
        addl    20(%rdi), %eax
        addl    28(%rdi), %eax
        retq
---
Re: D hash table comparison benchmark
With LDC2 the times for vibe.utils.hashmap and memutils.hashmap are suspiciously low, leading me to suspect that the optimizer might be omitting most of the work. Here are the figures without optimizations enabled.
== Speed Ranking using DMD (no optimizations) ==
 95 msecs built-in AA
168 msecs vibe.utils.hashmap
182 msecs jive.map
224 msecs memutils.hashmap
663 msecs containers.hashmap w/GCAllocator
686 msecs containers.hashmap w/Mallocator
== Speed Ranking using LDC2 (no optimizations) ==
 68 msecs built-in AA
143 msecs vibe.utils.hashmap
155 msecs jive.map
164 msecs memutils.hashmap
515 msecs containers.hashmap w/GCAllocator
537 msecs containers.hashmap w/Mallocator
D hash table comparison benchmark
The below benchmarks come from writing 100 int-to-int mappings to a new hashtable then reading them back, repeated 10_000 times. The built-in AA doesn't deallocate memory when it falls out of scope but the other maps do. Benchmark code in next post.
== Speed Ranking using LDC2 (optimized) ==
 21 msecs vibe.utils.hashmap
 37 msecs memutils.hashmap
 57 msecs built-in AA
102 msecs jive.map
185 msecs containers.hashmap w/GCAllocator
240 msecs containers.hashmap w/Mallocator
== Speed Ranking using DMD (optimized) ==
 55 msecs memutils.hashmap
 64 msecs vibe.utils.hashmap
 80 msecs built-in AA
131 msecs jive.map
315 msecs containers.hashmap w/GCAllocator
361 msecs containers.hashmap w/Mallocator
** What if the array size is smaller or larger? ** The ordering didn't change so I won't post the results.
** What if we reuse the hashtable? **
== Speed Ranking using LDC2 (optimized) ==
10.45 msecs vibe.utils.hashmap
11.85 msecs memutils.hashmap
12.61 msecs containers.hashmap w/GCAllocator
12.91 msecs containers.hashmap w/Mallocator
14.30 msecs built-in AA
19.21 msecs jive.map
== Speed Ranking using DMD (optimized) ==
18.05 msecs memutils.hashmap
21.03 msecs jive.map
24.99 msecs built-in AA
25.22 msecs containers.hashmap w/Mallocator
25.75 msecs containers.hashmap w/GCAllocator
29.93 msecs vibe.utils.hashmap
== Not benchmarked ==
stdx.collections.hashtable (dlang-stdx/collections): compilation error
kontainer.orderedAssocArray (alphaKAI/kontainer): doesn't accept int keys
tanya.container.hashtable (caraus-ecms/tanya): either has a bug or is very slow
Re: D hash table comparison benchmark
Benchmark code:
dub.sdl
```
name "hashbench"
description "D hashtable comparison."
dependency "emsi_containers" version="~>0.7.0"
dependency "memutils" version="~>0.4.11"
dependency "vibe-d:utils" version="~>0.8.4"
dependency "jive" version="~>0.2.0"
//dependency "collections" version="~>0.1.0"
//dependency "tanya" version="~>0.10.0"
//dependency "kontainer" version="~>0.0.2"
```
app.d
```d
int nthKey(in uint n) @nogc nothrow pure @safe
{
    // Can be any invertible function.
    // The goal is to map [0 .. N] to a sequence not in ascending order.
    int h = cast(int) (n + 1);
    h = (h ^ (h >>> 16)) * 0x85ebca6b;
    h = (h ^ (n >>> 13)) * 0xc2b2ae35;
    return h ^ (h >>> 16);
}

pragma(inline, false)
uint hashBench(HashTable, Args...)(in uint N, in uint seed, Args initArgs)
{
    static if (initArgs.length)
        HashTable hashtable = HashTable(initArgs);
    else // Separate branch needed for builtin AA.
        HashTable hashtable;
    foreach (uint n; 0 .. N)
        hashtable[nthKey(n)] = n + seed;
    uint sum;
    foreach_reverse (uint n; 0 .. N/2)
        sum += hashtable[nthKey(n)];
    foreach_reverse (uint n; N/2 .. N)
        sum += hashtable[nthKey(n)];
    return sum;
}

pragma(inline, false)
uint hashBenchReuse(HashTable)(in uint N, in uint seed, ref HashTable hashtable)
{
    foreach (uint n; 0 .. N)
        hashtable[nthKey(n)] = n + seed;
    uint sum;
    foreach_reverse (uint n; 0 .. N/2)
        sum += hashtable[nthKey(n)];
    foreach_reverse (uint n; N/2 .. N)
        sum += hashtable[nthKey(n)];
    return sum;
}

enum benchmarkCode(string name, string signature = name) = `
{
    sw.reset();
    result = 0;
    sw.start();
    foreach (_; 0 .. M)
    {
        result += hashBench!(`~signature~`)(N, result);
    }
    sw.stop();
    string s = "`~name~`";
    printf("[checksum %d] %3d msecs %s\n", result, sw.peek.total!"msecs", &s[0]);
}
`;

enum benchmarkCodeReuse(string name, string signature = name) = `
{
    sw.reset();
    result = 0;
    sw.start();
    `~signature~` hashtable;
    foreach (_; 0 .. M)
    {
        result += hashBenchReuse!(`~signature~`)(N, result, hashtable);
    }
    sw.stop();
    string s = "`~name~`";
    printf("(checksum %d) %3.2f msecs %s\n", result, sw.peek.total!"usecs" / 1000.0, &s[0]);
}
`;

void main(string[] args)
{
    import std.datetime.stopwatch : AutoStart, StopWatch;
    import core.stdc.stdio : printf, puts;
    import std.experimental.allocator.gc_allocator : GCAllocator;
    import std.experimental.allocator.mallocator : Mallocator;
    alias BuiltinAA(K,V) = V[K];
    import containers.hashmap : EMSI_HashMap = HashMap;
    import memutils.hashmap : Memutils_HashMap = HashMap;
    import vibe.utils.hashmap : Vibe_HashMap = HashMap;
    import jive.map : Jive_Map = Map;
    //import stdx.collections.hashtable : Stdx_Hashtable = Hashtable;
    //import tanya.container.hashtable : Tanya_HashTable = HashTable;
    //import kontainer.orderedAssocArray.orderedAssocArray : Kontainer_OrderedAssocArray = OrderedAssocArray;

    immutable uint N = args.length < 2 ? 100 : () {
        import std.conv : to;
        auto result = to!uint(args[1]);
        return (result == 0 ? 100 : result);
    }();
    immutable M = N <= 500_000 ? (1000_000 / N) : 2;
    enum topLevelRepetitions = 3;
    printf("Hashtable benchmark N (size) = %d (repetitions) = %d\n", N, M);

    StopWatch sw = StopWatch(AutoStart.no);
    uint result;

    version(all)
    {
        puts("\n=Results (new hashtables)=");
        foreach (_repetition; 0 .. topLevelRepetitions)
        {
            printf("*Trial #%d*\n", _repetition+1);
            mixin(benchmarkCode!("built-in AA", "BuiltinAA!(int, int)"));
            mixin(benchmarkCode!("containers.hashmap w/Mallocator", "EMSI_HashMap!(int, int, Mallocator)"));
            mixin(benchmarkCode!("containers.hashmap w/GCAllocator", "EMSI_HashMap!(int, int, GCAllocator)"));
            mixin(benchmarkCode!("memutils.hashmap", "Memutils_HashMap!(int,int)"));
            mixin(benchmarkCode!("vibe.utils.hashmap", "Vibe_HashMap!(int,int)"));
            mixin(benchmarkCode!("jive.map", "Jive_Map!(int,int)"));
            //mixin(benchmarkCode!("stdx.collections.hashtable", "Stdx_Hashtable!(int,int)"));
            //mixin(benchmarkCode!("tanya.container.hashtable", "Tanya_HashTable!(int,int)"));
            //mixin(benchmarkCode!("kontainer.orderedAssocArray.orderedAssocArray", "Kontainer_OrderedAssocArray!(int,int)"));
        }
    }

    version(all)
    {
        puts("\n=Results (reusing hashtables)=\n");
        foreach (_repetition; 0 .. topLevelRepetitions)
        {
            printf("*Trial #%d*\n", _repetition+1);
Re: Delegates and classes for custom code.
On Tuesday, 17 April 2018 at 04:09:57 UTC, Chris Katko wrote: I'm having trouble conceptualizing this issue at the moment. But it seems if I pass to the delegate my object, then I can ONLY use one class type. Can you post the code you're trying to run?
Re: how to make private class member private
On Tuesday, 13 March 2018 at 22:56:31 UTC, Jonathan M Davis wrote: The downside is that it increases the number of symbols which the program has to deal with when linking against a shared library, which can have some negative effects. - Jonathan M Davis If I understand correctly it's also responsible for TypeInfo being generated for private classes regardless of whether or not it is ever used.
Re: "Error: address of variable this assigned to this with longer lifetime"
On Tuesday, 13 March 2018 at 22:33:56 UTC, Jonathan M Davis wrote: And you can't get rid of it, because the object can still be moved, which would invalidate the pointer that you have referring to the static array. ... https://issues.dlang.org/show_bug.cgi?id=17448 Thanks for the info.
Re: how to make private class member private
On Tuesday, 13 March 2018 at 21:36:13 UTC, Arun Chandrasekaran wrote: On Tuesday, 13 March 2018 at 13:59:00 UTC, Steven Schveighoffer wrote: On 3/12/18 10:06 PM, psychoticRabbit wrote: [...] OK, so I agree there are drawbacks. But these can be worked around. [...] Private members still have external linkage. Is there anyway to solve this? Yeah that's a real WTF.
Re: how to make private class member private
On Tuesday, 13 March 2018 at 09:14:26 UTC, psychoticRabbit wrote: what I don't like, is that I have no way at all to protect members of my class, from things in the module, without moving that class out of that module. D wants me to completely trust the module, no matter what. That makes me a little uncomfortable, given how long and complex modules can easily become (and already are). I used to feel similarly and understand where you're coming from, but after using D for a while the old way feels ridiculous and cumbersome to me. The problem of accidents even in large files can be avoided by using names like "m_length" or "_length": no jury in the world will believe you if you write those and then say you didn't know they were private.
Re: "Error: address of variable this assigned to this with longer lifetime"
On Tuesday, 13 March 2018 at 21:07:33 UTC, ag0aep6g wrote: You're storing a reference to `small` in `data`. When a SmallString is copied, that reference will still point to the original `small`. When the original goes out of scope, the reference becomes invalid, a dangling pointer. Can't have those in @safe code. The error message isn't exactly clear, though. The error does not go away when restoring these lines: ``` @disable this(this); @disable void opAssign(this); ```
"Error: address of variable this assigned to this with longer lifetime"
What is this malarky? https://run.dlang.io/is/S42EBb "onlineapp.d(16): Error: address of variable this assigned to this with longer lifetime"
```d
import std.stdio;

struct SmallString
{
    char[24] small;
    char[] data;

    @disable this();

    this(scope const(char)[] s) @safe
    {
        if (s.length < small.length)
        {
            small[0 .. s.length] = s[];
            small[s.length] = 0;
            data = small[0 .. s.length];
        }
        else
        {
            assert(0, "Compilation failed before I wrote this.");
        }
    }
}

void main()
{
    writeln("Hello D");
}
```
Re: Article: Why Const Sucks
On Monday, 5 March 2018 Jonathan M Davis wrote at http://jmdavisprog.com/articles/why-const-sucks.html: What Java has instead is `final`, which IMHO is borderline useless In Java `final` is extremely useful for efficient threadsafe code.
Re: Is the following well defined and allowed?
On Thursday, 1 March 2018 at 21:01:08 UTC, Steven Schveighoffer wrote: Yeah, it seems like -noboundscheck should never be used. How good is DMD at omitting redundant bounds checks? I assume not much engineering effort has been put towards that due to "-boundscheck=off" being available.
Re: Making mir.random.ndvariable.multivariateNormalVar create bigger data sets than 2
Cross-posting from the github issue (https://github.com/libmir/mir-random/issues/77) with a workaround (execute it at https://run.dlang.io/is/Swr1xU): I am not sure what the correct interface should be for this in the long run, but for now you can use a wrapper function to convert an ndvariable to a variable:
```d
/++
Converts an N-dimensional variable to a fixed-dimensional variable.
+/
auto specifyDimension(ReturnType, NDVariable)(NDVariable vr)
if (__traits(isStaticArray, ReturnType)
    && __traits(compiles, {static assert(NDVariable.isRandomVariable);}))
{
    import mir.random : isSaturatedRandomEngine;
    import mir.random.variable : isRandomVariable;

    static struct V
    {
        enum bool isRandomVariable = true;
        NDVariable vr;

        ReturnType opCall(G)(scope ref G gen)
        if (isSaturatedRandomEngine!G)
        {
            ReturnType ret;
            vr(gen, ret[]);
            return ret;
        }

        ReturnType opCall(G)(scope G* gen)
        if (isSaturatedRandomEngine!G)
        {
            return opCall!(G)(*gen);
        }
    }
    static assert(isRandomVariable!V);

    V v = { vr };
    return v;
}
```
So `main` from your above example becomes:
```d
void main()
{
    import std.stdio;
    import mir.random : Random, threadLocalPtr;
    import mir.random.ndvariable : multivariateNormalVar;
    import mir.random.algorithm : range;
    import mir.ndslice.slice : sliced;
    import std.range : take;

    auto mu = [10.0, 0.0].sliced;
    auto sigma = [2.0, -1.5, -1.5, 2.0].sliced(2,2);
    Random* rng = threadLocalPtr!Random;
    auto sample = rng
        .range(multivariateNormalVar(mu, sigma).specifyDimension!(double[2]))
        .take(10);
    writeln(sample);
}
```
Re: Making mir.random.ndvariable.multivariateNormalVar create bigger data sets than 2
On Tuesday, 27 February 2018 at 16:42:00 UTC, Nathan S. wrote: On Tuesday, 27 February 2018 at 15:08:42 UTC, jmh530 wrote: Nevertheless, it probably can't hurt to file an issue if you can't get something like the first one to work. I would think it should just work. The problem is that `mir.random.ndvariable` doesn't satisfy `mir.random.variable.isRandomVariable!T`. ndvariables have a slightly different interface from variables: instead of `rv(gen)` returning a result, `rv(gen, dst)` writes to dst. I agree that the various methods for working with variables should be enhanced to work with ndvariables. So, I see that the interface will have to be slightly different for ndvariable than for variable. With the exception of MultivariateNormalVariable, the same ndvariable instance can be called to fill output of any length "n", so one can't meaningfully create a range based on just the ndvariable without further specification. What would "front" return? For MultivariateNormalVariable "n" is constrained, but it is a runtime parameter rather than a compile-time parameter. You'll want to ping @9il / Ilya Yaroshenko to discuss what the API should be like for this.
Re: Making mir.random.ndvariable.multivariateNormalVar create bigger data sets than 2
On Tuesday, 27 February 2018 at 15:08:42 UTC, jmh530 wrote: Nevertheless, it probably can't hurt to file an issue if you can't get something like the first one to work. I would think it should just work. The problem is that `mir.random.ndvariable` doesn't satisfy `mir.random.variable.isRandomVariable!T`. ndvariables have a slightly different interface from variables: instead of `rv(gen)` returning a result, `rv(gen, dst)` writes to `dst`. I agree that the various methods for working with variables should be enhanced to work with ndvariables.
Re: std.traits.isBoolean
On Monday, 19 February 2018 at 15:12:15 UTC, Tony wrote: But, assuming there is a use case for it, what if you want to restrict to a type that is either boolean, or a struct/class that can substitute for boolean - how do you do that without using the "private" TypeOfBoolean thing? In that case you can just write `is(T : bool)`.
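For illustration, a small sketch (the `Flag` type here is made up) of how `is(T : bool)` admits both `bool` itself and a type that implicitly converts to it:

```d
// A struct that implicitly converts to bool via alias this.
struct Flag
{
    bool value;
    alias value this;
}

// Accepts bool and anything implicitly convertible to it.
bool toggled(T)(T t) if (is(T : bool))
{
    return !t;
}

void main()
{
    static assert(is(bool : bool));
    static assert(is(Flag : bool));   // passes via alias this
    static assert(!is(int[] : bool)); // arrays don't implicitly convert

    assert(toggled(false) == true);
    assert(toggled(Flag(true)) == false);
}
```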
Re: New abstraction: Layout
On Saturday, 17 February 2018 at 12:49:07 UTC, Andrei Alexandrescu wrote: On 02/16/2018 10:10 PM, rikki cattermole wrote: Could use the name for the field as well. At the minimum useful for debugging purposes. That would be tricky because fields are decomposed down to primitive types. -- Andrei That's unfortunate. Not being able to get the names and types together is my main dissatisfaction with the current interface.
Re: Size threshold replace complex probing with linear search for small hash tables
On Monday, 19 February 2018 at 10:22:12 UTC, Nordlöw wrote: I'm currently developing a combined HashMap and HashSet with open addressing You might want to consider using Robin Hood hashing to reduce the worst-case length of collision chains, regardless of what kind of probing scheme you use.
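For reference, a hedged sketch of the core Robin Hood insertion step with linear probing (the identity hash and all names here are illustrative, not from any real container):

```d
// Open-addressing slot; a real table would also store values.
struct Slot { uint key; bool used; }

// Distance of the entry at `pos` from its home bucket (identity hash).
size_t probeDistance(uint key, size_t pos, size_t mask)
{
    return (pos + mask + 1 - (key & mask)) & mask;
}

void insert(Slot[] table, uint key)
{
    immutable mask = table.length - 1; // length must be a power of two
    size_t pos = key & mask;
    size_t dist = 0;
    while (true)
    {
        if (!table[pos].used)
        {
            table[pos] = Slot(key, true);
            return;
        }
        // Robin Hood: if the resident entry is closer to its home
        // bucket than we are, it is "richer" -- swap and keep probing
        // with the displaced key. This evens out probe-chain lengths.
        immutable resident = probeDistance(table[pos].key, pos, mask);
        if (resident < dist)
        {
            immutable displaced = table[pos].key;
            table[pos].key = key;
            key = displaced;
            dist = resident;
        }
        pos = (pos + 1) & mask;
        ++dist;
    }
}

void main()
{
    import std.algorithm.searching : canFind;
    auto table = new Slot[8];
    // Keys 0, 8, 16 all hash to bucket 0, forcing collisions.
    foreach (k; [0u, 8u, 16u, 3u])
        insert(table, k);
    foreach (k; [0u, 8u, 16u, 3u])
        assert(table.canFind!(s => s.used && s.key == k));
}
```

The payoff is a much lower variance in probe lengths, which matters most at high load factors.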
Re: opCast cannot implicitly convert a.opCast of type X to Y
On Monday, 12 February 2018 at 02:05:16 UTC, aliak wrote:
---
struct B(T) { T t; }

struct A(T) {
    T t;
    auto opCast(U)() { return B!U(cast(U)t); }
}

void main() {
    auto a = A!int(3);
    auto b = cast(float)a; // error
}
---
Having the result of "cast(float) a" not be a float would be evil.
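One way to keep the cast honest is to let `opCast!U` really yield a `U` and give the "wrap in B" conversion an ordinary method (a sketch; the `wrap` name is made up):

```d
struct B(T) { T t; }

struct A(T)
{
    T t;

    // cast(U) a now really yields a U...
    U opCast(U)() { return cast(U) t; }

    // ...and the "wrap in B" conversion gets an honestly named method.
    B!U wrap(U)() { return B!U(cast(U) t); }
}

void main()
{
    auto a = A!int(3);
    float f = cast(float) a;
    assert(f == 3.0f);
    auto b = a.wrap!float;
    assert(b.t == 3.0f);
}
```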
Re: What does "(this This)" mean in member function templates?
On Monday, 12 February 2018 at 08:42:42 UTC, rikki cattermole wrote: https://dlang.org/spec/template.html#TemplateThisParameter Cheers.
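A minimal example of what that spec section describes: `this This` turns the member into a template whose `This` parameter is deduced as the qualified type of the object at the call site, so one definition serves mutable, const, and immutable receivers.

```d
class C
{
    // `This` is deduced per call site, including qualifiers.
    string receiverType(this This)()
    {
        return This.stringof;
    }
}

void main()
{
    C m = new C;
    const C c = new C;
    assert(m.receiverType() == "C");
    assert(c.receiverType() == "const(C)");
}
```

This is why std.container.rbtree's `equalRange` can return a correctly qualified range for const and immutable trees from a single definition.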
What does "(this This)" mean in member function templates?
For example in std.container.rbtree:
---
auto equalRange(this This)(Elem e)
{
    auto beg = _firstGreaterEqual(e);
    alias RangeType = RBRange!(typeof(beg));
    if (beg is _end || _less(e, beg.value))
        // no values are equal
        return RangeType(beg, beg);
    static if (allowDuplicates)
    {
        return RangeType(beg, _firstGreater(e));
    }
    else
    {
        // no sense in doing a full search, no duplicates are allowed,
        // so we just get the next node.
        return RangeType(beg, beg.next);
    }
}
---
Re: Bye bye, fast compilation times
On Wednesday, 7 February 2018 at 08:21:01 UTC, Seb wrote: There's `version(StdUnittest)` since a few days And I've just submitted a pull request that adds `version(StdUnittest)` to every unittest in the standard library. https://github.com/dlang/phobos/pull/6159
Re: Heads-up: upcoming instabilities in std.experimental.allocator, and what to do
On Thursday, 30 November 2017 at 19:01:02 UTC, Andrei Alexandrescu wrote: So we may switch to ubyte[] Hooray!
Re: Bye bye, fast compilation times
On Tuesday, 6 February 2018 at 22:29:07 UTC, Walter Bright wrote: nobody uses regex for lexer in a compiler. Some years ago I was surprised when I saw this in Clojure's source code. It appears to still be there today: https://github.com/clojure/clojure/blob/1215ba346ffea3fe48def6ec70542e3300b6f9ed/src/jvm/clojure/lang/LispReader.java#L66-L73
---
static Pattern symbolPat = Pattern.compile("[:]?([\\D&&[^/]].*/)?(/|[\\D&&[^/]][^/]*)");
//static Pattern varPat = Pattern.compile("([\\D&&[^:\\.]][^:\\.]*):([\\D&&[^:\\.]][^:\\.]*)");
//static Pattern intPat = Pattern.compile("[-+]?[0-9]+\\.?");
static Pattern intPat = Pattern.compile(
    "([-+]?)(?:(0)|([1-9][0-9]*)|0[xX]([0-9A-Fa-f]+)|0([0-7]+)|([1-9][0-9]?)[rR]([0-9A-Za-z]+)|0[0-9]+)(N)?");
static Pattern ratioPat = Pattern.compile("([-+]?[0-9]+)/([0-9]+)");
static Pattern floatPat = Pattern.compile("([-+]?[0-9]+(\\.[0-9]*)?([eE][-+]?[0-9]+)?)(M)?");
---
Re: Bye bye, fast compilation times
On Tuesday, 6 February 2018 at 06:11:55 UTC, Dmitry Olshansky wrote: On Tuesday, 6 February 2018 at 05:45:35 UTC, Steven Schveighoffer wrote: On 2/6/18 12:35 AM, Dmitry Olshansky wrote: That’s really bad idea - isEmail is template so the burden of freaking slow ctRegex is paid on per instantiation basis. Could be horrible with separate compilation. Obviously it is horrible. On my mac, it took about 2.5 seconds to compile this one line. I'm not sure how to fix it though... I suppose you could make Just use the run-time version, it’s not that much slower. But then again static ipRegex = regex(...) will parse and build regex at CTFE. Maybe lazy init? FYI I've made a pull request that replaces uses of regexes in std.net.isemail. It turns out they weren't being used for anything indispensable. Import benchmark results were encouraging. https://github.com/dlang/phobos/pull/6129
Re: Thread safe reference counting
On Saturday, 3 February 2018 at 14:41:06 UTC, Kagamin wrote: That RCSharedAllocator PR made me think, so this is my take on how to keep reference counted allocator in shared storage: https://run.dlang.io/is/r1z1dd You might also want to look at Atila Neves's automem package. It uses atomic increment/decrement when the type being reference-counted is `shared`. https://dlang.org/blog/2017/04/28/automem-hands-free-raii-for-d/
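A minimal sketch of that idea (not automem's actual code): keep the count in shared storage and bump it with atomic read-modify-writes, so copies made on different threads stay consistent.

```d
import core.atomic : atomicOp, atomicLoad;

// Illustrative only: a real implementation would also manage the payload.
struct SharedRC
{
    shared(int)* count;

    this(shared(int)* c) { count = c; }

    this(this) // postblit: one more owner
    {
        if (count) atomicOp!"+="(*count, 1);
    }

    ~this()
    {
        if (count && atomicOp!"-="(*count, 1) == 0)
        {
            // last owner: free the payload here
        }
    }
}

void main()
{
    shared int refs = 1;
    {
        auto a = SharedRC(&refs);
        {
            auto b = a; // postblit atomically increments
            assert(atomicLoad(refs) == 2);
        }
        // b destroyed: atomically decremented
        assert(atomicLoad(refs) == 1);
    }
    assert(atomicLoad(refs) == 0);
}
```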
wut: std.datetime.systime.Clock.currStdTime is offset from Jan 1st, 1 AD
https://dlang.org/phobos/std_datetime_systime.html#.Clock.currStdTime """ @property @trusted long currStdTime(ClockType clockType = ClockType.normal)(); Returns the number of hnsecs since midnight, January 1st, 1 A.D. for the current time. """ This choice of offset seems Esperanto-like: deliberately chosen to equally inconvenience every user. Is there any advantage to this at all on any platform, or is it just pure badness?
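In practice the odd epoch rarely leaks into user code, because Phobos ships converters between "std time" (hnsecs since 1 A.D.) and Unix time. A quick sketch using std.datetime.systime's helpers:

```d
import std.datetime.systime : Clock, stdTimeToUnixTime, unixTimeToStdTime;

void main()
{
    // "Std time" is hnsecs (100 ns ticks) since 1 A.D.; Unix time is
    // seconds since 1970. They differ by a fixed epoch offset.
    immutable long stdNow = Clock.currStdTime;
    immutable long unixNow = stdTimeToUnixTime(stdNow);

    // Round-tripping the Unix epoch itself:
    assert(stdTimeToUnixTime(unixTimeToStdTime(0)) == 0);

    // The conversion is just the epoch offset plus a unit change:
    assert(unixNow == (stdNow - unixTimeToStdTime(0)) / 10_000_000L);
}
```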
Re: new int[]
On Wednesday, 10 January 2018 at 22:46:30 UTC, ag0aep6g wrote: If I understand correctly, the goal is to have the `int[]` itself on the GC heap. The code void main(string[] args) @nogc { int[] x = [1, 2, 3]; } won't compile, because "array literal in @nogc function 'D main' may cause GC allocation". But "may" isn't the same as "will". What determines it? That's a kind of goofy error message now that I think about it.
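For contrast, initializing a fixed-size array from the same literal allocates nothing, so it is accepted under @nogc (a small sketch):

```d
void main() @nogc
{
    // The dynamic-array form `int[] x = [1, 2, 3];` would allocate a
    // GC array at run time, which @nogc rejects. The static-array
    // form fills stack storage in place instead:
    int[3] x = [1, 2, 3];
    assert(x[1] == 2);

    int[] slice; // a null slice is also fine under @nogc
    assert(slice.length == 0);
}
```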
Re: new int[]
Is there any problem with:
---
import std.stdio;

void main(string[] args)
{
    int[] x = [1, 2, 3];
    writeln(x);
}
---
https://run.dlang.io/is/CliWcz
Re: Release D 2.078.0
On Wednesday, 10 January 2018 at 21:32:55 UTC, Nathan S. wrote: On my mac laptop running DMD 2.078.0, building and running the mir-algorithm unittests takes 8 seconds normally but takes ~3 minutes 49 seconds with dub options "releaseMode", "optimize", "inline", "noBoundsCheck". When I remove the "inline" option the build + test time becomes <10 seconds. So the weirdly slow part is related to inlining.
Re: Release D 2.078.0
DMD64 2.078.0 on Linux and macOS is taking wildly longer to build and run unittests for mir-algorithm. The extra time appears to be related to release-mode optimizations. Build logs: https://travis-ci.org/libmir/mir-algorithm/builds/324052159
DMD 2.077.1 for linux32: 3 min 20 sec
DMD 2.077.1 for linux64: 3 min 16 sec
DMD 2.077.1 for mac64: 5 min 4 sec
DMD 2.078.0-rc.1 for linux32: 13 min 30 sec
DMD 2.078.0-rc.1 for linux64: 9 min 39 sec
DMD 2.078.0-rc.1 for mac64: 10 min 56 sec, then the job was aborted
The above tests all include a non-release build and a release build. On my mac laptop running DMD 2.078.0, building and running the mir-algorithm unittests takes 8 seconds normally but takes ~3 minutes 49 seconds with dub options "releaseMode", "optimize", "inline", "noBoundsCheck". I don't see any new compiler optimizations in the changelog. Any idea of what could be causing this?
Re: Does LDC support profiling at all?
On Friday, 22 December 2017 at 09:52:26 UTC, Chris Katko wrote: DMD can use -profile and -profile=gc. But I tried for HOURS to find the equivalent for LDC and came up with only profile-guided optimization--which I don't believe I want. Yet, if we can get PGO... where's the PROFILE itself it's using to make those decisions! :) Thanks. Is -fprofile-instr-generate= what you're looking for?
Re: why ushort alias casted to int?
On Friday, 22 December 2017 at 10:42:28 UTC, crimaniak wrote: Hm, really. ok, I will use the explicit cast, but I don't like it. It's because the C programming language has similar integer promotion rules. That doesn't make it any more convenient if you weren't expecting it but that is the reason behind it.
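Concretely, the promotion is visible in the type system, which is why the explicit cast is needed when storing the result back:

```d
void main()
{
    ushort a = 1, b = 2;

    // As in C, operands narrower than int are promoted to int before
    // arithmetic, so the sum is typed int, not ushort.
    static assert(is(typeof(a + b) == int));

    // Storing it back therefore needs an explicit narrowing cast:
    ushort c = cast(ushort)(a + b);
    assert(c == 3);
}
```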
Re: Proposal for a standard Decimal type in alpha
On Thursday, 21 December 2017 at 13:59:28 UTC, Jack Stouffer wrote: I just finished getting the type into an alpha version, and I wanted to solicit people's opinions on the API and if I'm heading in the right direction with this. The dub page: https://code.dlang.org/packages/stdxdecimal The docs: https://jackstouffer.github.io/stdxdecimal/ What you can do so far: * Control behavior with a Hook type a-la std.experimental.checkedint * Addition, subtraction, multiplication, division * Construction from a strings and built-in number types I think it would be clearer if the precision, the rounding mode, and the error behavior were three separate parameters instead of a single Hook. Predefined settings named "Abort", "Throw", and "NoOp" would then be self-explanatory, and it wouldn't be necessary to entirely rewrite them if you wanted precision of 10 or 20 decimal digits instead of 9 or wanted to use a different rounding mode.
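A sketch of the API shape being suggested (all names here are hypothetical, not from the stdxdecimal package): three orthogonal template parameters rather than one Hook type, so changing the precision doesn't mean rewriting the rounding and error logic.

```d
enum Rounding { halfUp, halfEven, down }
enum OnError { abort, throwException, noOp }

struct Decimal(uint precision = 9,
               Rounding rounding = Rounding.halfUp,
               OnError onError = OnError.throwException)
{
    // representation elided
}

// Each knob can be turned independently:
alias D20 = Decimal!20;                                   // more digits
alias DBankers = Decimal!(9, Rounding.halfEven);          // other rounding
alias DQuiet = Decimal!(9, Rounding.halfUp, OnError.noOp);

void main()
{
    static assert(is(D20 == Decimal!(20, Rounding.halfUp,
                                     OnError.throwException)));
}
```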
Mir Random v0.3.0 release
About package --
Mir-Random [1] is a random number generator library that covers the C++ STL's random-number functionality [2]. It is compatible with mir.ndslice, std.algorithm, and std.range. At the same time, mir.random has its own API, which is safer and more secure than std.random.

Release 0.3.0 --
You may find the release notes with hyperlinks more useful: https://github.com/libmir/mir-random/releases/tag/v0.3.0

Performance Increases --
We now use Daniel Lemire's fast alternative to modulo reduction. Compiling with LDC 1.6.0 for x86-64, Mt19937_64 randIndex throughput increased 40% for uint and 136% for ulong. Xoroshiro128Plus randIndex throughput increased 73% for uint and 325% for ulong. The required mir-algorithm version has increased to v0.7.0 because extMul is necessary for the ulong version.

New since v0.2.8 --
- New engine: SplitMix64 / Splittable64
- Convenience functions related to thread-local randoms: rne (like std.random's rndGen); threadLocal!Engine for an arbitrary engine; and ways of mucking about with the bookkeeping state that most people won't need but a few have requested in the past.
- Made some engines compatible with APIs that expect std.random-style UniformRNGs. Compatible as-is: Xoroshiro128Plus, all predefined PCG engines, and the new SplitMix64/Splittable64 engines. For any others there is an adaptor. Copy constructors are disabled, so they will only work with functions that "do the right thing": take PRNGs by reference and don't make implicit copies of them.

Fixed since v0.2.8 --
- Changed many parts of the library to be @safe.
- Linux GETRANDOM in unpredictableSeed now works on non-x86/x86-64 architectures.
- Removed endian dependency when producing 64-bit values from a native-32-bit PRNG.

Changed APIs --
- The versions of genRandomBlocking/genRandomNonBlocking that take a pointer and a length are no longer @trusted. Instead there are trusted overloads for both that take a ubyte[].
- mir.random.algorithm has been changed in the interest of memory safety.
You can still write unsafe code, but now if you try to write @safe code the library will let you. Instead of taking engines by reference and storing their addresses (which could result in the stored address outliving the engine), the various functions now require arguments to be either objects or pointers to structs. For locally scoped engines there are templates with alias parameters. This is a major API change, so feedback/criticism is welcome.

Links --
[1] https://github.com/libmir/mir-random
[2] http://en.cppreference.com/w/cpp/numeric/random
Re: How to catch line number of exception without catching it ?
On Wednesday, 13 December 2017 at 18:24:09 UTC, Thomas wrote: So my question is: Is there a way to catch that line where the exception has happened without a catch ? Yes: use a debugger.
mir-algorithm v0.7.0: new interpolation, optmath, bugfixes
About Mir Algorithm
Mir Algorithm [1] is a core Dlang library for math and finance, and the home of ndslice, Dlang's multidimensional array package.

New Modules since v0.6.21
- Reworked interpolation API, now found in mir.interpolate, mir.interpolate.linear, mir.interpolate.pchip.
- New module mir.interpolate.spline for cubic interpolation. Warning: multivariate cubic spline with derivatives is still experimental.
- New module mir.interpolate.constant for constant interpolation. Warning: multivariate constant interpolant is still experimental.

API Changes since v0.6.21
- Added to mir.math.common the function attributes @optmath and @fmamath. They only have an effect when compiling with LDC but can be used with all compilers. (This now also applies to @fastmath.) @optmath is similar to @fastmath but does not allow unsafe-fp-math and does not force LDC to replace division with multiplication by the reciprocal.
- New mir.utility.extMul: extended unsigned multiplication that makes available the high bits of the result.
- New mir.functional.aliasCall.
- New mir.ndslice.algorithm.maxLength: returns the max length across all dimensions.
- New mir.ndslice.slice.IteratorOf!(T : Slice): extracts the iterator type from a Slice.
- New mir.ndslice.slice.ndassign: assignment utility template that works both with scalars and with ndslices.
- In mir.ndslice.slice.Slice: iterator is now inout; opUnary now works with - and +; opIndexAssign now returns ref this instead of void.
- mir.ndslice.field.MagicField supports length and shape.

Removed Modules
- mir.interpolation, mir.interpolation.linear, mir.interpolation. Migrate to replacements (mir.interpolate.*).

Other Changes since v0.6.21
- Uses of @fastmath in the Mir codebase have been replaced by @optmath, excepting mir.math.sum Summation.fast.
- In mir.ndslice.topology: under-the-hood improvements in slide, diff, pairwise.
- In mir.ndslice.slice.Slice: opBinary and opBinaryRight now internally use mir.ndslice.topology.vmap instead of mir.ndslice.topology.indexed.
Fixed since v0.6.21
- Fix in mir.ndslice.topology.map for compilation failing in cases where chained map calls couldn't be coalesced due to capturing multiple contexts (seemingly a compiler bug in some cases).
- Made mir.ndslice.topology.flattened backwards compatible with LDC 1.2.0 for those who haven't upgraded.
- Added a workaround in mir.ndslice.algorithm.reduce for a DMD inlining bug on non-Windows x86-64 (LDC unaffected).
- mir.primitives.shape now takes its argument by reference.

[1] https://github.com/libmir/mir-algorithm
Release notes with hyperlinks: https://github.com/libmir/mir-algorithm/releases/tag/v0.7.0
Re: Static array as immutable
On Tuesday, 12 December 2017 at 11:37:40 UTC, Jonathan M Davis wrote: On Tuesday, December 12, 2017 10:35:15 Ivan Trombley via Digitalmars-d-learn wrote: On Tuesday, 12 December 2017 at 09:48:09 UTC, Jonathan M Davis wrote: On Tuesday, December 12, 2017 07:33:47 Ivan Trombley via Digitalmars-d-learn wrote: Is there some way that I can make this array immutable?
---
static float[256] ga = void;
static foreach (i; 0 .. 256)
    ga[i] = (i / 255.0f) ^^ (1 / 2.2f);
---
If you want anything to be immutable, you either have to initialize it directly or give it a value in a static constructor (and the static constructor solution won't work for local variables). So, you'd need to do something like

static immutable float[256] ga = someFuncThatGeneratesGA();

If the function is pure, and there's no way that the return value was passed to the function, then its return value can be assigned to something of any mutability, since the compiler knows that there are no other references to it, and it can implicitly cast it; or if the type is a value type (as in this case), then you just get a copy, and mutability isn't an issue. Alternatively to using a pure function, you can use std.exception.assumeUnique to cast to immutable, but that relies on you being sure that there are no other references to the data, and it may not work at compile-time, since casting is a lot more restrictive during CTFE. So, in general, using a pure function is preferable to assumeUnique. - Jonathan M Davis Ah, it doesn't work.
I get this error using the ^^ operator:
---
/usr/include/dmd/phobos/std/math.d(5724,27): Error: cannot convert  to ubyte* at compile time
/usr/include/dmd/phobos/std/math.d(6629,24): called from here: signbit(x)
/usr/include/dmd/phobos/std/math.d(6756,16): called from here: impl(cast(real)x, cast(real)y)
---
:( Well, if the code you need to initialize a variable can't be run at compile time, then that variable can't be a variable that needs to be initialized at compile time and be immutable. - Jonathan M Davis While what you're saying is true, exponentiation not being runnable at compile time is a defect, and I would assume a regression. I'll file a bug report. FWIW, when trying to run the following with DMD v2.077.1 I get:
```
void main(string[] args)
{
    import std.stdio;
    enum e = (1.0 / 255.0f) ^^ (1 / 2.2f);
    writeln("e = ", e);
}
```
=>
---
[...]/dmd/std/math.d(440): Error: y.vu[4] is used before initialized
[...]/dmd/std/math.d(413): originally uninitialized here
[...]/dmd/std/math.d(4107): called from here: floorImpl(x)
[...]/dmd/std/math.d(2373): called from here: floor(x + 0.5L)
[...]/dmd/std/math.d(2110): called from here: exp2Impl(x)
[...]/dmd/std/math.d(6743): called from here: exp2(yl2x(x, y))
[...]/dmd/std/math.d(6756): called from here: impl(cast(real)x, cast(real)y)
---
Re: Release D v2.077.1
On Friday, 1 December 2017 at 12:17:38 UTC, Christian Köstlin wrote: also this link is broken for me i get a 404 Omit the "v" from "v2.077.1.html". https://dlang.org/changelog/2.077.1.html
Re: Time to move logger from experimental to std ?
On Thursday, 30 November 2017 at 09:37:27 UTC, Robert burner Schadek wrote: On Wednesday, 29 November 2017 at 19:48:44 UTC, Nathan S. wrote: Considering that one of those issues is that the logger outputs garbage when given a wstring or a dstring, I would not take this as an indication that it's time to "graduate" the logger from experimental. That was fixed at dconf17 and merged a couple of days later. Glad to hear that. It seems like the reason it might have fallen through the cracks is that the commit message referenced issue 15945 instead of 15954. https://github.com/dlang/phobos/commit/9e6759995a1502dbd92a05b4d0a8b662f1c6032b#diff-41bb8731b16e43f1b48e0d529c498fa9
Re: Time to move logger from experimental to std ?
On Wednesday, 29 November 2017 at 14:32:54 UTC, Basile B. wrote: - There only 4 issues for logger Considering that one of those issues is that the logger outputs garbage when given a wstring or a dstring, I would not take this as an indication that it's time to "graduate" the logger from experimental.
Re: minElement on array of const objects
---
Unqual!Element seed = r.front;
alias MapType = Unqual!(typeof(mapFun(CommonElement.init)));
---
This looks like a job for std.typecons.Rebindable!(const A) instead of Unqual!(const A), which is used currently. I am surprised that this is the first time anyone has run into this.
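To illustrate why Unqual isn't enough here (a sketch with a made-up class `A`):

```d
import std.typecons : Rebindable;

class A
{
    int x;
    this(int x) { this.x = x; }
}

void main()
{
    // Unqual!(const A) is plain A, but a const(A) reference does not
    // implicitly convert to A, so "Unqual!Element seed = r.front;"
    // fails to compile for const class elements. Rebindable keeps the
    // referenced object const while letting the reference be reseated:
    Rebindable!(const A) seed = new A(1);
    seed = new A(2); // rebinding the reference is fine
    assert(seed.x == 2);
    static assert(!__traits(compiles, seed.x = 3)); // data stays const
}
```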
Re: Project Elvis
On Monday, 6 November 2017 at 12:25:06 UTC, Biotronic wrote: I find I often use this in C# with a more complex expression on the left-hand side, like a function call. A quick search shows more than 2/3 of my uses are function calls or otherwise significantly more complex than a variable. Also, it works great in conjunction with the null conditional: foo.Select(a => bar(a, qux)).FirstOrDefault?.Name ?? "not found"; It seems to be targeted primarily at code that does a lot with classes and is written in such a way that it's not clear whether a class reference should be null or not, whereas most D code doesn't do much with classes. In my C# code, it's used with strings and Nullable more often than with classes. Given my own experience with the ?? operator, I'd argue it's probably not worth it without also including null conditional (?.). A quick search in a few projects indicate roughly half the uses of ?? also use ?.. -- Biotronic Without including "?.", this proposed "Elvis operator" will just be ECMAScript-style "||". I think it will still be useful because "||" is useful, but it would be more elegant to just allow "a || b" to have the common type of "a" and "b" (which wouldn't change the truthiness of the expression) instead of introducing a new operator that is exactly like "||" except it doesn't force the result to be bool.
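What the proposed operator would abbreviate is `a ? a : b` with `a` evaluated once; a tiny helper (the `orElse` name is made up) captures it for any type usable in a boolean context:

```d
// `b` is lazy so the fallback is only evaluated when needed,
// matching the short-circuit behavior of a real Elvis operator.
T orElse(T)(T a, lazy T b)
{
    return a ? a : b;
}

void main()
{
    int x = 42;
    int* p = null;
    int* q = &x;
    assert(orElse(p, q) is q); // null falls through to the fallback
    assert(orElse(q, p) is q); // non-null short-circuits; b untouched
}
```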
mir-linux-kernel 1.0.0: Linux system call numbers for different architectures
About package --
Linux system call numbers for different architectures. That's all. https://code.dlang.org/packages/mir-linux-kernel

Motivating Example --
Linux 3.17 added the getrandom syscall. Using it instead of /dev/[u]?random was a win. But we didn't think about all of the architectures that people might try building our library on, and soon we got a report from a user that our latest and greatest release was failing to compile on Raspberry Pi.

Example --
---
import mir.linux._asm.unistd : NR_getrandom;

/*
 * If the GRND_NONBLOCK flag is set, then getrandom() does not block in
 * these cases, but instead immediately returns -1 with errno set to EAGAIN.
 */
private ptrdiff_t genRandomImplSysNonBlocking(scope void* ptr, size_t len) @nogc nothrow @system
{
    return syscall(NR_getrandom, cast(size_t) ptr, len, GRND_NONBLOCK);
}
---
Re: How to use containers in lock based concurrency
Is this advice from 2015 outdated? I found it while I was wrestling with shared data structures, and after reading I stopped doing that. https://p0nce.github.io/d-idioms/#The-truth-about-shared The truth about shared It's unclear when and how shared will be implemented. Virtually noone use shared currently. You are better off ignoring it at this moment.
Re: private keyword dont appear to do anything
On Friday, 3 November 2017 at 20:01:27 UTC, Jonathan M Davis wrote: Most folks are surprised by this behavior I found it surprising at first but now any other way seems absurd to me. None of the benefits of data encapsulation apply to code written five lines away in the same file.
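A small sketch of the rule (made-up names): in D, `private` is module-level, so anything in the same module sees private members and the encapsulation boundary is the file.

```d
struct Account
{
    private int balance = 100;
}

void auditSameModule()
{
    Account acc;
    acc.balance += 1; // legal: same module, despite `private`
    assert(acc.balance == 101);
}

void main()
{
    auditSameModule();
}
```

Code in a different module that tried `acc.balance` would get a compile error, which is exactly where the encapsulation benefit applies.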
Re: Mir Random v0.2.8 release
On Tuesday, 24 October 2017 at 03:30:19 UTC, Nathan S. wrote: - On macOS, OpenBSD, and NetBSD, use arc4random_buf [4] in unpredictableSeed and genRandomNonBlocking. Since I am not sure whether this is common knowledge, arc4random isn't based on RC4 on these platforms. macOS uses AES, and OpenBSD and NetBSD use ChaCha20.
Mir Random v0.2.8 release
About package --
Mir-Random [1] is a random number generator library that covers the C++ STL's random-number functionality [2]. It is compatible with mir.ndslice, std.algorithm, and std.range. At the same time, mir.random has its own API, which is safer and more secure than std.random.

Release v0.2.8 --
Additions:
- Added xorshift1024*φ and xoroshiro128+ generators [3] (mir.random.engine.xorshift : Xorshift1024StarPhi, Xoroshiro128Plus)
- Mt19937 and Mt19937_64 can be seeded from an array or ndslice
- mir.random.engine.preferHighBits!T to query the new optional enum bool preferHighBits

Improvements:
- When the high bits of a PRNG's output are known to have better statistical properties than the low bits, use the high bits when not all bits of output are required.
- On macOS, OpenBSD, and NetBSD, use arc4random_buf [4] in unpredictableSeed and genRandomNonBlocking.

Bugfixes:
- Fix isSaturatedRandomEngine!T not working when T.opCall is a function template.
- Fix address-based increment for PCGs in unique_stream mode.
- Incorporated an upstream fix for seeding an MCG with a seed that's a multiple of the modulus.

Links --
[1] https://github.com/libmir/mir-random
[2] http://en.cppreference.com/w/cpp/numeric/random
[3] http://xoroshiro.di.unimi.it/
[4] https://man.openbsd.org/arc4random.3
Re: My two cents
On Monday, 23 October 2017 at 22:22:55 UTC, Adam Wilson wrote: Additionally, MSFT/C# fully recognizes that the benefits of Async/Await have never been and never were intended to be for performance. Async/Await trades raw performance for an ability to handle a truly massive number of simultaneous tasks. Could you clarify this? Do you mean it's not supposed to have better performance for small numbers of tasks, but there is supposed to be some high threshold of tasks/second at which either throughput or latency is better?
Re: Is there further documentation of core.atomic.MemoryOrder?
Thank you. For anyone else with the same question, I also found this page helpful: http://en.cppreference.com/w/cpp/atomic/memory_order
Is there further documentation of core.atomic.MemoryOrder?
Is there a formal description of "hoist-load", "hoist-store", "sink-load", and "sink-store" as used in core.atomic.MemoryOrder (https://dlang.org/library/core/atomic/memory_order.html)?
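For anyone landing here later: in core.atomic the familiar acquire/release names correspond roughly to those hoist/sink constraints (acquire prevents later operations from being hoisted above the load; release prevents earlier operations from sinking below the store). A minimal pairing, single-threaded here just to exercise the calls:

```d
import core.atomic : atomicLoad, atomicStore, MemoryOrder;

shared int data;
shared int flag;

// The release store publishes; an acquire load on the other side
// synchronizes with it, so the write to `data` is guaranteed visible
// once `flag` reads 1.
void producer()
{
    atomicStore!(MemoryOrder.raw)(data, 42);
    atomicStore!(MemoryOrder.rel)(flag, 1);
}

bool consumer()
{
    if (atomicLoad!(MemoryOrder.acq)(flag) == 1)
        return atomicLoad!(MemoryOrder.raw)(data) == 42;
    return false;
}

void main()
{
    producer();
    assert(consumer());
}
```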