Re: I had a bad time with slice-in-struct array operation forwarding/mimicking. What's the best way to do it?
On Saturday, 4 May 2019 at 15:36:51 UTC, Nicholas Wilson wrote: On Saturday, 4 May 2019 at 15:18:58 UTC, Random D user wrote: I wanted to make a 2D array like structure and support D slice like operations, but I had surprisingly bad experience. The de facto multi dimensional array type in D is mir's ndslice https://github.com/libmir/mir-algorithm/blob/master/source/mir/ndslice/slice.d#L479 Thanks. I'll take a look.
Re: I had a bad time with slice-in-struct array operation forwarding/mimicking. What's the best way to do it?
On Saturday, 4 May 2019 at 16:10:36 UTC, Adam D. Ruppe wrote: On Saturday, 4 May 2019 at 15:18:58 UTC, Random D user wrote: But array copy and setting/clearing doesn't: int[] bar = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ]; foo[] = bar[]; Generally speaking, opIndex is for getting, opIndexAssign is for setting. Thanks a lot for a very detailed answer. Sorry about the late reply. But for 2d and 3d and more arrays, the number of functions explodes really fast. Yeah, tastes like C++, but I guess I'll bite. I value debuggability and I only have the 2D case, so I think templates are out.
I had a bad time with slice-in-struct array operation forwarding/mimicking. What's the best way to do it?
I wanted to make a 2D-array-like structure and support D slice-like operations, but I had a surprisingly bad experience. I quickly copy-pasted the example from the docs: https://dlang.org/spec/operatoroverloading.html#array-ops

It's something like this:

struct Array2D(E)
{
    E[] impl;
    int stride;
    int width, height;

    this(int width, int height, E[] initialData = [])
    ref E opIndex(int i, int j)
    Array2D opIndex(int[2] r1, int[2] r2)
    auto opIndex(int[2] r1, int j)
    auto opIndex(int i, int[2] r2)
    int[2] opSlice(size_t dim)(int start, int end)
    @property int opDollar(size_t dim : 0)()
    @property int opDollar(size_t dim : 1)()
}

So basic indexing works fine:

auto foo = Array2D!int(4, 4);
foo[0, 1] = foo[2, 3];

But array copy and setting/clearing doesn't:

int[] bar = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ];
foo[] = bar[];

And I get this very cryptic message:

(6): Error: template `example.Array2D!int.Array2D.opSlice` cannot deduce function from argument types `!()()`, candidates are:
(51): `example.Array2D!int.Array2D.opSlice(ulong dim)(int start, int end) if (dim >= 0 && (dim < 2))`

1. WTF is `!()()`? And I haven't even called anything that uses opSlice, i.e. `a .. b`. Anyway, it doesn't overload [] with opIndex(), so fine, I add that:

E[] opIndex() { return impl; }

Now I get:

foo[] = bar[]; // or foo[] = bar;
Error: `foo[]` is not an lvalue and cannot be modified

The array-copying docs say: "When the slice operator appears as the left-hand side of an assignment expression, it means that the contents of the array are the target of the assignment rather than a reference to the array. Array copying happens when the left-hand side is a slice, and the right-hand side is an array of or pointer to the same type."

2. WTF, I do have a slice operator on the left-hand side of an assignment. So I guess [] here is just a wonkily named getter (and not an operator) for a slice object, and that object receives the =, so it's trying to overwrite/set the slice object itself.
Next I added ref to the E[] opIndex():

ref E[] opIndex() { return impl; }

Now foo[] = bar[] works as expected, but then I tried foo[] = 0; and that fails:

Error: cannot implicitly convert expression `0` of type `int` to `int[]`

3. WTF. Didn't I just get a reference directly to the slice? Array copying works, so why doesn't array setting? The ugly foo[][] = 0 does work, but it's so ugly/confusing that I'd rather just use a normal function. So I added:

ref E[] opIndexAssign(E value) { impl[] = value; return impl; }

And now foo[] = 0; works, but foo[0, 1] = foo[2, 3] doesn't. I get:

Error: function `example.Array2D!int.Array2D.opIndexAssign(int f)` is not callable using argument types `(int, int, int)`
expected 1 argument(s), not 3

4. WTF. So basically adding opIndexAssign(E value) disabled ref E opIndex(int i, int j). Shouldn't it consider both? I'm surprised how convoluted this is. Is this really the way it's supposed to work, or is there a bug or something? So what is the best/clearest/most concise/D way to do these for a custom type? I was planning for:

foo[] = bar;                               // Full copy
foo[] = 0;                                 // Full clear
foo[0 .. 5, 1] = bar[0 .. 5];              // Row/Col copy
foo[1, 0 .. 5] = 0;                        // Row/Col clear
foo[0 .. 5, 2 .. 4] = bar[1 .. 6, 0 .. 2]; // Box copy
foo[0 .. 5, 2 .. 4] = 0;                   // Box clear

Anyway, this is not a huge deal breaker for me; I was just surprised and felt like I was missing something. I suppose I can manually define every case one by one and not return/use any references, etc., or use alias this to forward to impl[] (which I don't want to do, since I don't want to change .length, for example), or just use normal functions and be done with it. And it's not actually just a regular array I'm making, which is why it will be mostly custom code, except for the very basics.
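For reference, a minimal sketch of how the wanted assignments map onto opIndexAssign overloads, following the operator-overloading spec. The type name Grid and the row-major layout are invented for illustration, not taken from the original post; this is a sketch, not a drop-in fix for the poster's type.

```d
struct Grid(E)
{
    E[] impl;
    int width, height;

    this(int width, int height)
    {
        this.width = width;
        this.height = height;
        impl = new E[width * height];
    }

    // element read: g[i, j]
    ref E opIndex(int i, int j) { return impl[j * width + i]; }

    int[2] opSlice(size_t dim)(int start, int end) { return [start, end]; }
    @property int opDollar(size_t dim : 0)() { return width; }
    @property int opDollar(size_t dim : 1)() { return height; }

    // g[] = 0;  -- full clear
    void opIndexAssign(E value) { impl[] = value; }

    // g[] = bar;  -- full copy
    void opIndexAssign(E[] source) { impl[] = source[]; }

    // g[i0 .. i1, j] = 0;  -- row segment clear
    void opIndexAssign(E value, int[2] r1, int j)
    {
        impl[j * width + r1[0] .. j * width + r1[1]] = value;
    }

    // g[i, j] = value;  -- element set. This overload must be spelled out
    // once any other opIndexAssign exists, because declaring opIndexAssign
    // hides the fallback through ref opIndex (the "4. WTF" above).
    void opIndexAssign(E value, int i, int j) { impl[j * width + i] = value; }
}

void main()
{
    auto g = Grid!int(4, 4);
    g[] = 0;
    g[0, 1] = 7;
    g[0 .. 4, 2] = 9;      // clear one row segment
    int[] bar = new int[16];
    g[] = bar;             // full copy
    assert(g[0, 1] == 0);
}
```

The box copy (`g[a .. b, c .. d] = other[...]`) follows the same pattern with an `opIndexAssign(Grid source, int[2] r1, int[2] r2)` overload; it is omitted here to keep the sketch short.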
Re: structs inheriting from and implementing interfaces
On Friday, 29 December 2017 at 12:03:59 UTC, Mike Franklin wrote: In C#, structs can inherit from and implement interfaces. Is that simply because it hasn't been implemented or suggested yet for D, or was there a deliberate design decision? Thanks for your insight, Mike

I think it's deliberate; structs are just plain dumb value types. If I remember correctly, Remedy's Binderoo C++ bindings implemented C++ inheritance on top of structs. You might want to look at that. Or you could do C-style "inheritance" and slap some D magic on top of that. Some pseudo code:

struct Base { SubType subtype; int someBaseField; }
struct Child1 { Base base; /* Must be first */ int foo; }
struct Child2 { Base base; float bar; }

void base_doSomething(Base* b);
void child1_doSomething(Child1* c1);
void child2_doSomething(Child2* c2);

Child1 c1;
base_doSomething(cast(Base*)&c1);

Base* base = cast(Base*)&c1;
switch (base.subtype)
{
    case SubType.Child1: child1_doSomething(cast(Child1*)base); break;
    case SubType.Child2: child2_doSomething(cast(Child2*)base); break;
    default: break;
}

// add some alias this and other D things to smooth things out.
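The "alias this and other D things" smoothing alluded to above could look roughly like this. All names (Kind, Circle, describe) are invented for illustration; it is a sketch of the C-style layout plus alias this, not Binderoo's approach.

```d
// C-style struct "inheritance" smoothed over with alias this, so a
// Child implicitly converts to its Base.
enum Kind { circle, square }

struct Base
{
    Kind kind;
    int someBaseField;
}

struct Circle
{
    Base base;       // must be first for the cast(Base*) trick to stay valid
    alias base this; // a Circle now implicitly converts to Base
    float radius;
}

int describe(ref Base b)
{
    // manual dispatch on the tag, as in the pseudo code above
    final switch (b.kind)
    {
        case Kind.circle: return 1;
        case Kind.square: return 2;
    }
}

void main()
{
    auto c = Circle(Base(Kind.circle, 42), 1.5f);
    assert(describe(c) == 1);        // passes via alias this
    assert(c.someBaseField == 42);   // base fields visible directly
}
```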
Re: I think is a bug?
On Sunday, 12 March 2017 at 01:55:20 UTC, ketmar wrote: Random D user wrote: How come string* suddenly has a .length property? due to automatic pointer dereferencing that `.` does. no, not a bug.

Ah... right. Silly me. Of course, since string is actually immutable(char)[]. That's a bit of a nasty corner case where -> == . isn't so nice. Fortunately, it's rare. Thanks. This happened to me when I was packing stuff into an SoA layout and didn't want to duplicate the length in the struct (implicitly, by using []). Of course, I forgot to update one place to use the shared length. That is:

length ptr ptr ptr

instead of

ptr length ptr length ptr length

Perhaps I should do an SoA layout template that somehow disables .length on the individual arrays.
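One way the shared-length SoA idea could be sketched: store raw pointers instead of slices, so the individual "arrays" have no .length to misuse, and hand out slices through the one shared length. SoA and its members are invented names; this is an assumption about what such a template might look like, not code from the thread.

```d
// Shared-length SoA sketch: raw pointers per field, one length for all.
struct SoA(T...)
{
    size_t length;          // the single shared length
    void*[T.length] ptrs;   // one raw pointer per field array

    // hand out a correctly typed slice for field i, bounded by the
    // shared length -- there is no per-field .length to get wrong
    auto slice(size_t i)()
    {
        return (cast(T[i]*) ptrs[i])[0 .. length];
    }
}

void main()
{
    float[3] xs  = [1, 2, 3];
    int[3]   ids = [10, 20, 30];

    SoA!(float, int) s;
    s.length  = 3;
    s.ptrs[0] = xs.ptr;
    s.ptrs[1] = ids.ptr;

    assert(s.slice!0[2] == 3);
    assert(s.slice!1[1] == 20);
}
```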
I think is a bug?
int*[] foo;
foo.length = 5;

import std.c.stdlib;
int* baz = cast(int*)malloc(50);

import std.c.stdio;
printf("%d %d", foo.length, baz.length);

fails with: Error: no property 'length' for type 'int*'

BUT:

string*[] foo;
foo.length = 5;

import std.c.stdlib;
string* baz = cast(string*)malloc(50);

import std.c.stdio;
printf("%d %d", foo.length, baz.length);

compiles and prints: 5 -842150451

How come string* suddenly has a .length property? Anyway the result is garbage, so I think this must be a bug. DMD32 D Compiler v2.073.2
Why can't I init a new var from const var?
I can init a variable from a mutable source without defining any constructors or assignment operators, but not if the source is const. I would imagine the behavior to be the same with a mutable and a const source, since it's just reading the source and copying it. Is there a reason for this? Or is this a bug? I can work around this by making copies or casting, but that just creates ugly code everywhere. Here's an example (with dmd 2.073):

struct Foo
{
    this( Foo source )           { buf = source.buf.dup; }
    this( const Foo source )     { buf = source.buf.dup; }
    this( const ref Foo source ) { buf = source.buf.dup; }

    void opAssign( Foo source )           { buf = source.buf.dup; }
    void opAssign( const Foo source )     { buf = source.buf.dup; }
    void opAssign( const ref Foo source ) { buf = source.buf.dup; }

    char[] buf;
}

Foo fun(const ref Foo foo, Foo foo2)
{
    Foo bar = foo;   // Error: cannot implicitly convert expression (foo) of type const(Foo) to Foo
    Foo baz = foo2;  // Ok, no need for constructors or opAssign
    Foo baz2 = cast(const Foo)foo2; // Error: cannot implicitly convert expression (Foo(null).this(foo2)) of type const(Foo) to Foo

    Foo bar2;
    bar2 = foo;  // uses opAssign( const Foo ) / opAssign( const ref Foo )
    Foo bar3;
    bar3 = foo2; // uses opAssign( const Foo ) / opAssign( Foo )
    Foo bar4;
    bar4 = cast(const Foo)foo2; // uses opAssign( const Foo )

    //Foo bar = Foo(foo);     // This works, provided there is a non-const opAssign defined.
    //Foo bar = cast(Foo)foo; // This seems to work as well.
    return bar;
}

Foo foo;
foo = fun(foo, foo);
Re: How to detect/filter modules in __traits(allMembers)?
On Saturday, 11 June 2016 at 20:30:47 UTC, Basile B. wrote: On Saturday, 11 June 2016 at 19:45:56 UTC, Random D user wrote: Any good ideas how to do that? I couldn't figure it out in a short amount of time, but I expect that it's possible. I'm probably missing something obvious here. Probably because D's reflection/meta programming facilities are a bit all over the place (and unnecessarily convoluted IMO). Also I'm not super familiar with every compile-time feature, which is why I want to learn and write some meta functions/templates myself. [...] It will compile if you define the option informational warnings (-wi).

Yes, ignoring deprecations gets me forward (basically the same as dropping back to the previous compiler version), but I'd rather figure out/know a proper solution. I suppose I could wrap those structs (with the UDA) into another named struct or empty template to split them into a separate "namespace" from the imported modules. I guess that wouldn't be so bad, since all the structs are similar, which means their names are similar. So basically, NameType would become Type.Name. Hmm... Anyway, that workaround seems a bit silly, so I'm hoping to find a proper, generic and robust solution without any gimmicks.
How to detect/filter modules in __traits(allMembers)?
Any good ideas how to do that? I couldn't figure it out in a short amount of time, but I expect that it's possible. I'm probably missing something obvious here. Probably because D's reflection/meta programming facilities are a bit all over the place (and unnecessarily convoluted IMO). Also I'm not super familiar with every compile-time feature, which is why I want to learn and write some meta functions/templates myself.

In my use case I want to extract structs with UDAs from a module, which also imports a bunch of stuff. Now __traits(allMembers) gives me the list of things (as strings) that the module contains. This includes module imports, and when I do something like this (in pseudo D):

template ...
{
    foreach(name ; __traits(allMembers, MODULE) )
        foreach(attr_name ; __traits(getAttributes, __traits(getMember, MODULE, name)))
            // whatever logic...
}

I get "Deprecation: foo.bar is not visible from module baz" with a 2.071+ compiler (i.e. the import symbol lookup rules changed), where foo.bar is an import in MODULE and baz is the module where the template for the code above is located. And I'm calling the template from yet another module, qux. This used to work with previous versions (modules were just skipped). The trouble begins when an imported module is passed to __traits(getMember). (However, I can do __traits(hasMember), which is a bit weird, since I'd assume the same visibility rules would apply.) Here's a pseudo example:

mymodule.d:
    import foo.bar;
    @uda struct mystruct

baz.d:
    template get_uda_structs

qux.d:
    import mymodule;
    import baz;
    auto structs = get_uda_structs(mymodule, uda);
    // Won't compile: "Deprecation: foo.bar is not visible from module baz"
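One hedged sketch of the filtering idea: guard each member with __traits(compiles) before touching it with getMember, so members that aren't accessible from the inspecting module (such as imports) are skipped instead of tripping the deprecation. UdaStructNames, uda, and the sample structs are invented names for illustration; this is an assumption about one workable shape, not the thread's actual get_uda_structs.

```d
import std.meta : Filter;
import std.traits : hasUDA;

struct uda {}            // stand-in UDA type
@uda struct MyStruct {}
struct Plain {}

template UdaStructNames(alias Mod)
{
    template wanted(string name)
    {
        static if (!__traits(compiles, __traits(getMember, Mod, name)))
            enum wanted = false;   // not visible from here (e.g. an import): skip
        else static if (is(__traits(getMember, Mod, name) == struct))
            enum wanted = hasUDA!(__traits(getMember, Mod, name), uda);
        else
            enum wanted = false;   // functions, variables, templates, ...
    }

    // compile-time list of the names of all @uda structs in Mod
    alias UdaStructNames = Filter!(wanted, __traits(allMembers, Mod));
}
```

Applied to a module alias (e.g. `UdaStructNames!(mixin(__MODULE__))`), this yields only the annotated struct names; whether the __traits(compiles) guard covers every visibility corner case across compiler versions is something to verify locally.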
Re: AA struct hashing bug?
On Tuesday, 8 December 2015 at 11:04:49 UTC, Random D user wrote: I need to look into this more.

Ok. This is a minimal app that crashes for me. If someone could try this:

class App
{
    this() { }

    void crash( int val )
    in { assert( val == 1 ); }
    body
    {
        struct Foo
        {
            this( int k ) { a = k; }
            int a;
        }

        Foo foo;
        int[ Foo ] map;
        map[ foo ] = 1; // Crash! bug?
    }
}

int main( char[][] args )
{
    App a = new App;
    a.crash( 1 );
    return 0;
}

And the previous case's crash looks like this:

asm:
_D6object14TypeInfo_Class7getHashMxFNbNexPvZm:
7ff6d9e4b500  push rbp
7ff6d9e4b501  mov rbp, rsp
7ff6d9e4b504  sub rsp, 0x30
7ff6d9e4b508  mov [rbp-0x8], rsi
7ff6d9e4b50c  mov rsi, [rdx]
7ff6d9e4b50f  test rsi, rsi
7ff6d9e4b512  jz _D6object14TypeInfo_Class7getHashMxFNbNexPvZm+0x20 (0x7ff6d9e4b520)
7ff6d9e4b514  mov rcx, rsi
7ff6d9e4b517  mov rax, [rsi]
7ff6d9e4b51a  call qword near [rax+0x10]   <-- crash here
7ff6d9e4b51e  jmp _D6object14TypeInfo_Class7getHashMxFNbNexPvZm+0x22 (0x7ff6d9e4b522)
7ff6d9e4b520  xor eax, eax
7ff6d9e4b522  mov rsi, [rbp-0x8]
7ff6d9e4b526  lea rsp, [rbp]
7ff6d9e4b52a  pop rbp

stack:
_D6object14TypeInfo_Class7getHashMxFNbNexPvZm() + 0x1e bytes  D
_D6object14TypeInfo_Const7getHashMxFNbNfxPvZm() + 0x13 bytes  D
application.Application.startup.Foo.__xtoHash( application.Application.startup.Foo* p, ulong h ) + 0x55 bytes  D
_D6object15TypeInfo_Struct7getHashMxFNaNbNfxPvZm() + 0x22 bytes  D
_aaGetY() + 0xa0 bytes  D
application.Application.startup()  Line 159 + 0x26 bytes  D
Re: AA struct hashing bug?
On Tuesday, 8 December 2015 at 01:23:40 UTC, Ivan Kazmenko wrote: On Monday, 7 December 2015 at 22:03:42 UTC, Alex Parrill wrote: On Monday, 7 December 2015 at 18:48:18 UTC, Random D user Tested the same code with -m32 and -m64 on Windows. Works for me, too.

I tried this again. And it seems it might be my bug, or that the runtime somehow corrupts its state. Scary. So I have an App class that gets created in main. Basically:

App app = new App;
app.start();

If I put that code snippet as the first thing in the constructor, everything works. If I put it as the first thing in the first method after the constructor, it crashes. And that code is completely unrelated to everything else. Without the code snippet the whole app works fine. Also, if I wrap the code in a local function or class, it works fine even in the first method. I need to look into this more.
Re: How to make a transparent wrapper type?
On Monday, 7 December 2015 at 20:03:07 UTC, Namespace wrote: This seems to work: struct RefVal(T) { private T* ptr; this(T* val) { ptr = val; } ref auto opAssign(U)(auto ref U value) { *ptr = value; return *ptr; } auto get() inout { return ptr; } } Yes. It works for assignment as expected. Thanks. I don't know why I didn't try that. I mean I tried something like this: struct RefVal(T) { }
Re: How to make a transparent wrapper type?
On Tuesday, 8 December 2015 at 10:26:18 UTC, Random D user wrote: On Monday, 7 December 2015 at 20:03:07 UTC, Namespace wrote: This seems to work: struct RefVal(T) { private T* ptr; this(T* val) { ptr = val; } ref auto opAssign(U)(auto ref U value) { *ptr = value; return *ptr; } auto get() inout { return ptr; } } Yes. It works for assignment as expected. Thanks. I don't know why I didn't try that. I mean I tried something like this: struct RefVal(T) { }

Whoops. For some reason I lost focus to the window while typing and accidentally sent the message. Well, anyway, I tried something similar using alias this and template functions, but obviously it didn't work, unfortunately. Your version doesn't work with methods: for example, if Ref!T is Ref!Struct, then r.method() doesn't work. That's the reason for the alias this. But it's good enough with a public ptr. Maybe opDispatch could help here. I haven't really used it so far.
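The opDispatch idea floated above could be sketched like this: any member not found on the wrapper is forwarded to the pointee, which covers method calls as well as field access. RefVal and Foo follow the thread's naming, but the forwarding body is an assumption sketched for illustration, not a tested general solution.

```d
// Sketch: forward unknown member accesses and method calls to *ptr.
struct RefVal(T)
{
    private T* ptr;

    this(T* val) { ptr = val; }

    ref auto opAssign(U)(auto ref U value)
    {
        *ptr = value;
        return *ptr;
    }

    // Members not found on RefVal itself are forwarded to the pointee.
    template opDispatch(string name)
    {
        auto ref opDispatch(Args...)(auto ref Args args)
        {
            static if (Args.length == 0
                       && !is(typeof(mixin("(*ptr)." ~ name ~ "()"))))
                return mixin("(*ptr)." ~ name);        // plain field access
            else
                return mixin("(*ptr)." ~ name)(args);  // method call
        }
    }
}

struct Foo
{
    int a;
    void bump() { ++a; }
}

void main()
{
    Foo foo = Foo(2);
    auto r = RefVal!Foo(&foo);
    r.bump();           // forwarded to foo.bump()
    assert(foo.a == 3);
    assert(r.a == 3);   // field access forwarded too
}
```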
AA struct hashing bug?
struct Foo
{
    this( int k ) { a = k; }
    int a;
}

Foo foo;
int[ Foo ] map;
map[ foo ] = 1; // Crash! bug?

// This also crashes. I believe the crash above makes a call like this (or similar) in the runtime.
//auto h = typeid( foo ).getHash( &foo ); // Crash!

win64 & dmd 2.069.2
How to make a transparent wrapper type?
I kind of miss reference values on the stack, so I attempted to make one in a struct. Pointers are pretty good (since D doesn't have ->), but it would be nice to avoid dereferencing them explicitly on assignment, since a reference is a pointer that you can't change afterwards. I tried something like this:

struct RefVal( T )
{
    this( T* val ) { ptr = val; }

    ref auto opAssign( T value )     { *ptr = value; return *ptr; }
    ref auto opAssign( ref T value ) { *ptr = value; return *ptr; }

    alias ptr this;
    T* ptr;
}

This works for most basic cases, but breaks on:

struct Foo
{
    this( int k ) { a = k; }
    void opAssign( int k ) { a = k; }
    int a;
}

Foo foo = Foo(2);
Foo baz = Foo(3);
RefVal!Foo bar = RefVal!Foo( &foo );
bar = baz;
bar = 5; // Oops! doesn't work

Is there a way to transparently pass everything to *RefVal.ptr? Also, is there a way to make "alias ptr this" work with "private T*"? Ideally I wouldn't want to give access to ptr, but for now it's handy as a workaround.
Re: Builtin array and AA efficiency questions
Ah, missed your post before replying to H. S. Teoh (I should refresh more often). Thanks for the reply. On Thursday, 15 October 2015 at 19:50:27 UTC, Steven Schveighoffer wrote: Without more context, I would say no. assumeSafeAppend is an assumption, and therefore unsafe. If you don't know what is passed in, you could potentially clobber data. In addition, assumeSafeAppend is a non-inlineable runtime function that can *potentially* be low-performing.

Yeah, I know that I want to overwrite the data, but still, that's probably a lot of calls to assumeSafeAppend. So I agree.

For instance, if you call it on a non-GC array, or one that is not marked for appending, you will most certainly need to take the GC lock and search through the heap for your block.

What does "marked for appending" mean? How does it happen, or how is it marked?

The best place to call assumeSafeAppend is when you are sure the array has "shrunk" and you are about to append. If you have not shrunk the array, then the call is a waste; if you are not sure what the array contains, then you are potentially stomping on referenced data.

So assumeSafeAppend is only useful when I have an array whose length is set lower than it was originally and I want to grow it back (that is, arr.length += 1 or arr ~= 1)?

An array uses a block marked for appending; assumeSafeAppend simply sets how much data is assumed to be valid. Calling assumeSafeAppend on a block not marked for appending will do nothing except burn CPU cycles. So yours is not an accurate description.

Related to my question above: how do you get a block not marked for appending? A view slice? Perhaps I should re-read the slice article. I believe it had something like capacity == 0 --> always allocates. Is it this?

A.3) If A.2 is true, are there any conditions where it reverts to the original behavior? (e.g. if I take a new slice of that array)

Any time data is appended, all references *besides* the one that was used to append will now reallocate on appending.
Any time data is shrunk (i.e. arr = arr[0..$-1]), that reference will now reallocate on appending.

Thanks. IMO this is a very concise description of the allocation behavior. I'll use it as a guide. So when to call it really sort of requires understanding what the runtime does.

Note it is always safe to just never use assumeSafeAppend; it is an optimization. You can always append to anything (even non-GC array slices) and it will work properly.

Out of curiosity, how does this work? Does it always just reallocate with the GC if it's allocated with something else?

This is an easy call then:

array.reserve(100);     // reserve 100 elements for appending
array ~= data;          // automatically manages array length for you; if length exceeds 100, it just automatically reallocates more data.
array.length = 0;       // clear all the data
array.assumeSafeAppend; // NOW is the best time to call, because you can't shrink it any more, and you know you will be appending again.
array ~= data;          // no reallocation, unless the previous max size was exceeded.

Thanks. This will probably cover 90% of cases. Usually I just want to avoid throwing away memory that I already have, which is slow if it happens all over your codebase. Like re-reading or recomputing variables that you already have: one doesn't hurt, but a hundred does.

B.1) I have a temporary AA whose lifetime is limited to a known span (might be a function, or a loop spanning a couple of functions). Is there a way to tell the runtime to immediately destroy and free the AA?

There isn't. This reminds me, I have a lingering PR to add aa.clear, which destroys all the elements, but was waiting until object.clear had been removed for the right amount of time. Perhaps it's time to revive that.

Should array have clear() as well? Basically wrapping:

array.length = 0;
array.assumeSafeAppend();

At least it would then be symmetric (and more intuitive) with the built-in containers. -Steve
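The reuse pattern quoted above fits naturally in a tiny helper. clearForReuse is an invented name for this sketch; the behavior relies on assumeSafeAppend's documented semantics (safe only if no other slice still references the old contents).

```d
// Hypothetical helper wrapping the "shrink, then re-append" pattern:
// resets length but keeps the allocation for reuse.
void clearForReuse(T)(ref T[] arr)
{
    arr.length = 0;
    arr.assumeSafeAppend(); // caller asserts no other live slice aliases the data
}

void main()
{
    int[] buf;
    buf.reserve(100);

    foreach (frame; 0 .. 3)
    {
        buf.clearForReuse();
        foreach (i; 0 .. 50)
            buf ~= i;        // reuses the same allocation each iteration
        assert(buf.length == 50);
    }
}
```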
Re: Builtin array and AA efficiency questions
Thanks for the thorough answer. On Thursday, 15 October 2015 at 18:46:22 UTC, H. S. Teoh wrote: It adjusts the size of the allocated block in the GC so that subsequent appends will not reallocate.

So how does capacity affect this? I mean, what exactly is a GC block here? The shrink-to-fit bit was confusing, but after thinking about this for a few minutes, I guess there are at least three concepts:

slice:         0 .. length
allocation:    0 .. max used/init size (end of 'GC block', also shared between slices)
raw mem block: 0 .. capacity (or whatever the GC set aside, like pages)

The slice is managed by the slice instance (a ptr, length pair); the allocation is managed by the array runtime (max used by some array); the raw mem block is managed by the GC (which knows the actual mem block).

So if slice.length != allocation.length, then the slice is not a mem-"owning" array (it's a reference). And assumeSafeAppend sets allocation.length to slice.length, i.e. shrinks to fit. (slice.length > allocation.length is not possible, because allocation.length = max(slice.length), so it always just shrinks.) Now that the slice is a mem-"owning" array, it owns the length, and growing the length happens without reallocation until it hits raw mem block.length (aka capacity). So basically the largest slice owns the memory allocation and its length. This is my understanding now. Although I'll probably forget all this in 5..4..3..2...

The thought that occurs to me is that you could still use the built-in arrays as a base for your Buffer type, but with various operators overridden so that it doesn't reallocate unnecessarily.

Right, so a custom array/buffer type it is. Seems the simplest solution. I already started implementing this. Reusable arrays are everywhere.

If you want to manually delete data, you probably want to implement your own AA based on malloc/free instead of the GC. The nature of GC doesn't lend itself well to manual management.

I'll have to do this as well. Although this one isn't that critical for me.
The only thing I can think of is to implement this manually, e.g., by wrapping your AA in a type that keeps a size_t "generation counter", where if any value in the AA is found to belong to a generation that's already past, it pretends that the value doesn't exist yet. Something like this:

Right. Like a handle system, or an AA of ValueHandles in this case. But I'll probably just hack up some custom map and reuse its memory. Although I'm mostly doing this for perf (realloc) and not mem size, so it might be too much effort if the D AA is highly optimized.
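A generation-counter wrapper along the lines H. S. Teoh describes might look like this. It is a reconstruction for illustration (GenMap and its members are invented names), not the snippet from the original post.

```d
// "Clearing" the map is just bumping the generation; entries from older
// generations are treated as absent, and their slots get reused on insert.
struct GenMap(K, V)
{
    private struct Entry
    {
        V value;
        size_t gen;
    }

    private Entry[K] impl;
    private size_t generation;

    void opIndexAssign(V value, K key)
    {
        impl[key] = Entry(value, generation);
    }

    // Mirrors `key in aa`: null for missing or stale entries.
    V* opBinaryRight(string op : "in")(K key)
    {
        auto e = key in impl;
        if (e is null || e.gen != generation)
            return null;
        return &e.value;
    }

    // O(1) "clear": no deallocation, the GC block stays live for reuse.
    void clear() { ++generation; }
}

void main()
{
    GenMap!(string, int) m;
    m["a"] = 1;
    assert(("a" in m) !is null);
    m.clear();
    assert(("a" in m) is null); // stale generation: pretends it's gone
    m["a"] = 2;                 // reuses the existing slot
    assert(*("a" in m) == 2);
}
```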
Builtin array and AA efficiency questions
So I was doing some optimizations and I came up with a couple of basic questions...

A) What does assumeSafeAppend actually do?
A.1) Should I always call it before setting the length if I want assumeSafeAppend semantics? (e.g. I don't know if it was called just before the function I'm in)
A.2) Or does it mark the array/slice itself as a "safe append" array, so I can call it once?
A.3) If A.2 is true, are there any conditions where it reverts to the original behavior? (e.g. if I take a new slice of that array)

I read the array/slice article, but it seems that I still can't use them with confidence that they actually do what I want. I also tried to look into lifetime.d, but there are so many potential entry/exit/branch paths that without case-by-case debugging (and no debug symbols for phobos.lib) it's a bit too much.

What I'm trying to do is a reused buffer which only grows in capacity (and I want to overwrite all data). Preferably I'd manage the current active size of the buffer as array.length. A typical pattern for a buffer is:

array.length = 100;
...
array.length = 0;
... some appends ...
array.length = 50;
... etc.

There's just so much magic going on behind D arrays that it's a bit cumbersome to track manually what's actually happening - when it allocates and when it doesn't. So I already started doing my own Buffer type, which gives me explicit control, but I wonder if there's a better way.

B.1) I have a temporary AA whose lifetime is limited to a known span (might be a function, or a loop spanning a couple of functions). Is there a way to tell the runtime to immediately destroy and free the AA? I'd like to assist the GC by manually destroying some AAs that I know I don't need anymore. I don't really want to get rid of the GC; I just don't want this batched into some big pile of GC cycle work, since I know right then and there that I'm done with it.
For arrays you can do:

int[] arr;
arr.length = 100;
delete arr; // I assume this frees it

but for AAs:

int[string] aa;
delete aa; // gives a compiler error: Error: cannot delete type int[string]

I could do aa.destroy(), but that just leaves it to the GC, according to the docs. Maybe I should start writing my own hashmap type as well?

B.2) Is there a simple way to reuse the memory/object of an AA? I could just reuse a preallocated temp AA instead of alloc/freeing it.
Re: Tried release build got ICE, does anyone have a clue what might cause this?
On Saturday, 19 September 2015 at 07:25:58 UTC, ponce wrote: On Friday, 18 September 2015 at 22:54:43 UTC, Random D user wrote: So I tried to build my project in release for the first time in a long while. It takes like 25x longer to compile and finally the compiler crashes. It seems to go away if I disable the optimizer. I get: tym = x1d Internal error: backend\cgxmm.c 547 Does anyone have a clue what might trigger this? I'm asking because my project has grown a bit and I don't really have any good way of isolating this. I'm using dmd 2.068.1 and msvc x64 target. As a backend ICE it is very important that you report this. To work around it, try disabling inlining or -O selectively.

Thanks for the tips. I guess I should register an account (which I hate; already too many one-off accounts), since I already have like 3 bugs gathering dust in the corner. Just hit another one (this time in debug):

Assertion failure: 'type->ty != Tstruct || ((TypeStruct *)type)->sym == this' on line 957 in file 'struct.c'

Ugh... It really seems like D starts to break down once your code grows beyond toy-program size. A bit frustrating...
Re: Tried release build got ICE, does anyone have a clue what might cause this?
On Saturday, 19 September 2015 at 21:48:25 UTC, Random D user wrote: Assertion failure: 'type->ty != Tstruct || ((TypeStruct *)type)->sym == this' on line 957 in file 'struct.c'

Ok, I managed to reduce this one to my own copy-paste bug. This is invalid code, but the compiler shouldn't crash... I'm posting this here for reference (I will file a bug later):

class Gui
{
    enum MouseButton
    {
        Left = 0,
        Right
    }

private:
    struct ClickPair
    {
        MouseButton button = MouseButton.Left;
    }

    struct ClickPair // Second struct ClickPair with the enum above --> Assertion failure: 'type->ty != Tstruct || ((TypeStruct*)type)->sym == this' on line 957 in file 'struct.c'
    {
        MouseButton button = MouseButton.Left;
    }
}
Tried release build got ICE, does anyone have a clue what might cause this?
So I tried to build my project in release mode for the first time in a long while. It takes like 25x longer to compile, and finally the compiler crashes. It seems to go away if I disable the optimizer. I get:

tym = x1d
Internal error: backend\cgxmm.c 547

Does anyone have a clue what might trigger this? I'm asking because my project has grown a bit and I don't really have any good way of isolating this. I'm using dmd 2.068.1 and the msvc x64 target.
Another, is it a bug?
I'm trying to make a base class with a get property and a subclass with the corresponding set property. The value for the base class is set via the constructor. The intuitive way doesn't seem to work, and the workarounds are unnecessarily ugly (considering you'll sprinkle them all over the codebase).

class Father
{
    int eat() { return 1; }
}

class Daughter : Father
{
    void eat( int apples ) {}

    // int eat() { return super.eat(); } // Workaround A, works as expected
    //override int eat( int apples ) {}  // Workaround D, fails -> Error: function main.Daughter.eat does not override any function, did you mean to override 'main.Father.eat'?
}

Daughter d = new Daughter();

// BUG? I expected this to work. It seems the compiler doesn't even look into the parent class to see if there's a matching function.
//int num = d.eat(); // Error: function main.Daughter.eat (int apples) is not callable using argument types ()

int num2 = (cast(Father)d).eat(); // Workaround B, works as expected
int num3 = d.Father.eat();        // Workaround C, works as well
Re: Another, is it a bug?
On Wednesday, 16 September 2015 at 03:17:05 UTC, Meta wrote: Considering Father defines the function `int eat()` and Daughter defines the completely different function `int eat(int)`, it doesn't surprise me. You're not using virtual dispatch when you do `return super.eat` or `d.Father.eat()`; you're delegating the method call to the base class.

Yeah... I guess I was expecting it to overload across class boundaries. I mean, there's already a member eat in the base class, and the subclass can't override it since it's got different parameters, and it's a function (can't be a variable), so the reasonable thing would be to overload it (which is why I tried override, to see if it forces/hints at overriding/overloading). Instead it creates two ambiguous names, of which only one has to be disambiguated to be used, which seems super error-prone. IMO it should just be an error/warning. Given that properties are normally just overloaded methods in D, it's pretty sad that classes break this behavior/convention.
Re: Another, is it a bug?
On Wednesday, 16 September 2015 at 03:54:34 UTC, Adam D. Ruppe wrote: On Wednesday, 16 September 2015 at 03:48:59 UTC, Random D user Given that properties are normally just overloaded methods in D, it's pretty sad classes break this behavior/convention. The D behavior for overloading is different in general: http://dlang.org/hijack.html It basically never overloads across scopes. You need to alias the name into the scope explicitly.

Thanks. That pretty much answers all my questions directly. I tried to look for this info in the class docs/reference, but couldn't find it (obviously). I never thought that this would be in "articles".
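For the Father/Daughter example from this thread, aliasing the base name into the subclass scope (the fix the hijacking article describes) looks like this; a small sketch using the thread's own class names:

```d
class Father
{
    int eat() { return 1; }
}

class Daughter : Father
{
    alias eat = Father.eat; // pull the base overload set into this scope
    void eat(int apples) {} // the setter-style overload
}

void main()
{
    auto d = new Daughter;
    int num = d.eat();  // now resolves to Father.eat via the alias
    d.eat(3);           // and the new overload still works
    assert(num == 1);
}
```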
Re: I guess this is a bug?
On Saturday, 12 September 2015 at 18:28:02 UTC, Random D user wrote: or is it some obscure feature conflict? [...] Oh... and I'm using win 64-bit and dmd 2.068.1, but this behavior was present earlier than that...
I guess this is a bug?
or is it some obscure feature conflict?

struct Foo
{
    this( float x_, float y_ )
    {
        // option A
        //x = x_;
        //y = y_;

        // option B
        v[0] = x_;
        v[1] = y_;
    }

    union
    {
        struct
        {
            float x = 0;
            float y = 0;
        }
        float[2] v;
    }
}

struct Bar
{
    Foo foo = Foo( 1, 2 );
}

Bar bar;
Bar baz = bar.init;

printf( "bar: %f, %f\n", bar.foo.x, bar.foo.y );
printf( "baz: %f, %f\n", baz.foo.x, baz.foo.y );

prints (with option B):
bar: 0.00, 0.00 // BUG??
baz: 1.00, 2.00

prints (with option A):
bar: 1.00, 2.00
baz: 1.00, 2.00

Luckily option A works as I expected and is good enough for me...