Re: length's type.
On 8/2/24 16:00, Kevin Bailey wrote: I'm asking, why is the default C compatibility (of all things) rather than "safety that I can override if I need to make it faster"? I'm sure there are more experienced people here who will be able to answer better, but as far as I remember, the policy has been like this since time immemorial: D code that happens to be valid C code as well should either behave exactly like C, or not compile at all. I can't find a quote in the official documentation right now, though. IIRC, there are a couple of specific situations where this is not the case, but I think this is the answer to your question ("why?"). You may well disagree with this policy, and there are probably many good reasons for that, but that's probably a deeper discussion.
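A concrete illustration of that policy is the octal-literal rule (one of the cases I believe is actually documented): `0644` is valid C with the value 420, and since D would otherwise have to parse it differently, it refuses to compile it outright:

```d
void main() {
    // int mode = 0644; // Error in D: octal literals are not allowed,
    //                  // because in C this means 420, not 644.
    import std.conv : octal;
    int mode = octal!644; // D's explicit replacement
    assert(mode == 420);
}
```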
Re: How to do reflection on alias symbols
On 17/11/23 2:48, Jonathan M Davis wrote: On Thursday, November 16, 2023 6:04:43 PM MST Jonathan M Davis via Digitalmars-d-learn wrote: Actually, it looks like there's already an old bug report on the issue: https://issues.dlang.org/show_bug.cgi?id=12363 So, it has been reported, but it looks like it's one of those that's gone under the radar. - Jonathan M Davis Thanks for finding it! I think that in general D could do with a more systematic approach to reflection. For me, it's one of its greatest features, and it's a bit of a pity that it needs to be done in such an ad-hoc manner with all kinds of corner cases. I mean, in order to know if something is an `enum`, I need to do: ```d enum isEnum(alias a) = is(typeof(a)) && !is(typeof(&a)); ``` which feels like the wrong approach, and too error-prone. I also fear I'm forgetting to consider some corner case. There is `is(E == enum)`, but it only works on types, and fails for anonymous enums, because `typeof` returns the base type. I know that `std.traits` was precisely supposed to hide these dirty details, but as of now it also seems to be missing this kind of systematic approach: I'd like things like `isEnum!symbol`, and also `isAlias!symbol`, etc. But I think I digress a bit too much, this would rather be a topic for the general forum. Thanks again for your help!
How to do reflection on alias symbols
Hi all, Please consider the following currently non-working code: ```d struct bar { public alias pubInt = int; private alias privInt = int; } static foreach(member ; __traits(allMembers, bar)) { // Error: argument `int` has no visibility pragma(msg, __traits(getVisibility, __traits(getMember, bar, member))); } ``` Is there any way to get the visibility, or more generically to reflect on an alias member as itself, and not as the symbol it points to, without resorting to nasty `__traits(compiles, ...)` tricks that fail more often than not?
Trait to get public imported symbols
Hi all, I'm trying to write a meta-tool that walks through all the declared symbols of a module. The problem I'm facing is that `__traits(allMembers, module)` works with the symbols declared by the module itself, but it doesn't include public imports. I understand that these are not symbols of the module I'm analyzing, so `allMembers` is probably right in not returning them. Is there some other way to get them that I'm not aware of? Otherwise, this is a hole in the great reflection capabilities of D; I think there should be some way to get the publicly imported symbols. Perhaps something like `__traits(getPublicImports, module)` could be added? It would show a list of all the symbols added to the global namespace that are not defined by the module itself, so it would cover both normal (module) and selective imports. Public named imports are already covered, because they do generate a symbol and thus show up in `allMembers`.
Re: Keyword "package" prevents from importing a package module "package.d"
On 02.11.23 14:15, Arafel wrote: You simply can't expect to do `import waffle.foo` from within `waffle/` itself (unless you have another `waffle` folder in it, which is often the case). Sorry, this is wrong. It should read: You simply can't expect to do `import waffle.foo` **when invoking the compiler** within `waffle/` itself (unless you have another `waffle` folder in it, which is often the case). You are actually perfectly fine to import other parts of the same package, as long as you run the compiler from right outside the package, or adjust the import paths accordingly.
Re: Keyword "package" prevents from importing a package module "package.d"
On 02.11.23 13:52, BoQsc wrote: Well the whole thread is about importing `package.d` while being inside package to provide runnable working example which contains debug information of the package. Sorry, but I have never seen a package that includes examples within the package directory itself, nor am I able to imagine why anybody would want that. It would just be polluting the package folder with unnecessary files. Examples are usually distributed in a separate directory, usually at the highest level of the distributable. As for tests, there are `unittest` blocks, and if necessary, they are placed in yet another separate directory. Anyway, your point is moot, because even if you were able to import `package.d`, it would still fail at: ``` public import waffle.testing1; public import waffle.testing2; ``` and for exactly the same reason: the compiler would look for `waffle/testing1.d` and it wouldn't find it within `waffle/`. You simply can't expect to do `import waffle.foo` from within `waffle/` itself (unless you have another `waffle` folder in it, which is often the case). You always invoke the compiler from outside the package structure; that's also how it works in Java.
Re: Keyword "package" prevents from importing a package module "package.d"
On 02.11.23 12:57, BoQsc wrote: The current major problem is that it does not work on Windows operating system with either `rdmd` or `dmd`. While it does work on run.dlang.io. The problem is with your import path. If you say: ```d import waffles; ``` The compiler would search for either `waffles.d` or `waffles/package.d` **in your current working directory**. So you have three options: 1. You can compile from the parent directory, most likely what run.dlang.io does: `dmd -i -run waffles/program.d` 2. You can explicitly add all the files to the dmd invocation (I think this is what dub does), although that likely defeats the purpose of `rdmd` and `dmd -i`. 3. You can add `..` (the parent directory) to your search path: `dmd -I.. [..]` Actually, the cleanest (and in my view proper) solution would be to create a new `waffles` directory with the "package" itself, and take the main function out of it, so you'd have: ``` waffles | +-- program.d | +-- waffles | +-- package.d | +-- testing1.d | +-- testing2.d ```
Re: Keyword "package" prevents from importing a package module "package.d"
On 02.11.23 11:45, BoQsc wrote: Edit incorrect link to example: [Extensive run.dlang.io example](https://run.dlang.io/is/f3jURn) Correct link: https://run.dlang.io/is/Zbrn75 ``` --- waffles/program.d import waffles; ``` See https://dlang.org/spec/module.html#package-module
Re: Weird bug in std.logger? Possible memory corruption
On 1/11/23 18:26, Christian Köstlin wrote: It's really weird: https://run.dlang.io/is/fIBR2n I think I might have found out the issue. It's indeed related to the lazy parameter and reentrance. The usual logger functions consist of three parts: a header, the message, and the "finalizer" [1]. Internally this is implemented using a string appender [2, 3]. However, this string appender is a member of the class, and that is the source of the bug, because it's shared among all the calls to the logger. It's usually protected by a mutex, so different threads don't mess with each other, but reentrancy wasn't taken into account. And so the call to the logging function within `foo` takes place once the appender is already populated, so this is what happens: 1. Header of first call to `log` (`info`, in this case, but it's irrelevant). 2. Body of first invocation -> Call to `foo()` -> Second call to `log`. 3. The buffer is cleared: the first header is discarded (and the previous body, if there was any). 4. The second invocation proceeds normally, and control returns to the first invocation. 5. Now the buffer contains the full content of the previous call, and the return of `foo` is appended. This is exactly what we see. If we do an assignment **before**, then the call is no longer reentrant and everything happens as expected. So I'd still call it a bug, but at least there seems to be no memory corruption. Also, there doesn't seem to be an easy way to fix it without changing the interface (and potentially affecting already existing custom implementations). In my view, `writeLogMsg` should create a brand new appender (not a member of the class) that would be passed as a parameter to `beginLogMsg`, `logMsgPart` and `finishLogMsg`. Anyway, I think the mystery is more or less solved. [1]: https://dlang.org/phobos/std_logger_core.html#.Logger [2]: https://github.com/dlang/phobos/blob/master/std/logger/core.d#L1401 [3]: https://github.com/dlang/phobos/blob/master/std/logger/core.d#L619-L641
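For what it's worth, a minimal sketch of that per-call-appender idea (the function below is hypothetical, not the actual `std.logger` API; it only shows the shape of the fix):

```d
import std.array : appender;

// The buffer is created per call instead of living as a Logger member,
// so a nested (reentrant) log call gets its own buffer and cannot
// discard the half-built header of the outer call.
string buildLogMsg(string header, lazy string msg) {
    auto buf = appender!string(); // fresh per invocation
    buf.put(header);              // what beginLogMsg would write
    buf.put(msg);                 // evaluating msg may reenter buildLogMsg safely
    return buf.data;              // what finishLogMsg would flush
}
```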
Re: Weird bug in std.logger? Possible memory corruption
I can only imagine that it's related to the logging functions taking lazy arguments, although I cannot see why it would be a problem in a simple case like this. I've been thinking a bit more about it, and it must indeed be because of the lazy argument. `foo()` is an argument to `info`, but it also uses the logger. However, because it's a lazy argument, it's not called from `main`, but from `info` itself. I strongly suspect that the problem is that it's not reentrant. I'm not clear on what's supposed to happen, but assuming this case won't be supported, it should at least be documented. Should I open a bug about it?
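A stripped-down sketch of the call order with a lazy parameter (no logger involved, just to show where the argument expression actually runs):

```d
import std.stdio : writeln;

void log(lazy string msg) {
    writeln("header"); // written first, like the logger's header
    writeln(msg);      // only here is the argument expression evaluated
}

string foo() {
    writeln("in foo"); // runs *inside* log, not before it
    return "hello";
}

void main() {
    log(foo()); // prints "header", then "in foo", then "hello"
}
```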
Weird bug in std.logger? Possible memory corruption
Hi, Today I have just found a weird bug in std.logger. Consider: ```d import std.logger : info; void main() { info(foo()); } auto foo() { info("In foo"); return "Hello, world."; } ``` The output is: ``` 2023-10-31T20:41:05.274 [info] onlineapp.d:8:foo In foo 2023-10-31T20:41:05.274 [info] onlineapp.d:8:foo In fooHello, world. ``` The second line is obviously wrong, as it repeats the first line as its header. That's why I suspect memory corruption. Assigning the value to a variable works as expected: ```d import std.logger : info; void main() { auto s = foo(); info(s); } auto foo() { info("In foo"); return "Hello, world."; } ``` gets the proper output: ``` 2023-10-31T21:09:46.529 [info] onlineapp.d:9:foo In foo 2023-10-31T21:09:46.529 [info] onlineapp.d:5:main Hello, world. ``` I can only imagine that it's related to the logging functions taking lazy arguments, although I cannot see why it would be a problem in a simple case like this.
Re: Inheritance and arrays
On 3/7/23 17:41, Steven Schveighoffer wrote: If Java works, it means that Java either handles the conversion by making a copy, or by properly converting on element fetch/store based on type introspection. It also might use a different mechanism to point at interfaces. As I mentioned in another reply, Java does it by adding a runtime check to the type at insertion. This is a performance hit, and I'm glad it's not done in D. Also, I think I read in some other thread here that Java uses a single pointer for both objects and interfaces, but I think the biggest issue here is the covariance. I find it most unexpected and confusing and, assuming it won't change anytime soon, I think it should at the very least be **clearly** documented. And the cast should be forbidden, since it can't possibly work between classes and interfaces. I want to reply to myself here: it was my knowledge of type theory that was lacking: TIL that even if two types are covariant, arrays of them needn't be, and usually aren't. So I was spoiled by Java ;-) [In case somebody else is interested](https://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science)#Arrays) (also just google it, there are tons of links on the subject). On a final note, thanks to everybody who answered! For all its quirks, D is a great language that I enjoy using, and the community here is great, even if at times it could be a bit more optimistic :-)
Re: Inheritance and arrays
On 3/7/23 13:03, Rene Zwanenburg wrote: On Monday, 3 July 2023 at 09:50:20 UTC, Arafel wrote: Is this a conscious design decision (if so, why?), or just a leak of some implementation detail, but that could eventually be made to work? Besides the pointer adjustment problem mentioned by FeepingCreature, it's an unsound conversion even with just class inheritance. Consider: ``` class A {} class B : A {} class C : A {} void main() { auto bArr = [new B()]; A[] aArr = bArr; // If this was allowed.. aArr[0] = new C(); // This would be a problem, because bArray would now contain a C. } ``` This is a really good point. I just checked out of curiosity what Java does (because it's allowed there). TIL it throws [an exception](https://docs.oracle.com/javase/8/docs/api/java/lang/ArrayStoreException.html) at runtime, which I guess is not a viable strategy for D. Although when using interfaces, if I cast the individual class instances to interfaces, it should work, right? Because then I'm storing the pointers to the interfaces, not to the actual class, so the arrays are actually different, and not two slices of the same array: ``` import std; interface I { } class C : I { } void main() { C c = new C; I i = c; assert (c is i); // OK, took me a while to notice that "is" is smart enough. assert (cast (void *) c != cast (void *) i); // This is what we really care about. C[] cc = [ c ]; I[] ii; ii = cc.map!( a => cast (I)a ).array; assert (ii[0] is c); // This can be unexpected, even if technically right! assert (cast (void*) ii[0] != cast (void*) c); // Now this is what we need. } ```
Re: Inheritance and arrays
That's very clearly an implementation detail leaking: the semantics of the language shouldn't depend on how interfaces and classes are implemented. So then I need to do something like: ```d ii = cc.map!(a => cast (I) a).array; ``` (I just tested it and it works) Any reason why it can't be done internally by the language? I find it most unexpected and confusing and, assuming it won't change anytime soon, I think it should at the very least be **clearly** documented. And the cast should be forbidden, since it can't possibly work between classes and interfaces. Honestly, this severely limits the usefulness of interfaces, and for hierarchies of classes it might be much better to switch to abstract classes. BTW, even for (abstract) classes you need a cast: ```d import std; abstract class I { abstract void foo(); } class C : I { this(int i) { this.i = i; } override void foo() { writeln("In foo: ",i); } int i; } void main() { I i; C c = new C(1); i = c; // Works I[] ii; C[] cc; cc ~= c; i.foo(); // ii = cc; // Doesn't work: Error: cannot implicitly convert expression `cc` of type `C[]` to `I[]` ii = cast (I[]) cc; // Compiles, apparently works //ii = cc.map!(a => cast(I) a).array; ii[0].foo(); } ``` Shouldn't `ii = cc` work in this case? On 3/7/23 12:06, FeepingCreature wrote: On Monday, 3 July 2023 at 09:50:20 UTC, Arafel wrote: Hi! I am a D user coming from java, rather than from C/C++ (although obviously also have some exposure to them), and thus apparently one of the few people here who likes OO (within reason, of course). So while I appreciate the fact that D closely follows java's design, I wonder why there is no implicit inheritance for arrays (also the same applies to AAs): ```d interface I {} class C : I {} void main() { I i; C c = null; i = c; // Works I[] ii; C[] cc = null; // ii = cc; // Doesn't work: Error: cannot implicitly convert expression `cc` of type `C[]` to `I[]` ii = cast (I[]) cc; // Works, but why do I need to cast? 
} ``` The `cast` version "works", but will crash at runtime. In D, as opposed to Java, a reference to an object has a *different pointer value* than a reference to the interface-typed version of that object. This is necessary for efficient compiled virtual method calls on the interface. But for the same reason, you cannot reinterpret an array of objects to an array of interfaces; even if you can implicitly convert each object to that interface, there's a difference between automatically rewriting a value and automatically rewriting every element of an array: one is O(1), the other is O(n) and incurs a GC allocation.
Inheritance and arrays
Hi! I am a D user coming from java, rather than from C/C++ (although obviously also have some exposure to them), and thus apparently one of the few people here who likes OO (within reason, of course). So while I appreciate the fact that D closely follows java's design, I wonder why there is no implicit inheritance for arrays (also the same applies to AAs): ```d interface I {} class C : I {} void main() { I i; C c = null; i = c; // Works I[] ii; C[] cc = null; // ii = cc; // Doesn't work: Error: cannot implicitly convert expression `cc` of type `C[]` to `I[]` ii = cast (I[]) cc; // Works, but why do I need to cast? } ``` The equivalent java code compiles without issue: ```java interface I {} class C implements I {} public class MyClass { public static void main(String args[]) { I i; C c = null; i = c; // Works I[] ii; C[] cc = null; ii = cc; // Also works } } ``` Is this a conscious design decision (if so, why?), or just a leak of some implementation detail, but that could eventually be made to work?
Re: getSymbolsByUDA in constructor/member functions
On 16/6/22 10:55, frame wrote: On Thursday, 16 June 2022 at 08:23:20 UTC, Arafel wrote: This is not true. `getMember` can return the symbol to the instance or the type/alias, depending if you pass `this` or `Def`. The last is static. It makes no sense to use the attribute from a class without an instance. Classes can have static members just as structs, so I don't think you always need an instance for a class either. It seems the issue could be somewhere else: ``` import std.traits: getSymbolsByUDA; enum E; class C { @E int a; pragma(msg, __traits(getMember,C,"a").stringof); // `a` void foo() { pragma(msg, C.stringof); // `C` pragma(msg, __traits(getMember,C,"a").stringof); // `this.C.a` // Fails here //static foreach (sym; getSymbolsByUDA!(C, E)) { } } // But works here static foreach (sym; getSymbolsByUDA!(C, E)) { } } ``` So if you call `getMember` from a member function, it adds the hidden `this` reference, and this has subtle consequences later on, even if `this.C` is practically just an alias for `C`. I still think this is a bug in `getMember`, although perhaps not as obvious as I first thought.
Re: getSymbolsByUDA in constructor/member functions
On 15/6/22 14:26, cc wrote: ```d import std.traits; class XML {} class Def { @XML { int x; int y; } int z; this() { static foreach (sym; getSymbolsByUDA!(Def, XML)) { } } } void main() { auto def = new Def; } ``` ``` test.d(12): Error: value of `this` is not known at compile time test.d(12): Error: value of `this` is not known at compile time ``` Why doesn't this work? There is nothing in the foreach body. ```d alias ALL = getSymbolsByUDA!(Def, XML); pragma(msg, ALL.stringof); ``` reports `tuple(this.x, this.y)`. Why is `this.` added? I think it's a bug either in the `getSymbolsByUDA` implementation, or actually rather in the `__traits` system. A workaround bypassing `getSymbolsByUDA`: ```d import std.traits; import std.meta: Alias; class XML {} class Def { @XML { int x; int y; } int z; this() { static foreach (sym; __traits(allMembers, Def)) {{ alias member = Alias!(__traits(getMember, Def, sym)); static if (hasUDA!(member, XML)) { pragma(msg, member.stringof); pragma(msg, sym); } }} } } void main() { auto def = new Def; } ``` As you can see, it's `getMember` that is returning a reference to the `this` instance. In my view, this is a bug according to the documentation and examples [1]. It might be that classes behave differently, but then it should be documented. In fact, it shouldn't work at all unless you instantiate Def: `getMember` should fail because `x` and `y` are not static. Interestingly, `hasUDA` (or rather `__traits(getAttributes, ...)`) later doesn't care about the dangling `this` reference, so I'm not sure who is to blame here... in any case, at the very least the documentation doesn't match the actual behavior. [1]: https://dlang.org/spec/traits.html#getMember
Re: Why allow initializers of non-static members that allocate?
On 10/6/22 14:58, Salih Dincer wrote: On Friday, 10 June 2022 at 07:35:17 UTC, Bastiaan Veelo wrote: I have been foolish enough to make a mistake like this: ```d struct S { int[] arr = new int[](5); } ``` Well, if the b's may not be equal, there's a simple solution. But why are the a's like that? They're not static!

```d
void main() {
    struct S(size_t size) {
        int[] arr = new int[size];
    }

    S!5 a1, a2;
    assert(a1.arr.ptr == a2.arr.ptr);

    S!5 b1;
    S!6 b2;
    assert(b1.arr.ptr != b2.arr.ptr);
}
```

SDB@79 Because a separate default `arr` is created for each instantiation of the template. All `S!5` instances share one default value, so you could also:

```d
assert(a1.arr.ptr == b1.arr.ptr);
```

However, `S!6` is a completely different struct, and thus gets a different default `arr`. Note that the same thing happens with static members:

```d
struct S(int T) {
    static int foo;
}

void main() {
    // Each instantiation gets its own `foo`:
    assert(&S!1.foo !is &S!2.foo);
}
```
Re: How to get compatible symbol names and runtime typeid names for templated classes?
On 3/5/22 16:48, Adam D Ruppe wrote: Believe it or not, you don't need to touch the compiler. Open your druntime's object.d and search for `RTInfo` http://druntime.dpldocs.info/object.RTInfo.html That is instantiated for every user defined type in the program and you have the compile time info. all druntime uses it for is a tiny bit of GC info and even then only sometimes. But it could do so so so much more. Including doing custom factories and runtime reflection buildups! This looks nice, but I actually meant to allow "template this" in static contexts, as in the bug reports. I think that might indeed need compiler support? You'll make me happy if that's possible without touching the compiler!
Re: How to get compatible symbol names and runtime typeid names for templated classes?
On 3/5/22 15:57, Adam D Ruppe wrote: So doing things yourself gives you some control. Yes, it is indeed possible (I acknowledged it), but I think it's much more cumbersome than it should be, and puts the load on the user. If templated `this` worked in static contexts (ideally everywhere else too), then we'd be able to implement RTTI in a 100% "pay as you go" way: just inherit from SerializableObject, or perhaps add a mixin to your own root class, and that'd be it. Actually, it would be cool to do it through an interface, although I don't think an interface's static constructors are invoked by the implementing classes. And, in one of the bugs, you argue yourself that according to the spec, it *should* work. So please let me just whine... I mean, raise awareness ;-), in case somebody thinks it's interesting and feels brave enough to have a go at it. I'd try it myself, but I wouldn't know where to start. Compiler internals are way beyond my comfort zone...
Re: How to get compatible symbol names and runtime typeid names for templated classes?
On 3/5/22 14:46, Adam D Ruppe wrote: Put a static constructor in the class which appends a factory delegate to an array or something you can use later. Then you can use your own thing to construct registered objects. I'd like to do a runtime registration system myself, using a "template this" static constructor. A simple version supporting only default constructors would be: ```d module test; import std.stdio : writeln; class MyObject { /* static */ this(this T)() { string type = typeid(T).name; if (type !in generators) { generators[type] = () => new T(); } } static MyObject factory(string type) { if(type in generators) { return generators[type](); } else { return null; } } private: static MyObject function()[string] generators; } class MyClass : MyObject { this() { writeln("Creating MyClass"); } } void main() { auto _ = new MyClass(); // Shouldn't be needed auto myClass = MyObject.factory("test.MyClass"); } ``` Unfortunately, this isn't currently possible: https://issues.dlang.org/show_bug.cgi?id=10488 https://issues.dlang.org/show_bug.cgi?id=20277 (notice the big number of duplicates). The closest feasible option is to put it in a non-static constructor, and that's suboptimal: it forces an instantiation of the class, and it will be run at every instantiation. Alternatively, instruct the users to create a static constructor for each of the classes they'd like registered (perhaps through a mixin), but that's also quite cumbersome.
Re: How to get compatible symbol names and runtime typeid names for templated classes?
On 3/5/22 12:48, bauss wrote: This is where compile-time has its limits compared to runtime type creation, because templates only live during compile-time then it isn't really that easy to do something like this, where it would be trivial in other languages like C#. That's something I don't really get. I totally understand that you can't instantiate the template during runtime, but why can't already instantiated classes be registered just like non-templated ones? I tried the following snippet, and couldn't find C!int.C anywhere, although it **must** be there: I can get the `TypeInfo_Class` object, so I can clearly create new instances at runtime: ```d import std.stdio : writeln; class C(T) {} class D {} void main() { auto c = new C!int(); auto c2 = typeid(c).create(); auto d = new D(); writeln(typeid(c).name); writeln(typeid(c2).name); writeln(typeid(d).name); writeln(""); writeln; writeln; foreach (m; ModuleInfo) { if (m) { writeln(m.name); writeln(""); foreach (c; m.localClasses) { if (c) { writeln(c.name); } } writeln; } } } ```
Re: Introspection of exceptions that a function can throw
You'd hit a very big wall with separate compilation unless you can inspect all the code, and know where to find it. But you'd have a problem, for instance, if you are writing a plugin (.so / DLL) for a product for which you only have .di files. Or even worse the other way round: if you want to allow people to write plugins for your product, you can't know what they'll throw, even if they have your code, unless you enforce a `nothrow` interface. But I guess that if you're not doing any of this, it should be possible... although I'd still do it as a separate pre-compilation step, so it could be cached. On 26/2/21 3:21, James Blachly wrote: On 2/24/21 2:38 PM, Mark wrote: Is there a way to obtain a list, at compile-time, of all the exception types that a function might throw (directly or through a call to another function)? Thanks. Crazy idea: Could a program import its own source file as a string (`string source = import('thisfile.d')`) and `-J` , then use a lexer/parser to generate AST of the source code and extract exceptions potentially thrown by given functions -- all at compile time?
Re: How do I get the output of the time bash command?
On 27/1/21 10:35, Anthony wrote: I'm trying to read the timed output of a pipeShell command but it only results in empty output. Does anyone know why this is? ``` auto p = pipeShell("time ls"); foreach(str; p.stdout.byLine) { writefln("%s",str); } ``` I'm not sure why you get an empty output, you should get at least the `ls` output unless it's an empty directory (or one with only "dot" files). However, in any case `time` returns the timing information through `stderr`, not `stdout`[1]. You can try [2,3] (untested): ``` auto p = pipeShell("time ls", Redirect.stderrToStdout); ``` Best, A. [1]: https://linux.die.net/man/1/time [2]: https://dlang.org/library/std/process/pipe_shell.html [3]: https://dlang.org/library/std/process/redirect.html
Re: Why many programmers don't like GC?
On 18/1/21 13:41, Ola Fosheim Grøstad wrote: Yes, it is natural that the current D population don't mind the current GC. Otherwise they would be gone... but then you have to factor in all the people that go through the revolving door and do not stay. If they stayed, the ecosystem would be better. So the fact that they don't... is affecting everyone in a negative way (also those that are happy with the runtime). I must be in the minority here because one of the reasons why I started using D was precisely because it HAS a GC with full support. I wouldn't even have considered it if it hadn't. For what I usually do (non-critical server-side unattended processing) latency is most obviously not an issue, and for me, not having to worry about memory management and being able to focus on the task at hand is a requirement. So I think that several key people (in the community) have different, sometimes even contradicting issues they feel very strongly about, and think these are the most important ones, or the ones that move most people. This is quite OT (perhaps I should have split the topic), but I think that instead of focusing on what people dislike about D, it would help to ask people as well why they DID choose D. In my case, I'm coming from a mostly Java (with a touch of C/C++) background and was looking for:

* C/C++/Java-like syntax
* OOP support (sorry, I'm too used to that ;-) )
* Proper meta-programming / templates (without Java's generics / type erasure)
* Compiled language
* GC (IOW, no worries about memory management)
* Full Linux support
Re: Member variables in method are null when called as delegate from thread
On 13/1/21 3:15, Tim wrote: Fantastic response, thank you! I did some more digging and properly narrowed down where the issue is and created a test script that demonstrates the problem. Let me know what you think and if it could still be a similar problem to what you have stated above. I'll still read that info you sent to sharpen up on these concepts. Basically, the program calls a function which modifies a document in the database. If it is called from its own class's constructor, it works fine. If it is called by a thread, it never returns. I don't think that a member variable is going null or anything. But a strange problem that I can't seem to debug. The output is at the bottom.

```d
import vibe.db.mongo.mongo;
import core.thread;
import std.stdio;

void main(){
    auto callable = new Callable();
    while(true){}
}

class Caller : Thread{
    void delegate() mFunc;

    this(void delegate() func){
        mFunc = func;
        super(&loop);
        start();
    }

    void loop(){
        while(true){
            mFunc();
        }
    }
}

class Callable{
    MongoClient db;
    Caller caller;

    this(){
        db = connectMongoDB("127.0.0.1");
        foo();
        caller = new Caller(&foo);
    }

    ~this(){
        db.cleanupConnections();
    }

    void foo(){
        writeln("Started");
        auto result = db.getCollection("test.collection").findAndModify(
            ["state": "running"], ["$set": ["state": "stopped"]]);
        writeln(result);
        writeln("Finished");
    }
}
```

Output:

```
Started
{"_id":"5ff6705e21e91678c737533f","state":"running","knowledge":true}
Finished
Started
```

Something that you could try for debugging is to add a try / catch block around your call to `db.getCollection` (and print out the exception details). IIRC, if a thread throws, it will just end without printing anything until the thread is joined, when the exception will be rethrown [1]. The program hanging would then be the main thread waiting. This kind of problem has already bitten me more than once... [1]: https://dlang.org/library/core/thread/osthread/thread.join.html
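A self-contained sketch of that failure mode with plain `core.thread` (no vibe.d; the thrown exception is just a stand-in for whatever the db call might raise):

```d
import core.thread : Thread;
import std.stdio : stderr;

void main() {
    auto t = new Thread({
        try {
            throw new Exception("boom"); // stand-in for a failing db call
        } catch (Exception e) {
            // Without this catch, the thread dies silently and the
            // exception only resurfaces when the thread is joined.
            stderr.writeln("worker failed: ", e.msg);
        }
    });
    t.start();
    t.join();
}
```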
Re: Member variables in method are null when called as delegate from thread
On 11/1/21 17:10, Steven Schveighoffer wrote: A shared member is a sharable member of the class. It does not put the item in global storage. There are some... odd rules.

```d
struct S {
    static int a;        // TLS
    shared static int b; // shared data storage
    shared int c;        // local variable, but its type is shared(int)
    immutable int d;     // local immutable variable, settable only in constructor
    immutable int e = 5; // stored in data segment, not per instance!
    __gshared int f;     // stored in global segment, typed as int, not shared(int)
}
```

Thanks for the detailed explanation! I think this mixing of types and storage classes makes a very unfortunate combination:

```d
import std;

int i = 0;
shared int j = 0;

struct S {
    int i = 0;
    shared int j = 0;
}

S s;

void main() {
    i = 1;
    j = 1;
    s.i = 1;
    s.j = 1;
    spawn(&f);
}

void f() {
    assert(i == 0);   // Expected
    assert(j == 1);   // Expected
    assert(s.i == 0); // Expected
    assert(s.j == 0); // Wait, what?
}
```

I agree that once you know the inner workings it makes sense, but a naïve approach might suggest that `s.j` would be... well, shared, just like `j`.
Re: Member variables in method are null when called as delegate from thread
On 11/1/21 14:42, Steven Schveighoffer wrote: That isn't exactly true. Member variables are members of the object. If the object is shared, the member variables are shared. If the object is local the variables are local. Thread local really only applies to *static* variables, such as globals or members declared static. If that were the case, yes, the other thread would not see the object. I did not respond to the OP because I also don't know why it wouldn't work. But I also don't know what all the code is doing. -Steve Out of curiosity, what happens with members that are declared `shared` in a non-shared object? I thought that declaring an object `shared` "promotes" its members, but that it wasn't strictly needed for a non-shared object to have shared members, but I might be wrong here. ``` struct S {} class A { S s1; shared S s2; } void main() { A a1 = new A(); pragma(msg, typeof(a1.s1)); // S pragma(msg, typeof(a1.s2)); // shared(S) shared A a2 = new shared A(); pragma(msg, typeof(a2.s1)); // shared(S) pragma(msg, typeof(a2.s2)); // shared(S) } ``` https://run.dlang.io/is/skCfvE Of course I don't know the practical differences in the actual accessibility of the different members beyond the type system. Is there any way to check if a pointer is actually TLS or global storage? If `a1.s2` can't be properly accessed from different threads, I'd consider that a big bug in the `shared` implementation. Best, A.
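Regarding "is there any way to check if a pointer is actually TLS or global storage": one quick experiment (not an official API) is to compare the address of the variable as seen from two different threads. A TLS variable has a different address in each thread, while `__gshared` / `shared static` data has a single address. Note that members of a heap object, like `a1.s2` above, live wherever the object lives, so TLS is not involved for them at all.

```d
import core.thread : Thread;

int tls;              // module variables are TLS by default
__gshared int global; // single copy for the whole process

void main()
{
    const void* tlsMain    = cast(void*) &tls;
    const void* globalMain = cast(void*) &global;

    const(void)* tlsOther, globalOther;
    auto t = new Thread({
        tlsOther    = cast(void*) &tls;
        globalOther = cast(void*) &global;
    });
    t.start();
    t.join();

    assert(tlsMain != tlsOther);       // each thread has its own copy
    assert(globalMain == globalOther); // one process-wide copy
}
```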
Re: Member variables in method are null when called as delegate from thread
On 11/1/21 1:43, Tim wrote: Hi there, I have something like this:

```
class Foo{
    MongoClient db;

    this(){
        db = connectMongoDB("127.0.0.1");
        void delegate()[string] commands = ["start": &start];
        MessageService messenger = new MessageService(8081, commands);
    }

    void start(){
        // Do something with db
    }
}
```

MessageService is a thread that deals with socket communication. When a command comes in, it calls the appropriate delegate given to it by commands. When MessageService calls the delegate for start, db is null. If I call start() in the Foo constructor it works just fine. Am I missing something here? Do delegates get called outside of their class context? I know I could just pass the db into start, but I want to work out exactly why this is happening. Thanks in advance

Hi, Member variables are thread-local by default. At the very least you'll need to make `db` `shared` and manually verify that it's safe to use it before casting that away. So your code could end up a bit like this:

```
class Foo{
    shared MongoClient db;

    this(){
        db = cast(shared) connectMongoDB("127.0.0.1");
        void delegate()[string] commands = ["start": &start];
        MessageService messenger = new MessageService(8081, commands);
    }

    void start(){
        // Make sure there's no other thread accessing the db.
        // If db is a class, you'll be able to use `db_`:
        auto db_ = cast() db;
        // Otherwise you'll be making a copy and will have to use
        // `cast() db` each time, or make a nasty workaround with pointers.
    }
}
```

It's also possible that you'll have to make Foo itself `shared`, or at least convert your constructor into a `shared this()` to get a shared instance that you can pass to a different thread, but I'm not sure how function pointers / delegates work across threads. Best, A.
Re: Calling function within class.
On 19/11/20 20:51, Vino wrote: Hi Ali, Thank you very much, your solution works for my example, but it does not work for the main goal, let me explain what we are trying to perform. Nutshell: Try to execute an aws command on several accounts in parallel to get some data. Note: each account has a separate username and password stored in a database table (encrypted). Code logic: Fetch the username/password from the table for each account. Get the "aws secret" key and "access key" for each account by calling an aws api using the above username/password. Set the fetched keys as environment variables. Execute the aws command and get the data for each of the accounts. As we have many accounts, what we are trying is to get the data in parallel (execute the aws command in parallel for each account and store the result in an array). At present our code is working fine (without parallel); the moment we enable parallelism, it throws an error on the SQL part (fetch the username/password from the table for each account), as it could not execute the SQL query in parallel for different accounts. If there is any other logic please do let me know, will give it a try. Below is the SQL code.

```
@trusted public auto getAwsconf(immutable string account)
{
    auto con = new GetConnections();
    Statement stmt = con.db.prepare("SELECT username, AES_DECRYPT(b.userpass, b.key, b.vector) AS passwd FROM config WHERE account = :account");
    stmt.setParameter("account", account);
    RowSet awsaccount = stmt.query();
    scope(exit) con.db.close();
    return awsaccount;
}
```

From, Vino.B

Hi, How does the `GetConnections()` constructor work? Is it creating a new connection to the DB each time it's called, or is it getting them from a pool? In any case it should probably return a different connection for each thread (so the pool size should be at least the same as the number of workers). Otherwise one thread will try to start a new SQL operation while another is already running.
In the best case, if the library is thread-safe, it could enqueue them, thus negating the benefit of parallelism... if not, then you'll likely get errors like the ones you're seeing. Best, A.
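One way to get a different connection per worker is a thread-local cache: `static` variables are TLS by default in D, so each thread in the pool lazily opens, and then reuses, its own connection. `Connection` below is a stand-in type for illustration (the real `GetConnections` is not shown in the thread):

```d
import std.parallelism : parallel;
import core.atomic : atomicOp;

// Stand-in for a real DB connection; only here to make the sketch runnable.
class Connection
{
    static shared int next; // process-wide counter, bumped atomically
    immutable int id;
    this() { id = atomicOp!"+="(next, 1); }
}

// Module-level functions can hide the cache in a thread-local static:
Connection threadConnection()
{
    static Connection conn; // one per thread (TLS by default)
    if (conn is null)
        conn = new Connection;
    return conn;
}

void main()
{
    auto accounts = ["acct1", "acct2", "acct3", "acct4"];
    int[] seen = new int[accounts.length];
    foreach (i, account; parallel(accounts))
    {
        // Each worker uses its own connection; no two run a query on
        // the same connection at the same time.
        seen[i] = threadConnection().id;
    }
    // Within one thread the connection is stable:
    assert(threadConnection() is threadConnection());
}
```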
Re: Why private methods cant be virtual?
On 22/9/20 15:04, claptrap wrote: The thread title is... "Why private methods cant be virtual?" IE Not... "how do I override private functions in a non-polymorphic manner." And what you suggest won't work because I was asking about virtual functions, so I specifically want polymorphism. And FWIW it's no big deal, I can just use protected, I wasn't looking for a solution, I was looking for an explanation as to why it was done that way. But apparently there is none.

TL;DR: Wouldn't `package` [1] visibility probably be a better option in any case?

Long answer: My guess is that this was taken from Java, as in fact most of the D class system seems to be (see `synchronized`, reference semantics, etc.). There it makes sense, because there is only one class per compilation unit, so the `private` members are in effect hidden from any child classes and it wouldn't make sense to override them. The different (and to me still confusing, but I understand the reasoning behind it) factor in D is that the encapsulation unit is the module, not the class. Hence, you can have multiple classes in the same module inheriting from each other. These classes can then access the private members of the parent, but not override them, which as you say is somewhat strange. I personally would rather have the class as the encapsulation unit for classes, and then this point would be moot, but I come mostly from Java, so that might just be my bias, and, as I said, I understand there are also good reasons to keep the module as the common encapsulation unit. Still, I think that when you design a class, declaring something as `private` means that it's an internal implementation detail that you don't want to expose, much less want any child class to override. In fact, allowing only the classes in the same module to override a private method looks to me like code smell.
You likely have good reasons to do it, but, even if it were possible, I would probably try to do it in a way where the intent is clearer, either through `protected` or `package` visibility... the latter has the added benefit that you can split the module later if needed. A. [1]: https://dlang.org/spec/attribute.html#visibility_attributes
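As a concrete single-module illustration of the difference being discussed: a `private` method is visible to everything in its module but is never virtual, while a `protected` one can be overridden and dispatches dynamically.

```d
class Base
{
    private int secret() { return 1; }  // visible module-wide, but final
    protected int hook() { return 1; }  // virtual: subclasses may override
    int run() { return hook(); }
}

class Child : Base
{
    // override int secret() { return 2; } // Error: private methods can't be overridden
    override int hook() { return 2; }
}

void main()
{
    Base b = new Child;
    assert(b.run() == 2);    // dispatches to Child.hook
    assert(b.secret() == 1); // same module, so private is accessible, just not virtual
}
```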
Re: how to assign to shared obj.systime?
On 14/7/20 10:45, Dominikus Dittes Scherkl wrote: This is generally true. Avoid sharing many variables! Tasks should be as independent from each other as possible. Anything else is bad design doomed to run into problems sooner or later. Also there is really almost never a good reason to share whole classes or nested structures.

Sometimes you want to encapsulate your "shared" logic. For instance, a class (or struct) might be a thread-safe container responsible for storing shared data across multiple threads. Each thread would get a shared reference to the container, and all the synchronization would be managed internally by that class. In these cases I just marked the whole container class as `shared` instead of having to mark every single method; in fact there were no non-shared methods at all. Now I know that the members shouldn't be shared, just the methods, but the difference wasn't clear to me until now, because it only shows when you cast shared away from `this`. If you only instantiate shared variables, all the members become automatically shared as well, even if they originally weren't. So far I was just removing shared from the individual members, so I didn't notice:

```
import std;

class S {
    SysTime a;
    shared SysTime b;

    synchronized shared void setIt(SysTime t) {
        // What I used to do
        cast() a = t; // Here you need to cast away shared anyway,
        cast() b = t; // so it doesn't make any difference.

        // What I'll do from now on
        with(cast() this) { // You only notice the difference when you cast away `shared` from `this`
            a = t;
            // b = t; // FAILS
        }
    }
}
```
Re: how to assign to shared obj.systime?
On 14/7/20 8:13, Kagamin wrote:

---
import std;

shared class TimeCount {
    void startClock() {
        auto me = cast()this;
        me.startTime = Clock.currTime;
    }
    void endClock() {
        auto me = cast()this;
        me.endTime = Clock.currTime;
    }
    void calculateDuration() {
        auto me = cast()this;
        me.elapsed = me.endTime - me.startTime;
    }
private:
    SysTime startTime;
    SysTime endTime;
    Duration elapsed;
}
---

And this is shorter than your unshared member specification.

It won't work if you need to do it inside a struct instead of a class, because you'll get a copy:

```
import std;

shared struct S {
    void setA(int _a) {
        auto me = cast() this;
        me.a = _a;
    }
    int a;
}

void main() {
    shared S s;
    writeln("Before: ", s.a); // 0
    s.setA(42);
    writeln("After: ", s.a); // still 0
}
```

That said, `with (cast() this) { ... }` *will* work, because there's no copying. This is a really nice idiom that I didn't know and that I'll use from now on. *However*, for this to work, you shouldn't use `shared` member variables unless absolutely necessary, much less whole `shared` classes/structs, and only declare the individual methods as shared, because casting away `shared` from `this` will only peel the external layer:

```
import std;

struct S {
    SysTime a;
    shared SysTime b;

    synchronized shared void setIt(SysTime t) {
        with(cast() this) {
            a = t;
            // b = t; // FAILS, `b` is still `shared` even for non-shared `S`
        }
    }
}
```

Also, I'm pretty sure there are still corner cases when you have to nest data structures, but so far this strategy seems good enough.
Re: how to assign to shared obj.systime?
On 14/7/20 8:05, Kagamin wrote: On Monday, 13 July 2020 at 07:26:06 UTC, Arafel wrote: That's exactly why what I propose is a way to *explicitly* tell the compiler about it, like @system does for safety. With __gshared you can opt out from sharing safety, then you're back to good old C-style multithreading.

That's apples and oranges. I do agree in principle with the idea of `shared`, I just want a way to tell the compiler that `shared` doesn't apply *within a given block*, if possible also only for some specific variables, because I have already taken care of the synchronization; that's exactly what the system tries to promote. __gshared, on the other hand, just dispenses with the `shared` system altogether and gives up the protections it offers. Furthermore it only works for global objects and static variables/members [1], so its use is limited. [1]: https://dlang.org/spec/attribute.html#gshared
Re: how to assign to shared obj.systime?
On 13/7/20 14:18, Steven Schveighoffer wrote: cast() will remove as little as possible, but for most cases, including classes and structs, this means the entire tree referenced is now unshared.

Yeah, but the whole lvalue cast looks just non-obvious and ugly to me:

```
cast() foo = bar;
```

It looks like an ad-hoc hack, and I haven't seen it used anywhere else. I don't even think it's well-documented (it's probably somewhere in the grammar, without much explanation of what it does or what it would be useful for). I know I had to ask in the forums because I couldn't even assign to a shared SysTime!

An AA does something really useless, which I didn't realize. If you have a shared int[int], and use cast() on it, it becomes shared(int)[int]. Which I don't really understand the point of. But in any case, casting away shared is doable, even if you need to type a bit more.

Sure, it's doable, but the readability suffers a lot, and it's also just too error-prone.

The intent is to cast away shared on the ENTIRE aggregate, and then use everything in the aggregate as unshared. I can imagine something like this:

```
ref T unshared(T)(return ref shared(T) item) {
    return *(cast(T*) &item);
}

with(unshared(this)) {
    // implementation using unshared things
}
```

I wasn't suggesting that for each time you access anything in a shared object, you need to do casting. In essence, it's what you are looking for, but just opt-in instead of automatic.

Yes, that would be nice as a workaround, although ideally I'd like a more comprehensive and general solution. Sometimes you don't need to strip shared only from `this`, sometimes it's only from some parts, and sometimes also from some external objects. To be clear, I'm so far assuming it's explicitly opt-in by the user. I wouldn't mind seeing something done with `synchronized` classes, but that's probably a much more complex issue.

Yeah, this looks suspiciously like the with statement above.
We seem to be on the same page, even if having different visions of who should implement it. I think we're in "violent agreement" territory here :-) I honestly would be happy if there were a reliable library solution that worked even now, because so far for any non-trivial situation I have to spend more time casting from and to shared than doing the actual work, and the code becomes a mess to follow afterwards.

You are better off separating the implementation of the shared and unshared parts. That is, you have synchronized methods, but once you are synchronized, you cast away shared and all the implementation is normal looking. Compare:

```
class TimeCount {
public:
    void startClock() { startTime = Clock.currTime; }
    synchronized void startClock() shared { (cast()this).startClock(); }

    void endClock() { endTime = Clock.currTime; }
    synchronized void endClock() shared { (cast()this).endClock(); }

    void calculateDuration() { timeEllapsed = endTime - startTime; }
    synchronized void calculateDuration() shared { (cast()this).calculateDuration(); }

private:
    SysTime startTime;
    SysTime endTime;
    Duration timeEllapsed;
}
```

I would imagine a mixin could accomplish a lot of this, but you have to be careful that the locking properly protects all the data. A nice benefit of this approach is that no locking is needed when the instance is thread-local.

Just thinking of the amount of boilerplate makes my head spin. Even if a mixin could somehow automate it, I still think there should be a "proper" way to do it, without that much hacking around. Furthermore, in my case I'm trying to do fine-grained locking, and I might have to get different locks within the same function. Of course I could split the function, but it would be constantly interrupting the "natural flow" of what I'm trying to do, and it would become so much harder to understand and to reason about.
And these functions wouldn't make sense by themselves, would probably need access to locals from the parent function, and would only be called from one place... so I see them as a kind of anti-pattern. Also, `shared` and `synchronized` would become in this case pretty much useless then when applied to a class / structure. I think we may have been battling a strawman here. I assumed you were asking for synchronized to be this mechanism, when it seems you actually were asking for *any* tool. I just don't want the locking to be conflated with "OK now I can safely access any data because something was locked!". It needs to be opt-in, because you understand the risks. I think those tools are necessary for shared to have a good story, whether the compiler implements it, or a library does. -Steve I totally agree with this. As I mentioned, I wouldn't mind `synchronized` classes becoming apt for the trivial cases (i.e. you just have a
Re: how to assign to shared obj.systime?
On 13/7/20 3:46, Steven Schveighoffer wrote: On 7/11/20 6:15 AM, Arafel wrote: What I really miss is some way of telling the compiler "OK, I know what I'm doing, I'm already in a critical section, and all the synchronization issues have already been managed by me". You do. It's a cast.

Yes, and that's what I'm doing (although with some helper function to make it look slightly less ugly), but for non-reference types I have to do it every single time I use the variables, and it's annoying for anything beyond trivial. There's no way to avoid it: at best you can get a pointer, which will be enough for most things, but it will show for instance if you want to use it as a parameter to another function. Also, with more complex data types like structs and AAs, where not only the AA itself, but also the members, keys and values become shared, it's *really* annoying, because there's no easy way to get a "fully" non-shared reference: `cast()` will *often* only remove the external shared layer (I'm not sure it's always the case, it has happened semi-randomly to me, and what's worse, I don't know the rules for that). Also, it becomes a real pain when you have to send those types to generic code that is not "share-aware". And for basic types, you'll be forced to use atomicOp all the time, or again resort to pointers. So yes, it's not impossible, but it's really, really inconvenient, to the point of making `shared` almost unusable beyond the most simple cases. In fact, I would be happy if it had to take a list of variables, and ignore `shared` just for them (and their members): within this block, shared would implicitly convert to non-shared, and the other way round, like this (in a more complex setup with a RWlock):

```
setTime(ref SysTime t) shared {
    synchronized(myRWMutex.writer) critical_section {
        // From this point I can forget about shared
        time = t;
    }
}
```

This isn't checkable by the compiler.
That's exactly why what I propose is a way to *explicitly* tell the compiler about it, like @system does for safety. I used `critical_section`, but perhaps `@critical_section` would have been clearer. Here is a more explicit version specifying the variables to which it applies (note that you'd be able to use "this", or leave it empty and have it apply to everything):

```
void setTime(ref SysTime t) shared {
    synchronized(myRWMutex.writer) {
        @critical_section(time) {
            // From this point I can forget about shared
            time = t;
        }
    }
}
```

Here it doesn't make a difference because the critical section is a single line (so it's even longer), but if you had to use multiple variables like that in a large expression, it'd become pretty much impossible to understand without it:

```
import std;

synchronized shared class TimeCount { // It's a synchronized class, so automatically locking
public:
    void startClock() {
        cast() startTime = Clock.currTime; // Here I have to cast the lvalue
        // startTime = cast(shared) Clock.currTime; // Fails because opAssign is not defined for shared
    }
    void endClock() {
        cast() endTime = Clock.currTime; // Again unintuitively casting the lvalue
    }
    void calculateDuration() {
        timeEllapsed = cast(shared) (cast() endTime - cast() startTime); // Here I can also cast the rvalue, which looks more natural
    }
private:
    SysTime startTime;
    SysTime endTime;
    Duration timeEllapsed;
}
```

Non-obvious lvalue-casts all over the place, and even `timeEllapsed = cast(shared) (cast() end - cast() start);`. And that one is not even too complex... I know in this case you can reorganize things, but it was just an example of what happens when you have to use multiple shared variables in an expression. You could accidentally end up referencing shared things as unshared when the lock is unlocked.
If you remove shared, you need to know and understand the consequences, and the compiler can't help there, because the type qualifier has been removed, so it's not aware of which things are going to become shared after the lock is gone. -Steve Well, it's meant as a low level tool, similar to what @system does for memory safety. You can't blame the compiler if you end up doing something wrong with your pointer arithmetic or with your casts from and to void* in your @system code, can you?
Re: how to assign to shared obj.systime?
On 10/7/20 20:30, mw wrote: On Friday, 10 July 2020 at 17:35:56 UTC, Steven Schveighoffer wrote: Mark your setTime as shared, then cast away shared (as you don't need atomics once it's locked), and assign:

```
synchronized setTime(ref SysTime t) shared {
    (cast()this).time = t;
}
```

I know I can make it work by casting; my question is: we had a lock on the owning shared object already, WHY do we still need the cast to make it compile?

Because the system doesn't know if just this lock is enough to protect this specific access. When you have multiple locks protecting multiple data, things can become messy. What I really miss is some way of telling the compiler "OK, I know what I'm doing, I'm already in a critical section, and all the synchronization issues have already been managed by me". Within this block, shared would implicitly convert to non-shared, and the other way round, like this (in a more complex setup with a RWlock):

```
setTime(ref SysTime t) shared {
    synchronized(myRWMutex.writer) critical_section {
        // From this point I can forget about shared
        time = t;
    }
}
```

As a workaround, I have implemented the following trivial helpers:

```
mixin template unshareThis() {
    alias S = typeof(this);
    static if (is(S C == shared C)) {}
    static if (is(S == class) || is(S == interface)) {
        C unshared = cast(C) this;
    } else static if (is(S == struct)) {
        C* unshared = cast(C*) &this;
    } else {
        static assert(0, "Only classes, interfaces and structs can be unshared");
    }
}

pragma(inline, true)
ref unshare(S)(return ref S s) {
    static if (is(S C == shared C)) {}
    return *(cast(C*) &s);
}
```

With them you should be able to do either:

```
synchronized setTime(ref SysTime t) shared {
    mixin unshareThis;
    unshared.time = t;
}
```

(useful if you need multiple access), or:

```
synchronized setTime(ref SysTime t) shared {
    time.unshare = t;
}
```
Re: D Plugin for Visual Studio Code [was Re: Visual D 1.0.0 released]
On 4/7/20 19:58, Paul Backus wrote: You're looking for code-d: https://github.com/Pure-D/code-d

Thanks! I'm trying it, although at least with VSCodium on Linux I had to build it from source; it didn't show up when searching in the marketplace.
D Plugin for Visual Studio Code [was Re: Visual D 1.0.0 released]
On 4/7/20 17:42, Rainer Schuetze wrote: Indeed, this is Windows only. Visual Studio Code is a different platform than Visual Studio. Not sure why Microsoft named them so that they are easily confused.

(Moving to the learn forum, since it now seems more appropriate) It's certainly confusing that "Visual Studio" and "Visual Studio Code" are different platforms... Is there any D plugin that would work with the latter, especially in a Linux environment?
Re: Garbage collection
On 27/6/20 13:21, Stanislav Blinov wrote: I would think collect + minimize should do the trick. Just keep in mind that that's grossly inefficient.

If you are using Linux, keep in mind that the memory is often not returned to the OS even after a (libc) free. If you check with tools like `top`, it'll still show as assigned to the process. What I had to do (both in D and in C/C++) was to call malloc_trim [1] manually to have the memory actually sent back to the OS. [1]: https://man7.org/linux/man-pages/man3/malloc_trim.3.html
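In D, that call is just a one-line `extern(C)` declaration away. This is glibc-specific (so Linux-only); `pad` is the number of free bytes to keep at the top of the heap:

```d
import core.memory : GC;

// From glibc's malloc.h: returns 1 if memory was actually released
// back to the OS, 0 otherwise. Not available on non-glibc platforms.
extern (C) int malloc_trim(size_t pad);

void releaseUnusedMemory()
{
    GC.collect();  // collect garbage first
    GC.minimize(); // let the GC hand free pools back to the allocator
    cast(void) malloc_trim(0); // then ask glibc to return pages to the OS
}

void main()
{
    releaseUnusedMemory();
}
```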
Re: Finding out ref-ness of the return of an auto ref function
On 12/6/20 20:34, Stanislav Blinov wrote: On Friday, 12 June 2020 at 17:50:43 UTC, Arafel wrote: All in all, I still think something like `__traits(isRef, return)` would still be worth adding! After all the compiler already has all the information, so it's just about exposing it. I'm trying to think of a library solution, but I find it very hard to express "the hypothetical result of calling the current function with the current parameters in the current context". A.

If you're wrapping a function you can use the `getFunctionAttributes` trait [1], which would contain "ref" if that function returns by ref. If, however, you're wrapping a function template, you won't know until you actually instantiate it, which is basically going back to Paul Backus' solution. So the compiler doesn't always have all the information :) [1] https://dlang.org/spec/traits.html#getFunctionAttributes

Well, the compiler can know `typeof(return)`, so at that point and under the same circumstances it has to know (and thus could expose) the ref-ness. Also there could be a better and more straightforward way of checking if an expression would be an l-value... it seems it's not the first time this has come up: https://issues.dlang.org/show_bug.cgi?id=15634 The forum thread linked in the bug report is also quite interesting.
Re: Finding out ref-ness of the return of an auto ref function
On 12/6/20 18:15, Paul Backus wrote: I think I have something that works:

```
ref int foo();
int bar();

enum isLvalue(string expr) = q{
    __traits(compiles, (auto ref x) {
        static assert(__traits(isRef, x));
    }(} ~ expr ~ q{))
};

pragma(msg, mixin(isLvalue!"foo()")); // true
pragma(msg, mixin(isLvalue!"bar()")); // false
```

Basically, you can pass the result of the function call to a function with an `auto ref` parameter and check whether that parameter is inferred as ref or not.

Thanks a lot! I have to say, it works, and it's really, really clever... but it's also ugly as hell, and feels like a kludge. I had already tried something similar with an identity function, but I couldn't make it compile-time... I was missing the "compiles" + "static assert" trick. Also, it can become quite hard to check from within the function itself... in my case it's more or less doable because it's basically forwarding, so I'm essentially doing a simple mixin, but in a more complex case (with perhaps even static ifs and whatever) I can see it becoming essentially unmanageable. All in all, I still think something like `__traits(isRef, return)` would be worth adding! After all, the compiler already has all the information, so it's just about exposing it. I'm trying to think of a library solution, but I find it very hard to express "the hypothetical result of calling the current function with the current parameters in the current context". A.
Finding out ref-ness of the return of an auto ref function
Hi all, I'm hitting a problem that's driving me crazy... is there any way to find out if the return of an `auto ref` function is actually ref or not? According to the documentation [1] it depends on the return expressions; however, in my case I'm implementing `opDispatch` in a wrapper type (and trying to carry over the ref-ness), so I don't know how I can check it. Now, the whole point of this wrapper is to act differently based on whether the return is a reference or not (it already checks for `hasIndirections`, which btw doesn't help here either). I've tried to use `__traits(isRef, ???)`, but I haven't been able to find out how to use it; it seems to be meant for parameters. Perhaps it would make sense to have something like `__traits(isRef, return)`? Also I have tried making two different overloads, with and without ref, but it didn't work either... A. [1]: https://dlang.org/spec/function.html#auto-ref-functions
Re: How to disable/hide constructor when using factory method?
You are declaring the constructor, but not defining it, i.e. you're telling the compiler that it's in some other compilation unit. The compiler won't complain, but the linker will. If you replace:

```
private this();
```

with:

```
private this() {}
```

it should work. A.

On 1/24/19 1:48 PM, JN wrote: I expected that too, but it doesn't even work in the same module.

```
class Foo {
    private this();

    static Foo makeFoo() {
        Foo f = new Foo();
        return f;
    }
}

void main() {
}
```

fails with:

```
onlineapp.o:onlineapp.d:_D9onlineapp3Foo7__ClassZ: error: undefined reference to '_D9onlineapp3Foo6__ctorMFZCQzQr'
onlineapp.d:7: error: undefined reference to '_D9onlineapp3Foo6__ctorMFZCQzQr'
collect2: error: ld returned 1 exit status
Error: linker exited with status 1
```

I don't understand why this is a linker problem. My understanding is that for some reason static methods don't have access to the private constructor (they're not considered same module?). But even so, it should error with something like "Foo.makeFoo() cannot access private Foo.this()" rather than fail at linking.
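Putting the fix together, the snippet below compiles and links. Note that within the same module other code could still call `new Foo()` directly, since `private` is module-scoped in D; the factory only becomes the sole entry point from other modules.

```d
class Foo
{
    private this() {} // defined with an (empty) body, not merely declared

    static Foo makeFoo()
    {
        return new Foo();
    }
}

void main()
{
    Foo f = Foo.makeFoo();
    assert(f !is null);
}
```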
Re: Error: incompatible types for 'shared(SysTime)' and 'shared(SysTime)'
On 09/13/2018 06:59 PM, ag0aep6g wrote: On 09/13/2018 03:25 PM, Arafel wrote: // How can we update the timestamp? Neither of those work timestamp = Clock.currTime; timestamp = cast(shared) Clock.currTime; cast() timestamp = Clock.currTime;

Still not there... it doesn't work with ref parameters (and probably other things, like AAs, or at least nested AAs / arrays):

```
import std.stdio;
import std.datetime.systime;
import core.time;

void foo(ref SysTime t) {
    t += 1.dur!"minutes";
}

shared synchronized class A {
    private SysTime s;

    this() {
        cast() s = Clock.currTime; // OK, this works
    }

    void foo() {
        writeln("A.foo - Before: ", cast() s);
        // But how to do this??
        //(cast() s).foo;
        //s.foo;
        writeln("A.foo - After: ", cast() s);
    }
}

void main() {
    SysTime s = Clock.currTime;
    writeln("main - Before: ", s);
    s.foo;
    writeln("main - After: ", s);

    shared A a = new shared A;
    a.foo;
}
```

That makes me wonder if casting an lvalue makes sense at all, and how come the result is not another lvalue... what it is, I don't know, because you can assign to it, but not take a reference.
Re: Error: incompatible types for 'shared(SysTime)' and 'shared(SysTime)'
```
(*(cast(SysTime*) &s)).foo;
```

Not exactly obvious or user-friendly...
Re: Error: incompatible types for 'shared(SysTime)' and 'shared(SysTime)'
On 07/05/2016 04:16 PM, ag0aep6g wrote: On 07/05/2016 07:25 AM, ketmar wrote: cast `shared` away. yes, this is how you are supposed to use it now: cast it away, after having ensured thread safety, that is.

Sorry to resurrect an old thread, but then how can one update a SysTime field in a shared class? Like this (using a synchronized class for simplicity; this part works and the mutex acts as expected):

```
import std.concurrency;
import std.datetime.systime;
import core.thread;

public synchronized shared class A {
public:
    void doSomething() {
        // Doing something takes a couple of seconds.
        Thread.sleep(2.dur!"seconds");
        // How can we update the timestamp? Neither of these works:
        timestamp = Clock.currTime;
        timestamp = cast(shared) Clock.currTime;
    }
private:
    SysTime timestamp;
}

void main() {
    shared A a = new shared A;
    spawn( (shared A a) { a.doSomething; }, a );
    Thread.sleep(1.dur!"seconds");
    spawn( (shared A a) { a.doSomething; }, a );
}
```

Of course the kludge (and what I'll be doing) is just to use __gshared, but I expected this to be a convenience / hack to save you castings, rather than the only way to achieve it. A.
Weird (buggy) behaviour of "protected static" in classes
Hi all, I have noticed that "protected static" doesn't currently work with classes. In my case, I wanted to use "static immutable", but I have tried regular static members and methods, and the same issue happens. However, the puzzling part is that protected enums (which are a valid workaround for me, perhaps not for others) work. The spec is a bit unclear about how that's supposed to work, although I tend to think that it allows them:

---
protected only applies inside classes (and templates as they can be mixed in) and means that a symbol can only be seen by members of the same module, or by a derived class. If accessing a protected instance member through a derived class member function, that member can only be accessed for the object instance which can be implicitly cast to the same type as ‘this’. protected module members are illegal.
---

Is that a bug? I have to say, I think so, since it can also affect other symbols that are defined... for *grandchild* classes! You can see the multiple problems in the snippet [1]:

```dio.d
module dio;

public class A {
    public enum int a = 1;
    protected enum int b = 2;
    private enum int c = 3;

    public static immutable int d = 4;
    protected static immutable int e = 5;
    private static immutable int f = 6;

    protected static struct S { }
}

public class A2 {
    protected static struct S { }
}
```

```main.d
import dio;

class B : A {
    pragma(msg, "The value of A.a is: ", typeof(super).a);
    pragma(msg, "The value of A.b is: ", typeof(super).b);
    //pragma(msg, "The value of A.c is: ", typeof(super).c); // Expected failure
    pragma(msg, "The value of A.d is: ", typeof(super).d);
    pragma(msg, "The value of A.e is: ", typeof(super).e); // *BUG* Comment this line and *BOTH* errors will go away!!
    //pragma(msg, "The value of A.f is: ", typeof(super).f); // Expected failure
    S s;
}

class C : B {
    S s;
}

class B2 : A2 {
    S s;
}

class C2 : B2 {
    S s;
}

void main() {
}
```

The most shocking thing is that it is C's access to A.S that gets affected; I think that must be a compiler bug. Still, it would be nice to confirm that "protected static" is supposed to work as intuitively expected. Best, A. [1]: https://glot.io/snippets/f4a1b3x4sf
Re: Runtime introspection, or how to get class members at runtime in D
On Thursday, 7 June 2018 at 13:07:21 UTC, evilrat wrote: I don't think so. It clearly states that children must mixin too, which can mean it just grabs symbols in scope only, and the base class has no way of knowing about its subclasses. It also has an "aggressive mode" that will make metadata for all public symbols(?) it can walk; this may or may not be helpful depending on your requirements.

Yes, that's what I understood from looking at it, but perhaps I was just missing something. I wonder though how the "aggressive mode" would work with separate compilation / dlopen'ed libraries. Perhaps I should give it a try and see what happens.

Besides, there is no way (not that I am aware of) to make self-registering stuff happen; you still need to call it somewhere. The most transparent option is probably just doing a mixin in each module that performs registration of all module symbols in a module ctor. The point is that there is an absolute requirement to make an explicit call for that, be it a module ctor mixin, a class mixin, or even user-provided registration, both at compile time or run time. But since it is MIT licensed you can probably use the code as the starting point and adjust it to your own needs. BTW, plug-ins are something that is right now possible on Linux (not sure about support on other *NIX systems), but only in a very primitive form on Windows. This is related to DLL support issues (such as type information not being passed across process/DLL boundaries); these issues may also include runtime issues such as the inability to delegate the GC, which will mean there will be 2 (or more) concurrently running GC's. But again, I am not aware of the current situation.

Well, I'm already tightly coupled to linux, so this is not a big concern for me :-) I'll keep trying, as I said, my intention was to let plugin writers do it as easily as possible, but well, adding some kind of "register" function might be necessary in the end... A.
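To make the "mixin in each module that performs registration in a module ctor" idea from above concrete, a minimal sketch could look like the following. Note that `registry` and the mixin name are made up for illustration; this is not taken from any existing library, and I haven't battle-tested it:

```d
// Hypothetical process-wide registry: class name -> runtime type info.
__gshared TypeInfo_Class[string] registry;

mixin template RegisterModuleClasses(alias mod)
{
    shared static this()
    {
        // Walk the symbols of the given module and register every class
        // it declares. Runs when the (shared) library is loaded.
        static foreach (name; __traits(allMembers, mod))
        {
            static if (is(__traits(getMember, mod, name) == class))
                registry[name] = typeid(__traits(getMember, mod, name));
        }
    }
}
```

Each plugin module would then only need a single line, something like `mixin RegisterModuleClasses!(mixin(__MODULE__));` — still an explicit call, as evilrat points out, but at least a one-liner.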
Re: Runtime introspection, or how to get class members at runtime in D
Thanks for all the answers! Is it possible to register, say, a base class, and have all the subclasses then registered automatically? My idea would be to make it as transparent as possible for the plugin implementation, and also not to have to depend on it. A.

There is a library that creates reflection metadata for you. [1] It seems a bit outdated and has some not-that-obvious compilation errors (for example getting a ctor and calling it with a runtime-known type, or some other non-template stuff), but other than that it seems to be working (note that I didn't thoroughly test it, but its unittests succeed on DMD 2.080 for both Windows x86 mscoff & x64).

[1] https://code.dlang.org/packages/witchcraft
Re: Runtime introspection, or how to get class members at runtime in D
On 06/06/2018 03:52 PM, Adam D. Ruppe wrote: It is possible to add it to the runtime library right now (there's a thing called rtInfo in there made to hold it, it is just null right now), just people fight over even a few bytes of added metadata. So if it is added, it would surely be some opt-in thing that will require your thing be recompiled anyway.

If I wanted to add it myself, would I need to create a personalised D compiler and/or D runtime? That would probably be way too much for me :) Also, it would have to be distributed and used to create the plugins...

If you can recompile the library, you can add a parallel extended info thing (MyReflectionInfo[TypeInfo] array perhaps, populated by a static this() ctor created via compile time reflection) that gives what you need.

Yeah, I had some ideas in this regard; still, I would like it to be transparent from the plugin creator's point of view, but I don't know if it would be possible to contain everything in the base class... so far I had thought about a base class like this:

```d
import std.traits;
import std.meta;

TypeInfo[string][TypeInfo_Class] RTInfo;

class Base
{
    this(this C)()
    {
        if (typeid(C) in RTInfo)
            return;
        RTInfo[typeid(C)] = (TypeInfo[string]).init;
        static foreach_reverse (Type; AliasSeq!(C, BaseClassesTuple!C))
        {
            static foreach (string field; FieldNameTuple!Type)
            {
                RTInfo[typeid(Type)][field] =
                    typeid(typeof(__traits(getMember, Type, field)));
            }
        }
    }
}
```

But I think child classes can bypass this constructor, so I guess it's not so easy; I will have to keep trying :-) A templated static this would be cool, though:

```d
class Base
{
    static this(this C)()
    {
        // ...
    }
}
```

Apparently it's not possible :-(
Re: Runtime introspection, or how to get class members at runtime in D
On 06/06/2018 03:30 PM, rikki cattermole wrote: You don't want TypeInfo.

Why not (genuine question)? There's even myObject.classinfo, and I can only assume that there's some reason why it's there... In this case, what I'm trying to do is to serialize / dump / print the contents of an object (class instance) without knowing its actual runtime type. Before somebody suggests compile-time introspection: the "main" code where this routine lives only provides a base class, and it's up to dlopen'ed plugins to provide the actual implementation... so I'm sorry, but no compile-time solution can possibly work. Also, having each derived class provide its own dumping information is not practical; I'd rather have it automated. I know it might not be the most idiomatic D, but as somebody with mostly a Java background (with some C and just a bit of C++) it seems something really straightforward to me: myObject.getClass().getFields() [2].

Doesn't exist.

Well, thanks for the quick and succinct answer... I guess the question now would be how realistic it would be to propose such an addition to the language... Has it already been discussed? (I tried searching the forum, but didn't find anything relevant.) I know it's got a runtime penalty, but realistically speaking, spending some bytes for the field names in the TypeInfo of a class shouldn't be that much of a problem?
Runtime introspection, or how to get class members at runtime in D
Hi, What is the state of runtime introspection in D, specifically for classes? Is there any way to get *at runtime* the (public or otherwise accessible) members of a class? I have had a look at TypeInfo_Class [1], but apparently I can only get a list of types and offsets... which would be almost good enough, if not for the fact that the names of the members are missing, or at least I haven't been able to find them.

In this case, what I'm trying to do is to serialize / dump / print the contents of an object (class instance) without knowing its actual runtime type. Before somebody suggests compile-time introspection: the "main" code where this routine lives only provides a base class, and it's up to dlopen'ed plugins to provide the actual implementation... so I'm sorry, but no compile-time solution can possibly work. Also, having each derived class provide its own dumping information is not practical; I'd rather have it automated.

I know it might not be the most idiomatic D, but as somebody with mostly a Java background (with some C and just a bit of C++) it seems something really straightforward to me: myObject.getClass().getFields() [2]. Also, I know I could add some UDA or even crawl the modules and have this information generated automatically at compilation time and added to the type itself in a member, and I might even end up doing it, but honestly, I think it's something that the language should provide in an easy / accessible way. Powerful as compile-time introspection is, I think runtime shouldn't be forgotten either :-)

Thanks, A.

[1]: https://dlang.org/library/object/type_info__class.html
[2]: https://docs.oracle.com/javase/8/docs/api/java/lang/Class.html#getFields--
Re: Getting the overload set of a template
On Monday, 23 April 2018 at 16:52:11 UTC, Alex wrote: On Monday, 23 April 2018 at 16:16:09 UTC, Arafel wrote:

```d
import std.meta;

void main()
{
    pragma(msg, __traits(getMember, A, "Foo1").stringof); // Foo1(int N) if (N & 1)
    pragma(msg, __traits(getAttributes, __traits(getMember, A, "Foo1"))[0]); // tuple("int", "odd")
    alias f1a = Instantiate!(__traits(getMember, A, "Foo1"), 1); // This is expected
    pragma(msg, f1a); // A
    alias f1b = Instantiate!(__traits(getMember, A, "Foo1"), "+"); // Why would I know that I can even instantiate?? Also, can I haz UDA plz?
    pragma(msg, f1b); // B
}

class A
{
    @("int", "odd") template Foo1(int N) if (N & 1) { enum Foo1 = "A"; }
    @("string", "+") template Foo1(string op) if (op == "+") { enum Foo1 = "B"; }
}
```

I'm not arguing about the case of different interfaces. It is more or less OK, as from different argument types it will be unambiguous which template will be instantiated. It is the case of differentiating templates by their structure and/or constraints. In this case, it is almost certain that more than one form of implementation exists. However, the forms will yield the same semantic result. And I'm wondering why the implementation form alone leads to differentiation.

Well, with templates the overload resolution must always be unambiguous:

```d
import std.stdio;

void main()
{
    pragma(msg, A.Foo1!2);
    pragma(msg, A.Foo1!3);
    static assert(!is(typeof(A.Foo1!6))); // Compilation failure if there is any ambiguity
}

class A
{
    template Foo1(int N) if ((N % 2) == 0) { enum Foo1 = "A"; }
    template Foo1(int N) if ((N % 3) == 0) { enum Foo1 = "B"; }
}
```

Also, you can try without a constraint; it will still complain. But you are arguing from the point of view of a hypothetical semantic equivalence that I don't think is so clear. Both are tools that in some cases can lead to the same result, but there are also cases where they don't match.
You could also argue that function overloads are just semantically equivalent to a single function with variadic arguments. Whether the compiler actually lowers it like that or not should be just an implementation detail, and thus simply not relevant. And from a syntactical point of view, it wouldn't make any sense if the following "overloads" were treated differently:

```d
class A
{
    @("int", "odd") template Foo1(int N) if (N & 1) { enum Foo1 = "A"; }
    @("int", "even") template Foo1(int N) if (!(N & 1)) { enum Foo1 = "B"; }
    @("string", "+") template Foo1(string op) if (op == "+") { enum Foo1 = "C"; }
    @("multi", "string") template Foo1(T...) if (allSatisfy!(isSomeString, typeof(T)) && T.length > 1) { enum Foo1 = "D"; }
    @("multi", "double") template Foo1(T...) if (allSatisfy!(isFloatingPoint, typeof(T)) && T.length > 1) { enum Foo1 = "E"; }
}
```

How would you know which ones are "real" overloads (in your meaning)? A.
Re: Getting the overload set of a template
I think both versions are not equivalent at all. Consider [1]:

```d
import std.meta;

void main()
{
    pragma(msg, __traits(getMember, A, "Foo1").stringof); // Foo1(int N) if (N & 1)
    pragma(msg, __traits(getAttributes, __traits(getMember, A, "Foo1"))[0]); // tuple("int", "odd")
    alias f1a = Instantiate!(__traits(getMember, A, "Foo1"), 1); // This is expected
    pragma(msg, f1a); // A
    alias f1b = Instantiate!(__traits(getMember, A, "Foo1"), "+"); // Why would I know that I can even instantiate?? Also, can I haz UDA plz?
    pragma(msg, f1b); // B
}

class A
{
    @("int", "odd") template Foo1(int N) if (N & 1) { enum Foo1 = "A"; }
    @("string", "+") template Foo1(string op) if (op == "+") { enum Foo1 = "B"; }
}
```

In this case you could perhaps use an alias parameter to achieve a similar effect. I haven't tried it, but it would be really messy, if it even works. What seems clear to me from this case is that *internally* the compiler sees a *set* of "overloads" (change that word if you think it should only apply to functions), but I can only get the first of the batch! For example, I might want to add some information in the UDA on how to instantiate the template, but then I can't get the UDA either. I'm sure it's somewhere, just that we get no access to it, and that shouldn't be too hard to add. I think this clarifies the problem I see a bit. A.

[1]: https://run.dlang.io/is/zRDHGn

On 04/23/2018 05:00 PM, Alex wrote: On Monday, 23 April 2018 at 14:22:13 UTC, Simen Kjærås wrote: As with all things D, the only real spec is the compiler source code. :p :( :p Proving that two templates are equivalent is in general impossible, since any amount of wasted computation could be performed before the end result is returned, and inputs must be tested exhaustively for the proof to be valid. The fact that two templates give the same result in one special case does not mean that they are equivalent in the general case, and the compiler needs to care about the general case. OK, that's exactly the point.
If you have functions

```d
void foo() {}
void foo(int n) {}
```

there is no ambiguity about which function will be chosen when it is called. If you have templates

```d
// form 1
template Foo(int N) if (N & 1) {}    // A
template Foo(int N) if (!(N & 1)) {} // B
```

OR

```d
// form 2
template foo(int N)
{
    static if (N & 1) {} // A
    else {}              // B
}
```

there is also no ambiguity about which will be called. However, getOverloads will behave differently. This is not bad at all. But you have to admit that while right now there is no way to distinguish form 1 and form 2, with the new getOverloads there will be. This seems strange to me, because there is no reason to distinguish form 1 and form 2. (Because the callable code which will be generated is the same, I hope... ?) So, in particular, I'm not against the feature. And if the equivalence between form 1 and form 2 is gone, so what. But I don't understand the reasoning why something which is now equal won't be equal any more later?
Re: Getting the overload set of a template
Well, if that's the lowering, then it's indeed hard. That doesn't mean it shouldn't happen, though... perhaps by changing the lowering? (I'm no compiler expert, so I have no idea how.) What I'd like to get is the same thing that I get using __traits(getMember, ...), but repeated n times (an AliasSeq perhaps?), like with regular overloads. Then, whatever I can do with the first entry (the only one I can get currently) should also be possible with the rest. In my case, I'd like to access the UDAs, but I imagine that any use case that lets us get a template remains valid for all the "hidden" alternatives. Also, I think that whether to use "getOverloads" or to add a new trait is rather an implementation detail. It's a bit frustrating being able to access only the first of a set... A.

> Would it be possible at all? I mean, if the two following codes are equivalent
> ```
> @S("Has foo_A") template foo(string s) if (s == "a") {
>     enum foo = "foo_A";
> }
> @S("Has foo_B") template foo(string s) if (s == "b") {
>     enum foo = "foo_B";
> }
> ```
>
> ```
> template foo(string s)
> {
>     static if (s == "a")
>     {
>         @S("Has foo_A") enum foo = "foo_A";
>     }
>     else static if (s == "b")
>     {
>         @S("Has foo_B") enum foo = "foo_B";
>     }
> }
> ```
>
> How would you define a "template overload"? And which "overloads" would you like to get if constraints are more general? And last but not least, getOverloads is defined on functions, which are callable, whereas templates are not, in general...
getSymbolsByUDA and inheritance
Hi, getSymbolsByUDA doesn't work when inheritance is involved [1]:

```d
import std.traits;

void main()
{
    pragma(msg, getSymbolsByUDA!(A, S).length);
    pragma(msg, getSymbolsByUDA!(B, S).length);
}

class A
{
    @S("A") int a;
}

class B : A
{
    @S("B") int b;
}

struct S
{
    string name;
}
```

The error message seems a bit weird, and after some tinkering, I've been able to reproduce it when creating an AliasSeq with members of both the parent and the child class [2]:

```d
import std.meta;

void main()
{
    pragma(msg, AliasSeq!(__traits(getMember, B, "a"), __traits(getMember, B, "b")));
}

class A
{
    int a;
}

class B : A
{
    int b;
}
```

It seems that using __traits(getMember, ...) is introducing some kind of hidden context pointing to the actual class that defines the member... It looks like a bug to me, but there might be a reason to do it this way. Still, whatever the reason, it definitely breaks getSymbolsByUDA when inheritance is involved. According to the documentation [3] only nested members are excluded (and in any case they are just excluded, but still compile), so is this a bug?

[1]: https://run.dlang.io/is/502CUB
[2]: https://run.dlang.io/is/9wOIsa
[3]: https://dlang.org/library/std/traits/get_symbols_byuda.html
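Until this is fixed, a possible workaround is to collect the tagged symbols one class at a time along the inheritance chain, so that parent and child members never end up in the same `__traits` call. This is only an untested sketch (`symbolsByUDAWithBases` and its helpers are names I made up, not Phobos API), and it will likely need tweaking for corner cases like constructors or private members:

```d
import std.meta : AliasSeq, Filter, staticMap;
import std.traits : BaseClassesTuple, hasUDA;

// Symbols of C's *own* (derived) members carrying `attribute`,
// deliberately ignoring anything inherited.
template ownSymbolsByUDA(C, alias attribute)
{
    alias toSymbol(string name) = __traits(getMember, C, name);
    enum tagged(alias sym) = hasUDA!(sym, attribute);
    alias ownSymbolsByUDA =
        Filter!(tagged, staticMap!(toSymbol, __traits(derivedMembers, C)));
}

// Walk T and all of its base classes separately and flatten the results.
template symbolsByUDAWithBases(T, alias attribute)
{
    alias forClass(C) = ownSymbolsByUDA!(C, attribute);
    alias symbolsByUDAWithBases =
        staticMap!(forClass, AliasSeq!(T, BaseClassesTuple!T));
}
```

With the classes from the snippet above, `symbolsByUDAWithBases!(B, S)` should yield both `b` and `a`, with `Object` contributing nothing.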
Getting the overload set of a template
Hi! Is there any way to get the full set of templates that are "overloaded" (in my case, based on constraints)? Basically, I'd like to make something like https://run.dlang.io/is/z2LeAj return both versions of the template (and then retrieve their UDAs)... If it's not possible, I can still work around it, but I find it a bit frustrating that you can only access the first version of a template. Thanks! Arafel
Re: Access derived type in baseclass static function template
What you are looking for is virtual static methods, and D doesn't have those. I don't know if there's a way to make it work with existing features. Well, there are interesting things to do: https://dpaste.dzfl.pl/ed826ae21473 I don't know if that's what one would call "virtual static", but I'd say it comes close...
Re: Access derived type in baseclass static function template
On 08/02/2017 02:07 PM, Timoses wrote: Hey, wondering whether it's possible to access the derived type from a function template in the base class or interface. this T does not seem to be working, I guess because it's a static function and this does not exist?! [...] Any way I could accomplish this?

Well, it's a clumsy workaround, but the only thing missing seems to be the "this T" automatic deduction. I was recently hit by something similar: the "this" parameter deduction only works for instance methods. It was not totally clear whether that was a bug or a feature... The documentation [1], however, is quite clear: "TemplateThisParameters are used in member function templates to pick up the type of the this reference." So, static functions don't seem to be covered. You can, however, make it explicit:

```d
B.test!B();
C.test!C();
```

And then even alias it to prevent accidental mismatches:

```d
import std.stdio;

interface I
{
    static void test(this T)()
    {
        writeln(T.type.stringof);
    }
}

abstract class A
{
    static void test(this T)()
    {
        writeln(T.type.stringof);
    }
}

class B : A
{
    alias type = uint;
}

class C : I
{
    alias type = int;
}

void main()
{
    test!B();
    test!C();
}

alias test(T) = T.test!T;
```

[1]: http://dlang.org/spec/template.html#TemplateThisParameter
Re: Taking the address of an eponymous template
On 07/31/2017 12:14 PM, ag0aep6g wrote:
> You'd have to instantiate the inner template, too. Something like `&c.baz!"a".baz!()`, but that doesn't work. I don't know how you could make it work.

I tried this as well, and couldn't make it work either. Do you know if it's supposed to work? I mean, does the spec mention this? The funny part is that it does work with nested, non-eponymous templates:

```d
class A
{
    template foo(string S)
    {
        void bar()()
        {
            import std.stdio;
            writeln("I'm bar");
        }

        void baz(this T)()
        {
            import std.stdio;
            writeln("I'm baz at class ", typeid(T));
        }
    }
}

class B : A
{
}

void main()
{
    A a = new A();
    B b = new B();

    void delegate() aBar = &a.foo!("a").bar!();
    aBar();

    void delegate() aBaz = &a.foo!("a").baz!(typeof(a));
    aBaz();

    void delegate() bBaz = &b.foo!("a").baz!(typeof(b));
    bBaz();
}
```

OK, not directly with the "this" parameter... that one you have to include explicitly. However, this seems to be an unrelated problem: the "this T" parameter seems to be automatically deduced only during function calls. Even this doesn't work:

```d
class A
{
    template foo(this T)
    {
        void bar()
        {
            import std.stdio;
            writeln(typeid(T));
        }
    }
}

void main()
{
    A a = new A();
    a.foo.bar();
}
```

But I think that's a completely separate issue (would that be a bug, btw?). It's of course a trivial issue here, but it's just an example. What use is it to allow "this T" parameters in raw template declarations if they are not going to be automatically filled?

> (Assuming the inner baz is supposed to be `void baz(this T)() {}`.)

Sure :-)

> You'd still have to instantiate the inner baz in order to get a delegate of it. But even if we figured out how to do that, my guess is you don't want to specify `this T` explicitly.
>
> So how about a function literal:
>
> void delegate() aBaz = () => c.baz!(int, float)();

Yeah, that's the solution I was thinking about, but I don't know how much of a performance hit the extra function call would be... would the function literal's extra indirection layer eventually be optimised out?

> That's right if you want to pass `args` explicitly, but `this` implicitly. If specifying `args` via IFTI is an option, then this works, too:
>
> class C {
>     void baz(this T, args...)(args) {}
> }
>
> void main() {
>     C c = new C();
>     void delegate() aBaz = () => c.baz(1, 2.3, "four");
> }
>
> A function literal again, because you have to call baz in order to instantiate it (or you have to specify `this T` and `args` explicitly). But you can't get a delegate from a call.

This wouldn't work in my case because the arguments ("args") are strings themselves, so the function call needs to look like:

c.baz!("one", "two");

and not:

c.baz("one", "two");

The reasons for that are a bit more complex, but let's say that in this case I need the strings to be in the template parameters; I use those strings to create an AliasSeq of values of different types that is then sent to a "proper" variadic templated function. Off topic, if anyone knows how to create a va_list dynamically, that would save me a lot of problems!!
Taking the address of an eponymous template
Hi! I want to create a delegate out of a method that happens to be an eponymous (nested) template, like this:

```d
class C
{
    void foo() {}

    void bar(string S)() { }

    template baz(string S)
    {
        void baz()() { }
    }
}

void main()
{
    C c = new C();
    void delegate() aFoo = &c.foo;
    void delegate() aBar = &c.bar!"a";
    void delegate() aBaz = &c.baz!"a"; // This doesn't compile.
}
```

If I try &c.baz!"a".baz it doesn't work either (I get a different error message). Do you know if this works (and if so, what I should do), or if it's supposed to? Of course in this case I don't need to use an eponymous template at all, but it's just a simplification to try to get everything else out of the way... In case anyone is interested, the real case is something more like this:

```d
class C
{
    template baz(args...) if (someCondition!args)
    {
        void baz(this T)() { }
    }
}
```

As far as I know, that's the only way to combine a "this" template parameter with variadic template parameters. As usual, thanks for the great support, D has got a great community!

P.S.: When the function inside the eponymous template is not templated itself, then it does work:

```d
class C
{
    template baz(string S)
    {
        void baz() { }
    }
}

void main()
{
    C c = new C();
    auto aBaz = &c.baz!"a";
}
```
Re: It makes me sick!
On 07/28/2017 03:29 PM, Mike Parker wrote: The D installer completely uninstalls the previous installation. Anyone who chooses to instead manually extract the zip file should manually delete the previous installation to avoid potential problems. As Jonathan said earlier, overwriting works most of the time, but whenever anything is removed, issues like this can crop up. To me the only issue would be that (one of) the documentation pages [1] only talks about the zip file. I think it should be made clearer that the installer is the recommended / supported way, and that the zip is only meant for experts (with a recommendation to uncompress to a clean directory to avoid problems). I know this page is not the MAIN "download" [2] page, but it's both reached from the "About" link, and as the first google hit for "dlang download windows", so it should be kept as up to date as possible. [1]: https://dlang.org/dmd-windows.html#installation [2]: https://dlang.org/download.html
Re: "shared" woes: shared instances of anonymous classes
Well, in both snippets there's an extra closing parenthesis... it's obviously a typo; blame copy and pasting and not cleaning up afterwards :)

On 07/07/2017 11:14 AM, Arafel wrote: Hi! I'm trying to wrap my mind around "shared", and I think I have managed to more or less grasp it. However I'm having a problem, and it seems it's just a missing feature (or rather combination of features) in the language (or I haven't found the right keyword combination). Is there any way to create a shared instance of an anonymous class? Let's say:

```d
class C
{
    shared this() { }
}

void main()
{
    shared C c = new /* shared */ C {
        shared this() { super(); }
    });
}
```

This doesn't compile because, of course, the instantiation of the anonymous class is not shared. However, the following code doesn't compile either, and I'm not even sure what "shared" is supposed to mean in this context:

```d
class C
{
    shared this() { }
}

void main()
{
    shared C c = new shared(C) {
        shared this() { super(); }
    });
}
```

I tried playing around a bit [1] (the non-shared constructors are needed), and ended up even more confused than before!! Of course if I create a proper named class D, I can instantiate shared instances of D, so it's not like there's no workaround... still, coming from Java I like anonymous classes, and I think it'd be cool to be able to use them in this context. If somebody knows how this works / is supposed to work, I'd be thankful!

[1]: https://dpaste.dzfl.pl/ce2ba93111a0
"shared" woes: shared instances of anonymous classes
Hi! I'm trying to wrap my mind around "shared", and I think I have managed to more or less grasp it. However I'm having a problem, and it seems it's just a missing feature (or rather combination of features) in the language (or I haven't found the right keyword combination). Is there any way to create a shared instance of an anonymous class? Let's say:

```d
class C
{
    shared this() { }
}

void main()
{
    shared C c = new /* shared */ C {
        shared this() { super(); }
    });
}
```

This doesn't compile because, of course, the instantiation of the anonymous class is not shared. However, the following code doesn't compile either, and I'm not even sure what "shared" is supposed to mean in this context:

```d
class C
{
    shared this() { }
}

void main()
{
    shared C c = new shared(C) {
        shared this() { super(); }
    });
}
```

I tried playing around a bit [1] (the non-shared constructors are needed), and ended up even more confused than before!! Of course if I create a proper named class D, I can instantiate shared instances of D, so it's not like there's no workaround... still, coming from Java I like anonymous classes, and I think it'd be cool to be able to use them in this context. If somebody knows how this works / is supposed to work, I'd be thankful!

[1]: https://dpaste.dzfl.pl/ce2ba93111a0
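For completeness, the named-class workaround mentioned above can at least be kept local to the function, which preserves some of the ad-hoc feel of an anonymous class. A sketch (I haven't checked every qualifier combination):

```d
class C
{
    shared this() { }
}

void main()
{
    // A function-local named class instead of an anonymous one.
    // `static` avoids capturing the enclosing frame, and `new shared`
    // then works on it as on any other class.
    static class D : C
    {
        shared this() { super(); }
    }

    shared C c = new shared D;
}
```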
Re: Undefined symbol for, apparently, valid code?
On 07/06/2017 05:11 PM, unleashy wrote: Maybe it was an error on my part for not declaring the function as abstract? My view was that the abstract attribute on a class marks all its members as virtual unless they have a body, which is how it works in, say, Java. Still, kinda odd that the linker is the one to call me out, and not the compiler. Pretty unexpected.

I think an "abstract" class is only one that cannot be directly instantiated, only through a derived class. It might be "complete", though. This doesn't mean (and that's the confusion) that there couldn't be a function with a body defined in another compilation unit. In Java there are no forward declarations, so there is no ambiguity here: a bodyless method means an abstract method. In D a bodyless method can mean:

* An abstract method (to be provided by derived classes), which of course must then be virtual. This is indicated with the keyword "abstract".
* A method whose body is provided by some other compilation unit.

Of course, an abstract class without abstract methods isn't usually the intended idea... but the compiler will silently accept it and let the linker complain afterwards. Again, "final by default" to me is more confusing than anything else, especially when it's usually "virtual by default"... and it's clearly a bug that it compiles after explicitly overriding!
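A tiny example of the distinction described above: the plain bodyless method only fails at link time, while the `abstract` one is caught by the compiler (a sketch):

```d
abstract class Base
{
    // Bodyless and NOT abstract: D assumes the body lives in another
    // compilation unit, so a missing body only shows up as a linker error.
    void definedElsewhere();

    // Bodyless and abstract: derived classes must override it, and the
    // compiler rejects any concrete subclass that forgets to.
    abstract void mustOverride();
}

class Derived : Base
{
    // Removing this override would be a compile-time error,
    // not a linker error.
    override void mustOverride() { }
}
```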
Re: Undefined symbol for, apparently, valid code?
Well, it happened to me once [1], and the reason is that templated functions are final by default (since, as you said, it doesn't make sense for them to be anything else). This way the body of the function is assumed to be in a different compilation unit (which it is not, hence the linker error). If the variable had been declared of type "Foo" instead of "Asd" it would probably have worked, although that kind of defeats the purpose. Whether it makes sense that this construction is allowed is a different question. I personally think it makes sense to have the user explicitly ask for "final", since we otherwise have "virtual by default", so this behaviour is completely unexpected by most users. The whole thing makes even less sense if you take into account that an explicit request to override is just silently ignored. Finally, also bear in mind that if the function had been declared abstract (as it arguably should), a compile-time error would have been generated [2].

[1]: http://forum.dlang.org/post/kgxwfsvznwzlnhrdp...@forum.dlang.org
[2]: https://dpaste.dzfl.pl/22f7e0840f01

On 07/06/2017 08:48 AM, rikki cattermole wrote:
> > Templates+classes = require function body.
> > Why?
>
> Templated methods are not virtual, they are final and cannot be inherited (so it's a little strange that the override is valid).
Re: Weird template instantiation problem
Well, I had kind of found a workaround (changing the return type to return the element and not the index), which I didn't like too much (what if there are duplicates?). Now that I've found a "proper" workaround, well, I'm still interested in knowing the reason, if possible, or whether it's a bug.

On 06/12/2017 09:49 PM, ketmar wrote: yeah, sorry for not proposing a workaround: i thought that you already did it, and now you're just interested why the original code doesn't work. ;-) i think that this is a bug (or, rather, unimplemented feature).
Re: Weird template instantiation problem
On Monday, 12 June 2017 at 19:23:10 UTC, ketmar wrote: p.s.: while i understand the technical reason for second error message, it is still random and confusing.

I think the reason for the typeof problem is that it works with expressions, not with types (so typeof(int) is also not valid), and the alias resolves ultimately to a type. I actually found a workaround for the original issue:

```d
enum defaultChooser(T) = function size_t(T[] queue) { return 0; };

struct S(T, alias chooser = defaultChooser!int)
    if (is(typeof(chooser) : size_t function(T[])))
{
}

void main()
{
    S!(int, defaultChooser!int) s;
}
```

This works, but strangely if I try "==" instead of ":" in the template condition, then it fails again. Honestly I don't know why it makes a difference; I guess attribute inference might be at fault... but in the version with the "static assert" I was explicitly checking them, and they apparently matched... Also, this is just a(n ugly) workaround, and there might be side effects of using an alias parameter that I'm not aware of... and more importantly, I still think the original version should work! ;-)
Re: Weird template instantiation problem
On 06/12/2017 05:31 PM, Arafel wrote: Hi, I've found a strange problem, and I'm not sure if it's a bug. To give a bit of background, I'm implementing a multi-threaded producer-consumer where the next work item to be picked depends not only on the "waiting queue", but also on what else is being run (and potentially where) at the same moment, so things like "sort"'ing the queue won't probably work, because I don't think you use a delegate as a predicate for "sort" (that's what I think it would be needed to get the extra context information). The idea here is that the "chooser" function returns the *index* of the work item to be picked. So, the reduced problem looks like this (I've removed the extra information about the running jobs to make the example simpler): ``` enum defaultChooser(T) = function size_t(T[] queue) { return 0; }; struct S(T, size_t function(T[]) chooser = defaultChooser!T) { } void main() { S!int s; } ``` this fails and I get this: Error: template instance S!int does not match template declaration S(T, ulong function(T[]) chooser = defaultChooser!T) If instead of returning the index the actual item is returned, it works! ``` enum defaultChooser(T) = function T(T[] queue) { return queue[0]; }; struct S(T, T function(T[]) chooser = defaultChooser!T) { } void main() { S!int s; } ``` As you can see, the only change is the type the function returns, but I don't see how it should make any difference. Also, changing from "enum" to "static immutable", or even removing the "enum" and directly embedding the function literal doesn't seem to make any difference. Any ideas on what might be going on?? 
Even more strange:

```
enum defaultChooser(T) = function size_t(T[] queue) { return 0; };

static assert(is(typeof(defaultChooser!int) == size_t function(int[] queue) pure nothrow @nogc @safe));

struct S(T, size_t function(T[] queue) pure nothrow @nogc @safe chooser)
{
}

void main()
{
    S!(int, defaultChooser!int) s;
}
```

The static assert passes (I tried it with deliberately wrong values to make sure it's really being checked), yet I get this error message:

Error: template instance S!(int, function ulong(int[] queue) => 0LU) does not match template declaration S(T, ulong function(T[] queue) pure nothrow @nogc @safe chooser)

Am I missing something fundamental? But then, why does it work if I change the return type in the template parameter?
Weird template instantiation problem
Hi, I've found a strange problem, and I'm not sure if it's a bug. To give a bit of background, I'm implementing a multi-threaded producer-consumer where the next work item to be picked depends not only on the "waiting queue", but also on what else is being run (and potentially where) at the same moment, so things like "sort"ing the queue probably won't work, because I don't think you can use a delegate as a predicate for "sort" (that's what I think would be needed to get the extra context information). The idea here is that the "chooser" function returns the *index* of the work item to be picked. So, the reduced problem looks like this (I've removed the extra information about the running jobs to make the example simpler):

```
enum defaultChooser(T) = function size_t(T[] queue) { return 0; };

struct S(T, size_t function(T[]) chooser = defaultChooser!T)
{
}

void main()
{
    S!int s;
}
```

this fails and I get this:

Error: template instance S!int does not match template declaration S(T, ulong function(T[]) chooser = defaultChooser!T)

If instead of returning the index the actual item is returned, it works!

```
enum defaultChooser(T) = function T(T[] queue) { return queue[0]; };

struct S(T, T function(T[]) chooser = defaultChooser!T)
{
}

void main()
{
    S!int s;
}
```

As you can see, the only change is the type the function returns, but I don't see how it should make any difference. Also, changing from "enum" to "static immutable", or even removing the "enum" and directly embedding the function literal doesn't seem to make any difference. Any ideas on what might be going on?
Virtual nested classes and "this"
Hi, I have been poking around with overriding internal classes, and after reading [1] it was actually not clear to me whether it could be done or not, so I started trying. The good news (for me, at least) is that it can mostly be done [2], however I found it a bit intriguing that I need to explicitly use "this.i" instead of just "i" in B.fb() [3]. Just in case, the code I got to work is this:

```
class A
{
    public static class I
    {
        public string fai() { return "A.I.fai"; }
    }

    public string fa() { return i.fai(); }

    public this(this C)()
    {
        i_ = new C.I();
    }

    protected I i_;

    public @property T.I i(this T)()
    {
        return cast(T.I) this.i_;
    }
}

class B : A
{
    override public static class I : A.I
    {
        override public string fai() { return "B.I.fai"; }
        public string fbi() { return "B.I.fbi"; }
    }

    public this(this C)()
    {
        super();
    }

    public string fb()
    {
        return this.i.fbi(); // Why is "this" needed here?
    }
}

void main()
{
    A a = new A();
    A ab = new B();
    B b = new B();
    assert(a.fa() == "A.I.fai");
    assert(ab.fa() == "B.I.fai");
    assert(b.fa() == "B.I.fai");
    assert(b.fb() == "B.I.fbi");
}
```

Is there a reason for that? Why can't it be inferred as in the regular case? Also, if there's a way to do it without using the property wrapper, I'd be glad to know it :) I tried something like:

```
template i(this T)
{
    T.I i;
}
```

but it didn't like it... I guess members have to be better defined...

Best,

A

[1]: https://forum.dlang.org/thread/siwjqxiuocqtrldcz...@forum.dlang.org
[2]: https://dpaste.dzfl.pl/8f4e0df438e5
[3]: https://dpaste.dzfl.pl/8f4e0df438e5#line-34
Re: Variable-Length Bit-Level Encoding
On Saturday, 12 November 2016 at 19:13:13 UTC, Nordlöw wrote: I'm looking for libraries/snippets (either in D or similar languages) that perform variable-length encoding of unsigned integers onto a bit-stream. Requirement is that smaller inputs (integer values) should be encoded with equal or fewer bits. This

0 => [0]
1 => [1,0]
2 => [1,1,0]

is easy but assumes a too extreme input value distribution. Does anybody have a suggestion for an encoder that is more suitable for real-world values that are, for instance, normally distributed?

If you have a sample of your data, perhaps Huffman codes (https://en.wikipedia.org/wiki/Huffman_coding) might be an option?
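Another family worth looking at (my own suggestion, not something from the thread) are universal codes such as Elias gamma, which still give smaller values equal or fewer bits but whose length grows logarithmically rather than linearly like the unary scheme above. A rough, untested sketch:

```
import core.bitop : bsr;

// Elias gamma code for n >= 1: floor(log2 n) zero bits as a length
// prefix, followed by the binary representation of n (MSB first).
bool[] eliasGamma(uint n)
{
    assert(n >= 1);
    immutable len = bsr(n); // index of the highest set bit
    bool[] bits;
    foreach (i; 0 .. len)
        bits ~= false;               // unary length prefix
    foreach_reverse (i; 0 .. len + 1)
        bits ~= ((n >> i) & 1) != 0; // the value itself
    return bits;
}

// eliasGamma(1) => [1]           (1 bit)
// eliasGamma(4) => [0,0,1,0,0]   (5 bits)
```

For a distribution concentrated around a known mean, Huffman (as suggested above) or Rice/Golomb codes would probably still be a better fit.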
Re: Template method in interfaces
On Wednesday, 10 August 2016 at 15:52:29 UTC, Lodovico Giaretta wrote: On Wednesday, 10 August 2016 at 15:48:10 UTC, Lodovico Giaretta wrote: On Wednesday, 10 August 2016 at 15:39:19 UTC, Arafel wrote: Would it even make sense to "force" (deprecation warning) a "final" keyword in any implicitly-final function (I wasn't even aware of those, I have to admit)? It would make things much clearer, like with "override"... I read the spec again, and found out that it says interfaces cannot contain templated functions... So either my interpretation is the intended one and the spec is outdated, or the spec is right and the compiler is bugged. Anyway, what I said about implicit final is true for classes. In classes, I don't like the idea of having to put an explicit final, but this is debatable. For interfaces, I'm ok with forcing an explicit final attribute (but as I said, the spec does not allow templated functions in interfaces, even if the compiler does).

I have to say that the fact that this compiles at all seems like a bug to me according to [1], especially since the method in A is called:

---
import std.stdio;

public class A
{
    public void func(T)(T t)
    {
        writeln("Within A");
    }
}

public class B : A
{
    override public void func(T)(T t)
    {
        writeln("Within B");
    }
}

void main()
{
    A a = new B();
    a.func(1);
}
---

https://dpaste.dzfl.pl/f3d5beff2e51

If the function is "final", even if implicitly so, the "override" should fail according to the spec as I, and I guess 99% of the people [2], understand it.

[1]: https://dlang.org/spec/function.html#virtual-functions
[2]: OK, technically not, since it just says that "Functions marked as final may not be overridden in a derived class [...]" and this function is not *marked* as final, but implicitly final... still...
Re: Template method in interfaces
On Wednesday, 10 August 2016 at 15:25:40 UTC, Lodovico Giaretta wrote: Because templated functions cannot be virtual, it follows that I.func is final. Having no body, the compiler thinks that its body will be found by the linker in another object file, but this does not happen, so the linker complains. I.func being final, C.func just hides it, so you would not incur any problem if you called func explicitly on an object of type C. So what you found is not a bug, but some unintuitive behaviour due to templated functions being implicitly final and forward declarations. Maybe the compiler should emit a warning about implicitly-final functions in interfaces.

Would it even make sense to "force" (deprecation warning) a "final" keyword in any implicitly-final function (I wasn't even aware of those, I have to admit)? It would make things much clearer, like with "override"...
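A minimal sketch of the hiding behaviour described above (my reading of the explanation, untested): calling through the derived class's static type finds the template in C, while calling through the interface instantiates the bodyless declaration in I and leaves an unresolved symbol for the linker.

```
public interface I
{
    void func(T)(T t); // implicitly final; no body, so no symbol is emitted
}

public class C : I
{
    void func(T)(T t) { } // hides I.func rather than overriding it
}

void main()
{
    C c = new C();
    c.func(1);    // fine: C.func!int is instantiated and has a body

    I i = c;
    // i.func(1); // compiles, but fails at link time: I.func!int has no body
}
```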
Template method in interfaces
I'm not sure if the following is even expected to work, since I'm not sure what the vtable for the interface would look like (well, that would be applicable to any overridden templated method, though):

---
public interface I
{
    void func(T)(T t);
}

public class C : I
{
    void func(T)(T t)
    {
    }
}

void main()
{
    I i = new C();
    i.func(1);
}
---

But since the error I get is from the linker, and not the compiler, I guess that's somehow a bug? Or how should it work then?

https://dpaste.dzfl.pl/7a14fa074673

/d31/f76.o: In function `_Dmain':
/d31/f76.d:(.text._Dmain+0x24): undefined reference to `_D3f761I11__T4funcTiZ4funcMFiZv'
collect2: error: ld returned 1 exit status
--- errorlevel 1

PS: Now I see [1] that it shouldn't, so perhaps the compiler should reject templated methods in interfaces from the beginning?

[1]: http://forum.dlang.org/post/jg504s$1f7t$1...@digitalmars.com
Re: Best way of checking for a templated function instantiation
On Wednesday, 10 August 2016 at 13:40:30 UTC, Meta wrote: On Wednesday, 10 August 2016 at 13:37:47 UTC, Meta wrote: static assert(__traits(compiles, auto _ = S.init.opBinary!"+"(int.init)); Made a typo, this should be: static assert(__traits(compiles, { auto _ = S.init.opBinary!"+"(int.init); }));

Hi! Thanks, that would do! Just out of curiosity, would there be any way to check just that the function is defined, like what "hasMember" would do, without caring about argument number, types, etc.? Ideally something like: __traits(hasMember, S, "opBinary!\"+\"")
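For what it's worth, a sketch based on the behaviour described in the original post below (not a verified answer): `hasMember` already reports the presence of the template by name, so a name-only check needs no instantiation arguments, and instantiating the template without calling it narrows the check to one operator without fixing the parameter types.

```
struct S
{
    int opBinary(string op)(int i) if (op == "+") { return 0; }
}

// True whenever a member called opBinary exists, regardless of how
// (or whether) it can be instantiated.
static assert(__traits(hasMember, S, "opBinary"));

// Instantiation check: specific operator, but no argument types involved.
static assert(__traits(compiles, S.opBinary!"+"));
static assert(!__traits(compiles, S.opBinary!"-"));
```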
Best way of checking for a templated function instantiation
Hi, I'm trying to check at compilation time if a given type implements some operator (let's assume it's '+' in this case), without caring about the type of the parameters it accepts. Since operator overloading is expressed in D through templated functions, what is the preferred way of checking if a template is / can be instantiated with a given parameter list? So far I've come up with a solution using __traits(compiles, ...), but perhaps it's not 100% reliable -I'm no expert in template wizardry-, or there are better options. I also tried with hasMember, but it apparently only shows that "opBinary" is indeed present, but nothing more:

---
void main()
{
    struct S
    {
        int opBinary(string op)(int i) if (op == "+")
        {
            return 0;
        }
    }

    static assert(__traits(compiles, S.opBinary!"+"));
    static assert(!__traits(compiles, S.opBinary!"-"));
}
---
Re: Problems with -fPIC, libraries and exceptions (in linux?)
Just as a follow-up, I think it's looking more and more like a compiler bug. It works properly both with gdc and ldmd2. Should I make a bug report about that?
Problems with -fPIC, libraries and exceptions (in linux?)
Hi! I've stumbled across the following problem: when I raise an exception from a (statically linked) library that was compiled with -fPIC, I get a segmentation fault. Example:

-- libfoo/dub.json
{
    "name" : "foo",
    "description" : "Exception raising lib",
    "dflags" : [ "-fPIC" ]
}
--

-- libfoo/source/foo.d
module foo;

public void throwIt()
{
    throw new Exception("This is an exception!");
}
--

-- bar/dub.json
{
    "name" : "bar",
    "description" : "uses libfoo",
    "dependencies" : { "foo" : "*" }
}
--

-- bar/source/app.d
import foo;

void main()
{
    throwIt();
}
--

If I run "bar" (after libfoo is added through "dub add-local", of course), I get a segmentation fault (the exception cannot even be caught). If I remove "-fPIC" I get the usual stack trace and I can catch the exception as well. Is this a compiler bug or is there a reason for it? I'm using "DMD64 D Compiler v2.071.1"; I haven't tried yet with ldc or gdc.

P.S.: This is a simplified test case; the reason why I'm trying -fPIC is that I want to link a dependency statically into a .so file which in turn will be dynamically loaded as a plugin.
Re: Strange rbtree behaviour
On Thursday, 7 July 2016 at 09:46:25 UTC, Lodovico Giaretta wrote: On Thursday, 7 July 2016 at 09:40:57 UTC, Lodovico Giaretta wrote: Initially it looks very surprising, but then if you add `writeln(B.init.col[]);` you can easily find out what's going on. And I'm quite sure it's expected behaviour. An RBTree is just a pointer to the memory containing the actual tree. Your `col`s have different addresses because they are different copies of the same pointer. If you cast `col` to a pointer and write the address it's pointing at, you find out that the two structures are pointing to the same memory. This is because assignments used in a structure declaration are not re-executed for each instantiation. Instead, they are executed once to create the `.init` member of the type, which is then bit-copied onto every other instance. So your code does this:

---
B.init.col = new RBTree(...);
B b1 = B.init;
B b2 = B.init;
---

Still, if I make B a class instead of a struct (and instantiate it using new), I get the same result... Do classes behave the same as structs in this regard? I mean, static initializers compared to "this" (now that I write it, I guess the word "static" gives an important clue...). Anyway, glad to know what was happening... still a bit unintuitive, but I guess it makes sense after all. Thanks!!
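A sketch of how to get per-instance trees instead (my own illustration, not from the thread): initialize the container in a constructor, which runs for every instance, rather than in the field initializer, which runs only once to build `.init`. Structs have no parameterless constructor, so a factory function does the same job there.

---
import std.container.rbtree;

struct A { int i; }
alias Tree = RedBlackTree!(A, "a.i < b.i");

class B
{
    Tree col;
    this() { col = new Tree; } // runs once per instance
}

struct BS
{
    Tree col;
    static BS create() // structs can't have a parameterless constructor
    {
        return BS(new Tree);
    }
}

void main()
{
    auto b1 = new B;
    auto b2 = new B;
    b1.col.insert(A(5));
    assert(b1.col.length == 1 && b2.col.length == 0); // now independent
}
---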
Strange rbtree behaviour
Hi! I am seeing what seems to me a very strange behaviour with rbtrees:

---
import std.stdio;
import std.container.rbtree;

public struct A
{
    public int i;

    public this(int _i)
    {
        this.i = _i;
    }
}

public struct B
{
    public auto col = new RedBlackTree!(A, "a.i < b.i");
}

void main()
{
    B b1;
    B b2;
    b1.col.insert(A(5));
    assert(&(b1.col) != &(b2.col));
    writeln(b1.col.length, " ", b2.col.length);
    writeln(b1.col[], " ", b2.col[]);
}
---

I get the (to me) surprising result of:

---
1 1
[A(5)] [A(5)]
---

Is this the expected result? If so, why? I'd have expected two new, empty and different rbtrees to be created, and in fact that's what happens if I declare them as local variables inside main(). In this case, two different rbtrees are indeed created (as seen with the assertion), but they apparently point to the same underlying data...

Thanks!