Re: Line numbers in backtraces (2017)
On Thursday, 2 November 2017 at 19:05:46 UTC, Tobias Pankrath wrote: Including Phobos? Your posted backtrace looks to me like templates instantiated within Phobos, so I think you'd need Phobos with debug symbols for those lines.
---
int main(string[] argv) { return argv[1].length > 0; }
---
~ [i] % rdmd -g -debug test.d
core.exception.RangeError@test.d(3): Range violation
No difference when I compile with 'dmd -g -debug' and run it manually. That Error is thrown from within druntime. If you want to see line numbers for backtrace locations within druntime, you need to compile druntime with debug symbols. Also `-debug` only changes conditional compilation behaviour[1]. [1] https://dlang.org/spec/version.html#DebugCondition
Re: Line numbers in backtraces (2017)
On Wednesday, 1 November 2017 at 06:44:44 UTC, Tobias Pankrath wrote: On Tuesday, 31 October 2017 at 11:21:30 UTC, Moritz Maxeiner wrote: On Tuesday, 31 October 2017 at 11:04:57 UTC, Tobias Pankrath wrote: [...] ??:? pure @safe void std.exception.bailOut!(Exception).bailOut(immutable(char)[], ulong, const(char[])) [0xab5c9566] ??:? pure @safe bool std.exception.enforce!(Exception, bool).enforce(bool, lazy const(char)[], immutable(char)[], ulong) [0xab5c94e2] I've found this StackOverflow question from 2011 [1] and if I remember correctly this could be fixed by adding -L--export-dynamic, which already is part of my dmd.conf [...] [1] https://stackoverflow.com/questions/8209494/how-to-show-line-numbers-in-d-backtraces Does using dmd's `-g` option (compile with debug symbols) not work[1]? [1] This is also what the answer in your linked SO post suggests. Of course I've tried this. Including Phobos? Your posted backtrace looks to me like templates instantiated within Phobos, so I think you'd need Phobos with debug symbols for those lines.
Re: Line numbers in backtraces (2017)
On Tuesday, 31 October 2017 at 11:04:57 UTC, Tobias Pankrath wrote: [...] ??:? pure @safe void std.exception.bailOut!(Exception).bailOut(immutable(char)[], ulong, const(char[])) [0xab5c9566] ??:? pure @safe bool std.exception.enforce!(Exception, bool).enforce(bool, lazy const(char)[], immutable(char)[], ulong) [0xab5c94e2] I've found this StackOverflow question from 2011 [1] and if I remember correctly this could be fixed by adding -L--export-dynamic, which already is part of my dmd.conf [...] [1] https://stackoverflow.com/questions/8209494/how-to-show-line-numbers-in-d-backtraces Does using dmd's `-g` option (compile with debug symbols) not work[1]? [1] This is also what the answer in your linked SO post suggests.
Re: Why do I have to cast arguments from int to byte?
On Tuesday, 10 October 2017 at 19:55:36 UTC, Chirs Forest wrote: I keep having to make casts like the following and it's really rubbing me the wrong way: void foo(T)(T bar){...} byte bar = 9; [...] Why? Because of integer promotion [1], which is inherited from C. [1] https://dlang.org/spec/type.html#integer-promotions
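A minimal sketch (not from the original thread) showing the promotion rule in action:

```d
// Integer promotion: arithmetic on `byte` operands is carried out in
// `int`, so narrowing the result back to `byte` requires a cast.
void main()
{
    byte a = 9;
    byte b = 1;
    auto c = a + b;              // c is inferred as int due to promotion
    static assert(is(typeof(c) == int));
    byte d = cast(byte)(a + b);  // explicit narrowing cast
    assert(d == 10);
}
```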
Re: scope(exit) and destructor prioity
On Monday, 18 September 2017 at 20:55:21 UTC, Sasszem wrote: If I write "auto a = new De()", then it calls the scope first, no matter where I place it. Because with `new` a) your struct object is located on the heap (and referred to by pointer - `De*`) instead of the stack (which means no destructors for it are called at function scope end), and b) the lifetime of your struct object is determined by D's garbage collector, which may or may not eventually collect it, finalizing it in the process (calling the destructor, as D doesn't separate finalizers and destructors a.t.m.). In your case, it sounds like the GC collection cycle that (in the current implementation) occurs just before druntime shutdown collects it. I highly recommend reading The GC Series on the D blog [1]. [1] https://dlang.org/blog/the-gc-series/
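A minimal sketch of the difference (hypothetical `De` struct that counts destructor runs):

```d
// A struct value on the stack is destroyed deterministically at scope
// end; one created with `new` lives on the GC heap and is only
// finalized whenever the GC collects it (possibly never).
struct De
{
    static int dtorCalls;
    ~this() { ++dtorCalls; }
}

void onStack() { De a; }           // destructor runs when this returns
void onHeap()  { De* p = new De; } // destructor deferred to the GC

void main()
{
    onStack();
    assert(De.dtorCalls == 1);
    onHeap();
    // No guarantee the heap object's destructor has run at this point.
}
```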
Re: My friend can't install DMD 2.076.0 after he deleted contents of C:\D
On Sunday, 17 September 2017 at 05:33:12 UTC, rikki cattermole wrote: Skip Revo-Uninstaller, no idea why you'd ever use such trial software. Anyway what you want is CCleaner, standard software that all Windows installs should have on hand. http://blog.talosintelligence.com/2017/09/avast-distributes-malware.html https://www.piriform.com/news/blog/2017/9/18/security-notification-for-ccleaner-v5336162-and-ccleaner-cloud-v1073191-for-32-bit-windows-users
Re: OpIndex/OpIndexAssign strange order of execution
On Monday, 18 September 2017 at 15:11:34 UTC, Moritz Maxeiner wrote: gets rewritten to
---
t.opIndex("b").opIndexAssign(t["a"].value, "c");
---
Sorry, forgot one level of rewriting:
---
t.opIndex("b").opIndexAssign(t.opIndex("a").value, "c");
---
Re: OpIndex/OpIndexAssign strange order of execution
On Sunday, 17 September 2017 at 18:52:39 UTC, SrMordred wrote: struct Test{ [...] } Test t; As described in the spec [1], t["a"] = 100; gets rewritten to
---
t.opIndexAssign(100, "a");
---
, while t["b"]["c"] = t["a"].value; gets rewritten to
---
t.opIndex("b").opIndexAssign(t["a"].value, "c");
---
, which has to result in your observed output (left-to-right evaluation order):
//OUTPUT:
opIndexAssign : index : a , value : 100
opIndex : index : b
opIndex : index : a
property value : 100
opIndexAssign : index : c , value : 100
//EXPECTED OUTPUT
opIndexAssign : index : a , value : 100
opIndex : index : a
property value : 100
opIndex : index : b
opIndexAssign : index : c , value : 100
Is this right? AFAICT from the spec, yes. Your expected output does not match D's rewriting rules for operator overloading. I find this mix of operations on the left and right side of an assignment operator unexpected. Adding some more examples to the spec to show the results of the rewriting rules could be useful, but AFAICT it's unambiguous. On Monday, 18 September 2017 at 13:38:48 UTC, SrMordred wrote: Should I report this as a bug? Not AFAICT. I tried equivalent C++ code and it executes in the expected order. D does not (in general) match C++ semantics. [1] https://dlang.org/spec/operatoroverloading.html
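A runnable reduction of the thread's struct (member names assumed) that records the call order:

```d
// Logs the call order for t["b"]["c"] = t["a"].value, which lowers to
// t.opIndex("b").opIndexAssign(t.opIndex("a").value, "c") and is
// evaluated left to right.
string[] log;

struct Test
{
    int value = 100;
    Test opIndex(string i) { log ~= "opIndex " ~ i; return this; }
    void opIndexAssign(int v, string i) { log ~= "opIndexAssign " ~ i; }
}

void main()
{
    Test t;
    t["b"]["c"] = t["a"].value;
    assert(log == ["opIndex b", "opIndex a", "opIndexAssign c"]);
}
```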
Re: extern(C) enum
On Monday, 18 September 2017 at 02:04:49 UTC, bitwise wrote: The following code will run fine on Windows, but crash on iOS due to the misaligned access: Interesting, does iOS crash such a process intentionally, or is it a side effect?
---
char data[8];
int i = 0x;
int* p = (int*)&data[1];
---
Isn't this already undefined behaviour (6.3.2.3 p.7 of C11 [1] - present in earlier versions also, IIRC)?
---
*p++ = i;
*p++ = i;
*p++ = i;
---
The last of these is also a buffer overflow. [1] http://iso-9899.info/n1570.html
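As an aside, a hedged D sketch of the usual portable workaround: copy bytes instead of dereferencing a misaligned pointer.

```d
// Reads an int from a possibly unaligned address by copying bytes,
// avoiding the undefined/trapping misaligned dereference.
import core.stdc.string : memcpy;

int readUnaligned(const(ubyte)* p)
{
    int v;
    memcpy(&v, p, v.sizeof);
    return v;
}

void main()
{
    ubyte[8] data;
    data[1 .. 5] = [0x78, 0x56, 0x34, 0x12];
    version (LittleEndian)
        assert(readUnaligned(&data[1]) == 0x12345678);
}
```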
Re: Assertion Error
On Wednesday, 13 September 2017 at 15:12:57 UTC, Vino.B wrote: On Wednesday, 13 September 2017 at 11:03:38 UTC, Moritz Maxeiner wrote: On Wednesday, 13 September 2017 at 07:39:46 UTC, Vino.B wrote: Hi Max, [...] Program Code: [...]
---
foreach (string Fs; parallel(SizeDirlst[0 .. $], 1))
{
    auto FFs = Fs.strip;
    auto MSizeDirList = task(&coSizeDirList, FFs, SizeDir);
    MSizeDirList.executeInNewThread();
    auto MSizeDirListData = MSizeDirList.workForce;
    MSresult.get ~= MSizeDirListData;
}
---
[...]
---
foreach (string Fs; parallel(SizeDirlst[0 .. $], 1))
{
    MSresult.get ~= coSizeDirList(Fs.strip, SizeDir);
}
---
Hi Max, It's Moritz, not Max. ;) Below is the explanation of the above code. [...] AFAICT that's a reason why you want parallelization of coSizeDirList, but not why you need to spawn another thread inside of an *already parallelized* task. Try my shortened parallel foreach loop vs your longer one and monitor system load (threads, memory, etc).
Re: Assertion Error
On Wednesday, 13 September 2017 at 07:39:46 UTC, Vino.B wrote: On Tuesday, 12 September 2017 at 21:01:26 UTC, Moritz Maxeiner wrote: On Tuesday, 12 September 2017 at 19:44:19 UTC, vino wrote: Hi All, I have a small piece of code which executes perfectly 8 out of 10 times; very rarely it throws an assertion error, so is there a way to find which line of code is causing this error? You should be getting the line number as part of the crash, like here:
--- test.d ---
void main(string[] args) { assert(args.length > 1); }
---
$ dmd -run test.d
core.exception.AssertError@test.d(3): Assertion failure
[Stack trace]
If you don't, what are the steps to reproduce? Hi Max, I tried to run the code at least 80+ times and it ran without any issue; I will let you know in case I hit the same issue in future. Below is the piece of code, please do let me know if you find any issue with it. Program Code: [...]
---
foreach (string Fs; parallel(SizeDirlst[0 .. $], 1))
{
    auto FFs = Fs.strip;
    auto MSizeDirList = task(&coSizeDirList, FFs, SizeDir);
    MSizeDirList.executeInNewThread();
    auto MSizeDirListData = MSizeDirList.workForce;
    MSresult.get ~= MSizeDirListData;
}
---
From reading it I don't see anything that I would expect to assert, but I am wondering why you first parallelize your work with a thread pool (`parallel(...)`) and then inside each (implicitly created) task (that is already being serviced by a thread in the thread pool) you create another task, have it executed in a new thread, and make the thread pool thread wait for that thread to complete servicing that new task. This should yield the same result, but without the overhead of spawning additional threads:
---
foreach (string Fs; parallel(SizeDirlst[0 .. $], 1))
{
    MSresult.get ~= coSizeDirList(Fs.strip, SizeDir);
}
---
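A minimal sketch of the recommended pattern (illustrative data, not the thread's actual directory code):

```d
// parallel() already distributes loop bodies over a task pool; doing
// the work directly in the body avoids spawning an extra thread per item.
import std.parallelism : parallel;
import core.atomic : atomicOp;

shared int total;

void main()
{
    auto items = [1, 2, 3, 4];
    foreach (i; parallel(items, 1))
        atomicOp!"+="(total, i); // work done on the pool thread itself
    assert(total == 10);
}
```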
Re: Assertion Error
On Tuesday, 12 September 2017 at 19:44:19 UTC, vino wrote: Hi All, I have a small piece of code which executes perfectly 8 out of 10 times; very rarely it throws an assertion error, so is there a way to find which line of code is causing this error? You should be getting the line number as part of the crash, like here:
--- test.d ---
void main(string[] args) { assert(args.length > 1); }
---
$ dmd -run test.d
core.exception.AssertError@test.d(3): Assertion failure
[Stack trace]
If you don't, what are the steps to reproduce?
Re: Adding empty static this() causes exception
On Tuesday, 12 September 2017 at 19:59:52 UTC, Joseph wrote: On Tuesday, 12 September 2017 at 10:08:11 UTC, Moritz Maxeiner wrote: On Tuesday, 12 September 2017 at 09:11:20 UTC, Joseph wrote: I have two nearly duplicate files. I added a static this() to initialize some static members of an interface. On one file when I add an empty static this() it crashes, while the other one does not. The exception that happens is "Cyclic dependency between module A and B". Why does this occur on an empty static this? Is it being run twice or something? Any way to fix this? The compiler errors because the spec states [1] "Each module is assumed to depend on any imported modules being statically constructed first", which means two modules that import each other and both use static construction have no valid static construction order. One reason, I think, why the spec states that is because in theory it would not always be possible for the compiler to decide the order, e.g. when executing them changes the ("shared") execution environment's state:
---
module a;
import b;
static this()
{
    // Does something to the OS state
    syscall_a();
}
---
---
module b;
import a;
static this()
{
    // Also does something to the OS state
    syscall_b();
}
---
The "fix" as I see it would be to either not use static construction in modules that import each other, or propose a set of rules for the spec that define an always solvable subset for the compiler. [1] https://dlang.org/spec/module.html#order_of_static_ctor The compiler shouldn't arbitrarily force one to make arbitrary decisions that waste time and money. My apologies, I confused compiler and runtime when writing that reply (the detection algorithm resulting in your crash is built into druntime). The runtime, however, is compliant with the spec on this AFAICT. The compiler should only run the static this's once per module load anyways, right?
Static module constructors are run once per module per thread [1] (if you want once per module you need shared static module constructors). If it is such a problem then some way around it should be included: @force static this() { } ? The only current workaround is what Biotronic mentioned: You can customize the druntime cycle detection via the --DRT-oncycle command line option [2]. The compiler shouldn't make assumptions about the code I write and always choose the worse case, it becomes an unfriendly relationship at that point. If your point remains when replacing 'compiler' with 'runtime': It makes no assumptions in the case you described, it enforces the language specification. [1] https://dlang.org/spec/module.html#staticorder [2] https://dlang.org/spec/module.html#override_cycle_abort
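A sketch of the once-per-thread vs once-per-process distinction:

```d
// `static this` runs once per thread; `shared static this` runs once
// per process. Spawning one extra thread makes the difference visible.
import core.thread : Thread;
import core.atomic : atomicOp;

shared int perThread;
shared int perProcess;

static this()        { atomicOp!"+="(perThread, 1); }
shared static this() { atomicOp!"+="(perProcess, 1); }

void main()
{
    auto t = new Thread({ }).start();
    t.join();
    assert(perThread == 2);  // main thread + spawned thread
    assert(perProcess == 1);
}
```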
Re: Adding empty static this() causes exception
On Tuesday, 12 September 2017 at 09:11:20 UTC, Joseph wrote: I have two nearly duplicate files. I added a static this() to initialize some static members of an interface. On one file when I add an empty static this() it crashes, while the other one does not. The exception that happens is "Cyclic dependency between module A and B". Why does this occur on an empty static this? Is it being run twice or something? Any way to fix this? The compiler errors because the spec states [1] "Each module is assumed to depend on any imported modules being statically constructed first", which means two modules that import each other and both use static construction have no valid static construction order. One reason, I think, why the spec states that is because in theory it would not always be possible for the compiler to decide the order, e.g. when executing them changes the ("shared") execution environment's state:
---
module a;
import b;
static this()
{
    // Does something to the OS state
    syscall_a();
}
---
---
module b;
import a;
static this()
{
    // Also does something to the OS state
    syscall_b();
}
---
The "fix" as I see it would be to either not use static construction in modules that import each other, or propose a set of rules for the spec that define an always solvable subset for the compiler. [1] https://dlang.org/spec/module.html#order_of_static_ctor
Re: Ranges seem awkward to work with
On Tuesday, 12 September 2017 at 01:13:29 UTC, Hasen Judy wrote: Is this is a common beginner issue? I remember using an earlier version of D some long time ago and I don't remember seeing this concept. D's ranges can take getting used to, so if you haven't already, these two articles are worth the read to get familiar with them imho [1][2]. One way to look at it is that input ranges (empty,front,popFront) model iteration of the elements of some data source (another is that they model a monotonic advancing data source). [1] http://www.drdobbs.com/architecture-and-design/component-programming-in-d/240008321 [2] https://wiki.dlang.org/Component_programming_with_ranges
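For instance, a minimal hand-written input range (a sketch, not from the thread):

```d
// An input range needs only empty/front/popFront; this one yields the
// integers [front, last) and composes with Phobos algorithms.
struct Counter
{
    int front; // current element, doubles as the range's `front`
    int last;
    bool empty() const { return front >= last; }
    void popFront() { ++front; }
}

void main()
{
    import std.algorithm.comparison : equal;
    import std.algorithm.iteration : map;
    assert(equal(Counter(0, 3).map!(x => x * 2), [0, 2, 4]));
}
```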
Re: Address of data that is static, be it shared or tls or __gshared or immutable on o/s
On Monday, 11 September 2017 at 22:38:21 UTC, Walter Bright wrote: If an address is taken to a TLS object, any relocations and adjustments are made at the time the pointer is generated, not when the pointer is dereferenced. Could you elaborate on that explanation more? The way I thought about it was that no matter where the data is actually stored (global, static, tls, heap, etc.), in order to access it by pointer it must be mapped into virtual memory (address) space. From that it follows that each thread will have its own "slice" of that address space. Thus, if you pass an address into such a slice (that happens to be mapped to the TLS of a thread) to other threads, you can manipulate the first thread's TLS data (and cause the usual data races without proper synchronization, of course).
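A sketch of that last point: the address of a TLS variable is an ordinary pointer into the process address space, so another thread can mutate the first thread's instance through it.

```d
// A TLS variable has one instance per thread, but its address is a
// plain pointer, so sharing that pointer lets another thread mutate
// this thread's instance (a data race without synchronization).
import core.thread : Thread;

int tlsVar = 1; // thread-local by default in D

void main()
{
    int* p = &tlsVar;        // address of *this thread's* instance
    auto t = new Thread({
        assert(tlsVar == 1); // the new thread's own, freshly-initialized instance
        *p = 42;             // mutates the main thread's instance
    }).start();
    t.join();
    assert(tlsVar == 42);
}
```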
Re: betterC and struct destructors
On Monday, 11 September 2017 at 10:18:41 UTC, Oleg B wrote: Hello. I try using a destructor in betterC code and it works if the outer function doesn't return a value (void). Code in `scope (exit)` works the same (if the func is void all is ok). In documentation I found https://dlang.org/spec/betterc.html#consequences paragraph 12: Struct deconstructors. [...] It's an implementation issue [1][2][3]. [1] https://issues.dlang.org/show_bug.cgi?id=17603 [2] https://github.com/dlang/dmd/pull/6923 [3] https://www.reddit.com/r/programming/comments/6ijwek/dlangs_dmd_now_compiles_programs_in_betterc_mode/dj7dncc/
Re: Can attributes trigger functionality?
On Wednesday, 6 September 2017 at 02:43:20 UTC, Psychological Cleanup wrote: I'm having to create a lot of boilerplate code that creates "events" and corresponding properties (getter and setter). I'm curious if I can simplify this without a string mixin. If I create my own attribute like @Event double foo(); can I write any code that will trigger when the event is used and add more code (such as the setter property and events that I need)? Obviously I could write some master template that scans everything, but that seems to be far too much overkill. A string mixin is probably my only option but is a bit ugly for me. Since attributes can be defined by structures it seems natural that we could put functionality in them that is triggered when they are used, but I'm unsure if D has such capabilities. Thanks. User defined attributes (UDAs) are in and of themselves only (compile time) introspectable decoration [1] (they only carry information). If you want to trigger specific behaviour for things that are attributed with a UDA, you indeed need some custom-written active component that introspects using `__traits(getAttributes, symbol)` and generates/injects the behaviour (e.g. using a string mixin as you noted). [1] https://dlang.org/spec/attribute.html#UserDefinedAttribute
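A sketch of such an active component (all names here - `Event`, `genSetters`, `set_bar` - are assumptions, not from the thread): scan a type's members for the marker UDA and mix in a setter per match.

```d
// UDA-driven code generation: introspect @Event members of a type and
// generate a free setter function for each one via a string mixin.
import std.traits : hasUDA;

enum Event; // marker UDA (a bare type used as an attribute)

struct Foo
{
    @Event double bar;
    double baz; // no UDA, no setter generated
}

string genSetters(T)()
{
    string code;
    static foreach (m; __traits(allMembers, T))
        static if (hasUDA!(__traits(getMember, T, m), Event))
            code ~= "void set_" ~ m ~ "(ref " ~ T.stringof
                  ~ " o, typeof(" ~ T.stringof ~ "." ~ m ~ ") v) { o."
                  ~ m ~ " = v; }\n";
    return code;
}

mixin(genSetters!Foo()); // injects: void set_bar(ref Foo o, double v) { ... }

void main()
{
    Foo f;
    set_bar(f, 3.5);
    assert(f.bar == 3.5);
}
```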
Re: Bug in D!!!
On Monday, 4 September 2017 at 03:08:50 UTC, EntangledQuanta wrote: On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner wrote: On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta wrote: On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner wrote: On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta wrote: On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote: [...] The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning). Why? Don't you realize that the contexts matter and [...] Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. ... Yes, in an absolute sense, it will take more time to have to parse the context. But that sounds like a case of "pre-optimization". I don't agree, because once something is in the language syntax, removing it is a long deprecation process (years), so these things have to be considered well beforehand. That's true. But I don't see how it matters too much in the current argument. Remember, I'm not advocating using 'in' ;) [...] It matters, because that makes it not be _early_ optimization. If we are worried about saving time then what about the tooling? compiler speed? IDE startup time? etc? All these take time too and optimizing one single aspect, as you know, won't necessarily save much time. Their speed generally does not affect the time one has to spend to understand a piece of code. Yes, but you are picking and choosing. [...] I'm not (in this case), as the picking is implied by discussing PL syntax.
So, in this case I have to go with the practical side of saying that it may be theoretically slower, but it is such an insignificant cost that it is an over-optimization. I think you would agree, at least in this case. Which is why I stated I'm opposing overloading `in` here as a matter of principle, because even small costs sum up in the long run if we get into the habit of just overloading. I know, you just haven't convinced me enough to change my opinion that it really matters at the end of the day. It's going to be hard to convince me since I really don't feel as strongly as you do about it. That might seem like a contradiction, but I'm not trying to convince you of anything. Again, the exact syntax is not important to me. If you really think it matters that much to you and it does (you are not tricking yourself), then use a different keyword. My proposal remains to not use a keyword and just upgrade existing template specialization. [...] You just really haven't stated that principle in any clear way for me to understand what you mean until now. i.e., Stating something like "... of a matter of principle" without stating which principle is ambiguous. Because some principles are not real. Some base their principles on fictitious things, some on abstract ideals, etc. Basing something on a principle that is firmly established is meaningful. I've stated the principle several times in varied forms of "syntax changes need to be worth the cost". I have a logical argument against your absolute restriction though... in that it causes one to have to use more symbols. I would imagine you are against stuff like using "in1", "in2", etc because they visibly are too close to each other. It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle.
If it can be overcome by demonstrating that it can't sensibly be done without more overloading and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion. [...] To simplify it down: Do you have the same problems with all the ambiguities that already exist in almost all programming languages that everyone is ok with on a practical level on a daily basis? Again, you seem to mix ambiguity and context sensitivity. W.r.t. the latter: I have a problem with those occurrences where I don't think the costs I associate with it are outweighed by its benefits (e.g. with the `in` keyword's overloaded meaning for AA's). Not mixing, I exclude real ambiguities because they have no real meaning. I thought I mentioned something about that way back when, but who knows... Although, I'd be curious if any programming languages existed whose grammar was ambiguous and actually could be realized? Sure, see the dangling else problem I mentioned. It's just that people basically all agree on one of the choices and all stick with it (despite the grammar being formally
Re: Bug in D!!!
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta wrote: On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner wrote: On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta wrote: On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote: [...] The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning). Why? Don't you realize that the contexts matter and [...] Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. ... Yes, in an absolute sense, it will take more time to have to parse the context. But that sounds like a case of "pre-optimization". I don't agree, because once something is in the language syntax, removing it is a long deprecation process (years), so these things have to be considered well beforehand. If we are worried about saving time then what about the tooling? compiler speed? IDE startup time? etc? All these take time too and optimizing one single aspect, as you know, won't necessarily save much time. Their speed generally does not affect the time one has to spend to understand a piece of code. Maybe the language itself should be designed so there are no ambiguities at all? A single simple for each function? A new keyboard design should be implemented (ultimately a direct brain to editor interface for the fastest time, excluding the time for development and learning)?
I assume you mean "without context sensitive meanings" instead of "no ambiguities", because the latter should be the case as a matter of course (and mostly is, with few exceptions such as the dangling else ambiguity in C and friends). Assuming the former: As I stated earlier, it needs to be worth the cost. So, in this case I have to go with the practical side of saying that it may be theoretically slower, but it is such an insignificant cost that it is an over-optimization. I think you would agree, at least in this case. Which is why I stated I'm opposing overloading `in` here as a matter of principle, because even small costs sum up in the long run if we get into the habit of just overloading. Again, the exact syntax is not important to me. If you really think it matters that much to you and it does (you are not tricking yourself), then use a different keyword. My proposal remains to not use a keyword and just upgrade existing template specialization. When I see something I try to see it at once rather [...] To really counter your argument: What about parentheses? They too have the same problem as `in`. They have perceived ambiguity... but they are not ambiguous. So your argument should be said about them too and you should be against them also, but are you? [To be clear here: foo()() and (3+4) have 3 different use cases of ()'s... The first is templated arguments, the second is function arguments, and the third is expression grouping] That doesn't counter my argument, it just states that parentheses have these costs, as well (which they do). The primary question would still be if they're worth that cost, which imho they are. Regardless of that, though, since they are already part of the language syntax (and are not going to be up for change), this is not something we could do something about, even if we agreed they weren't worth the cost.
New syntax, however, is up for that kind of discussion, because once it's in it's essentially set in stone (not quite, but *very* slow to remove/change because of backwards compatibility). [...] Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used. Yes, but you have only given the reason that it shouldn't be used because you believe that one shouldn't overload keywords because it makes it harder to parse the meaning. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard and we would have to find out who is more right. As I countered that in the above, I don't think your rebuttal is valid. Well, hopefully I countered that in my rebuttal of your rebuttal of my rebuttal ;) Not as far as I see it, though I'm willing to agree to disagree :) I have a logical argument against your absolute restriction though... in that it causes one to have to use more symbols. I would imagine you are against stuff like using "in1", "in2", etc because they visibly are too close to each other. It's not an absolute restriction, it's an absolute position from which I argue against including such overloading
Re: Bug in D!!!
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta wrote: On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote: [...] The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning). Why? Don't you realize that the contexts matter and [...] Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. Again, I'm not necessarily arguing for them, just saying that one shouldn't avoid them just to avoid them. [...] It's not about ambiguity for me, it's about readability. The more significantly different meanings you overload some keyword - or symbol, for that matter - with, the harder it becomes to read. I don't think that is true. Everything is hard to read. It's about experience. The more you experience something the clearer it becomes. Only with true ambiguity is something impossible. I realize that one can design a language to be hard to parse due to apparent ambiguities, but I am talking about cases where they can be resolved immediately (at most a few milliseconds). Experience helps, of course, but it doesn't change that it's still just that little bit slower. And every time we encourage such overloading it encourages more, which in the end sums up. You are making general statements, and it is not that I disagree, but it depends on context (everything does). In this specific case, I think it is extremely clear what `in` means, so it is effectively like using a different token. Again, everyone is different though and has different experiences that help them parse things more naturally.
I'm sure there are things that you might find easy that I would find hard. But that shouldn't stop me from learning about them. It makes me "smarter", to simplify the discussion. I am, because I believe it to be generally true for "1 keyword |-> 1 meaning" to be easier to read than "1 keyword and 1 context |-> 1 meaning" as the former inherently takes less time. [...] Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used. Yes, but you have only given the reason that it shouldn't be used because you believe that one shouldn't overload keywords because it makes it harder to parse the meaning. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard and we would have to find out who is more right. As I countered that in the above, I don't think your rebuttal is valid. I have a logical argument against your absolute restriction though... in that it causes one to have to use more symbols. I would imagine you are against stuff like using "in1", "in2", etc because they visibly are too close to each other. It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle. If it can be overcome by demonstrating that it can't sensibly be done without more overloading and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion. [...] I would much rather see it as a generalization of existing template specialization syntax [1], which this is t.b.h. just a superset of (current syntax allows limiting to exactly one, you propose limiting to 'n'):
---
foo(T: char)      // Existing syntax: Limit T to the single type `char`
foo(T: (A, B, C)) // New syntax: Limit T to one of A, B, or C
---
Yes, if this worked, I'd be fine with it. Again, I could care less. `:` == `in` for me as long as `:` has the correct meaning of "can be one of the following" or whatever.
But AFAIK, : is not "can be one of the following"(which is "in" or "element of" in the mathematical sense) but can also mean "is a derived type of". Right, ":" is indeed an overloaded symbol in D (and ironically, instead of with "in", I think all its meanings are valuable enough to be worth the cost). I don't see how that would interfere in this context, though, as we don't actually overload a new meaning (it's still "restrict this type to the thing to the right"). If that is the case then go for it ;) It is not a concern of mine. You tell me the syntax and I will use it. (I'd have no choice, of course, but if it's short and sweet then I won't have any problem). I'm discussing this as a matter of theory, I don't have a use for it. [...] Quoting a certain person (you know who you are) from DConf 2017: "Write a DIP". I'm quite happy to discuss this idea, but at the end of
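For reference, a sketch of what exists today versus an emulation of the proposed set restriction via a template constraint (the helper name `among` is my own, not from the thread):

```d
// Existing specialization limits T to a single type; a constraint with
// staticIndexOf emulates "T must be one of a finite set of types".
import std.meta : staticIndexOf;

string f(T : char)(T c) { return "char only"; }      // existing: one type

enum among(T, Ts...) = staticIndexOf!(T, Ts) != -1;

string g(T)(T v) if (among!(T, int, long, double))   // emulated: T in a set
{
    return "in the set";
}

void main()
{
    assert(f('c') == "char only");
    assert(g(1) == "in the set");
    static assert(!__traits(compiles, g("s")));      // string is rejected
}
```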
Re: Bug in D!!!
On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote: On Saturday, 2 September 2017 at 21:19:31 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta wrote: On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips wrote: I'd love being able to inherit and override generic functions in C#. Unfortunately C# doesn't use templates and I hit so many other issues where Generics just suck. I don't think it is appropriate to dismiss the need for the compiler to generate a virtual function for every instantiated T, after all, the compiler can't know you have a finite known set of T unless you tell it. But let's assume we've told the compiler that it is compiling all the source code and it does not need to compile for future linking. First the compiler will need to make sure all virtual functions can be generated for the derived classes. In this case the compiler must note the template function and validate all derived classes include it. That was easy. Next up each instantiation of the function needs a new v-table entry in all derived classes. Current compiler implementations will compile each module independently of the others; so this feature could be specified to work within the same module, or new semantics can be written up of how the compiler modifies already compiled modules and those which reference the compiled modules (the object sizes would be changing due to the v-table modifications). With those three simple changes to the language I think that this feature will work for every T. Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that are effectively compile time for D.
By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in )() would work or foo(T in )() the `in` keyword makes sense here and is not used nor ambiguous, I believe. While I agree that `in` does make sense for the semantics involved, it is already used to do a failable key lookup (return pointer to value or null if not present) into an associative array [1] and input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read). Yes, but they are independent, are they not? Maybe not. foo(T in Typelist)() in, as used here is not a input contract and completely independent. I suppose for arrays it could be ambiguous. The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning). For me, and this is just me, I do not find it ambiguous. I don't find different meanings ambiguous unless the context overlaps. Perceived ambiguity is not ambiguity, it's just ignorance... which can be overcome through learning. Hell, D has many cases where there are perceived ambiguities... as do most things. It's not about ambiguity for me, it's about readability. The more significantly different meanings you overload some keyword - or symbol, for that matter - with, the harder it becomes to read. But in any case, I could care less about the exact syntax. It's just a suggestion that makes the most logical sense with regard to the standard usage of in. If it is truly unambiguous then it can be used. Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used. 
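The two existing meanings of `in` that the discussion above refers to can be seen side by side in a few lines (a minimal sketch, not from the thread; names hypothetical):

```d
// `in` already means two things in D: a failable associative array
// lookup (pointer to value, or null) and an input contract.
int positive(int x)
in { assert(x > 0); } // `in` as an input contract
do { return x; }

void main()
{
    int[string] aa = ["a": 1];
    if (auto p = "a" in aa) // `in` as AA lookup: pointer or null
        assert(*p == 1);
    assert(("b" in aa) is null);
    assert(positive(3) == 3);
}
```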
Another alternative is foo(T of Typelist) which, AFAIK, of is not used in D and even most programming languages. Another could be foo(T -> Typelist) or even foo(T from Typelist) I would much rather see it as a generalization of existing template specialization syntax [1], which this is t.b.h. just a superset of (current syntax allows limiting to exactly one, you propose limiting to 'n'):
---
foo(T: char)      // Existing syntax: Limit T to the single type `char`
foo(T: (A, B, C)) // New syntax: Limit T to one of A, B, or C
---
Strictly speaking, this is exactly what template specialization is for, it's just that the current one only supports a single type instead of a set of types. Looking at the grammar rules, upgrading it like this is a fairly small change, so the cost there should be minimal. or whatever. Doesn't really matter. They all mean the same to me once the definition has been written in stone. Could use `foo(T eifjasldj Typelist)` for all I care. That's okay, but it does matter to me. The important thing for me is that such a simple syntax exists rather than the "complex syntaxes" that have already been given (which are ultimately syntaxes as
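For context, the existing specialization syntax referred to above can be seen in action with a small sketch (function name hypothetical); the specialized overload is preferred when the argument matches it exactly:

```d
// T : char restricts this overload to instantiation with exactly `char`;
// for any other argument type the unconstrained overload is chosen.
string describe(T : char)(T t) { return "char"; }
string describe(T)(T t) { return "generic"; }

void main()
{
    assert(describe('a') == "char");
    assert(describe(1) == "generic");
}
```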
Re: nested module problem
On Saturday, 2 September 2017 at 23:02:18 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 21:56:15 UTC, Jean-Louis Leroy wrote: [...] Hmmm I see...I was thinking of spinning the runtime part of my openmethods library into its own module (like here https://github.com/jll63/openmethods.d/tree/split-runtime/source/openmethods) but it looks like a bad idea... Why does it look like a bad idea (I don't see an immediate issue in the module structure either way)?
Re: nested module problem
On Saturday, 2 September 2017 at 21:56:15 UTC, Jean-Louis Leroy wrote: [...] Hmmm I see...I was thinking of spinning the runtime part of my openmethods library into its own module (like here https://github.com/jll63/openmethods.d/tree/split-runtime/source/openmethods) but it looks like a bad idea... Why does it look like a bad idea (I don't see an immediate issue in the module structure either way)?
Re: nested module problem
On Saturday, 2 September 2017 at 21:24:19 UTC, Jean-Louis Leroy wrote: On Saturday, 2 September 2017 at 20:48:22 UTC, Moritz Maxeiner wrote: So the compiler wants you to import it by the name it has inferred for you (The fix being either specifying the module name in foo/bar.d as `module foo.bar`, or importing it via `import bar;` in foo.d). [1] https://dlang.org/spec/module.html I thought of doing that, it merely changed the error. OK now I have: in foo.d: module foo; import foo.bar; in foo/bar.d: module foo.bar; $ dmd -c foo.d foo/bar.d foo/bar.d(1): Error: package name 'foo' conflicts with usage as a module name in file foo.d If I compile separately: jll@ORAC:~/dev/d/tests/modules$ dmd -I. -c foo.d foo/bar.d(1): Error: package name 'foo' conflicts with usage as a module name in file foo.d Yes, these now both fail because you cannot have a module `foo` and a package `foo` at the same time (they share a namespace), I forgot about that. jll@ORAC:~/dev/d/tests/modules$ dmd -I. -c foo/bar.d (same as before, no issue here) It believes that 'foo' is a package...because there is a 'foo' directory? You created the 'foo' package by specifying `module foo.bar` in foo/bar.d. I see that a workaround is to move foo.d to foo/package.d but I would like to avoid that. AFAIK you can't; consider:
--- baz.d ---
import foo;
---
in the same directory as foo.d. If foo/package.d exists (with `module foo` inside), what should baz.d import? foo.d or foo/package.d? The point being that we could have either used foo/package.d or foo.d for a package file, but not both (as that would create ambiguity), and package.d was chosen. [1] https://dlang.org/spec/module.html#package-module
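For illustration, a sketch of the layout the package-module rule implies (the file contents here are assumed, not taken from the thread): the former foo.d becomes the package module foo/package.d, so `foo` only ever names a package.

```d
// Hypothetical layout resolving the module/package name conflict:
//
//   foo/package.d:
//       module foo;        // the package module for package `foo`
//       public import foo.bar;
//
//   foo/bar.d:
//       module foo.bar;
//
//   baz.d (next to the foo directory):
//       import foo;        // unambiguous: resolves to foo/package.d
```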
Re: Bug in D!!!
On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta wrote: On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips wrote: I'd love being able to inherit and override generic functions in C#. Unfortunately C# doesn't use templates and I hit so many other issues where Generics just suck. I don't think it is appropriate to dismiss the need for the compiler to generate a virtual function for every instantiated T, after all, the compiler can't know you have a finite known set of T unless you tell it. But let's assume we've told the compiler that it is compiling all the source code and it does not need to compile for future linking. First the compiler will need to make sure all virtual functions can be generated for the derived classes. In this case the compiler must note the template function and validate all derived classes include it. That was easy. Next up each instantiation of the function needs a new v-table entry in all derived classes. Current compiler implementations will compile each module independently of the others; so this feature could be specified to work within the same module, or new semantics can be written up of how the compiler modifies already compiled modules and those which reference the compiled modules (the object sizes would be changing due to the v-table modifications). With those three simple changes to the language I think that this feature will work for every T. Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that are effectively compile time for D. By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in )() would work or foo(T in )() the `in` keyword makes sense here and is not used nor ambiguous, I believe.
While I agree that `in` does make sense for the semantics involved, it is already used to do a failable key lookup (return pointer to value or null if not present) into an associative array [1] and input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read). W.r.t. the idea in general: I think something like that could be valuable to have in the language, but since this essentially amounts to syntactic sugar (AFAICT), I'm not (yet) convinced that with `static foreach` being included it's worth the cost. [1] https://dlang.org/spec/expression.html#InExpression
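As a sketch of the "syntactic sugar" point: with `static foreach`, a finite set of virtual instantiations can already be generated explicitly today (class and type names below are hypothetical), which is roughly what a `foo(T in [int, double])` syntax would have to lower to anyway:

```d
import std.meta : AliasSeq;

// One virtual overload is generated per type in the finite list; derived
// classes can then override each one as usual.
alias Types = AliasSeq!(int, double);

class Base
{
    static foreach (T; Types)
    {
        string foo(T t) { return "base"; }
    }
}

class Derived : Base
{
    static foreach (T; Types)
    {
        override string foo(T t) { return "derived"; }
    }
}

void main()
{
    Base b = new Derived;
    assert(b.foo(1) == "derived");    // virtual dispatch for the int case
    assert(b.foo(1.0) == "derived");  // and for the double case
}
```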
Re: nested module problem
On Saturday, 2 September 2017 at 20:03:48 UTC, Jean-Louis Leroy wrote: So I have:
---
jll@ORAC:~/dev/d/tests/modules$ tree
.
├── foo
│   └── bar.d
└── foo.d
---
foo.d contains: import foo.bar; bar.d is empty. This means bar.d's module name will be inferred by the compiler [1], which will ignore the path you put it under, yielding the module name "bar", not "foo.bar" (one of the issues of doing otherwise would be how the compiler should know at which path depth the inference should start - and any solution to that other than simply ignoring the path would be full of special cases): Modules have a one-to-one correspondence with source files. The module name is, by default, the file name with the path and extension stripped off, and can be set explicitly with the module declaration. Now I try compiling: jll@ORAC:~/dev/d/tests/modules$ dmd -c foo.d This looks like a compiler bug to me (accepts invalid), though I'm not certain. jll@ORAC:~/dev/d/tests/modules$ dmd -c foo/bar.d (No issue here, just an empty module being compiled separately) So far so good. Now I try it the way dub does it: jll@ORAC:~/dev/d/tests/modules$ dmd -c foo.d foo/bar.d foo.d(1): Error: module bar from file foo/bar.d must be imported with 'import bar;' What's up? This doesn't work because of the inferred module name for foo/bar.d being "bar". So the compiler wants you to import it by the name it has inferred for you (The fix being either specifying the module name in foo/bar.d as `module foo.bar`, or importing it via `import bar;` in foo.d). [1] https://dlang.org/spec/module.html
Re: string to character code hex string
On Saturday, 2 September 2017 at 20:02:37 UTC, bitwise wrote: On Saturday, 2 September 2017 at 18:28:02 UTC, Moritz Maxeiner wrote: In UTF8: --- utfmangle.d --- void fun_ༀ() {} pragma(msg, fun_ༀ.mangleof); --- --- $ dmd -c utfmangle.d _D6mangle7fun_ༀFZv --- Only universal character names for identifiers are allowed, though, as per [1] [1] https://dlang.org/spec/lex.html#identifiers What I intend to do is this though: void fun(string s)() {} pragma(msg, fun!"ༀ".mangleof); which gives: _D7mainMod21__T3funVAyaa3_e0bc80Z3funFNaNbNiNfZv where "e0bc80" is the 3 bytes of "ༀ". Interesting, I wasn't aware of that (though after thinking about it, it does make sense, as identifiers can only have visible characters in them, while a string could have things such as control characters inside), thanks! That behaviour is defined here [1], btw (the line `CharWidth Number _ HexDigits`). [1] https://dlang.org/spec/abi.html#Value
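The hex-encoded string value parameter described above can be checked directly (a minimal sketch, function name hypothetical): the bytes of the string argument appear as hex digits in the mangled name, prefixed with the character width and length.

```d
void fun(string s)() {}

void main()
{
    import std.algorithm.searching : canFind;

    // "abc" is the bytes 0x61 0x62 0x63, so the mangling contains
    // "a3_616263": CharWidth 'a', length 3, then the bytes as hex digits.
    assert(fun!"abc".mangleof.canFind("a3_616263"));
}
```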
Re: Using closure causes GC allocation
On Saturday, 2 September 2017 at 18:59:30 UTC, Vino.B wrote: On Saturday, 2 September 2017 at 18:32:55 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 18:08:19 UTC, vino.b wrote: On Saturday, 2 September 2017 at 18:02:06 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 17:43:08 UTC, Vino.B wrote: [...] Line 25 happens because of `[a.name]`. You request a new array: the memory for this has to be allocated (the reason why the compiler says "may" is because sometimes, e.g. if the array literal itself contains only literals, the allocations needn't happen at runtime and no GC call is necessary). Since you don't actually use the array, get rid of it: [...] Hi, Thank you for your help; the DMD version that I am using is DMD 2.076.0 and yes, I am on Windows. Please post a compilable, minimal example including how that function gets called that yields you that compiler output. Hi, Please find the example code below, [...] Cannot reproduce under Linux with dmd 2.076.0 (with commented out Windows-only check). I'll try to see what happens on Windows once I have a VM setup. Another similar issue: I removed the [a.name] and the issue in line 25 is resolved, but for another function I am getting the same error
---
string[][] cleanFiles(string FFs, string Step)
{
    auto dFiles = dirEntries(FFs, SpanMode.shallow)
        .filter!(a => a.isFile)
        .map!(a => [a.name, a.timeCreated.toSimpleString[0 .. 20]]).array; // -> Issue in this line
    if (Step == "run") dFiles.each!(a => a[0].remove);
    return dFiles;
}
---
If I replace the line in error as below then I am getting the error "Error: cannot implicitly convert expression dFiles of type Tuple!(string, string)[] to string[][]"
---
auto dFiles = dirEntries(FFs, SpanMode.shallow)
    .filter!(a => a.isFile)
    .map!(a => tuple(a.name, a.timeCreated.toSimpleString[0 .. 20])).array;
---
You changed the type of dFiles, which you return from cleanFiles, without changing the return type of cleanFiles.
Change the return type of cleanFiles to the type the compiler error above tells you it should be (`Tuple!(string, string)[]` instead of `string[][]`), or let the compiler infer it via auto (`auto cleanFiles(...`).
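A minimal sketch of the second option, with hypothetical data standing in for the dirEntries result: let the compiler infer the return type that `map`/`array` actually produce.

```d
import std.algorithm : map;
import std.array : array;
import std.typecons : Tuple, tuple;

// Return type is inferred as Tuple!(string, string)[], which matches what
// map!(a => tuple(...)).array produces; no manual annotation to get wrong.
auto cleanFiles(string[] names)
{
    return names.map!(n => tuple(n, "2017-Sep-02 00:00:00")).array;
}

void main()
{
    Tuple!(string, string)[] dFiles = cleanFiles(["a.log"]);
    assert(dFiles[0][0] == "a.log");
    assert(dFiles[0][1] == "2017-Sep-02 00:00:00");
}
```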
Re: Using closure causes GC allocation
On Saturday, 2 September 2017 at 18:08:19 UTC, vino.b wrote: On Saturday, 2 September 2017 at 18:02:06 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 17:43:08 UTC, Vino.B wrote: [...] Line 25 happens because of `[a.name]`. You request a new array: the memory for this has to be allocated (the reason why the compiler says "may" is because sometimes, e.g. if the array literal itself contains only literals, the allocations needn't happen at runtime and no GC call is necessary). Since you don't actually use the array, get rid of it: [...] Hi, Thank you for your help and the DMD version that i am using is DMD 2.076.0 and yes I am on windows. Please post a compilable, minimal example including how that function gets called that yields you that compiler output.
Re: string to character code hex string
On Saturday, 2 September 2017 at 18:07:51 UTC, bitwise wrote: On Saturday, 2 September 2017 at 17:45:30 UTC, Moritz Maxeiner wrote: If this (unnecessary waste) is of concern to you (and from the fact that you used ret.reserve I assume it is), then the easy fix is to use `sformat` instead of `format`: Yes, thanks. I'm going to go with a variation of your approach:
---
private string toAsciiHex(string str)
{
    import std.ascii : lowerHexDigits;
    import std.exception : assumeUnique;

    auto ret = new char[str.length * 2];
    int i = 0;

    foreach (c; str)
    {
        ret[i++] = lowerHexDigits[(c >> 4) & 0xF];
        ret[i++] = lowerHexDigits[c & 0xF];
    }

    return ret.assumeUnique;
}
---
If you never need the individual character function, that's probably the best in terms of readability, though with a decent compiler, that and the two-function one should result in the same opcodes (except for the bit shift swap). I'm not sure how the compiler would mangle UTF8, but I intend to use this on one specific function (actually the 100's of instantiations of it). In UTF8:
--- utfmangle.d ---
void fun_ༀ() {}
pragma(msg, fun_ༀ.mangleof);
---
---
$ dmd -c utfmangle.d
_D6mangle7fun_ༀFZv
---
Only universal character names for identifiers are allowed, though, as per [1] [1] https://dlang.org/spec/lex.html#identifiers
Re: Using closure causes GC allocation
On Saturday, 2 September 2017 at 17:43:08 UTC, Vino.B wrote: Hi All, Request your help on how to solve the issue in the below code: when I execute the program with -vgc it states the below:
---
NewTD.d(21): vgc: using closure causes GC allocation
NewTD.d(25): vgc: array literal may cause GC allocation
---
---
void logClean (string[] Lglst, int LogAge) // Line 21
{
    if (!Lglst[0].exists) { mkdir(Lglst[0]); }
    auto ct1 = Clock.currTime();
    auto st1 = ct1 + days(-LogAge);
    auto dFiles = dirEntries(Lglst[0], SpanMode.shallow)
        .filter!(a => a.exists && a.isFile && a.timeCreated < st1)
        .map!(a => [a.name]).array; // Line 25
    dFiles.each!(f => f[0].remove);
}
---
Line 25 happens because of `[a.name]`. You request a new array: the memory for this has to be allocated (the reason why the compiler says "may" is because sometimes, e.g. if the array literal itself contains only literals, the allocations needn't happen at runtime and no GC call is necessary). Since you don't actually use the array, get rid of it:
---
void logClean (string[] Lglst, int LogAge) // Line 21
{
    if (!Lglst[0].exists) { mkdir(Lglst[0]); }
    auto ct1 = Clock.currTime();
    auto st1 = ct1 + days(-LogAge);
    auto dFiles = dirEntries(Lglst[0], SpanMode.shallow)
        .filter!(a => a.exists && a.isFile && a.timeCreated < st1).array; // Line 25
    dFiles.each!(f => f.remove);
}
---
I cannot reproduce the line 21 report, though. Since you use `timeCreated` I assume you're on Windows, but what's your D compiler, which D frontend version are you using, etc. (all the things needed to attempt to reproduce the error)?
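To illustrate the difference with hypothetical helper names: the array literal is the allocation that -vgc reports, while using the element directly allocates nothing.

```d
// [name] builds a new one-element array on the GC heap each call; passing
// the string through unchanged performs no allocation at all.
string[] wrapped(string name) { return [name]; }
string direct(string name) @nogc nothrow { return name; }

void main()
{
    assert(wrapped("a.log")[0] == "a.log");
    assert(direct("a.log") == "a.log");
}
```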
Re: string to character code hex string
On Saturday, 2 September 2017 at 16:23:57 UTC, bitwise wrote: On Saturday, 2 September 2017 at 15:53:25 UTC, bitwise wrote: [...] This seems to work well enough.
---
string toAsciiHex(string str)
{
    import std.array : appender;
    import std.format : format; // import added; required for format below

    auto ret = appender!string(null);
    ret.reserve(str.length * 2);
    foreach (c; str)
        ret.put(format!"%x"(c));
    return ret.data;
}
---
Note: Each of those format calls is going to allocate a new string, followed by put copying that new string's content over into the appender, leaving you with Θ(str.length) tiny memory chunks that aren't used anymore for the GC to eventually collect. If this (unnecessary waste) is of concern to you (and from the fact that you used ret.reserve I assume it is), then the easy fix is to use `sformat` instead of `format`:
---
string toHex(string str)
{
    import std.format : sformat;
    import std.exception : assumeUnique;

    auto ret = new char[str.length * 2];
    size_t len;

    foreach (c; str)
    {
        auto slice = sformat!"%x"(ret[len..$], c);
        //auto slice = toHex(ret[len..$], c);
        assert (slice.length <= 2);
        len += slice.length;
    }

    return ret[0..len].assumeUnique;
}
---
If you want to cut out the format import entirely, notice the `auto slice = toHex...` line, which can be implemented like this (always returns two chars):
---
char[] toHex(char[] buf, char c)
{
    import std.ascii : lowerHexDigits;

    assert (buf.length >= 2);
    buf[0] = lowerHexDigits[(c & 0xF0) >> 4];
    buf[1] = lowerHexDigits[c & 0x0F];
    return buf[0..2];
}
---
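A small usage sketch of the sformat-into-a-buffer idea (not from the thread): the result is a slice of the caller's buffer, so no per-call allocation takes place.

```d
import std.format : sformat;

void main()
{
    char[8] buf;
    auto s = sformat!"%x"(buf[], 'A'); // writes into buf, returns used slice
    assert(s == "41");                 // 'A' is 0x41
    assert(s.ptr is buf.ptr);          // the result lives in buf, no new allocation
}
```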
Re: gcd with doubles
On Friday, 1 September 2017 at 09:33:08 UTC, Alex wrote: On Sunday, 27 August 2017 at 23:13:24 UTC, Moritz Maxeiner wrote: On Sunday, 27 August 2017 at 19:47:59 UTC, Alex wrote: [...] To expand on the earlier workaround: You can also adapt a floating point to string algorithm in order to dynamically determine an upper bound on the number of after decimal point digits required. Below is an untested adaption of the reference C implementation of errol0[1] for that purpose (MIT license as that is what the original code is under): [...] Hey, cool! Thanks for the efforts :) No problem, two corrections to myself, though: 1) It's a lower bound, not an upper bound (you need at least that many digits in order to not lose precision) 2) The code is missing `_ > ulong.min` checks along the existing `_ < ulong.max` checks
Re: Output range with custom string type
On Thursday, 31 August 2017 at 07:06:26 UTC, Jacob Carlborg wrote: On 2017-08-29 19:35, Moritz Maxeiner wrote: void put(T t) { if (!store) { // Allocate only once for "small" vectors store = alloc.makeArray!T(8); if (!store) onOutOfMemoryError(); } else if (length == store.length) { // Growth factor of 1.5 auto expanded = alloc.expandArray!char(store, store.length / 2); if (!expanded) onOutOfMemoryError(); } assert (length < store.length); moveEmplace(t, store[length++]); } What's the reason to use "moveEmplace" instead of just assigning to the array: "store[length++] = t" ? The `move` part is to support non-copyable types (i.e. T with `@disable this(this)`), such as another owning container (assigning would generally try to create a copy). The `emplace` part is because the destination `store[length]` has been default initialized either by makeArray or expandArray and it doesn't need to be destroyed (a pure move would destroy `store[length]` if T has a destructor).
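A minimal sketch of the difference (the type name is hypothetical): a non-copyable element can be moved into the store with `moveEmplace`, while plain assignment would require a copy.

```d
import std.algorithm : moveEmplace;

struct NoCopy
{
    int x;
    @disable this(this); // no copies allowed, like an owning container
}

void main()
{
    auto a = NoCopy(42);
    NoCopy b = void;   // uninitialized destination, like freshly allocated store
    moveEmplace(a, b); // moves a into b: no copy, no destruction of b's old state
    assert(b.x == 42);
    // b = a;          // would not compile: copying is disabled
}
```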
Re: Output range with custom string type
On Tuesday, 29 August 2017 at 09:59:30 UTC, Jacob Carlborg wrote: [...] But if I keep the range internal, can't I just do the allocation inside the range and only use "formattedWrite"? Instead of using both formattedWrite and sformat and go through the data twice. Then of course the final size is not known before allocating. Certainly, that's what dynamic arrays (aka vectors, e.g. std::vector in C++ STL) are for:
---
import core.exception;
import std.stdio;
import std.experimental.allocator;
import std.algorithm;

struct PoorMansVector(T)
{
private:
    T[] store;
    size_t length;
    IAllocator alloc;

public:
    @disable this(this);

    this(IAllocator alloc) { this.alloc = alloc; }

    ~this()
    {
        if (store)
        {
            alloc.dispose(store);
            store = null;
        }
    }

    void put(T t)
    {
        if (!store)
        {
            // Allocate only once for "small" vectors
            store = alloc.makeArray!T(8);
            if (!store) onOutOfMemoryError();
        }
        else if (length == store.length)
        {
            // Growth factor of 1.5
            auto expanded = alloc.expandArray!T(store, store.length / 2);
            if (!expanded) onOutOfMemoryError();
        }
        assert (length < store.length);
        moveEmplace(t, store[length++]);
    }

    T[] release()
    {
        auto elements = store[0..length];
        store = null;
        return elements;
    }
}

char[] sanitize(string value, IAllocator alloc)
{
    import std.format : formattedWrite;

    auto r = PoorMansVector!char(alloc);
    (&r).formattedWrite!"'%s'"(value); // do not copy the range
    return r.release();
}

void main()
{
    auto s = sanitize("foo", theAllocator);
    scope (exit) theAllocator.dispose(s);
    writeln(s);
}
---
Do be aware that the above vector is named "poor man's vector" for a reason: it's a hasty write-down from memory and is sure to contain bugs. For better vector implementations you can look at collection libraries such as EMSI containers; my own attempt at a DbI vector container can be found here [1] [1] https://github.com/Calrama/libds/blob/6a1fc347e1f742b8f67513e25a9fdbf79f007417/src/ds/vector.d
Re: Accessing outer class attribute from inner struct
On Tuesday, 29 August 2017 at 07:59:40 UTC, Andre Pany wrote: On Monday, 28 August 2017 at 23:12:40 UTC, Moritz Maxeiner wrote: In both cases S doesn't inherently know about C, which means a solution using default initialization is not feasible, as S.init can't know about any particular instance of C. I don't think there's any way for you to avoid using a class constructor. Thanks for the explanation. I now tried to use a class and use a static opIndex. But it seems from a static method you also cannot access the attributes of an outer class :) A nested class' outer property (when nested inside another class) is a class reference, which means we not only require a class instance of the outer class to reference, but also a class instance of the nested class to store said reference in. A static class method (by definition) is invoked without a class instance. The two are inherently incompatible. [...] This seems like an unnecessary limitation... I can only recommend reading the language specification w.r.t. nested classes [1] if it seems that way to you, because it is not. [1] https://dlang.org/spec/class.html#nested
Re: C callbacks getting a value of 0! Bug in D?
On Tuesday, 29 August 2017 at 02:47:34 UTC, Johnson Jones wrote: [...] Seems only long and ulong are issues. With respect to the current major platforms you can reasonably expect software to run on, yes. Just don't try to use D on something with e.g. 32 bit C shorts unless you bind to it via c_short.
Re: C callbacks getting a value of 0! Bug in D?
On Tuesday, 29 August 2017 at 01:34:40 UTC, Johnson Jones wrote: [...] produces 4 on both x86 and x64. So, I'm not sure how you are getting 8. There are different 64bit data models [1] and it seems your platform uses LLP64, which uses 32bit longs. Am I correct in assuming you're on Windows (as they are the only major modern platform that I'm aware of that made this choice)? [1] https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models
Re: Accessing outer class attribute from inner struct
On Monday, 28 August 2017 at 22:47:12 UTC, Andre Pany wrote: On Monday, 28 August 2017 at 22:28:18 UTC, Moritz Maxeiner wrote: On Monday, 28 August 2017 at 21:52:58 UTC, Andre Pany wrote: [...] To make my question short :) If ColumnsArray is a class I can access the attribute "reference" but not if it is a struct. I would rather prefer a struct, but with a struct it seems I cannot access "reference". How can I access "reference" from my inner struct? [...] Add an explicit class reference member to it:
---
class TCustomGrid: TCustomPresentedScrollBox
{
    struct ColumnsArray
    {
        TCustomGrid parent;
        TColumn opIndex(int index)
        {
            int r = getIntegerIndexedPropertyReference(parent.reference, "Columns", index);
            return new TColumn(r);
        }
    }
    ColumnsArray Columns;
    this() { Columns = ColumnsArray(this); }
    ...
}
---
Nesting structs inside anything other than functions[1] is for visibility/protection encapsulation and namespacing only. [1] non-static structs in functions are special as they have access to the surrounding stack frame Unfortunately that's not possible. ColumnsArray and the attribute will become a string mixin to avoid boilerplate. It would be error prone if I have to initialize them in the constructor too. I want just 1 single coding line for this property. That is also the reason I do not want to use a class, as I would have to initialize them in the constructor.
---
class C
{
    struct S
    {
    }
    S s;
}
---
is semantically equivalent to
---
struct S
{
}
class C
{
    S s;
}
---
with the two differences being - namespacing (outside of C one has to use C.S to access S) - you can protect the visibility of the S from outside the module C resides in via private, public, etc. In both cases S doesn't inherently know about C, which means a solution using default initialization is not feasible, as S.init can't know about any particular instance of C. I don't think there's any way for you to avoid using a class constructor.
Re: C callbacks getting a value of 0! Bug in D?
On Monday, 28 August 2017 at 22:21:18 UTC, Johnson Jones wrote: On Monday, 28 August 2017 at 21:35:27 UTC, Steven Schveighoffer wrote: On 8/27/17 10:17 PM, Johnson Jones wrote: [...] For C/C++ interaction, always use c_... types if they are available. The idea is both that they will be correctly defined for the width, and also it will mangle correctly for C++ compilers (yes, long and int are mangled differently even when they are the same thing). -Steve and where are these c_ types defined? The reason I replaced them was precisely because D was not finding them. core.stdc.config, which unfortunately doesn't appear in the online documentation AFAICT (something that ought to be fixed). A common workaround is to use pattern searching tools like grep if you know the phrase to look for: $ grep -Er c_long /path/to/imports or, in this case, since these things are usually done with aliases: $ grep -Er 'alias\s+\w*\s+c_long' /path/to/imports
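A quick sketch of how the `c_...` aliases track the platform's C data model, tying together the LLP64/LP64 discussion above:

```d
import core.stdc.config : c_long, c_ulong;

void main()
{
    // LLP64 (Windows): the C compiler's long stays 32 bit even on x64.
    version (Windows)
        static assert(c_long.sizeof == 4);
    // LP64 (64-bit Linux/macOS/BSD): the C long is 64 bit.
    else version (D_LP64)
        static assert(c_long.sizeof == 8);
    // 32-bit POSIX targets: the C long is 32 bit.
    else
        static assert(c_long.sizeof == 4);

    static assert(c_long.sizeof == c_ulong.sizeof);
}
```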
Re: Accessing outer class attribute from inner struct
On Monday, 28 August 2017 at 21:52:58 UTC, Andre Pany wrote: [...] To make my question short :) If ColumnsArray is a class I can access the attribute "reference" but not if it is a struct. I would rather prefer a struct, but with a struct it seems I cannot access "reference". How can I access "reference" from my inner struct? [...] Add an explicit class reference member to it:
---
class TCustomGrid: TCustomPresentedScrollBox
{
    struct ColumnsArray
    {
        TCustomGrid parent;
        TColumn opIndex(int index)
        {
            int r = getIntegerIndexedPropertyReference(parent.reference, "Columns", index);
            return new TColumn(r);
        }
    }
    ColumnsArray Columns;
    this() { Columns = ColumnsArray(this); }
    ...
}
---
Nesting structs inside anything other than functions[1] is for visibility/protection encapsulation and namespacing only. [1] non-static structs in functions are special as they have access to the surrounding stack frame
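The same pattern reduced to a runnable sketch (class and member names hypothetical): the struct carries the outer reference explicitly, because unlike a nested class it gets no implicit one.

```d
class Outer
{
    int reference = 7;

    struct Inner
    {
        Outer parent; // explicit back-reference; structs get no hidden outer pointer
        int get() { return parent.reference; }
    }

    Inner inner;

    this() { inner = Inner(this); } // wire up the back-reference once
}

void main()
{
    auto o = new Outer;
    assert(o.inner.get() == 7);
}
```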
Re: Output range with custom string type
On Monday, 28 August 2017 at 14:27:19 UTC, Jacob Carlborg wrote: I'm working on some code that sanitizes and converts values of different types to strings. I thought it would be a good idea to wrap the sanitized string in a struct to have some type safety. Ideally it should not be possible to create this type without going through the sanitizing functions. The problem I have is that I would like these functions to push up the allocation decision to the caller. Internally these functions use formattedWrite. I thought the natural design would be that the sanitize functions take an output range and pass that to formattedWrite. [...] Any suggestions how to fix this or a better idea? If you want the caller to be just in charge of allocation, that's what std.experimental.allocator provides. In this case, I would polish up the old "format once to get the length, allocate, format a second time into the allocated buffer" method used with snprintf for D:
--- test.d ---
import std.stdio;
import std.experimental.allocator;

struct CountingOutputRange
{
private:
    size_t _count;

public:
    size_t count() { return _count; }
    void put(char c) { _count++; }
}

char[] sanitize(string value, IAllocator alloc)
{
    import std.format : formattedWrite, sformat;

    CountingOutputRange r;
    (&r).formattedWrite!"'%s'"(value); // do not copy the range

    auto s = alloc.makeArray!char(r.count);
    scope (failure) alloc.dispose(s);

    // This should only throw if the user-provided allocator returned less
    // memory than was requested
    return s.sformat!"'%s'"(value);
}

void main()
{
    auto s = sanitize("foo", theAllocator);
    scope (exit) theAllocator.dispose(s);
    writeln(s);
}
---
Re: gcd with doubles
On Sunday, 27 August 2017 at 19:47:59 UTC, Alex wrote: [..] Is there a workaround, maybe? To expand on the earlier workaround: You can also adapt a floating point to string algorithm in order to dynamically determine an upper bound on the number of after decimal point digits required. Below is an untested adaption of the reference C implementation of errol0[1] for that purpose (MIT license as that is what the original code is under): --- void main() { assert(gcd(0.5, 32) == 0.5); assert(gcd(0.2, 32) == 0.2); assert(gcd(1.3e2, 3e-5) == 1e-5); } template gcd(T) { import std.traits : isFloatingPoint; T gcd(T a, T b) { static if (isFloatingPoint!T) { return fgcd(a, b); } else { import std.numeric : igcd = gcd; return igcd(a, b); } } static if (isFloatingPoint!T) { import std.math : nextUp, nextDown, pow, abs, isFinite; import std.algorithm : max; T fgcd(T a, T b) in { assert (a.isFinite); assert (b.isFinite); assert (a < ulong.max); assert (b < ulong.max); } body { short a_exponent; int a_digitCount = errol0CountOnly(abs(a), a_exponent); short b_exponent; int b_digitCount = errol0CountOnly(abs(b), b_exponent); a_digitCount -= a_exponent; if (a_digitCount < 0) { a_digitCount = 0; } b_digitCount -= b_exponent; if (b_digitCount < 0) { b_digitCount = 0; } auto coeff = pow(10, max(a_digitCount, b_digitCount)); assert (a * coeff < ulong.max); assert (b * coeff < ulong.max); return (cast(T) euclid(cast(ulong) (a * coeff), cast(ulong) (b * coeff))) / coeff; } ulong euclid(ulong a, ulong b) { while (b != 0) { auto t = b; b = a % b; a = t; } return a; } struct HighPrecisionFloatingPoint { T base, offset; void normalize() { T base = this.base; this.base += this.offset; this.offset += base - this.base; } void mul10() { T base = this.base; this.base *= T(10); this.offset *= T(10); T offset = this.base; offset -= base * T(8); offset -= base * T(2); this.offset -= offset; normalize(); } void div10() { T base = this.base; this.base /= T(10); this.offset /= T(10); base -= this.base * 
T(8); base -= this.base * T(2); this.offset += base / T(10); normalize(); } } alias HP = HighPrecisionFloatingPoint; enum epsilon = T(0.001); ushort errol0CountOnly(T f, out short exponent) { ushort digitCount; T ten = T(1); exponent = 1; auto mid = HP(f, T(0)); while (((mid.base > T(10)) || ((mid.base == T(10)) && (mid.offset >= T(0)))) && (exponent < 308)) { exponent += 1; mid.div10(); ten /= T(10); } while (((mid.base < T(1)) || ((mid.base == T(1)) && (mid.offset < T(0)))) && (exponent > -307)) { exponent -= 1; mid.mul10(); ten *= T(10); } auto inhi = HP(mid.base, mid.offset + (nextUp(f) - f) * ten / (T(2) + epsilon)); auto inlo = HP(mid.base, mid.offset + (nextDown(f) - f) * ten / (T(2) + epsilon)); inhi.normalize(); inlo.normalize(); while (inhi.base > T(10) || (inhi.base == T(10) && (inhi.offset >= T(0)))) { exponent += 1; inhi.div10(); inlo.div10(); } while (inhi.base < T(1) || (inhi.base == T(1) && (inhi.offset < T(0)))) { exponent -= 1; inhi.mul10(); inlo.mul10(); } while (inhi.base != T(0) || inhi.offset != T(0)) { auto hdig = cast(ubyte) inhi.base; if ((inhi.base == hdig) && (inhi.offset < T(0))) { hdig -= 1; } auto ldig = cast(ubyte) inlo.base; if ((inlo.base == ldig) && (inlo.offset < 0)) { ldig -= 1;
Re: gcd with doubles
On Sunday, 27 August 2017 at 19:47:59 UTC, Alex wrote: Hi, all. Can anybody explain to me why void main() { import std.numeric; assert(gcd(0.5,32) == 0.5); assert(gcd(0.2,32) == 0.2); } fails on the second assert? I'm aware, that calculating gcd on doubles is not so obvios, as on integers. But if the library accepts doubles, and basically the return is correct occasionally, why it is not always the case? If the type isn't a builtin integral and can't be bit shifted, the gcd algorithm falls back to using the Euclidean algorithm in order to support custom number types, so the gcd call above reduces to: --- double gcd(double a, double b) { while (b != 0) { auto t = b; b = a % b; a = t; } return a; } --- The issue boils down to the fact that `32 % 0.2` yields `0.2` instead of `0.0`, so the best answer I can give is "because floating point calculations are approximations". I'm actually not sure if this is a bug in fmod or expected behaviour, but I'd tend to the latter. Is there a workaround, maybe? If you know how many digits of precision after the decimal point you need, you can multiply beforehand, take the gcd in the integer realm, and divide afterwards (be warned, the below is only an example implementation for readability, it does not do the required overflow checks for the double -> ulong conversion!): --- import std.traits : isFloatingPoint; T gcd(ubyte precision, T)(T a, T b) if (isFloatingPoint!T) { import std.numeric : _gcd = gcd; immutable T coeff = 10 ^^ precision; return (cast(T) _gcd(cast(ulong) (a * coeff), cast(ulong) (b * coeff))) / coeff; } ---
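The `%` behaviour is easy to reproduce in isolation; a minimal sketch (the exact digits printed depend on the platform's fmod implementation, so no specific output is claimed here):

```d
import std.stdio : writefln;

void main()
{
    // 0.2 has no exact binary (double) representation, so the
    // remainder of 32 % 0.2 is a value very close to 0.2, not 0.0.
    writefln("%.20f", 32.0 % 0.2);

    // The same effect is why the Euclidean loop above terminates
    // with a result near 0.2 rather than an exact one.
    writefln("%s", (32.0 % 0.2) == 0.0);
}
```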
Re: Confusion over enforce and assert - both are compiled out in release mode
On Sunday, 27 August 2017 at 10:46:53 UTC, Andrew Chapman wrote: [...] Oh interesting. Does DUB support passing through the --enable-contracts flag to ldc? Also, if this is an ldc specific thing it's probably not a good idea i'd imagine, since in the future one may want to use a GDC, or DMD? Also, with regards to gdc, its release mode `-frelease` option is explicitly specified in the manual as being shorthand for a specific set of options: This is equivalent to compiling with the following options: gdc -fno-assert -fbounds-check=safe -fno-invariants \ -fno-postconditions -fno-preconditions -fno-switch-errors As it doesn't seem to turn on/off any other options / optimizations, you can use `"dflags-gdc": [...]` to specify your own set of "release" options without losing anything. In particular, I would overwrite dub's default "release" build type [1] and add your own per compiler build settings, so dub won't pass `-frelease` to gdc when using `dub --build=release`. [1] https://code.dlang.org/package-format?lang=json#build-types
Re: Confusion over enforce and assert - both are compiled out in release mode
On Sunday, 27 August 2017 at 10:46:53 UTC, Andrew Chapman wrote: On Sunday, 27 August 2017 at 10:37:50 UTC, Moritz Maxeiner wrote: [...] Oh interesting. Does DUB support passing through the --enable-contracts flag to ldc? Sure, using platform specific build settings [1] such as `"dflags-ldc": ["--enable-contracts"]`. Also, if this is an ldc specific thing it's probably not a good idea i'd imagine, since in the future one may want to use a GDC, or DMD? If you want to use another compiler that supports it, add the appropriate "dflags-COMPILER" setting to your package file. With regards to dmd: Don't use it for release builds, use gdc or ldc (better optimizations). https://code.dlang.org/package-format?lang=json#build-settings
Re: Confusion over enforce and assert - both are compiled out in release mode
On Sunday, 27 August 2017 at 10:17:47 UTC, Andrew Chapman wrote: On Sunday, 27 August 2017 at 10:08:15 UTC, ag0aep6g wrote: On 08/27/2017 12:02 PM, Andrew Chapman wrote: However, I am finding that BOTH enforce and assert are compiled out by dmd and ldc in release mode. Is there a standard way of doing what enforce does inside an "in" contract block that will work in release mode? I'm guessing I should write my own function for now. The whole `in` block is ignored in release mode. Doesn't matter what you put in there. Nothing of it will be compiled. Thanks, that explains it. I think it's a bit of a shame that the "in" blocks can't be used in release mode as the clarity they provide for precondition logic is wonderful. If you need that, you could compile using ldc in release mode (which you probably want to do anyway): --- test.d --- import std.exception; import std.stdio; void foo(int x) in { enforce(x > 0); } body { } void bar(int x) in { assert(x > 0); } body { } void baz(int x) in { if (!(x > 0)) assert(0); } body { } void main() { (-1).foo.assertThrown; (-1).bar; (-1).baz; } -- $ ldc2 test.d -> failed assert in bar's in contract terminates the program $ ldc2 -release test.d -> failed assertThrown in main terminates the program $ ldc2 -release -enable-contracts test.d -> failed assert in baz's in contract terminates the program $ ldc2 -release -enable-contracts -enable-asserts test.d -> failed assert in bar's in contract terminates the program
Re: Appending data to array results in duplicate's.
On Friday, 25 August 2017 at 16:45:16 UTC, Vino.B wrote: Hi, Request your help on the below issue, Issue : While appending data to a array the data is getting duplicated. Program: import std.file: dirEntries, isFile, SpanMode; import std.stdio: writeln, writefln; import std.algorithm: filter, map; import std.array: array; import std.typecons: tuple; string[] Subdata; void main () { auto dFiles = dirEntries("C:\\Temp\\TEAM", SpanMode.shallow).filter!(a => a.isFile).map!(a => tuple(a.name , a.timeCreated)).array; foreach (d; dFiles) { Subdata ~= d[0]; Subdata ~= d[1].toSimpleString; writeln(Subdata); } } Output: ["C:\\Temp\\TEAM\\test1.pdf", "2017-Aug-24 18:23:00.8946851"] - duplicate line ["C:\\Temp\\TEAM\\test1.pdf", "2017-Aug-24 18:23:00.8946851", "C:\\TempTEAM\\test5.xlsx", "2017-Aug-25 23:38:14.486421"] From, Vino.B You are consecutively appending to an array in each loop iteration, i.e. you are buffering, and you're printing the current state of the buffer (all previously buffered elements) in each iteration. I can't see any duplication going on; what exactly do you wish to accomplish?
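If the intent was to print each entry exactly once, a minimal sketch (reusing the path from the post above as a placeholder) is to move the `writeln` out of the loop, so the buffer is printed only after it has been filled:

```d
import std.file : dirEntries, isFile, SpanMode;
import std.stdio : writeln;
import std.algorithm : filter, map;
import std.array : array;
import std.typecons : tuple;

void main()
{
    auto dFiles = dirEntries("C:\\Temp\\TEAM", SpanMode.shallow)
        .filter!(a => a.isFile)
        .map!(a => tuple(a.name, a.timeCreated))
        .array;

    string[] subdata;
    foreach (d; dFiles)
    {
        subdata ~= d[0];
        subdata ~= d[1].toSimpleString;
    }
    // Print the buffer once, after the loop, instead of on every iteration.
    writeln(subdata);
}
```

Alternatively, print `d` inside the loop instead of the whole buffer.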
Re: Long File path Exception:The system cannot find the path specified
On Wednesday, 23 August 2017 at 13:04:28 UTC, Vino.B wrote: The line it complains is std.file.FileException@std\file.d(3713):even after enabling debug it points to the same Output: D:\DScript>rdmd -debug Test.d -r dryrun std.file.FileException@std\file.d(3713): N:\PROD_TEAM\TST_BACKUP\abcyf0\TST_BATS\j2ee_backup\cluster\states0\apps\bat.com\tc~bat~agent~application~e2emai~std~collectors\servlet_jsp\tc~bat~agent~application~e2emai~std~collectors\root\WEB-INF\entities\DataCollectionPushFileContentScannerTypeBuilder: The system cannot find the path specified. 0x00431A56 0x00429801 You need to compile with debug info (option `-g`), not compile in debug code (option `-debug`). What's the (full) stack trace when compiling with debug info?
Re: Long File path Exception:The system cannot find the path specified
On Wednesday, 23 August 2017 at 12:01:20 UTC, Vino.B wrote: On Wednesday, 23 August 2017 at 11:29:07 UTC, Moritz Maxeiner wrote: On which line do you get the Exception? Does it happen with shorter paths, as well? Assuming it happens with all paths: Just to be sure, is each of those backslashes actually encoded as a backslash? If you specified the path in the D source like `path = "N:\PROD_TEAM..."`, then it won't be, because backslash is an escape character (you would need to write `path = "N:\\PROD_TEAM..."`, or better yet path = "N:/PROD_TEAM..."`). The above program scan for files/directories under the main folder N:\PROD_TEAM\ and reports the size of each of the sub folders eg: "TST_BACKUP", under the main folder "N:\PROD_TEAM\" there are more than 9000+ files/directories, eg: (N:\PROD_TEAM\TST_BACKUP,N:\PROD_TEAM\PRD_BACKUP\) and the above program will output the size of the sub folders "TST_BACKUP,PRD_BACKUP", there is no issue is the path is shorter, the issue arises only when the path is bigger, eg the program prints the size of the sub folder PRD_BACKUP but when it tries to scan the sub folder TST_BACKUP the issue arises and the program terminates with the exception "The system cannot find the path specified", hence it not not be possible to provide the path explicitly, so can you help me on this. While that is good to know, you still haven't answered my initial question: On which line do you get the Exception? If your program terminates because of an uncaught exception (as you stated), then you should've received a stack trace containing the line number on which the exception was thrown (remember to compile with debug info). You should also consider providing a compilable, minimal example (with test data) that can be used to reproduce the issue.
Re: Long File path Exception:The system cannot find the path specified
On Wednesday, 23 August 2017 at 05:06:50 UTC, Vino.B wrote: Hi All, When i run the below code in windows i am getting "The system cannot find the path specified" even though the path exist , the length of the path is 516 as below, request your help. Path : N:\PROD_TEAM\TST_BACKUP\abcyf0\TST_BATS\j2ee_backup\cluster\states0\apps\bat.com\tc~bat~agent~application~e2emai~std~collectors\servlet_jsp\tc~bat~agent~application~e2emai~std~collectors\root\WEB-INF\entities\DataCollectionPushFileContentScannerTypeBuilder Program: [...] On which line do you get the Exception? Does it happen with shorter paths, as well? Assuming it happens with all paths: Just to be sure, is each of those backslashes actually encoded as a backslash? If you specified the path in the D source like `path = "N:\PROD_TEAM..."`, then it won't be, because backslash is an escape character (you would need to write `path = "N:\\PROD_TEAM..."`, or better yet `path = "N:/PROD_TEAM..."`).
Re: ore.exception.RangeError
On Wednesday, 23 August 2017 at 05:53:46 UTC, ag0aep6g wrote: On 08/23/2017 07:45 AM, Vino.B wrote: Execution : rdmd Summary.d - Not working rdmd Summary.d test - Working Program: void main (string[] args) { if(args.length != 2 ) writefln("Unknown operation: %s", args[1]); } When args.length == 1, then the one element is args[0], not args[1]. args[1] only exists when args.length >= 2. To expand on that: args[0] is the first element of the argument vector the process was spawned with (e.g. via execve), which usually corresponds to the program's name.
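A minimal sketch of the usual guard, assuming the program expects exactly one user argument (so args.length == 2, counting the program name in args[0]):

```d
import std.stdio : stderr, writefln;

int main(string[] args)
{
    // args[0] is conventionally the program name; user-supplied
    // arguments start at args[1].
    if (args.length < 2)
    {
        stderr.writefln("usage: %s <operation>", args[0]);
        return 1;
    }
    writefln("operation: %s", args[1]);
    return 0;
}
```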
Re: Parameter File reading
On Wednesday, 23 August 2017 at 10:25:48 UTC, Vino.B wrote: Hi All, Can anyone provide me a example code on how to read a parameter file and use those parameter in the program. From, Vino.B For small tools I use JSON files via asdf[1]. As an example you can look at the tunneled settings structure here[2] and how it's loaded and parsed here[3]; afterwards, you just use the struct as normal in D. [1] https://github.com/tamediadigital/asdf [2] https://github.com/Calrama/tunneled/blob/master/source/tunneled.d#L3 [3] https://github.com/Calrama/tunneled/blob/master/source/tunneled.d#L45
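If you'd rather avoid a third-party dependency, Phobos' std.json is enough for simple parameter files; a sketch assuming a made-up settings.json with the layout shown in the comment:

```d
import std.file : readText;
import std.json : parseJSON;
import std.stdio : writeln;

void main()
{
    // Hypothetical parameter file "settings.json":
    // { "path": "C:/Temp/TEAM", "maxDepth": 2 }
    auto params = parseJSON(readText("settings.json"));

    // Extract the parameters and use them like normal D values.
    auto path = params["path"].str;
    auto maxDepth = params["maxDepth"].integer;
    writeln("scanning ", path, " down to depth ", maxDepth);
}
```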
Re: Different Output after each execution
On Friday, 18 August 2017 at 15:46:13 UTC, Vino.B wrote: On Friday, 18 August 2017 at 11:24:24 UTC, Moritz Maxeiner wrote: On Friday, 18 August 2017 at 10:50:28 UTC, Moritz Maxeiner wrote: On Friday, 18 August 2017 at 10:06:04 UTC, Vino wrote: On Friday, 18 August 2017 at 08:34:39 UTC, ikod wrote: On Friday, 18 August 2017 at 08:00:26 UTC, Vino.B wrote: Hi All, I have written a small program to just list the directories, but when i run the program each time i am getting different output, hence request you help, below is the code [...] Do you expect some strict execution order when you run 'parallel' foreach? Yes, the order of execution should be the same as the order of the directory provided to scan. Then you cannot parallelize the work[1], use: --- auto dFiles = dirEntries(Dirlist[i], SpanMode.shallow).filter!(a => a.isDir); foreach (d; dFiles) { writefln("%-63s %.20s", d, d.timeCreated().toSimpleString); } --- [1] You cannot parallelize computations that depend on each other, which you make yours do by requiring a specific order of execution. Small correction: You *could* parallelize the conversion to string `d.timeCreated().toSimpleString`, but then you'd need to merge the resulting sets of strings generated in each work unit to regain the original order. Hi, Thank you very much, it worked and need one more help, with the below line i am able to list all directories which contains the pattern *DND*, now i need the revers, list all the directories expect those containing the pattern *DND*. 
dirEntries(i, SpanMode.shallow).filter!(a => a.isDir).filter!(a => globMatch(a.baseName, "*DND*")) Negating the filtering rule should yield you the inverse set: --- dirEntries(i, SpanMode.shallow).filter!(a => a.isDir).filter!(a => !globMatch(a.baseName, "*DND*")) --- Also I don't see a reason to use two filter invocations here, you can join the conditions to a single filter (same for the unnegated one): --- dirEntries(i, SpanMode.shallow).filter!(a => a.isDir && !globMatch(a.baseName, "*DND*")) ---
Re: Different Output after each execution
On Friday, 18 August 2017 at 10:50:28 UTC, Moritz Maxeiner wrote: On Friday, 18 August 2017 at 10:06:04 UTC, Vino wrote: On Friday, 18 August 2017 at 08:34:39 UTC, ikod wrote: On Friday, 18 August 2017 at 08:00:26 UTC, Vino.B wrote: Hi All, I have written a small program to just list the directories, but when i run the program each time i am getting different output, hence request you help, below is the code [...] Do you expect some strict execution order when you run 'parallel' foreach? Yes, the order of execution should be the same as the order of the directory provided to scan. Then you cannot parallelize the work[1], use: --- auto dFiles = dirEntries(Dirlist[i], SpanMode.shallow).filter!(a => a.isDir); foreach (d; dFiles) { writefln("%-63s %.20s", d, d.timeCreated().toSimpleString); } --- [1] You cannot parallelize computations that depend on each other, which you make yours do by requiring a specific order of execution. Small correction: You *could* parallelize the conversion to string `d.timeCreated().toSimpleString`, but then you'd need to merge the resulting sets of strings generated in each work unit to regain the original order.
Re: Different Output after each execution
On Friday, 18 August 2017 at 10:06:04 UTC, Vino wrote: On Friday, 18 August 2017 at 08:34:39 UTC, ikod wrote: On Friday, 18 August 2017 at 08:00:26 UTC, Vino.B wrote: Hi All, I have written a small program to just list the directories, but when i run the program each time i am getting different output, hence request you help, below is the code [...] Do you expect some strict execution order when you run 'parallel' foreach? Yes, the order of execution should be the same as the order of the directory provided to scan. Then you cannot parallelize the work[1], use: --- auto dFiles = dirEntries(Dirlist[i], SpanMode.shallow).filter!(a => a.isDir); foreach (d; dFiles) { writefln("%-63s %.20s", d, d.timeCreated().toSimpleString); } --- [1] You cannot parallelize computations that depend on each other, which you make yours do by requiring a specific order of execution.
Re: if (auto x = cast(C) x)
On Wednesday, 9 August 2017 at 21:54:46 UTC, Q. Schroll wrote: For a class/interface type `A` and a class `C` inheriting from `A` one can do A a = getA(); if (auto c = cast(C) a) { .. use c .. } to get a `C` view on `a` if it happens to be a `C`-instance. Sometimes one cannot find a good new name for `c` while there is no advantage of accessing `a` when `c` is available. D does not allow to shadow `a` in the if-auto declaration for good reasons. How often do you need this? I wouldn't go as far as saying downcasting is (always) evil, but it can be indicative of suboptimal abstractions [1]. How about relaxing the rule for cases like these, where the rhs is the lhs with a cast to derived? if (auto a = cast(C) a) { .. use a typed as C .. } One can think of `a` being *statically* retyped to `C` as this is a (strictly) better type information. Internally, it would be a shadowing, but it does not matter as the disadvantages don't apply (if I didn't miss something). While I can't see an obvious semantic issue, I would vote against such syntax because it introduces more special cases (and in this case an inconsistency w.r.t. variable shadowing) into the language and I don't see it providing enough of a benefit (downcasting should be used rarely) to justify that. [1] http://codebetter.com/jeremymiller/2006/12/26/downcasting-is-a-code-smell/
Re: Get Dll functions at compile time
On Wednesday, 9 August 2017 at 02:11:13 UTC, Johnson Jones wrote: I like to create code that automates much of the manual labor that we, as programmers, are generally forced to do. D generally makes much of this work automatable. For example, I have created the following code which makes loading dlls similar to libs: [...] Yes, this is essentially what people are doing right now, see e.g. [1] where all of the dynamic loading components are generated from the declarations used for linking. But this got me thinking that we don't even need to have to specify the function in D, hell, they already exist in the lib and we are just duplicating work. What if, at compile time, D could get all the functions and their type information and build a class for them for us? We could then just write something like struct DLLImports { @("DLLImport") string libgdk = "libgdk-3-0.dll"; } and have some ctfe meta functions extract all the function from libgdk and insert them in to the struct. There are two problems with this, one easy and one hard/impossible(which would be easy if people were intelligent enough to have foresight): There's a third one: This is not what object code is for, you'd have to write code for every object code format, because they're inherently platform specific. 1. Get the dll function by name from the dll at compile time. This would probably require manually reading the dll file and scanning for the function. And you'd have to demangle all the symbol names for anything that's not C mangled. Such as D. Or C++. 2. Get the type information to build a declaration. This is probably impossible since dll's do not contain the type information about their parameters and return type(or do they?). If they did, it would be easy. I would suggest that all dll's generated by D include this information somewhere and an easy way to extract it for future programmers so such things could be implemented. 
Again, this is not what object code is for: If you want to bind to anything that was written in D, you're going to have the declarations, anyway (either in .d or .di files), so you wouldn't need the information in the object code. If it's not written in D, it's not going to be in the object code, anyway. Alternatively, maybe a master database could be queried for such information by using the function names and dll name? I don't know if D has network capabilities at compile time though. D doesn't have arbitrary I/O at compile time. [1] https://github.com/Calrama/llvm-d/blob/master/source/llvm/functions/load.d#L124
Re: Specify dmd or ldc compiler and version in a json dub file?
On Tuesday, 8 August 2017 at 09:31:49 UTC, data pulverizer wrote: On Tuesday, 8 August 2017 at 09:21:54 UTC, Moritz Maxeiner wrote: On Tuesday, 8 August 2017 at 09:17:02 UTC, data pulverizer wrote: Hi, I would like to know how to specify dmd or ldc compiler and version in a json dub file. Thanks in advance. You can't [1]. You can specify the compiler to use only on the dub command line via `--compiler=`. [1] https://code.dlang.org/package-format?lang=json How do you distribute packages with specific compiler dependencies? I guess I could write it in the readme. If your code depends on capabilities of a specific D compiler, I wouldn't depend on build tools for that, I'd make it clear in the source code via conditional compilation [1]: --- version (DigitalMars) { } else version (LDC) { } else { static assert (0, "Unsupported D compiler"); } --- There's no equivalent for the frontend version though, AFAIK. If it's not your code that needs something compiler specific, but you just want to control which compiler is used, don't use dub as a build tool; use another (cmake, meson, or write your own compilation "script" in D) and set its invocation as a preBuildCommand in the dub package file (make sure dub's source file list is empty). [1] http://dlang.org/spec/version.html#predefined-versions
Re: Specify dmd or ldc compiler and version in a json dub file?
On Tuesday, 8 August 2017 at 09:17:02 UTC, data pulverizer wrote: Hi, I would like to know how to specify dmd or ldc compiler and version in a json dub file. Thanks in advance. You can't [1]. You can specify the compiler to use only on the dub command line via `--compiler=`. [1] https://code.dlang.org/package-format?lang=json
Re: Create class on stack
On Tuesday, 8 August 2017 at 05:37:41 UTC, ANtlord wrote: On Sunday, 6 August 2017 at 15:47:43 UTC, Moritz Maxeiner wrote: If you use this option, do be aware that this feature has been > scheduled for future deprecation [1]. It's likely going to continue working for quite a while (years), though. [1] https://dlang.org/deprecate.html#scope%20for%20allocating%20classes%20on%20the%20stack I can't understand. Why is moved a scope allocation to a library. I'm pretty sure it should be a language feature. The reason is given at the link under "Rationale": --- scope was an unsafe feature. A reference to a scoped class could easily be returned from a function without errors, which would make using such an object undefined behavior due to the object being destroyed after exiting the scope of the function it was allocated in. To discourage it from general-use but still allow usage when needed a library solution was implemented. Note that scope for other usages (e.g. scoped variables) is unrelated to this feature and will not be deprecated. --- Do note that - as Mike pointed out - this rationale does predate DIP1000 escape analysis and is largely invalidated by it for @safe code. Another reason to use the library type is the ability to move the class object around via std.algorithm.move (if you need such C++ style behaviour); I'm not sure whether scope classes will get this feature (I have argued for it at the bug report linked to in my response to Mike), but I wouldn't count on it.
Re: gtk interface responsiveness
On Monday, 7 August 2017 at 22:02:21 UTC, Johnson Jones wrote: I have an icon that I toggle which clicked. It seems that I can't toggle it any faster than about a second. The handler is being called each click but it seems the gui is not updated more than about 1fps in that case? Although, I'm sure it update faster than 1fps, just seems the icon/image isn't. The code I use to set the image is: Image.setFromStock("gtk-go-up", GtkIconSize.SMALL_TOOLBAR); or Image.setFromStock("gtk-go-down", GtkIconSize.SMALL_TOOLBAR); [...] Could you please post the complete minimal code (and compiler options) (or a link to them) required to reproduce the issue?
Re: x64 build time 3x slower?
On Monday, 7 August 2017 at 22:19:57 UTC, Johnson Jones wrote: Why would that be. Program take about 4 seconds to compile and 12 for x64. There is fundamentally no difference between the two versions. I do link in gtk x86 and gtk x64 depending on version, and that's it as far as I can tell. Debug x86 4 x64 12 Release x86 3 x64 5 The timings are pretty steady. Split the build time up into compile time and link time and see how the difference is distributed between the two. If it's distributed overwhelmingly to the link time, it could be that you're using Microsoft's linker for x64 and OPTLINK for x86?
Re: Create class on stack
On Monday, 7 August 2017 at 22:02:07 UTC, Mike wrote: On Monday, 7 August 2017 at 13:42:33 UTC, Moritz Maxeiner wrote: You can still create a (scope) class on the stack, escape a reference to it using `move` and use it afterwards, all within the rules of @safe, so I'm not convinced that the reason for deprecating scoped classes is gone yet. Compare this to `scoped`, which behaves as expected (since it wraps the reference type object in a value type): Looks like a bug to me. I recommend submitting a bug report and tag it somehow with "scope" and/or "DIP1000". It appears Walter is giving any bugs with scope/DIP1000 priority. Thanks for the feedback, done: https://issues.dlang.org/show_bug.cgi?id=17730
Re: Create class on stack
On Monday, 7 August 2017 at 10:42:03 UTC, Jacob Carlborg wrote: On 2017-08-06 17:47, Moritz Maxeiner wrote: If you use this option, do be aware that this feature has been scheduled for future deprecation [1]. It's likely going to continue working for quite a while (years), though. It's used all over the place in the DMD code base. I don't see how that's a reason for increasing the amount of code that needs to be changed if/when scope classes are deprecated. Mike's argument holds, though (if the loophole I pointed out gets fixed and scope classes are removed from the future deprecation list).
Re: Create class on stack
On Monday, 7 August 2017 at 13:40:18 UTC, Moritz Maxeiner wrote: Thanks, I wasn't aware of this. I tried fooling around scope classes and DIP1000 for a bit and was surprised that this is allowed: --- import core.stdc.stdio : printf; import std.algorithm : move; class A { int i; this() @safe { i = 0; } } void inc(scope A a) @safe { a.i += 1; } void print(scope A a) @trusted { printf("A@%x: %d\n", cast(void*) a, a.i); } auto makeA() @safe { scope a = new A(); a.print(); return move(a); } void main() @safe { auto a = makeA(); foreach (i; 0..10) { a.print(); a.inc(); } } --- You can still create a (scope) class on the stack, escape a reference to it using `move` and use it afterwards, all within the rules of @safe, so I'm not convinced that the reason for deprecating scoped classes is gone yet. Compare this to `scoped`, which behaves as expected (since it wraps the reference type object in a value type): --- import std.typecons : scoped; auto makeA() @trusted { auto a = scoped!A(); a.print(); return move(a); } void main() @trusted { auto a = makeA(); foreach (i; 0..10) { a.print(); a.inc(); } } --- Forgot to add the runtime output after compiling with `dmd a.d -dip1000`: For `scope A`: A@198d1568: 0 A@198d1568: 0 A@198d1568: 1 A@198d1568: 2 A@198d1568: 3 A@198d1568: 4 A@198d1568: 5 A@198d1568: 6 A@198d1568: 7 A@198d1568: 8 A@198d1568: 9 For `scoped!A`: A@8de538b8: 0 A@8de53940: 0 A@8de53940: 1 A@8de53940: 2 A@8de53940: 3 A@8de53940: 4 A@8de53940: 5 A@8de53940: 6 A@8de53940: 7 A@8de53940: 8 A@8de53940: 9
Re: Create class on stack
On Monday, 7 August 2017 at 10:50:21 UTC, Mike wrote: On Sunday, 6 August 2017 at 15:47:43 UTC, Moritz Maxeiner wrote: If you use this option, do be aware that this feature has been scheduled for future deprecation [1]. It's likely going to continue working for quite a while (years), though. [1] https://dlang.org/deprecate.html#scope%20for%20allocating%20classes%20on%20the%20stack FYI: http://forum.dlang.org/post/np1fll$ast$1...@digitalmars.com "Yes, it will have to be updated - but I didn't want to adjust it before DIP1000 spec is finalized. Rationale that was driving deprecation of scope storage class is becoming obsolete with DIP1000 implemented but not before." Thanks, I wasn't aware of this. I tried fooling around scope classes and DIP1000 for a bit and was surprised that this is allowed: --- import core.stdc.stdio : printf; import std.algorithm : move; class A { int i; this() @safe { i = 0; } } void inc(scope A a) @safe { a.i += 1; } void print(scope A a) @trusted { printf("A@%x: %d\n", cast(void*) a, a.i); } auto makeA() @safe { scope a = new A(); a.print(); return move(a); } void main() @safe { auto a = makeA(); foreach (i; 0..10) { a.print(); a.inc(); } } --- You can still create a (scope) class on the stack, escape a reference to it using `move` and use it afterwards, all within the rules of @safe, so I'm not convinced that the reason for deprecating scoped classes is gone yet. Compare this to `scoped`, which behaves as expected (since it wraps the reference type object in a value type): --- import std.typecons : scoped; auto makeA() @trusted { auto a = scoped!A(); a.print(); return move(a); } void main() @trusted { auto a = makeA(); foreach (i; 0..10) { a.print(); a.inc(); } } ---
Re: Create class on stack
On Sunday, 6 August 2017 at 15:24:55 UTC, Jacob Carlborg wrote: On 2017-08-05 19:08, Johnson Jones wrote: using gtk, it has a type called value. One has to use it to get the value of stuff but it is a class. Once it is used, one doesn't need it. Ideally I'd like to treat it as a struct since I'm using it in a delegate I would like to minimize unnecessary allocations. Is there any way to get D to allocate a class on the stack like a local struct? Prefix the variable declaration with "scope": scope foo = new Object; If you use this option, do be aware that this feature has been scheduled for future deprecation [1]. It's likely going to continue working for quite a while (years), though. [1] https://dlang.org/deprecate.html#scope%20for%20allocating%20classes%20on%20the%20stack
Re: Create class on stack
On Sunday, 6 August 2017 at 02:19:19 UTC, FoxyBrown wrote: [...] I don't think you understand what I'm saying. If I use this method to create a "reference" type on the stack rather than the heap, is the only issue worrying about not having that variable be used outside that scope(i.e., have it "escape")? It's the only one I'm aware of OTTOMH. If you encounter others, a bug report would be appreciated.
Re: Create class on stack
On Sunday, 6 August 2017 at 01:18:50 UTC, Johnson Jones wrote: On Saturday, 5 August 2017 at 23:09:09 UTC, Moritz Maxeiner wrote: On Saturday, 5 August 2017 at 17:08:32 UTC, Johnson Jones wrote: using gtk, it has a type called value. One has to use it to get the value of stuff but it is a class. Once it is used, one doesn't need it. Ideally I'd like to treat it as a struct since I'm using it in a delegate I would like to minimize unnecessary allocations. Is there any way to get D to allocate a class on the stack like a local struct? The easy way is through std.typecons.scoped [1]. Here be dragons, though, because classes are reference types. [1] https://dlang.org/phobos/std_typecons.html#.scoped Thanks, I didn't think it created on the stack but it makes sense to do so. See the source [1] as to why: typeof(scoped!T) is a (non-copyable) struct that holds the memory for the T object inside it. The only issue is that it escaping the reference? Yes, don't escape references, that's the reason for my comment: Here be dragons, though, because classes are reference types. [1] https://github.com/dlang/phobos/blob/v2.075.0/std/typecons.d#L6613
Re: Create class on stack
On Saturday, 5 August 2017 at 17:08:32 UTC, Johnson Jones wrote: using gtk, it has a type called value. One has to use it to get the value of stuff but it is a class. Once it is used, one doesn't need it. Ideally I'd like to treat it as a struct since I'm using it in a delegate I would like to minimize unnecessary allocations. Is there any way to get D to allocate a class on the stack like a local struct? The easy way is through std.typecons.scoped [1]. Here be dragons, though, because classes are reference types. [1] https://dlang.org/phobos/std_typecons.html#.scoped
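A small sketch of how `scoped` is used (the class here is a stand-in, not gtk's Value):

```d
import std.typecons : scoped;

class Value
{
    int n;
    this(int n) { this.n = n; }
}

void main()
{
    // scoped!Value returns a non-copyable struct that holds the object's
    // storage inline -- no GC allocation takes place.
    auto v = scoped!Value(42);
    assert(v.n == 42);
    // Don't let a Value reference to v outlive this scope: the backing
    // memory lives on the stack ("here be dragons").
}
```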
Re: Adding deprecated to an enum member
On Tuesday, 1 August 2017 at 01:12:28 UTC, Jeremy DeHaan wrote: I got an error today because I added deprecated to an enum member. Is there a way to achieve this, or am I out of luck? If it isn't doable, should it be? Here's what I want: [...] It's a bug [1]. [1] https://issues.dlang.org/show_bug.cgi?id=9395
Re: this r-value optimizations
On Tuesday, 1 August 2017 at 22:47:24 UTC, Nordlöw wrote: Given the `struct S` with lots of data fields, I've written the following functional way of initializing only a subset of the members in an instance of `S`: struct S { [...] } Now the question becomes: will the S-copying inside `withF` be optimized out in the case when `this` is an r-value such as in the call auto y = S(32).withF(42.0); ? You're going to have to be specific about optimized out by whom. By the frontend? Doesn't seem that way to me by looking at the `-O0` assembly generated by ldc [1]. If not, one solution of doing this manually is to write `withF` as a free function [...] Is this the preferred way of solving this until we (ever) get named parameters in D? Preferred by whom? The people who want named parameters in D seem to be a minority in the community from my personal observation, so you would most likely get personal preferred way as an answer (instead of "the" unanimous preferred way). In any case, this looks like a case of evil early optimization to me, because it's statistically unlikely that this is going to be the bottleneck of your program (though profiling would be in order to confirm / disprove that assumption for your specific use case). [1] https://godbolt.org/g/Htdtht
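A sketch of the free-function variant mentioned above; the fields of `S` are assumptions, since the original post elides them:

```d
// Hypothetical S; the real struct "with lots of data fields" is elided
// in the post.
struct S
{
    int i;
    double f;
}

// Taking S by value makes the copy explicit; returning it lets the
// compiler apply NRVO for the r-value call below.
S withF(S s, double f)
{
    s.f = f;
    return s;
}

void main()
{
    auto y = S(32).withF(42.0); // UFCS call, as in the post
    assert(y.i == 32 && y.f == 42.0);
}
```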
Re: Struct Postblit Void Initialization
On Sunday, 30 July 2017 at 19:22:07 UTC, Jiyan wrote: Hey, just wanted to know whether something like this would be possible somehow: struct S { int m; int n; this(this) { m = void; n = n; } } So not the whole struct is moved every time e.g. a function is called, but only n has to be "filled" I'll assume you mean copying (as per the title) not moving (because moving doesn't make sense to me in this context); use a dedicated method: struct S { int m, n; S sparseDup() { S obj; obj.n = n; return obj; } }
Re: D move semantics
On Sunday, 30 July 2017 at 16:12:41 UTC, piotrekg2 wrote: What is the idiomatic D code equivalent to this C++ code? There's no direct equivalent of all your code in D using only druntime+phobos AFAIK. class Block { [...] }; Since you don't seem to be using reference type semantics or polymorphism this should be mapped to a struct, such as --- import std.experimental.allocator; import std.experimental.allocator.mallocator; struct Block { public: static Block create() { Block obj; obj.data_ = Mallocator.instance.makeArray!char(4096); return obj; } ~this() nothrow { if (data_ !is null) { Mallocator.instance.dispose(data_); data_ = null; } } @disable this(this); // Forbid copying private: char[] data_; } --- // What is the equivalent of std::vector? The closest thing I could find is // std.container.array std::vector<Block> blocks; for (int i = 0; i < 100; ++i) { // NOTE: blocks are moved when relocation happens // because of move-ctor and move-assign-operator marked noexcept blocks.emplace_back(); } That's the closest one in Phobos AFAIK. There are custom container implementations out there such as the emsi containers [1]. If you use one of them, the above should be as simple as --- Array!Block blocks; foreach (i; 0..100) { blocks ~= Block.create(); } --- I've added your example as a unittest to my own dynamic array implementation, should you wish to have a look [2]. A little bit of background: Classes are reference types, structs are value types, i.e. there's no copy/move mechanics for classes w.r.t. your code. The one for structs is roughly like this: Whenever the compiler sees a struct object `obj` being assigned a new value `other`, it will run the destructor for `obj` (should one exist), then copy `other` over `obj`, followed by calling the postblit constructor `this(this) { ... }` (should it exist) on `obj`. In some instances (such as return from function, or first assignment in constructor, i.e. initialization) the compiler may automatically optimize the copy to a move. 
Assuming the compiler tries to do a copy, it will only work if `typeof(obj)` is copyable (doesn't have the postblit disabled via `@disable this(this)`), if it isn't, the compiler will error out; you can force a move by using `std.algorithm : move`. There's also `std.algorithm : moveEmplace` in case you don't wish the target to be destroyed. [1] https://github.com/economicmodeling/containers [2] https://github.com/Calrama/libds/blob/83211c5d7cb866a942dc9dd8ba1c622573611ccd/src/ds/dynamicarray.d#L351
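A small sketch of these mechanics, using a stand-in resource type (not the Block from the post):

```d
import std.algorithm.mutation : move;

struct Res
{
    int handle = -1;
    this(int h) { handle = h; }
    ~this() { /* release handle here */ }
    @disable this(this); // forbid copying, as with Block above
}

void main()
{
    auto a = Res(1);
    // auto b = a;    // error: Res is not copyable (postblit disabled)
    auto b = move(a); // ok: b takes over the resource, a is reset to Res.init
    assert(b.handle == 1);
    assert(a.handle == -1);
}
```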
Re: GC
On Sunday, 30 July 2017 at 09:12:53 UTC, piotrekg2 wrote: I would like to learn more about GC in D. [...] It would be great if you could point me out to articles on this subject. The primary locations to get information are the language specification [1] and the druntime documentation [2]. I also suggest reading the excellent GC series on the official language blog [3]. Additionally, this [5] - despite being mostly about the Go GC - gives a good overview of garbage collection in general. [1] https://dlang.org/spec/garbage.html [2] https://dlang.org/phobos/core_memory.html [3] https://dlang.org/blog/the-gc-series/ [4] http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html [5] https://blog.plan99.net/modern-garbage-collection-911ef4f8bd8e?gi=78635e05a6ac#.6zz5an77a
Re: Problem with dtor behavior
On Friday, 28 July 2017 at 11:39:56 UTC, SrMordred wrote: On Thursday, 27 July 2017 at 20:28:47 UTC, Moritz Maxeiner wrote: On Thursday, 27 July 2017 at 19:19:27 UTC, SrMordred wrote: //D-CODE struct MyStruct{ int id; this(int id){ writeln("ctor"); } ~this(){ writeln("dtor"); } } MyStruct* obj; void push(T)(auto ref T value){ obj[0] = value; } void main() { obj = cast(MyStruct*)malloc( MyStruct.sizeof ); push(MyStruct(1)); } OUTPUT: ctor dtor dtor I didnt expected to see two dtors in D (this destroy any attempt to free resources properly on the destructor). AFAICT it's because opAssign (`obj[0] = value` is an opAssign) creates a temporary struct object (you can see it being destroyed by printing the value of `cast(void*) ` in the destructor). Can someone explain why is this happening and how to achieve the same behavior as c++? Use std.conv.emplace: --- import std.conv : emplace; void push(T)(auto ref T value){ emplace(obj, value); } --- It worked but isnt this odd? Here's the summary: Because D uses default initialization opAssign assumes its destination is an initialized (live) object (in this case located at `obj[0]`) and destructs this object before copying the source over it. Emplace is designed to get around this by assuming that its destination is an uninitialized memory chunk (not a live object). `MyStruct(1)` is a struct literal, not a struct object, i.e. (in contrast to struct objects) it's never destroyed. When passing the struct literal into `push`, a new struct object is created and initialized from the struct literal; this struct object is then passed into `push` instead of the struct literal, used as the source for the opAssign, and then finally destroyed after `push` returns. When assigning the struct literal directly to `obj[0]` no such extra struct object gets created, `obj[0]` still gets destroyed by opAssign and then overwritten by the struct literal. 
W.r.t to `auto ref`: To paraphrase the spec [1], an auto ref parameter is passed by reference if and only if it's an lvalue (i.e. if it has an accessible address). (Struct) literals are not lvalues (they do not have an address) and as such cannot be passed by reference. [1] https://dlang.org/spec/template.html#auto-ref-parameters
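A compact sketch of the emplace-vs-assignment difference described above (the destructor body is a placeholder):

```d
import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

struct S
{
    int id;
    this(int id) { this.id = id; }
    ~this() { /* free resources exactly once here */ }
}

void main()
{
    auto p = cast(S*) malloc(S.sizeof);
    // emplace treats *p as raw memory: no destructor runs on the old
    // (garbage) contents, unlike assignment (`*p = S(1)`), which
    // destructs the assumed-live destination first.
    emplace(p, S(1));
    assert(p.id == 1);
    destroy(*p); // run the destructor manually, once
    free(p);
}
```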
Re: Problem with dtor behavior
On Thursday, 27 July 2017 at 19:19:27 UTC, SrMordred wrote: //D-CODE struct MyStruct{ int id; this(int id){ writeln("ctor"); } ~this(){ writeln("dtor"); } } MyStruct* obj; void push(T)(auto ref T value){ obj[0] = value; } void main() { obj = cast(MyStruct*)malloc( MyStruct.sizeof ); push(MyStruct(1)); } OUTPUT: ctor dtor dtor I didnt expected to see two dtors in D (this destroy any attempt to free resources properly on the destructor). AFAICT it's because opAssign (`obj[0] = value` is an opAssign) creates a temporary struct object (you can see it being destroyed by printing the value of `cast(void*) ` in the destructor). Can someone explain why is this happening and how to achieve the same behavior as c++? Use std.conv.emplace: --- import std.conv : emplace; void push(T)(auto ref T value){ emplace(obj, value); } ---
Re: Prevent destroy() from calling base deconstructor of a derived class?
On Tuesday, 25 July 2017 at 17:50:18 UTC, Dragonson wrote: I need to call only the deconstructor of the derived class I have an instance of, not every deconstructor in the inheritance chain. Putting `override` before the destructor doesn't compile so I'm not sure how to achieve this? Call the finalizer directly: --- import std.stdio; import std.experimental.allocator; import std.experimental.allocator.mallocator; class A { ~this() { writeln("A.~this"); } } class B : A { ~this() { writeln("B.~this"); } } void main() { B b = Mallocator.instance.make!B; b.__dtor(); } --- You're violating how inheritance is designed to work, though, so this will leave the object in an alive state (the finalizer may be called a second time on manual destroy call or GC finalization, at which point the parent class' finalizer will still be called).
Re: Creating a new type, to get strong-ish type checking and restrict usage to certain operations, using struct perhaps
On Saturday, 22 July 2017 at 06:08:59 UTC, Cecil Ward wrote: On Saturday, 22 July 2017 at 03:18:29 UTC, Cecil Ward wrote: [...] I saw David Nadlinger's units package. I'd like to know how the strong typing works. By wrapping in structs and overloading operators [1][2][3][4]. [1] https://github.com/klickverbot/phobos/blob/units/std/units.d#L727 [2] https://github.com/klickverbot/phobos/blob/units/std/units.d#L736 [3] https://github.com/klickverbot/phobos/blob/units/std/units.d#L756 [4] https://github.com/klickverbot/phobos/blob/units/std/units.d#L765
Re: Creating a new type, to get strong-ish type checking and restrict usage to certain operations, using struct perhaps
On Saturday, 22 July 2017 at 03:18:29 UTC, Cecil Ward wrote: I guess part of my question, which I didn't really highlight well enough, is the issue of strong typing. [...] Going back to the original example of packed bcd stored in a uint64_t say, first thing is that I want to ban illegal mixing of arbitrary binary values in ordinary uint64_t types with decimal types, again no assignment, addition, comparisons etc across types at all allowed. And no friendly automagic conversions [...] All of this should be covered by wrapping in structs and overloading the appropriate operators for the types in question [1][2][3], which is why the BCDInteger struct shell I wrote has the "Overload operators" comment. [1] https://dlang.org/spec/operatoroverloading.html#binary [2] https://dlang.org/spec/operatoroverloading.html#assignment [3] https://dlang.org/spec/operatoroverloading.html#op-assign
Re: Creating a new type, to get strong-ish type checking and restrict usage to certain operations, using struct perhaps
On Friday, 21 July 2017 at 18:49:21 UTC, Cecil Ward wrote: I was thinking about how to create a new type that holds packed bcd values, of a choice of widths, that must fit into a uint32_t or a uint64_t (not really long multi-byte objects). I am not at all sure how to do it. I thought about using a templated struct to simply wrap a uint of a chosen width, and perhaps use alias this to make things nicer. That's usually how this is done. Take a look at the new std.experimental.checkedint for inspiration [1]. Here's a shell to start with (fill in the commented parts): --- struct BCDInteger(ubyte bitWidth) if (bitWidth <= 128) { private: enum byteWidth = (bitWidth + 7) / 8; // bit to byte width conversion (or just take byte width as template parameter) ubyte[byteWidth] store; public: // Add constructor(s)/static factory functions // Overload operators // Add conversions to other integer formats // Add alias this for conversion to two's complement format } --- [1] https://github.com/dlang/phobos/blob/v2.075.0/std/experimental/checkedint.d#L213
Re: Adding flags to dub build
On Tuesday, 18 July 2017 at 20:12:13 UTC, Jean-Louis Leroy wrote: On Tuesday, 18 July 2017 at 20:00:48 UTC, Guillaume Piolat wrote: On Tuesday, 18 July 2017 at 19:49:35 UTC, Jean-Louis Leroy wrote: Hi, I want to add a few flags while building with dub. I tried: DFLAGS='-d-version=explain' dub test ... ...but DFLAGS does not seem to be honored. In fact I wouldn't mind adding a buildType to my dub.sdl (but then will it be inherited by the subpackages) but I don't see how to specify flags there either...maybe because it tries to hide variations in compiler switches? J-L Use "dflags" or "lflags" (linker) It doesn't work either... It's a build setting [1], not a command line option; you set it in the package file (dub.json/dub.sdl)... https://code.dlang.org/package-format?lang=json#build-settings
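A minimal dub.sdl sketch for this case; `explain` is the version identifier from the post, and the `platform` attribute spelling is an assumption based on dub's package format:

```sdl
// Compiler-agnostic: dub translates this to -version=... (dmd) or
// -d-version=... (ldc2).
versions "explain"

// Raw flags are also possible, but then they are compiler-specific:
// dflags "-d-version=explain" platform="ldc"
```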
Re: WTF is going on! Corrupt value that is never assigned
On Thursday, 13 July 2017 at 23:30:39 UTC, Moritz Maxeiner wrote: Okay, I'll setup a Windows VM when I have time and check it out (unless someone solves it beforehand). I have been unable to reproduce your reported behaviour with dmd 2.074.1 (same as Adam).
Re: Exception handling
On Friday, 14 July 2017 at 23:09:23 UTC, Stefan Koch wrote: On Friday, 14 July 2017 at 23:02:24 UTC, Moritz Maxeiner wrote: On Friday, 14 July 2017 at 21:20:29 UTC, Jonathan M Davis wrote: Basically, the compiler _never_ looks at the bodies of other functions when determining which attributes apply. It always [...]. I'm well aware of that, but it doesn't mean that it can't be enhanced to do so (i.e. what it can do, not what it does do). "Enhancing" the compiler to do so comes at a very very high cost. That depends on if, how, and when the compiler frontend currently does other (unrelated to exceptions) semantic analysis of function bodies. Which would force the compiler to look at every body it can look at to maybe discover a closed set of exceptions. This would kill fast compile-times! Again, this depends on the exact internals available at the semantic analysis time, but in theory, it should be possible that when a ThrowStatement is encountered, the surrounding scope aggregates the exception's type in its aggregated exception set (ignoring things not inherited from Exception). I don't think this would necessarily kill fast compile times.
Re: Exception handling
On Friday, 14 July 2017 at 21:20:29 UTC, Jonathan M Davis wrote: On Friday, July 14, 2017 9:06:52 PM MDT Moritz Maxeiner via Digitalmars-d- learn wrote: On Friday, 14 July 2017 at 20:22:21 UTC, Ali Çehreli wrote: > Although it's obvious to us that there are only those two > exceptions, the compiler cannot in general know that. Not in general, no, but if the function's body (and the body of all functions it calls) are available, the compiler can aggregate the exception set and indeed perform a more precise nothrow analysis. Except that that's not how it actually works, and it would probably violate the language spec if it did. That the compiler currently does not do so is not relevant to the fact that it can do so, if implemented - AFAICT it wouldn't violate the spec. Basically, the compiler _never_ looks at the bodies of other functions when determining which attributes apply. It always [...]. I'm well aware of that, but it doesn't mean that it can't be enhanced to do so (i.e. what it can do, not what it does do). For it to work otherwise would actually cause a lot of problems with .di files. The compiler would simply skip declarations without bodies, i.e. things for them would be exactly as they are now; that's precisely why I wrote that all bodies of called functions must be available for it to work. If one is missing, it just collapsed to what we have today (unless we introduced optional exception set declaring in function signatures, which is controversial).
Re: Exception handling
On Friday, 14 July 2017 at 20:22:21 UTC, Ali Çehreli wrote: On 07/14/2017 12:36 PM, ANtlord wrote: > Hello! I've tried to use nothrow keyword and I couldn't get a state of > function satisfied the keyword. I have one more method that can throw an > exception; it is called inside nothrow method. Every type of an > exception from the throwable method is handled by the nothow method. > > ubyte throwable_fn(ubyte state) { > if(state < 2) { > return 1; > } else if(state == 3) { > throw new MyException1("qwe"); > } else { > throw new MyException2("asd"); > } > } Although it's obvious to us that there are only those two exceptions, the compiler cannot in general know that. Not in general, no, but if the function's body (and the body of all functions it calls) are available, the compiler can aggregate the exception set and indeed perform a more precise nothrow analysis.
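In the meantime, the usual workaround is to handle the exceptions locally so the signature-based check succeeds. A sketch, with the quoted function reduced to a single exception type:

```d
// Since the compiler only trusts signatures, a caller becomes nothrow
// by catching Exception itself.
ubyte throwable_fn(ubyte state)
{
    if (state < 2)
        return 1;
    throw new Exception("qwe");
}

ubyte handled_fn(ubyte state) nothrow
{
    try
        return throwable_fn(state);
    catch (Exception)
        return 0; // no Exception can escape, so nothrow checks out
}

void main()
{
    assert(handled_fn(1) == 1);
    assert(handled_fn(3) == 0);
}
```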
Re: How to get value of type at CT given only an alias
On Friday, 14 July 2017 at 18:06:49 UTC, Steven Schveighoffer wrote: .init is the default value. I'm not sure you can get the default value of a non-default initializer, My attempts using init didn't work. e.g.: void foo(alias T)() { pragma(msg, T.init); } struct S { int y = 5; void bar() { foo!y; } // prints 0 } See spec [1]: "If applied to a variable or field, it is the default initializer for that variable or field's type." If you want to get at the 5 in a static context, you'd have to use --- S.init.y --- i.e. get the default initializer for the struct and get its y member's value. [1] https://dlang.org/spec/property.html#init
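A compact sketch contrasting the two initializers discussed above:

```d
struct S
{
    int y = 5;
}

void main()
{
    // The field's *type* defaults to int.init, i.e. 0 -- this is what
    // .init on the field (or an alias of it) reports.
    static assert(typeof(S.y).init == 0);
    // The struct's default initializer is what carries the 5:
    static assert(S.init.y == 5);
}
```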
Re: WTF is going on! Corrupt value that is never assigned
On Thursday, 13 July 2017 at 22:53:45 UTC, FoxyBrown wrote: On Thursday, 13 July 2017 at 20:35:19 UTC, Moritz Maxeiner wrote: On Thursday, 13 July 2017 at 18:22:34 UTC, FoxyBrown wrote: The following code is pretty screwed up, even though it doesn't look like it. I have a buf, a simple malloc which hold the results of a win32 call. I am then trying to copy over the data in buf to a D struct. But when copying the strings, the buf location changes, screwing up the copying process. It shouldn't happen, buf never changes value anywhere except the first malloc(which is once). Somehow it is getting changed, but where? [...] The buf value changes when calling cstr2dstr but I've had it with other values to(any function call such as to!string, etc seems to trigger it). [...] - Does this happen every time, or only sometimes? yes, but I've been having this problem and not sure if it was quite as consistent as before or that I just recognized it. - At which loop iteration does it occur? Now it seems to occur after the first iteration, but I've add it happen after a while and in other cases it's worked.. depends on if I use malloc, or a D array, or what. - Which compiler (+version) are you using (with what flags)? Latest DMD official.. whatever default flags exist in debug mode with visual D... why should it matter? [...] Because it's part of the usual "Steps to reproduce" you are supposed to provide so others can verify what you're encountering. - What are the steps to reproduce (i.e. does this e.g. happen with a main that consist of one call to EnumServices) ? Yes, It is basically the first thing I do when I run my program. [...] Okay, I'll setup a Windows VM when I have time and check it out (unless someone solves it beforehand). because D is not interfacing well with C. First, the win32 function does not simply fill in an array but adds additional junk at the end(didn't know that until after a few wasted hours trying to get it to fill in an array properly). 
To be fair, that's neither C's nor D's fault; that's Microsoft providing unintuitive, horrible APIs and doing an amazing job of providing documentation (MSDN) that *appears* to be exhaustive and well written, but misses all these little important details that you actually have to know in order to program correct control logic, driving you to the edge of sanity. Been there, done that. I don't know how any stack corruption could be occurring but that is exactly what it looks like. "Return from function call and "static variables" (with respect to the call) are changed.". But that seems really hard to sell given that it's pretty simple and D should have all those basics well covered. It's always possible for the D compiler to generate wrong code (though I'm not convinced that this is the case here); you should have a look at the generated assembly.
Re: Read from terminal when enter is pressed, but do other stuff in the mean time...
On Thursday, 13 July 2017 at 15:52:57 UTC, Dustmight wrote: How do I read in input from the terminal without sitting there waiting for it? I've got code I want to run while there's no input, and then code I want to act on input when it comes in. How do I do both these things? As Stefan mentions, the single threaded version is basically OS specific (and as others have said there are some wrappers available); the multithreaded solution is fairly simple (have one thread blocked on read(stdin), the other working, synchronize as necessary). If you are interested, on Linux one low level (single threaded) version would essentially consist of:
- check on program startup whether the stdin file descriptor refers to something that (sanely) supports readiness events (tty, sockets, pipes, etc. - *not* regular files) using calls like `isatty`[1] and co.
- if it's a tty, put it into "raw" mode
- get yourself an epoll instance and register stdin with it
- get a file descriptor, e.g. an eventfd, for "there's work to be done now" and register it with the epoll instance
- have the thread wait for readiness events on the epoll instance and deal with stdin being readable and "there's work to be done now" events for their respective fd
- queue work on the eventfd as necessary (e.g. from within the readiness handling of the previous step)
[1] http://man7.org/linux/man-pages/man3/isatty.3.html
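A Linux-only sketch of those steps, using druntime's C bindings (error handling and raw-mode setup omitted for brevity):

```d
import core.sys.linux.epoll;
import core.sys.linux.sys.eventfd;
import core.sys.posix.unistd : read;

void eventLoop()
{
    int epfd = epoll_create1(0);
    int workfd = eventfd(0, 0); // "there's work to be done now" fd

    epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = 0; // stdin (must be a tty/pipe/socket, not a regular file)
    epoll_ctl(epfd, EPOLL_CTL_ADD, 0, &ev);
    ev.data.fd = workfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, workfd, &ev);

    epoll_event[2] ready;
    for (;;)
    {
        int n = epoll_wait(epfd, ready.ptr, cast(int) ready.length, -1);
        foreach (i; 0 .. n)
        {
            if (ready[i].data.fd == 0)
            {
                char[256] buf;
                auto len = read(0, buf.ptr, buf.length);
                // act on the input in buf[0 .. len]
            }
            else
            {
                ulong count;
                read(workfd, &count, count.sizeof);
                // do the queued work; producers enqueue by write()-ing
                // a nonzero ulong to workfd
            }
        }
    }
}
```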
Re: WTF is going on! Corrupt value that is never assigned
On Thursday, 13 July 2017 at 18:22:34 UTC, FoxyBrown wrote: The following code is pretty screwed up, even though it doesn't look like it. I have a buf, a simple malloc which hold the results of a win32 call. I am then trying to copy over the data in buf to a D struct. But when copying the strings, the buf location changes, screwing up the copying process. It shouldn't happen, buf never changes value anywhere except the first malloc(which is once). Somehow it is getting changed, but where? [...] The buf value changes when calling cstr2dstr but I've had it with other values to(any function call such as to!string, etc seems to trigger it). [...] - Does this happen every time, or only sometimes? - At which loop iteration does it occur? - Which compiler (+version) are you using (with what flags)? - What are the steps to reproduce (i.e. does this e.g. happen with a main that consist of one call to EnumServices) ?
Re: Bad file descriptor in File destructor
On Thursday, 13 July 2017 at 10:56:20 UTC, unDEFER wrote: Seems I have found. I must do: try{ File file; try { file = File(path); } catch (Exception exp) { return; } //Some actions with file } catch (ErrnoException) { return; } Well, yes, you can also encompass your entire function body in a try catch, though that makes your code somewhat hard to read[1]. With these many try/catches you may want to take a look at std.exception.collectException[2]. [1] https://en.wikipedia.org/wiki/Spaghetti_code [2] https://dlang.org/phobos/std_exception.html#.collectException
Re: Bad file descriptor in File destructor
On Thursday, 13 July 2017 at 11:15:56 UTC, Moritz Maxeiner wrote: --- ubyte[File.sizeof] _file; ref File file() { return *(cast(File*) &_file[0]); } [create File instance and assign to file] scope (exit) destroy(file); --- Forgot to add the try catch: --- ubyte[File.sizeof] _file; ref File file() { return *(cast(File*) &_file[0]); } [create File instance and assign to file] scope (exit) try destroy(file); catch (ErrnoException) {} --- or just --- scope (exit) destroy(file).collectException; --- (with std.exception : ErrnoException, collectException imported)
Re: Bad file descriptor in File destructor
On Thursday, 13 July 2017 at 10:28:30 UTC, unDEFER wrote: On Thursday, 13 July 2017 at 08:53:24 UTC, Moritz Maxeiner wrote: Where does that `File` come from? If it's std.stdio.File, that one is a struct with internal reference counting, so it shouldn't crash in the above. Could you provide a minimal working (in this case crashing) example? Yes File is std.stdio.File. And I can't provide a minimal crashing example because this code crashes very rarely. I just want to put try/catch and don't know where to do it. Well, if you get an ErrnoException on std.stdio.File.~this you are AFAIK either encountering an OS bug, or you have previously corrupted the file descriptor that File instance wraps around. To be specific, it sounds to me like you're trying to close a file descriptor that's already been closed, i.e. you should fix that instead of trying to work around the consequences of it. Under the assumption, though, that it's an OS bug you're encountering, you can't deal with it with just a try catch in that function, because a (stack allocated) struct's destructor is always called when it goes out of scope. I see essentially two workarounds: - Use two functions foo and bar, where bar has `file` on its stack, and `foo` calls `bar` and catches the destructor exception via a try catch block around the call to `bar` - Hide the `file` from the automatic out-of-scope destruction by using another type for storage Personally I'd prefer the second variant; it could look like this: --- ubyte[File.sizeof] _file; ref File file() { return *(cast(File*) &_file[0]); } [create File instance and assign to file] scope (exit) destroy(file); ---
Re: Bad file descriptor in File destructor
On Thursday, 13 July 2017 at 08:38:52 UTC, unDEFER wrote: Hello! I have the code like this: File file; try { file = File(path); } catch (Exception exp) { return; } ... try { } Where does that `File` come from? If it's std.stdio.File, that one is a struct with internal reference counting, so it shouldn't crash in the above. Could you provide a minimal working (in this case crashing) example? If the `File` above is not std.stdio.File, but some custom type: Be aware that structs have deterministic lifetimes, so `file`'s destructor will be called even when you return in the catch clause (on the default constructed `file`), so `File`'s destructor must check the field carrying the file descriptor for being valid; I advise setting such fields to be default constructed to some invalid value (e.g. `-1` in case of file descriptors).
Re: Why do array literals default to object.Object[]?
On Wednesday, 12 July 2017 at 05:24:49 UTC, Brandon Buck wrote: On Wednesday, 12 July 2017 at 02:06:41 UTC, Steven Schveighoffer wrote: I'm sure there's a bug filed somewhere on this... Is this bug worthy? I can search for one and comment and/or create one if I can't find one. It's at best very unintuitive behaviour (I had expected the inference to go up from the class type to Object, not down from Object), so I'd say yes.
Re: Application settings
On Friday, 7 July 2017 at 19:40:35 UTC, FoxyBrown wrote: What's the "best" way to do this? I want something I can simply load at startup in a convenient and easy way then save when necessary(possibly be efficient at it, but probably doesn't matter). Simply json an array and save and load it, or is there a better way? "best" always depends on your specific use case. I use json files via asdf [1] [1] https://github.com/tamediadigital/asdf
Re: "shared" woes: shared instances of anonymous classes
On Friday, 7 July 2017 at 09:14:56 UTC, Arafel wrote: [...] Is there any way to create a shared instance of an anonymous class? [...] If somebody knows how this works / is supposed to work, I'd be thankful! [1]: https://dpaste.dzfl.pl/ce2ba93111a0 Yes, but it's roundabout: you have to instantiate the class as unshared and then cast it to `shared` [1]. If you look at the grammar [2][3] you'll see why: NewAnonClassExpression does not support specifying the storage class for the new instance, as opposed to NewExpression. Do note, though, that `shared` is pretty much all rough edges with (virtually) no joy at present and IIRC from DConf2017 it's in the queue for an overhaul. [1] https://dpaste.dzfl.pl/35a9a8a1d1f7 [2] https://dlang.org/spec/grammar.html#NewExpression [3] https://dlang.org/spec/grammar.html#NewAnonClassExpression
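A self-contained sketch of the workaround (the interface here is an assumption; the original code is only on dpaste):

```d
interface Task
{
    void run() shared;
}

void main()
{
    // NewAnonClassExpression can't take a `shared` storage class, so
    // build the instance unshared first...
    auto local = new class Task {
        void run() shared { /* ... */ }
    };
    // ...and cast it to shared afterwards.
    shared Task t = cast(shared) local;
    t.run();
}
```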
Re: Bulk allocation and partial deallocation for tree data structures.
On Tuesday, 4 July 2017 at 03:13:14 UTC, Filip Bystricky wrote: Oh and I forgot to mention: another use-case for this would be for arrays. For manually managed arrays like std.container.array, it would make it possible to transfer ownership of individual objects from the array back to the program after the array goes out of scope. Not sure I understand you here: If an instance of such a manual array implementation goes out of scope it must destruct (if they are objects and not primitives) and deallocate its elements. There is no ownership transfer going on here (and who would be the target, anyway?). For gc slices, it could enable some gc implementations to deallocate parts of an array even if there are still references pointing inside that array. I'm fairly certain the necessary bookkeeping logic for partial deallocations will outweigh any gain from it. In the case of such gc slices, I would rather just memcpy to a new, smaller block and update pointers to it (-> a moving GC).