Re: How does this template work?
On 2019-06-17 20:53:28 +, aliak said: Less typing for one. Otherwise you'd have to write: auto observer = observerObject!int.observerObject(TestObserver()); Since code is read many times more often than it is written, I will never understand why the syntax is polluted to save some keystrokes, making it much harder to read for others who don't have 800 pages of special cases in their heads. One explicit alias or so would be OK too for cases where such a declaration is needed more than once. But anyway, thanks. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
Re: Specifying executable names in DUB
On Tuesday, 18 June 2019 at 02:13:46 UTC, Dave wrote: Greetings, This might be totally obvious, but I can't seem to figure out how to specify an executable's name&path to be different for each build types in my DUB package. For example, if my project is named "dlang_test", I might want something like so: dub build --build=debug yields either bin/dlang_test-debug.exe or possibly bin/debug/dlang_test.exe if I did dub build --build=release I might get either bin/dlang_test-release.exe or bin/release/dlang_test.exe When I read the section on build types (https://dub.pm/package-format-json.html#build-types), it specifically mentions that a "buildTypes" entry can override the build settings, but *not* "targetName" and "targetPath", which is what I think I want here. Is there any reason why this is disallowed? Or is there a more canonical way of achieving this with DUB? What I described above is often found in other build tools, such as CMake and Visual Studio, which makes me think I'm missing something obvious here. Thanks! You can specify the names by adding a configuration "debug" and a configuration "unittest". Dub uses the first configuration in the list as the default for the command "dub build", while it uses the configuration "unittest" for the command "dub test". Kind regards Andre
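Andre's suggestion could be sketched in dub.json roughly like this (the configuration names and target names are illustrative, not taken from the original project):

```json
{
    "name": "dlang_test",
    "targetPath": "bin",
    "configurations": [
        { "name": "debug",    "targetName": "dlang_test-debug" },
        { "name": "release",  "targetName": "dlang_test-release" },
        { "name": "unittest", "targetName": "dlang_test-test" }
    ]
}
```

Note that configurations are selected with their own switch, e.g. `dub build --config=release`, independently of the `--build` type, so this gives per-configuration names rather than strictly per-build-type names.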
Re: Specifying executable names in DUB
On Monday, June 17, 2019 8:13:46 PM MDT Dave via Digitalmars-d-learn wrote: > Greetings, > > This might be totally obvious, but I can't seem to figure out how > to specify an executable's name&path to be different for each > build types in my DUB package. For example, if my project is > named "dlang_test", I might want something like so: > > dub build --build=debug > > yields either > > bin/dlang_test-debug.exe > > or possibly > > bin/debug/dlang_test.exe > > if I did > > dub build --build=release > > I might get either > > bin/dlang_test-release.exe > > or > > bin/release/dlang_test.exe > > When I read the section on build types > (https://dub.pm/package-format-json.html#build-types), it > specifically mentions that a "buildTypes" entry can override the > build settings, but *not* "targetName" and "targetPath", which is > what I think I want here. > > Is there any reason why this is disallowed? Or is there a more > canonical way of achieving this with DUB? What I described above > is often found in other build tools, such as CMake and Visual > Studio, which makes me think I'm missing something obvious here. As I understand it, dub does not provide the ability to alter the target name per build type. I don't know why, though if I had to guess, I would say that it probably has to do with how it already renames the target based on what is being generated and what the platform is (e.g. adding lib to the front and either .a or .so to the end on *nix systems when a library is being generated). However, it's probably possible to use the postBuildCommands setting to run cp or mv or whatever to get the target name you want for that particular configuration. - Jonathan M Davis
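A minimal sketch of the postBuildCommands approach Jonathan describes, assuming a POSIX system and assuming dub exposes the build type to build commands as the `DUB_BUILD_TYPE` environment variable (check the dub package-format documentation for the exact variables your dub version provides):

```json
{
    "name": "dlang_test",
    "targetPath": "bin",
    "postBuildCommands-posix": [
        "mv bin/dlang_test bin/dlang_test-$DUB_BUILD_TYPE"
    ]
}
```

The `-posix` suffix restricts the command to POSIX platforms; a matching `postBuildCommands-windows` entry would be needed for the `.exe` case.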
Specifying executable names in DUB
Greetings, This might be totally obvious, but I can't seem to figure out how to specify an executable's name & path to be different for each build type in my DUB package. For example, if my project is named "dlang_test", I might want something like this: dub build --build=debug yields either bin/dlang_test-debug.exe or possibly bin/debug/dlang_test.exe If I did dub build --build=release I might get either bin/dlang_test-release.exe or bin/release/dlang_test.exe When I read the section on build types (https://dub.pm/package-format-json.html#build-types), it specifically mentions that a "buildTypes" entry can override the build settings, but *not* "targetName" and "targetPath", which is what I think I want here. Is there any reason why this is disallowed? Or is there a more canonical way of achieving this with DUB? What I described above is often found in other build tools, such as CMake and Visual Studio, which makes me think I'm missing something obvious here. Thanks!
Re: Range violation error when reading from a file
On Monday, 17 June 2019 at 03:46:11 UTC, Norm wrote: On Monday, 17 June 2019 at 00:22:23 UTC, Samir wrote: Any suggestions on how to rectify? You could change the IF to `if(line.length > 0 && line[0] == '>')` Thanks, Norm. That seemed to do the trick and fixed the error. On Monday, 17 June 2019 at 11:25:01 UTC, aliak wrote: On Monday, 17 June 2019 at 00:22:23 UTC, Samir wrote: HOWEVER, the output is interesting. There IS a blank line between the last line and the prompt: That's because you're using write*ln*. So even though line is empty, you still output a new line. Curious. I am going to have to think about that for a bit as I don't quite understand. Any suggestions on how to rectify? You can do: if (!line.length) { continue; } Inside your while loop after the call to strip. Thanks, aliak! I think this is similar to Norm's suggestion in that I need to check for a non-zero line length before continuing. What's funny now is that I get two blank lines after the output and before the prompt: $ ./readfile line 1 line 2 line 3 line 4 line 5 $ Ultimately, I think the original suggestions by you and lithium iodate about there being an empty line at the end is probably the culprit. I will have to investigate that further. Thank you to everyone that chimed in to help me out!
Re: How does this template work?
On Monday, 17 June 2019 at 18:25:24 UTC, Robert M. Münch wrote: On 2019-06-16 15:14:37 +, rikki cattermole said: observerObject is an eponymous template. What this means (in essence) is the symbol inside the template block == template block. Hmm... ok. Is there any reason to have these "eponymous templates"? I don't see any benefit... Less typing for one. Otherwise you'd have to write: auto observer = observerObject!int.observerObject(TestObserver());
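The difference can be seen in a small standalone sketch (toy names, not the observable code from the thread):

```d
import std.stdio : writeln;

// Non-eponymous: the member's name differs from the template's,
// so you must reach inside the instantiation to get at it.
template Doubler(T)
{
    T twice(T x) { return x * 2; }
}

// Eponymous: the member shares the template's name, so the
// instantiation *is* the member.
template twice(T)
{
    T twice(T x) { return x * 2; }
}

void main()
{
    auto a = Doubler!int.twice(21); // explicit member access required
    auto b = twice!int(21);         // eponymous shortcut
    writeln(a, " ", b);             // 42 42
}
```

This is exactly why `observerObject!int` can be called directly instead of `observerObject!int.observerObject(...)`.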
Re: Just another question about memory management in d from a newbie
On Monday, 17 June 2019 at 20:26:28 UTC, H. S. Teoh wrote: On Mon, Jun 17, 2019 at 07:53:52PM +, Thomas via Digitalmars-d-learn wrote: [...] [...] If x were a heap-allocated object, then your concerns would be true: it would be allocated once every iteration (and also add to the garbage that the GC will have to collect later). [...] Thank you for your exact explanation of how the compiler works inside. That clears up some questions that have bothered me. Thomas
Re: Just another question about memory management in d from a newbie
On Mon, Jun 17, 2019 at 07:53:52PM +, Thomas via Digitalmars-d-learn wrote: [...] > int main() > { > foreach(i;0 .. 1) > { > int x; > // do something with x > } > return 0; > } > > Do I understand it right that the variable x will be created 1 > times and destroyed at the end of the scope in each loop ? Or will it > be 1 overwritten by creation ? If x were a heap-allocated object, then your concerns would be true: it would be allocated once every iteration (and also add to the garbage that the GC will have to collect later). However, in this case x is an int, which is a value type. This means it will be allocated on the stack, and allocation is as trivial as bumping the stack pointer (practically zero cost), and deallocation is as simple as bumping the stack pointer the other way. In fact, it doesn't even have to bump the stack pointer between iterations, since it's obvious from code analysis that x will always be allocated on the same position of the stack relative to the function's call frame, so in the generated machine code it can be as simple as just using that memory location directly, and reusing the same location between iterations. > I mean does it not cost the CPU some time to allocate (I know were > talking here of nsec) but work is work. As far I know from my school > days in assembler, allocation of memory is one of the most expensive > instructions on the cpu. Activate memory block, find some free place, > reserve, return pointer to caller, and so on.. That's not entirely accurate. It depends on how the allocation is done. Allocation on the stack is extremely cheap, and consists literally of adding some number (the size of the allocation) to a register (the stack pointer) -- you cannot get much simpler than that. Also, memory allocation on the heap is not "one of the most expensive instructions". The reason it's expensive is because it's not a single instruction, but an entire subroutine of instructions to manage the heap.
There is no CPU I know of that has a built-in instruction for heap allocation. > In my opinion this version should perform a little better: > > int main() > { > int x; > foreach(i;0 .. 1) > { > x = 0; // reinitialize > // do something with x > } > return 0; > } > > Or do I miss something and there is an optimization by the compiler to > avoid recreating/destroying the variable x 1 times ? [...] What you wrote here is exactly how the compiler will emit the machine code given your first code example above. Basically all modern optimizing compilers will implement it this way, and you never have to worry about stack allocation being slow. The only time you will have a performance problem is if x is a heap-allocated object. In *that* case you will want to look into reusing previous instances of the object between iterations. But if x is a value type (int, or struct, etc.), you don't have to worry about this at all. The first version of the code you have above is the preferred one, because it makes the code clearer. On a related note, in modern programming languages generally you should be more concerned about the higher-level meaning of the code than worry about low-level details like how instructions are generated, because generally speaking the machine code generated by the compiler is highly transformed from the original source code, and generally will not have a 1-to-1 mapping to the higher-level logical meaning of the code, i.e., the first version of the code *logically* allocates x at the beginning of each iteration of the loop and deallocates it at the end, but the actual generated machine code elides all of that because the compiler's optimizer can easily see that it's a stack allocation that always ends up in the same place, so none of the allocation/deallocation actually needs to be represented as-is in the machine code. 
On another note, for performance-related concerns the general advice these days is to write something in the most straightforward, logical way first, and then if the performance is not good enough, **use a profiler** to identify where the hotspots are, and optimize those. Trying to optimize before you have real-world, profiler data that your code is a hotspot is premature optimization, widely regarded as evil because it usually leads to overly-convoluted code that's hard to understand and maintain, and often actually *slower* because the way you expressed the meaning of the code has become so obfuscated that the optimizer can't figure out what you're actually trying to do, so it gives up and doesn't even try to apply any optimizations that may have benefitted the code. (Of course, the above is with the caveat that writing "straightforward" code only holds up to a certain extent; when it comes to algorithms, for example, no compiler is going to change an O(n^2) algorithm into an O(log n) one; you have to select the appropriate algorithm in the first place.)
Re: Just another question about memory management in d from a newbie
First, thank you for your fast reply! On Monday, 17 June 2019 at 20:00:34 UTC, Adam D. Ruppe wrote: No, the compiler will generate code to reuse the same thing each loop. Does this also work on complex types like structs?
Re: Just another question about memory management in d from a newbie
On Monday, 17 June 2019 at 19:53:52 UTC, Thomas wrote: Do I understand it right that the variable x will be created 1 times and destroyed at the end of the scope in each loop ? Or will it be 1 overwritten by creation ? No, the compiler will generate code to reuse the same thing each loop. In my opinion this version should perform a little better: That's actually slightly *less* efficient (in some situations) because it doesn't allow the compiler to reuse the memory for `x` after the loop has exited. But in both cases, the compiler will just set aside a little bit of local space (or just a cpu scratch register) and use it repeatedly. It is totally "free".
Just another question about memory management in d from a newbie
Hello! First my background: C++ and Java ages ago. Since then only PL/SQL. Now I'm learning D just for fun and personal education from time to time, and I'm very pleased about it :-) Now I have to ask a question here, because I could not find a corresponding answer for it. Or I am unable to find it :-) I was wondering about the memory system in D (and other C-like languages), specifically about the handling of memory allocation overhead. Just an example that I have seen many times: int main() { foreach(i;0 .. 1) { int x; // do something with x } return 0; } Do I understand it right that the variable x will be created 1 times and destroyed at the end of the scope in each loop ? Or will it be 1 overwritten by creation ? I mean, does it not cost the CPU some time to allocate (I know we're talking here of nsec)? But work is work. As far as I know from my school days in assembler, allocation of memory is one of the most expensive instructions on the CPU. Activate memory block, find some free place, reserve, return pointer to caller, and so on.. In my opinion this version should perform a little better: int main() { int x; foreach(i;0 .. 1) { x = 0; // reinitialize // do something with x } return 0; } Or do I miss something, and is there an optimization by the compiler to avoid recreating/destroying the variable x 1 times ? I know that version 1 is more secure because there will be no stale value left over after each loop that we could stumble onto, but that is not my question here. And I know that we're talking about really small CPU usage compared to what an app should do. But when performance matters, as in games or GUIs (and there are a lot of examples like version 1 out there), then I have to ask myself if it is not just a waste of CPU time ? Or is it a code-styling thing ? Thank you for your time! Greetings from Austria Thomas
Re: How does this template work?
On 2019-06-16 15:14:37 +, rikki cattermole said: observerObject is an eponymous template. What this means (in essence) is the symbol inside the template block == template block. Hmm... ok. Is there any reason to have these "eponymous templates"? I don't see any benefit... -- Robert M. Münch http://www.saphirion.com smarter | better | faster
Re: Where can find dmd back-end source code.
On Monday, 17 June 2019 at 16:41:00 UTC, lili wrote: html.c Extracts D source code from .html files but source can't find html.c file. That file (and its associated feature) was removed a long time ago; the wiki is out of date. But the rest of the x86 code generation should all be there in that backend folder.
Re: Where can find dmd back-end source code.
You found it. https://github.com/dlang/dmd/tree/master/src/dmd/backend Generates x86 and x86_64.
Where can find dmd back-end source code.
Hi guys: I cloned the dmd source code from GitHub, but I can't find the backend code. There is a backend dir in src, but I cannot find the x86 native code generator. And the dmd wiki page just says, under "Back end": File: html.c, Function: Extracts D source code from .html files. But I can't find an html.c file in the source.
Re: CT/RT annoyance
On Monday, 17 June 2019 at 05:04:50 UTC, Bart wrote: Consider void foo(string A = "")(string B = "") { static if (A != "") do(A); else do(B); } [...] I see the annoyance, but D clearly separates what is CT and what is RT, so such a change would require a DIP. I don't even know if it is technically possible. As for what you want to achieve, note that import() is more commonly used to include code, import lookup tables, and things of that kind. It must not be seen as a CT way of reading files.
Re: Range violation error when reading from a file
On Monday, 17 June 2019 at 00:22:23 UTC, Samir wrote: Also, if I run the program below with the same file, I don't get any range violation errors: Ya, writeln will not access individual elements of a range if there aren't any. So no violations occur. HOWEVER, the output is interesting. There IS a blank line between the last line and the prompt: That's because you're using write*ln*. So even though line is empty, you still output a new line. Any suggestions on how to rectify? You can do: if (!line.length) { continue; } Inside your while loop after the call to strip.
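Putting both fixes together, a minimal sketch of the loop (the file name and the '>'-header handling are assumptions based on the thread, not the poster's actual code):

```d
import std.stdio : File, writeln;
import std.string : strip;

void main()
{
    auto file = File("input.txt", "r"); // stand-in file name
    foreach (rawLine; file.byLine)
    {
        auto line = rawLine.strip;
        if (!line.length)
            continue;          // skip blank lines before indexing
        if (line[0] == '>')    // safe now: line is known non-empty
            continue;          // skip header lines
        writeln(line);
    }
}
```

The length check must come before `line[0]`, otherwise an empty trailing line triggers the original range violation.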
Regex driving me nuts
Error: static variable `thompsonFactory` cannot be read at compile time. Trying to regex an import file. Also, I have a group (...)* and it always fails or matches only one, but if I do (...)(...)(...) it matches all 3 (fails if more or less, of course; ... is the regex). Also, when I ignore a group with (?:Text) I still get Text as a matched group ;/ But all this is irrelevant if I can't get the code to work at compile time. I tried ctRegex // A workaround for R-T enum re = regex(...) template defaultFactory(Char) { @property MatcherFactory!Char defaultFactory(const ref Regex!Char re) @safe { import std.regex.internal.backtracking : BacktrackingMatcher; import std.regex.internal.thompson : ThompsonMatcher; import std.algorithm.searching : canFind; static MatcherFactory!Char backtrackingFactory; static MatcherFactory!Char thompsonFactory; if (re.backrefed.canFind!"a != 0") { if (backtrackingFactory is null) backtrackingFactory = new RuntimeFactory!(BacktrackingMatcher, Char); return backtrackingFactory; } else { if (thompsonFactory is null) thompsonFactory = new RuntimeFactory!(ThompsonMatcher, Char); return thompsonFactory; } } } The workaround seems to workaround working.
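For what it's worth, the repeated-group symptom is standard regex behavior: a repeated capture group like (...)* only reports its last repetition, so the usual fix is to match the repeated unit once and iterate with matchAll. A hedged sketch using an illustrative pattern (not the poster's actual regex):

```d
import std.regex : ctRegex, matchAll;
import std.stdio : writeln;

void main()
{
    // ctRegex builds the engine at compile time, sidestepping the
    // "static variable cannot be read at compile time" error that a
    // runtime regex() evaluated in a CTFE context can trigger.
    auto re = ctRegex!`(\w+)=(\w+);`;

    // Instead of one big (...)* group (which keeps only the last
    // repetition), iterate over every occurrence.
    foreach (m; matchAll("a=1;b=2;c=3;", re))
        writeln(m[1], " -> ", m[2]);

    // Note: m[0] is always the whole match, not a capture, which can
    // make a (?:...) group look as if it were being "captured".
}
```

If the pattern itself must be a compile-time constant, `enum pattern = import("rules.txt");` (a hypothetical file name) can feed ctRegex, since string imports are resolved at compile time.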
System requirements for compiling/linking big D codebases
Hi, For the sake of people habituated to compiling compilers from source, is it possible to add to the dmd_linux page the 64-bit hardware resource requirements for compiling DMD/Phobos from source? Ideally, with the defaults and also with 2.086's -lowmem switch? Maybe an evergreen link to those numbers as seen on the CI infrastructure when building the release? I hope it is not out of the way for users to compile DMD from source, since I guess people do it for Rust, OCaml, etc. all the time (and compiling GCC in stages isn't rare either). -lowmem hopefully will make the compiler's compilation memory usage as palatable as other compilers'. I have had to piece this info together incompletely. What I have seen is that compiling takes maybe 1G of disk space, and linking DMD doesn't even take 1G of RAM, whereas 3+G of available RAM isn't enough for linking Phobos. Short of waiting for Gentoo's dlang to move on from 2.085 so that I have recourse to -lowmem, I would have to chuck more RAM into an older system that actually compiles GCC, libstdc++, OCaml, etc. just fine. - Cheers.
Re: Blog Post #0043 - File Dialog IX - Custom Dialogs (2 of 3)
On Tuesday, 11 June 2019 at 09:06:10 UTC, Ron Tarrant wrote: This is the second in a series (Custom Dialogs) within a series (Dialogs) and deals with the action area. It's available here: http://gtkdcoding.com/2019/06/11/0043-custom-dialog-ii.html Aw man, your content have. "© Copyright 2019 Ron Tarrant"