Re: range algorithms on container class
On 09/01/2020 6:28 PM, Alex Burton wrote:
> I am writing a specialised container class, and want to make it work like a normal array with range functions. After some struggle, I looked at the code for std.container.array and saw this comment:
>
>     When using `Array` with range-based functions like those in
>     `std.algorithm`, `Array` must be sliced to get a range (for example,
>     use `array[].map!` instead of `array.map!`). The container itself
>     is not a range.
>
> I had thought the compiler would make a generic random access range that works with the built-in array for anything else with a similar interface, but it doesn't. Does that mean it is not possible to have a library container implementation that works nicely with range algorithms? Do you have to slice it like the comment suggests?

Slicing via the opSlice operator overload is a convention, not a requirement. The reason it is necessary is that you do not want to be calling input range methods directly on a container. If you did, the container itself would carry the state of the input range, with no way to reset it to the full contents. The trick is to store that state in a Voldemort type returned by the opSlice operator overload.

If you want it to behave like an actual array and not like the Array container in Phobos, you will have to use a lot more operator overloads (which you should be doing for a list). These go on the container itself. One of them is opApply, which allows iterating over the container.
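A minimal sketch of the opSlice convention described above (the container name and its fields are illustrative, not from the original post):

```d
import std.algorithm : map;
import std.array : array;

struct MyContainer
{
    private int[] data;

    void put(int x) { data ~= x; }

    // opSlice returns a separate range struct; iterating the range
    // consumes the range's state, not the container's.
    auto opSlice()
    {
        // Voldemort type: declared inside the function, so its name
        // cannot be referred to from outside.
        struct Range
        {
            int[] items;
            bool empty() const { return items.length == 0; }
            int front() const { return items[0]; }
            void popFront() { items = items[1 .. $]; }
        }
        return Range(data);
    }
}

void main()
{
    MyContainer c;
    c.put(1); c.put(2); c.put(3);
    // c[].map!... works; c.map!... would not compile,
    // because the container itself is not a range.
    auto doubled = c[].map!(x => x * 2).array;
    assert(doubled == [2, 4, 6]);
}
```

Iterating `c[]` twice starts from the full contents both times, because each call to opSlice hands out a fresh range over the container's data.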
range algorithms on container class
I am writing a specialised container class, and want to make it work like a normal array with range functions. After some struggle, I looked at the code for std.container.array and saw this comment:

    When using `Array` with range-based functions like those in
    `std.algorithm`, `Array` must be sliced to get a range (for example,
    use `array[].map!` instead of `array.map!`). The container itself
    is not a range.

I had thought the compiler would make a generic random access range that works with the built-in array for anything else with a similar interface, but it doesn't. Does that mean it is not possible to have a library container implementation that works nicely with range algorithms? Do you have to slice it like the comment suggests?
Re: Calling D code from C
On Wednesday, 8 January 2020 at 22:00:03 UTC, H. S. Teoh wrote:
> On Wed, Jan 08, 2020 at 09:42:03PM +0000, Ferhat Kurtulmuş via Digitalmars-d-learn wrote:
> [...]
> > What is going on here? The original post date appears to be from 2005 :D.
> [...]
> Haha yeah, I'm not sure why Stefan replied to a post dating from 2005.
>
> T

Google is good at resurrection.
Re: @disable("reason")
On Wednesday, 8 January 2020 at 07:03:26 UTC, Jonathan M Davis wrote:
> On Tuesday, January 7, 2020 5:23:48 PM MST Marcel via Digitalmars-d-learn wrote:
> [...]
> In terms of an error message? Not really. You can put a pragma(msg, ""); in there, but that would always print, not just when someone tried to use it. I'd suggest that you just put the information in the documentation. If they write code that doesn't work because of you using [...]

Oops, it appears I didn't actually understand this part of the language. I'm coming from C++ and I assumed both languages did this the same way, but thankfully I found a workaround that doesn't require doing what I wrote in the initial post and is better in general. I also agree with Adam: it could make for a nice little feature in D.
Re: Calling D code from C
On Wed, Jan 08, 2020 at 09:42:03PM +0000, Ferhat Kurtulmuş via Digitalmars-d-learn wrote:
[...]
> What is going on here? The original post date appears to be from 2005 :D.
[...]

Haha yeah, I'm not sure why Stefan replied to a post dating from 2005.

T

--
Just because you can, doesn't mean you should.
Re: Calling D code from C
On Wednesday, 8 January 2020 at 19:05:29 UTC, H. S. Teoh wrote:
> On Wed, Jan 08, 2020 at 06:12:01PM +0000, Stefan via Digitalmars-d-learn wrote:
> [...]
> > But you can easily do the initialization in your D code, by calling rt_init() and rt_term(), like this:
> [...]
> >     extern(C) int rt_init();
> >     extern(C) int rt_term();
> >     extern(C) __gshared bool rt_initialized = false;
> [...]
> >     if(!rt_initialized)
> >         rt_init();
> >     rt_initialized = true;
>
> I believe the rt_initialized flag is unnecessary, because rt_init/rt_term use an atomic counter to keep track of how many times they were called. So you just have to call rt_init and make sure you have a matching call to rt_term, and it should Just Work(tm).
>
> T

What is going on here? The original post date appears to be from 2005 :D. And a reminder that druntime must be linked along with Phobos when it is a library.
Re: linker errors when reducing phobos with dustmite
On Mon, Jan 06, 2020 at 10:27:09AM +0000, berni44 via Digitalmars-d-learn wrote:
> As mentioned on the dustmite website [1] I copied the folder std from Phobos into a separate folder and renamed it to mystd. Then I changed all occurrences of std to mystd in all files.
>
> That works most of the time, but sometimes I get hundreds of linker errors I do not understand:
>
> $> dmd -main -unittest -g -run mystd/variant.d
>
> /usr/bin/ld: variant.o:(.data.rel.ro+0x68): undefined reference to `_D5mystd4meta12__ModuleInfoZ'
> /usr/bin/ld: variant.o:(.data.rel.ro+0x70): undefined reference to `_D5mystd6traits12__ModuleInfoZ'
> /usr/bin/ld: variant.o:(.data.rel.ro+0x78): undefined reference to `_D5mystd8typecons12__ModuleInfoZ'
> ...
>
> Does anyone know what the problem is here and how to get around it?

Using dmd -i should solve the problem. The problem is that you didn't specify some of the imported modules on the command line, and there was a reference to something in those modules other than a template (templates are instantiated in the importing module, so they tend to work in spite of this omission). When you don't specify -i, dmd does not pull in imported files, because it doesn't know whether you're using separate compilation, in which case the missing symbols would be resolved at link time and emitting them again would cause duplicate-symbol linker errors.

T

--
People tell me that I'm skeptical, but I don't believe them.
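Concretely, the suggested fix is just adding -i to the failing invocation from the quoted post:

```shell
# -i makes dmd also compile the modules that mystd/variant.d imports
# (mystd.meta, mystd.traits, mystd.typecons, ...), so their
# __ModuleInfo symbols are emitted instead of being left unresolved.
dmd -i -main -unittest -g -run mystd/variant.d
```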
Re: Practical parallelization of D compilation
On Wed, Jan 08, 2020 at 09:13:18AM +0000, Chris Katko via Digitalmars-d-learn wrote:
> On Wednesday, 8 January 2020 at 06:51:57 UTC, H. S. Teoh wrote: [...]
> > Generally, the recommendation is to separately compile each package.
> [...]
> What's the downsides / difficulties / "hoops to jump through" penalty for putting code into modules instead of one massive project?

Are you talking about *modules* or *packages*? Generally, the advice is to split your code into modules once it becomes clear that certain bits of code ought not to know about the implementation details of other bits of code in the same file. Some people insist that the cut-off is somewhere below 1000 LOC, though personally I'm not so much interested in arbitrary limits as in how cohesive/self-contained the code is.

The difference between modules and packages is a bit more blurry, since you can create a package.d to make a package essentially behave like a module. But it just so happens that D's requirement that package containment structure must match directory structure maps rather well onto separate compilation: just separately compile each directory and link them at the end.

> Is it just a little extra handwriting/boilerplate, or is there a performance impact talking to other modules vs keeping it all in one?

What performance impact are we talking about here, compile-time or runtime? Compile time might increase slightly because of the need for the compiler to open files and look up directories, but it should be minimal. There is no runtime penalty. I see it mainly as a tool for code organization and management; it has little bearing on the actual machine code generated at the end.

T

--
It always amuses me that Windows has a Safe Mode during bootup. Does that mean that Windows is normally unsafe?
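The package.d arrangement mentioned above can be sketched like this (the package name mylib and its contents are made up for illustration; this is a multi-file layout, not a single compilable file):

```d
// Directory layout (D requires package structure to match directories):
//
//   src/main.d
//   src/mylib/package.d
//   src/mylib/util.d

// --- src/mylib/util.d ---
module mylib.util;
int twice(int x) { return 2 * x; }

// --- src/mylib/package.d ---
// Lets users write `import mylib;` and treat the whole package
// as if it were a single module.
module mylib;
public import mylib.util;

// --- src/main.d ---
import mylib;
void main() { assert(twice(21) == 42); }
```

Separate compilation then follows the directory structure: compile src/mylib/*.d into one object file, and link it together with main.d at the end.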
Re: Practical parallelization of D compilation
On Wed, Jan 08, 2020 at 06:56:20PM +0000, Guillaume Lathoud via Digitalmars-d-learn wrote:
> Thanks to all for the answers.
>
> The package direction is precisely what I am trying to avoid. It is still not obvious to me how much work (how many trials) would be needed to decide on granularity, as well as how much work to automate the decision to recompile a package or not; and finally, when a given package needs to be recompiled for only one or a few files changed, most likely one would WAIT (much) more than with the current solution - and within a single process.

This is the problem that build systems set out to solve. So existing tools like makefiles (ugh) would work (even though I dislike make for various reasons -- for simple projects it may well suffice); you just have to write a bunch of rules for compiling .d files into object files, then link them. Personally I prefer using SCons (https://scons.org/), but there are plenty of similar build systems out there, like tup, Meson, CMake, etc. There are also fancier offerings that double as package managers, like Gradle, but from the sounds of it you're not interested in that just yet.

As for using packages or not: I do have some projects where I compile different subsets of .d files separately, for various reasons. Sometimes it's because I'm producing multiple executables that share a subset of source files. Other times it's for performance reasons, or more specifically, the fact that Vibe.d Diet templates are an absolute *bear* to compile, so I write my SCons rules such that they are compiled separately, and everything else is compiled apart from them; if no Diet templates change, I can cut quite a lot off my compile times. So it *is* certainly possible; you just have to be comfortable with getting your hands dirty and writing a few build scripts every now and then. IMO, the time investment is more than worth the reduction in compilation waiting times.
Furthermore, sometimes in medium- to largish projects I find myself separately compiling a single module plus its subtree of imports (via dmd -i), usually when I'm developing a new module and want to run unittests, or when there's a problem with a particular module and I want to be able to run unittests or test code (in the form of temporary unittest blocks) without waiting for the entire program to compile. In such cases, I do:

    dmd -i -unittest -main -run mymod.d

and let dmd -i figure out which subset of source files to pull in. It's convenient, and cuts down quite a bit on waiting times because I don't have to recompile the entire program each iteration.

[...]
> having a one-liner solution (no install, no config file) delivered along with the compiler, or as a compiler option, could fill a sweet spot between a toy app (1 or 2 source files), and a more complex architecture relying on a package manager. This might remove a few obstacles to D usage. This is of course purely an opinion.

I find myself in the same place -- my projects are generally more than 1 or 2 files, but not so many that I need a package manager (plus I also dislike package managers for various reasons). I find that a modern, non-heavy build system like SCons or tup fills that need very well. And to be frank, a 200+ file project is *far* larger than any of mine, and surely worth the comparatively small effort of spending a couple of hours to write a build script (makefile, SConscript, what-have-you) for?

T

--
ASCII stupid question, getty stupid ANSI.
Re: Calling D code from C
On Wed, Jan 08, 2020 at 06:12:01PM +0000, Stefan via Digitalmars-d-learn wrote:
[...]
> But you can easily do the initialization in your D code, by calling rt_init() and rt_term(), like this:
[...]
>     extern(C) int rt_init();
>     extern(C) int rt_term();
>     extern(C) __gshared bool rt_initialized = false;
[...]
>     if(!rt_initialized)
>         rt_init();
>     rt_initialized = true;

I believe the rt_initialized flag is unnecessary, because rt_init/rt_term use an atomic counter to keep track of how many times they were called. So you just have to call rt_init and make sure you have a matching call to rt_term, and it should Just Work(tm).

T

--
The richest man is not he who has the most, but he who needs the least.
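A minimal sketch of the simplification being suggested here, keeping the function names from the quoted code but dropping the flag and relying on the runtime's internal call counting:

```d
extern(C) int rt_init();
extern(C) int rt_term();

// A D function exported to C. Each call pairs one rt_init with one
// rt_term; since the runtime counts init/term calls atomically,
// matched pairs are safe even if the runtime is already running.
extern(C) void d_function()
{
    rt_init();
    scope(exit) rt_term();

    // ... D code that may allocate, use the GC, etc. ...
}
```

The scope(exit) guarantees the matching rt_term even if the body throws, which is exactly the "matching call" requirement mentioned above.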
Re: Practical parallelization of D compilation
Thanks to all for the answers.

The package direction is precisely what I am trying to avoid. It is still not obvious to me how much work (how many trials) would be needed to decide on granularity, as well as how much work to automate the decision to recompile a package or not; and finally, when a given package needs to be recompiled for only one or a few files changed, most likely one would WAIT (much) more than with the current solution - and within a single process.

For the initial compilation, a quick try at the -c solution worked with ldmd2 (ldc2) on parts of the application. Then I tried to feed it all 226 files, and the compilation process ended with a segmentation fault. No idea why. The direct compilation with -i main.d works.

I was not aware of the options for Dub, many thanks!

Overall I am happy with any solution, even if there is an upfront cost at the first compilation, as long as it makes testing an idea FAST later on, and that probably can work better using all available cores. Now about this:

On Wednesday, 8 January 2020 at 13:14:38 UTC, kinke wrote:
> On Wednesday, 8 January 2020 at 04:40:02 UTC, Guillaume Lathoud wrote:
> > I wonder if some heuristic roughly along these lines - when enough source files and enough cores, do parallel and/or re-use - could be integrated into the compilers, at least in the form of an option.
> I think that's something to be handled by a higher-level build system, not the compiler itself.

Fine... I don't want so much to debate where exactly this should be. Simply: having a one-liner solution (no install, no config file) delivered along with the compiler, or as a compiler option, could fill a sweet spot between a toy app (1 or 2 source files) and a more complex architecture relying on a package manager. This might remove a few obstacles to D usage. This is of course purely an opinion.
Re: Calling D code from C
On Thursday, 26 May 2005 at 20:41:10 UTC, Vathix wrote:
> The problem is that D's main() initializes things. Using a C main() bypasses that startup code. Put the main() in the D file (with D extern) and have it call a function in the C file that you will treat as main.

That's correct, but not always an option, such as when writing a D library which can be called from C programs you can't touch. But you can easily do the initialization in your D code, by calling rt_init() and rt_term(), like this:

    import std.stdio;
    import core.memory : GC;

    extern(C) int rt_init();
    extern(C) int rt_term();
    extern(C) __gshared bool rt_initialized = false;

    extern(C) void d_function()
    {
        writeln("Initializing D runtime");
        if(!rt_initialized)
            rt_init();
        rt_initialized = true;

        char[] big = new char[1000];
        big = null;
        writeln("Calling GC");
        GC.collect();
        writeln("Finishing D function");

        scope(exit)
        {
            writeln("Terminating D runtime");
            if(rt_initialized)
                rt_term();
            rt_initialized = false;
        }
    }

...just be careful that you don't do anything requiring memory allocation before rt_init() or after rt_term().
Re: @disable("reason")
On Wed, Jan 08, 2020 at 06:06:33AM -0700, Jonathan M Davis via Digitalmars-d-learn wrote:
> On Wednesday, January 8, 2020 4:54:06 AM MST Simen Kjærås via Digitalmars-d-learn wrote: [...]
> >     struct S {
> >         @disable this();
> >         @disable static S init();
> >     }
> >
> > This will give sensible error messages anywhere .init is being used. Now, Phobos and other libraries might expect that .init is always working, so this could potentially be a problem.
>
> That's likely to break a _lot_ of generic code, because init is used heavily in template constraints and static if conditions as the way to get a value of that type to test.

I think the new(er?) idiom is to use a lambda parameter to get the type instead. I.e., instead of:

    auto foo(T, U)(T t, U u)
        if (is(typeof(someOperation(T.init))) &&
            is(typeof(otherOperation(U.init))))
    { ... }

you'd write:

    auto foo(T, U)(T t, U u)
        if (is(typeof((T t, U u) {
            someOperation(t);
            otherOperation(u);
        })))
    { ... }

This will work for types T, U that do not have default initialization (e.g., @disabled this(), like in the OP), types that for whatever reason have overridden .init, or types that are non-copyable, etc. Nonetheless, .init is one of those things that are just assumed to always exist, so overriding it is not advisable.

> It's also actually been argued before that it should be illegal to declare a symbol called init on any type, which is one reason why the init member was removed from TypeInfo. I'd strongly advise against anyone declaring a struct or class member named init whether it's @disabled or not.

Indeed: https://issues.dlang.org/show_bug.cgi?id=7066

T

--
Today's society is one of specialization: as you grow, you learn more and more about less and less. Eventually, you know everything about nothing.
Re: What is the difference between a[x][y] and a[x,y]?
On Wed, Jan 08, 2020 at 09:09:23AM +0100, Robert M. Münch via Digitalmars-d-learn wrote:
> On 2020-01-07 19:06:09 +0000, H. S. Teoh said:
>
> > It's up to you how to implement all of this, of course. The language itself doesn't ship a built-in type that implements this, but it does provide the scaffolding for you to build a custom multi-dimensional array type.
>
> Hi, thanks for your extensive answer! Helped a lot...
>
> And the above paragraph brings it to the core.
>
> How can this be added to the D docs? I think adding such clear "this is the idea how you should use it" intros would help a lot to see the world from a d-ish perspective.
[...]

File a bug against dlang.org, and maybe when I get some free time I'll try to write something up.

T

--
Designer clothes: how to cover less by paying more.
Re: Practical parallelization of D compilation
On Wednesday, 8 January 2020 at 04:40:02 UTC, Guillaume Lathoud wrote:
> Hello, One of my D applications grew from a simple main and a few source files to more than 200 files. Although I minimized usage of templating and CTFE, the compile time is now about a minute. I did not find any solution to take advantage of having multiple cores during compilation, short of writing a makefile, or splitting the code into multiple packages and using a package manager. (If I missed such a possibility, feel free to write it here.)

Yeah, there's one. DUB does the same as your script with the following options:

    dub build --parallel --build-mode=singleFile
Re: @disable("reason")
On Wednesday, 8 January 2020 at 00:23:48 UTC, Marcel wrote:
> I would like to tell the user why they can't instantiate the struct. Is there a way to do that?

I'd love to have exactly what you said for this reason, but D doesn't really have it. You just have to hope they read the docs (my doc generator specifically calls out default ctors for this reason). But we should formally request @disable("reason") to be added; it really is very nice and has precedent in deprecated("reason").
Re: Practical parallelization of D compilation
On Wednesday, 8 January 2020 at 04:40:02 UTC, Guillaume Lathoud wrote:
> * first run (compiling everything): 50% to 100% slower than classical compilation, depending on the hardware, resp. on an old 4-core or a more recent 8-core.

If parallel compiler invocations for each source file are indeed that much slower than a single serial all-at-once compilation in your case, you can also try to compile all modules at once, but output separate object files:

    ldc2 -c a.d b.d c.d

> I wonder if some heuristic roughly along these lines - when enough source files and enough cores, do parallel and/or re-use - could be integrated into the compilers, at least in the form of an option.

I think that's something to be handled by a higher-level build system, not the compiler itself.
Re: @disable("reason")
On Wednesday, January 8, 2020 4:54:06 AM MST Simen Kjærås via Digitalmars-d-learn wrote:
> On Wednesday, 8 January 2020 at 07:03:26 UTC, Jonathan M Davis wrote:
> > you could just document that no one should ever use its init value explicitly, and that they will have bugs if they do
>
> You can also create a static init member marked @disable:
>
>     struct S {
>         @disable this();
>         @disable static S init();
>     }
>
> This will give sensible error messages anywhere .init is being used. Now, Phobos and other libraries might expect that .init is always working, so this could potentially be a problem.

That's likely to break a _lot_ of generic code, because init is used heavily in template constraints and static if conditions as the way to get a value of that type to test. It's also actually been argued before that it should be illegal to declare a symbol called init on any type, which is one reason why the init member was removed from TypeInfo. I'd strongly advise against anyone declaring a struct or class member named init, whether it's @disabled or not.

- Jonathan M Davis
Re: @disable("reason")
On Wednesday, 8 January 2020 at 07:03:26 UTC, Jonathan M Davis wrote:
> you could just document that no one should ever use its init value explicitly, and that they will have bugs if they do

You can also create a static init member marked @disable:

    struct S {
        @disable this();
        @disable static S init();
    }

This will give sensible error messages anywhere .init is being used. Now, Phobos and other libraries might expect that .init is always working, so this could potentially be a problem.
Re: @disable("reason")
On Wednesday, 8 January 2020 at 08:26:51 UTC, user1234 wrote:
>     class Example {
>         @disable this() {
>             pragma(msg, "not allowed...");
>         }
>     }
>
>     void main() {
>         new Example();
>     }
>
> outputs:
>
>     not allowed...
>     /tmp/temp_7F8C65489550.d(12,5): Error: constructor `runnable.Example.this` cannot be used because it is annotated with `@disable`

However, it will print that message even if the constructor is never called. If you make the constructor a template instead, you will only get the message when someone attempts to use the default constructor:

    class Example {
        @disable this()() {
            pragma(msg, "not allowed...");
        }
    }

    void main() {
        new Example();
    }

Sadly, this does not work for structs, as they don't really have a default constructor, as Jonathan pointed out.

--
Simen
Re: Practical parallelization of D compilation
On Wednesday, 8 January 2020 at 06:51:57 UTC, H. S. Teoh wrote:
> On Wed, Jan 08, 2020 at 04:40:02AM +0000, Guillaume Lathoud via Digitalmars-d-learn wrote:
> [...]
>
> Generally, the recommendation is to separately compile each package. E.g., if you have a source tree of the form:
>
>     src/
>     src/main.d
>     src/pkg1/mod1.d
>     src/pkg1/mod2.d
>     src/pkg2/mod3.d
>     src/pkg2/mod4.d
>
> then you'd have 3 separate compilations:
>
>     dmd -c -ofpkg1.o src/pkg1/mod1.d src/pkg1/mod2.d
>     dmd -c -ofpkg2.o src/pkg2/mod3.d src/pkg2/mod4.d
>     dmd -ofmyprogram src/main.d pkg1.o pkg2.o
>
> The first two can be done in parallel, since they are independent of each other. The reason per-package granularity is suggested is that the accumulated overhead of separately compiling every file makes it generally not worth the effort. D compiles fast enough that per-package compilation is still reasonably fast, but you no longer incur as much overhead from separately compiling every file, yet you still retain the advantage of not recompiling the entire program after every change. (Of course, the above example is greatly simplified; generally you'd have about 10 or more files per package, and many more packages, so the savings can be quite significant.)
>
> T

What's the downsides / difficulties / "hoops to jump through" penalty for putting code into modules instead of one massive project? Is it just a little extra handwriting/boilerplate, or is there a performance impact talking to other modules vs keeping it all in one?
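The quoted scheme is easy to script; since the two package builds are independent, they can run concurrently (paths as in the quoted example; the -c flag stops dmd after producing the object files):

```shell
# build each package into one object file, in parallel
dmd -c -ofpkg1.o src/pkg1/mod1.d src/pkg1/mod2.d &
dmd -c -ofpkg2.o src/pkg2/mod3.d src/pkg2/mod4.d &
wait

# link step: main plus the prebuilt package objects
dmd -ofmyprogram src/main.d pkg1.o pkg2.o
```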
Re: @disable("reason")
On Wednesday, 8 January 2020 at 00:23:48 UTC, Marcel wrote:
> Hello! I'm writing a library where under certain conditions I need all the default constructors to be disabled. I would like to tell the user why they can't instantiate the struct. Is there a way to do that?

    class Example {
        @disable this() {
            pragma(msg, "not allowed...");
        }
    }

    void main() {
        new Example();
    }

outputs:

    not allowed...
    /tmp/temp_7F8C65489550.d(12,5): Error: constructor `runnable.Example.this` cannot be used because it is annotated with `@disable`

Because pragma(msg) is evaluated when encountered, you can abuse it for this. By specification it is not allowed to modify the meaning of the program.
Re: What is the difference between a[x][y] and a[x,y]?
On 2020-01-07 19:06:09 +0000, H. S. Teoh said:
> It's up to you how to implement all of this, of course. The language itself doesn't ship a built-in type that implements this, but it does provide the scaffolding for you to build a custom multi-dimensional array type.

Hi, thanks for your extensive answer! Helped a lot...

And the above paragraph brings it to the core.

How can this be added to the D docs? I think adding such clear "this is the idea how you should use it" intros would help a lot to see the world from a d-ish perspective.

--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster