Re: Fetching licensing info for all dependencies of a DUB project
On Monday, 27 June 2022 at 21:36:31 UTC, Christian Köstlin wrote: I played around with the idea and came up with a small dub package that is not (yet) uploaded to the dub registry. Source is available at https://github.com/gizmomogwai/packageinfo, feedback very welcome. I've done something similar, not for licences but for code amount, to extract from a DUB project: - the DUB packages used by the project - the source files used by the project - and their LOC count. This is a D forums exclusive: https://pastebin.com/RFbFCgR2 Keep your debt in check!
Re: Consuming D libraries from other languages
On Wednesday, 15 June 2022 at 19:36:34 UTC, Guillaume Piolat wrote: BindBC bindings are multi-platform and can be both static and dynamic linking. My bad, I understood the reverse: consuming C libraries from D. I think what you are seeking is described in the D blog.
Re: Consuming D libraries from other languages
On Wednesday, 15 June 2022 at 17:37:32 UTC, Templated Person wrote: Are there any resources on how to build D static (`.lib` / `.a`) and dynamic libraries (`.dll` / `.so`), and then use them from C? Do I need to link and initialize phobos somehow? What if I don't want to use the D runtime? What happens with module level `this()` and `~this()`? Is there a comprehensive guide on how to do this stuff? What I would suggest is to look at a few of the BindBC libraries and mimic them. https://code.dlang.org/search?q=bindbc BindBC bindings are multi-platform and can be both static and dynamic linking. They can also work without a D runtime.
Re: want to confirm: gc will not free a non-gc-allocated field of a gc-allocated object?
On Monday, 6 June 2022 at 22:24:45 UTC, Guillaume Piolat wrote: My understanding is that while scanning, the GC will see the data.ptr pointer, but will not scan the area it points to since it's not in a GC range (the runtime can distinguish managed pointer and other pointers). After scanning, when obj is non-reachable, the GC will destroy it but that won't lead to a reclaim of data.ptr since it knows it doesn't own that. In D, the ownership of slice is purely determined by the memory area it points to. If it points into GC memory then it's a GC slice.
Re: want to confirm: gc will not free a non-gc-allocated field of a gc-allocated object?
On Monday, 6 June 2022 at 22:18:08 UTC, mw wrote: So when `obj` is cleanup by the GC, obj.data won't be freed by the GC: because the `data` is non-gc-allocated (and it's allocated on the non-gc heap), the GC scanner will just skip that field during a collection scan. Is this understanding correct? My understanding is that while scanning, the GC will see the data.ptr pointer, but will not scan the area it points to since it's not in a GC range (the runtime can distinguish managed pointer and other pointers). After scanning, when obj is non-reachable, the GC will destroy it but that won't lead to a reclaim of data.ptr since it knows it doesn't own that.
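A minimal sketch of the setup being discussed (class and field names are illustrative, not from the thread): a GC-managed object whose `data` slice points into the C heap. The GC may eventually run the destructor, but it never reclaims the malloc'd block itself; freeing it is our job.

```d
import core.stdc.stdlib : malloc, free;

class Obj
{
    ubyte[] data; // points into the C heap, not into GC memory

    this(size_t n)
    {
        data = (cast(ubyte*) malloc(n))[0 .. n];
    }

    ~this()
    {
        // The GC destroys `obj` once unreachable, but skips data.ptr
        // during scanning and never frees it: we must do it here.
        free(data.ptr);
        data = null;
    }
}

void main()
{
    auto obj = new Obj(1024);
    assert(obj.data.length == 1024);
    // When `obj` becomes unreachable, a collection may call ~this(),
    // which manually releases the C allocation.
}
```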
Re: How to map machine instctions in memory and execute them? (Aka, how to create a loader)
On Monday, 6 June 2022 at 15:13:45 UTC, rempas wrote: Any ideas? See: https://github.com/GhostRain0/xbyak https://github.com/MrSmith33/vox/blob/master/source/vox/utils/mem.d
Re: Why are structs and classes so different?
On Sunday, 15 May 2022 at 15:26:40 UTC, Kevin Bailey wrote: I'm trying to understand why it is this way. I assume that there's some benefit for designing it this way. I'm hoping that it's not simply accidental, historical or easier for the compiler writer. Perhaps someone more informed will chime in, but there is a reason to avoid object inheritance with value types, and to force them to be reference types. https://stackoverflow.com/questions/274626/what-is-object-slicing If we want to avoid that problem, then objects with inheritance and virtual functions have to be reference types. But you still need value types. So now you have both struct and class, like in C# (Hejlsberg, 2000). For an escape hatch, D has library ways to have structs with virtual functions (there is a DUB package for that), and classes on the stack (Scoped!T, RefCounted!T, a __traits).
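As a hedged sketch of the distinction (class names are illustrative; `scoped` is `std.typecons.scoped`, one way to get "classes on the stack"): classes have reference semantics, so assigning to the base type rebinds a reference and cannot slice, while `scoped` gives a class instance stack lifetime as an escape hatch.

```d
import std.typecons : scoped;

class Shape
{
    int area() { return 0; }
}

class Square : Shape
{
    int side;
    this(int s) { side = s; }
    override int area() { return side * side; }
}

void main()
{
    // Reference semantics: no slicing possible, dynamic dispatch works.
    Shape s = new Square(3);
    assert(s.area() == 9);

    // Escape hatch: class instance placed on the stack,
    // destroyed deterministically at end of scope.
    auto sq = scoped!Square(4);
    assert(sq.area() == 16);
}
```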
Re: What are (were) the most difficult parts of D?
On Friday, 13 May 2022 at 19:16:59 UTC, Steven Schveighoffer wrote: But we also have this confusing dynamic:

| scope | no attribute | shared | static |
|---|---|---|---|
| module | TLS | global | TLS (no-op) |
| function | local | local! | TLS |
| class | instance | global | TLS |

There is a typo in your table: a shared field is per-instance, not global.

```d
class A
{
    shared int c; // Each A instance has its own c
}
```
Re: What are (were) the most difficult parts of D?
On Thursday, 12 May 2022 at 17:34:30 UTC, H. S. Teoh wrote: Why is TLS by default a problem? It's not really for optimization, AIUI, it's more for thread safety: module-global state is TLS by default, so you don't accidentally introduce race conditions.

What you accidentally have instead is people expecting top-level declarations to be global and getting TLS instead, so it's a surprise. I mean, a lot of things work like C and C++, but not that. It's a problem because it goes from solving "no accidental race conditions" to a "people forget to add shared or __gshared and their shared library silently fails" situation. You could have none of that with explicit TLS.

- `shared static this()` vs `static this()` is another trap. One is per-process, one is per-thread. Why is this a trap? Well, because you can get that wrong. You get to initialize `__gshared` variables in `shared static this()`. It's not hard, but it's something more to explain.

I wouldn't sweat it if I couldn't easily add `pure` to an entire codebase -- it hardly makes any difference anyway. If it doesn't make a difference to the bottom line, then why keep it?

you're on your own and you take responsibility for any problems that you may inadvertently introduce by using the escape hatch. Well, sizeable @safe code has heaps of @trusted code, so the escape hatch is very routine.

it's none of the users' business. I'm not disagreeing about @trusted in APIs. But I was remarking that in practice @safe would mean different invariants. It's not a big issue, I was probably ranting.

IOW, public APIs should always be @safe or @system. @trusted should only appear on internal APIs. Good rule to follow, TIL.

So I'm curious, what exactly is it about UFCS chains that makes them less maintainable? Probably personal preference; I mostly write the pedestrian way, so that debugging/optimization goes faster (maybe wrong, dunno).
In the dlang.org example:

```d
void main()
{
    stdin
        .byLineCopy
        .array
        .sort!((a, b) => a > b) // descending order
        .each!writeln;
}
```

This code has a number of prerequisites to be able to read it: why is `.array` needed, why is it `.byLineCopy` vs `.byLine`, is the sort stable, etc. It just requires more time spent with the language.
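The default-TLS behavior and the two module-constructor flavors discussed above can be sketched as follows (variable names are made up for illustration):

```d
__gshared int processWide; // one copy for the whole process
int perThread;             // bare module-level variables are TLS by default

shared static this()       // runs once per process
{
    processWide = 1;
}

static this()              // runs once per thread (including the main thread)
{
    perThread = 1;
}

void main()
{
    // Both were initialized before main() by their respective constructors.
    assert(processWide == 1);
    assert(perThread == 1);
}
```

Initializing a `__gshared` variable inside `static this()` instead of `shared static this()` would re-run the assignment on every thread start, which is exactly the trap being described.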
Re: What are (were) the most difficult parts of D?
On Thursday, 12 May 2022 at 16:24:26 UTC, Ali Çehreli wrote: Cool trick but "parent" confused me there. I think you mean "base". :) https://en.wikipedia.org/wiki/Inheritance_(object-oriented_programming) mentions "base class" as much as "parent class"
Re: What are (were) the most difficult parts of D?
On Thursday, 12 May 2022 at 11:05:08 UTC, Basile B. wrote: - Certain variant forms of the `is` Expression are not obvious (not intuitive); I'm pretty sure I still can't use them without a quick look at the specs. That one was troublesome to learn about => http://p0nce.github.io/d-idioms/#Get-parent-class-of-a-class-at-compile-time
Re: What are (were) the most difficult parts of D?
On Wednesday, 11 May 2022 at 05:41:35 UTC, Ali Çehreli wrote: What are you stuck at? What was the most difficult features to understand? etc.

- How to do deterministic destruction in programs that use everything (struct / class / dynamic dispatch / GC / manual / etc). This requires understanding what the runtime does and what the GC does. Interesting nonetheless.

- Some traps. Accidental TLS is a thing; top-level declarations should probably not be silently TLS. People will lose hours on this completely preventable thing. What was the idea, optimizing code without people knowing?

- `shared static this()` vs `static this()` is another trap. Honestly I would have preferred `__threadlocal`. It's not like being thread-local is completely normal or without consequence for platform support.

- Some features lack an escape hatch, notably `pure`. pure leaks into identifiers, like `pureMalloc`. Trying to add `pure` to a large codebase fails.

- `@safe`/`@trusted`/`@system` is good, but the definition of what `@trusted` means has to be remembered by the programmer. For example `Mutex.lock()` is `@trusted`; it could have been `@system` to let users review their usage of locks. You have to wonder "can a lock()/unlock() corrupt memory?". People can use that to mean "@reviewed" instead. Because it is up to us, the exact meaning will float between D subcultures. A function which has been marked `@trusted` does not receive any review when changed later. It will not mean the same as `@trusted` in another codebase.

- Generic code typically has bad names (domain-less) and worse usability. It's often not pretty to look at. Mostly cultural, since D has powerful templates so they had to be everywhere. UFCS chains are not that convincing when you are worried about maintenance. Phobos takes short names for itself, which leads to pretty complicated operations having small screen real estate.

- `assert(false)` being different and not removed by `-release`.
Keyword reuse seems entrenched, but honestly a "crash here" keyword would be more readable. It is really 3 different things: assert, crash, and unreachable. Otherwise D is glorious and gets syntax and usability right, which puts it ahead of almost every other language.
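A hedged illustration of the accidental-TLS trap mentioned above (variable names are invented): a bare module-level variable is per-thread, and only `__gshared` (or `shared`) gives a true process-global.

```d
import core.thread : Thread;

int counter;         // TLS: each thread silently gets its own copy
__gshared int total; // truly process-global (and unsynchronized!)

void main()
{
    counter = 41;

    auto t = new Thread({
        counter++; // increments *this thread's* fresh copy (0 -> 1)
        total++;   // mutates the single shared copy
    });
    t.start();
    t.join();

    assert(counter == 41); // unchanged in the main thread: the trap
    assert(total == 1);
}
```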
Re: Library for image editing and text insertion
On Tuesday, 26 April 2022 at 22:16:15 UTC, rikki cattermole wrote: Of course I still don't think that code is right and should have the casts. Absolutely. I'm a bit anxious about "accidental VRP" now; not sure if the checks fluctuate from version to version or, worse, depend upon the platform.
Re: Library for image editing and text insertion
On Tuesday, 26 April 2022 at 21:59:39 UTC, rikki cattermole wrote: Putting an int into a ubyte absolutely should error, that is a lossy conversion and should not be automatic. It's just VRP; here it works in 2.094: https://d.godbolt.org/z/vjq7xsMdn Because the compiler wasn't complaining, I wouldn't know it was reliant on VRP (which is certainly an issue to be fixed).
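For readers unfamiliar with value range propagation (VRP), a small sketch of what it does (values are illustrative): the compiler accepts a narrowing conversion only when it can prove the expression's value range fits the target type.

```d
void main()
{
    int x = 1000;

    // Accepted: `x & 0xFF` provably lies in 0 .. 255, so it fits a ubyte.
    ubyte ok = x & 0xFF;
    assert(ok == 232); // 1000 & 0xFF == 0xE8 == 232

    // Rejected if uncommented: a plain int doesn't fit in a ubyte.
    // ubyte bad = x;

    // Also rejected: the range of `x >> 7` is still too wide for a ubyte.
    // ubyte bad2 = x >> 7;
}
```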
Re: Library for image editing and text insertion
On Tuesday, 26 April 2022 at 21:44:56 UTC, rikki cattermole wrote: On 27/04/2022 9:39 AM, Guillaume Piolat wrote: On Tuesday, 26 April 2022 at 21:13:38 UTC, Alexander Zhirov wrote: more build errors If you "dub upgrade" it should work a bit better. No success in reproducing the bug here. It's definitely on your end.

```d
void main()
{
    int scale;
    int* in_ = new int;
    ubyte b = cast(int)scale * (cast(int)*in_ >> 7);
}
```

onlineapp.d(5): Error: cannot implicitly convert expression `scale * (*in_ >> 7)` of type `int` to `ubyte`

No. Obviously VRP works differently for me and for him, for an unknown reason.
Re: Library for image editing and text insertion
On Tuesday, 26 April 2022 at 21:13:38 UTC, Alexander Zhirov wrote: more build errors If you "dub upgrade" it should work a bit better. No success in reproducing the bug here.
Re: Library for image editing and text insertion
On Tuesday, 26 April 2022 at 20:45:16 UTC, Alexander Zhirov wrote: On Tuesday, 26 April 2022 at 20:37:28 UTC, Guillaume Piolat wrote: Curious as to what DMD you are using on what OS? It builds with 2.095.1 to 2.100-b1 here. DMD64 D Compiler v2.098.0 OS Solus Linux Well I cannot reproduce your problem => https://imgur.com/a/HZvZWr2 Perhaps a DUB mismatch that would give different DIP flags. DUB version 1.27.0, built on Oct 19 2021 Good luck.
Re: Library for image editing and text insertion
On Tuesday, 26 April 2022 at 20:26:42 UTC, Alexander Zhirov wrote: build error Curious as to what DMD you are using on what OS? It builds with 2.095.1 to 2.100-b1 here.
Re: Library for image editing and text insertion
On Tuesday, 26 April 2022 at 17:22:54 UTC, Alexander Zhirov wrote: It is necessary to write a utility that will insert (x,y) text on the image. It is desirable that the utility does not depend on large libraries, since a minimum utility size is required. I'm looking for something similar in C/C++, I can't find anything. Maybe there is some simple library on D? You could use dplug:graphics for that https://u.pcloud.link/publink/show?code=XZPwMFVZW9c6bTWtevRvNz7UdfOOqVYIE5uk
Re: How to use Vector Extensions in an opBinary
On Sunday, 17 April 2022 at 11:16:25 UTC, HuskyNator wrote: As a small disclaimer; I don't know to what extent the compiler already automates these kind of operations, and mostly want to use this as a learning experience. For your particular case, it is very likely LDC and GDC will be able to optimize your loops using SIMD.
Re: Looking for a workaround
On Thursday, 7 April 2022 at 12:56:05 UTC, MoonlightSentinel wrote: On Wednesday, 6 April 2022 at 18:10:32 UTC, Guillaume Piolat wrote: Any idea how to workaround that? I really need the same UDA in parent and child class. Use a frontend >= dmd 2.099, it works according to run.dlang.io. Good to know, thanks.
Re: Looking for a workaround
On Wednesday, 6 April 2022 at 18:21:11 UTC, Adam D Ruppe wrote: On Wednesday, 6 April 2022 at 18:10:32 UTC, Guillaume Piolat wrote: Any idea how to workaround that? Works fine if you just use the language instead of the buggy phobos wrappers:

```d
struct MyUDA { }

class A { @MyUDA int a; }
class B : A { @MyUDA int b; }

void main()
{
    foreach(memberName; __traits(allMembers, B))
        foreach(attr; __traits(getAttributes, __traits(getMember, B, memberName)))
            static if(is(attr == MyUDA))
                pragma(msg, memberName); // a, b
}
```

So make a function that does that and applies whatever it is you need to apply and you're in business. Note that it is `is(typeof(attr) == MyUDA)` if defined as `@MyUDA(args)`. Thanks, it will also create fewer templates.
Looking for a workaround
This program fails to build:

```d
import std.traits : getSymbolsByUDA;

struct MyUDA { }

class A { @MyUDA int a; }
class B : A { @MyUDA int b; }

void main()
{
    alias G = getSymbolsByUDA!(B, MyUDA);
}
```

Output:

```
c:\d\ldc2-1.28.0-windows-multilib\bin\..\import\std\traits.d(8933): Error: template instance `AliasSeq!(b, a)` `AliasSeq!(b, a)` is nested in both `B` and `A`
c:\d\ldc2-1.28.0-windows-multilib\bin\..\import\std\traits.d(8707): Error: template instance `std.traits.getSymbolsByUDAImpl!(B, MyUDA, "b", "a", "toString", "toHash", "opCmp", "opEquals", "Monitor", "factory")` error instantiating
main.d(19): instantiated from here: `getSymbolsByUDA!(B, MyUDA)`
Failed: ["c:\\d\\ldc2-1.28.0-windows-multilib\\bin\\ldmd2.exe", "-v", "-o-", "main.d", "-I."]
```

Any idea how to workaround that? I really need the same UDA in parent and child class.
Re: I like dlang but i don't like dub
On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote: Dlang includes some good ideas. But dub pulls in so much stuff. Too much for me. I like things which are clean, lean, little, small. But when i use dub it links with so many libraries. Are they really needed? And how do you compare to python's pip. Feel free to elaborate.

DUB changed my programming practice. To understand why DUB is needed, I think it's helpful to see the full picture at the level of your total work, in particular recurring costs.

### An example

My small software shop operation (sorry) is built on DUB, and if I analyze my own package usage, there are 4 broad categories:

- Set A. Proprietary code => **8 packages, 30.4 kloc**
- Set B. Open source, that I wrote, maintain, and evolve => **33 packages, 88.6 kloc**
- Set C. Open source, that I maintain minimally and wrote only in part => **5 packages, 59.1 kloc**
- Set D. Foreign packages (I neither maintain nor wrote them; stuff like arsd) => **14 packages, 45.9 kloc**

=> Total = **224 kloc**, counting only non-whitespace lines. This is only the code that needs to be kept alive and maintained. Obviously code that is more R&D and/or temporary bears no recurring cost.

### What is the cost of maintaining all that?

At a very minimum, all code in A + B + C + D needs to build with the D compiler, since the business uses it, and build at all times. Maintaining the "it builds" invariant takes a fixed cost m(A) + m(B) + m(C) + m(D). Here m(D) is borne by someone else. As B and C are open source and maintained by me, the cost of building B and C for someone else is zero; that's why an ecosystem is so important for a language, as a recurring-expense removal. And indeed, the open-source ecosystem is probably the main driver of language adoption, as a pure capital gain.
Now consider the cost of evolving and bug fixing instead of just building. => This is about the same reasoning, with perhaps bug costs being less transferrable. Reuse delivers handsomely, and is cited by The Economics of Software Quality as one of the best drivers of increased quality [1]. Code you don't control, but trust, is a driver of increased quality (and, as the book demonstrates, lowered cost/defects/litigation).

### Now let's pretend DUB doesn't exist

For maintaining the invariant "it builds with the latest compiler", you'd have to pay m(A) + m(B) + m(C), but then do another important task: => copy each new updated source into dependent projects. Unfortunately this isn't trivial at all; that code is now duplicated in several places. Realistically you will do this on an as-needed basis. And then other people can rely on none of your code (it doesn't build, statistically), and much less of an ecosystem becomes possible (because nothing builds and older versions of files are everywhere). Without DUB, you can't have a large set of code that maintains this or that invariant, and you will have to rely on an attentional model where only the last thing you worked on is up-to-date. DUB also makes it easy to put your code into the B and C categories, which provides value for everyone. With DUB you won't have, say, VisualD projects, because the cost of maintaining the invariant "has a working VisualD project" would be too high; but with DUB, because it's declarative, it's almost free.

[1] "The Economics of Software Quality" - Jones, Bonsignour, Subramanyam
Re: Colors in Raylib
On Monday, 28 February 2022 at 11:48:59 UTC, Salih Dincer wrote: Is there a namespace I should implement in Raylib? For example, I cannot compile without writing Colors at the beginning of the colors: ```Colors.GRAY``` When writing C bindings, you may refer to this: https://p0nce.github.io/d-idioms/#Porting-from-C-gotchas This keeps example code working.
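One of the gotchas at play here, sketched below (the color values are made up, not Raylib's actual ones): members of a named D enum are scoped by the enum name, unlike in C, so a binding may flatten them back into module scope with aliases to keep C-style example code compiling.

```d
// In C: enum Colors { GRAY, RED }; members are usable bare, as GRAY.
// In D, a named enum scopes its members:
enum Colors { GRAY = 0x808080, RED = 0xFF0000 }

// A common binding trick: flatten the members back into module scope
// so existing C-style code keeps working.
alias GRAY = Colors.GRAY;
alias RED  = Colors.RED;

void main()
{
    assert(GRAY == Colors.GRAY); // both spellings now compile
    int c = RED;                 // named enum members convert to the base type
    assert(c == 0xFF0000);
}
```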
Re: How to deploy single exe application (?)
On Wednesday, 1 December 2021 at 09:49:56 UTC, Guillaume Piolat wrote: Huh, I never intended for someone to actually use this :| Such a thing will never work on macOS for example. You can create an installer rather easily with InnoSetup instead.
Re: How to deploy single exe application (?)
On Wednesday, 1 December 2021 at 07:45:21 UTC, bauss wrote: On Monday, 29 November 2021 at 14:58:07 UTC, Willem wrote: Thanks again for all the responses. For now -- I am simply adding the DLL to the EXE and writing it out to the working directory. Not elegant - but it does work. If you intend to distribute it then be careful with this as it might trigger some (if not all) antiviruses under most configurations. Huh, I never intended for someone to actually use this :| Such a thing will never work on macOS for example.
Re: Is DMD still not inlining "inline asm"?
On Friday, 12 November 2021 at 00:46:05 UTC, Elronnd wrote: On Thursday, 11 November 2021 at 13:22:15 UTC, Basile B. wrote: As for now, I know no compiler that can do that. GCC can do it. Somewhat notoriously, LTO can lead to bugs from underspecified asm constraints following cross-TU inlining. LDC can also do it with GCC asm constraints, however it is atrociously hard to get documentation and examples for this.
Re: Rather Bizarre slow downs using Complex!float with avx (ldc).
On Friday, 1 October 2021 at 08:32:14 UTC, james.p.leblanc wrote: Does anyone have insight to what is happening? Thanks, James Maybe something related to: https://gist.github.com/rygorous/32bc3ea8301dba09358fd2c64e02d774 ? AVX is not always a clear win in terms of performance. Processing 8x float at once may not do anything if you are memory-bound, etc.
Re: Loading assimp
On Tuesday, 28 September 2021 at 16:30:09 UTC, Eric_DD wrote: I am trying to use a newer version of Assimp. I have found a assimp-vc140-mt.dll (v3.3.1) which I renamed to assimp.dll When running my executable it throws a derelict.util.exception.SharedLibLoadException: "Failed to load one or more shared libraries: assimp.dll - %1 is not a valid Win32 application. Assimp64.dll - The specified module could not be found" Any idea what's going on? Are 64bit dlls not supported? If using dub you can build your D programs with `dub -a x86` for a 32-bit executable, or `dub -a x86_64` for a 64-bit executable (which is also the default, thankfully). Your problem is very probably trying to load a 32-bit DLL into a 64-bit host program.
Re: Two major problems with dub
On Tuesday, 3 August 2021 at 00:54:56 UTC, Steven Schveighoffer wrote: Given the way D works, and often template-heavy coding styles, I think it's going to be hard to do this correctly, without careful attention and lots of `version(has_xxx)` conditionals. -Steve I don't think optional dependencies are truly the answer. Another way to fix this is to break dependency chains when only a small part is used. In this case: - use a GC slice - use malloc - use std.experimental.allocator My pet peeve is the isfreedesktop package. https://github.com/FreeSlave/isfreedesktop/blob/master/source/isfreedesktop.d package :) Yes it is annoying, but with a bit of copy-paste you can break dependency chains and avoid the npm situation where "640 packages were installed"
Re: Registering-unregistering threads
On Friday, 30 July 2021 at 23:48:41 UTC, solidstate1991 wrote: Info on it is quite scarce and a bit confusing. If I unregister from the RT, will that mean it'll be GC independent, or will have other consequences too? The consequence is that the stack memory of that thread isn't traced, so things that are only pointed to transitively by pointers on that thread's stack might get collected under your feet. Your thread should only use things that outlive its existence.
Re: LLVM asm with constraints, and 2 operands
On Monday, 19 July 2021 at 17:20:21 UTC, kinke wrote: You know that asm is to be avoided whenever possible, but unfortunately, AFAIK intel-intrinsics doesn't fit the usual 'don't worry, simply compile all your code with an appropriate -mattr/-mcpu option' recommendation, as it employs runtime detection of available CPU instructions. intel-intrinsics employs compile-time detection of CPU instructions. If not available, it will work anyway(tm) with alternate slower paths (and indeed needs the right -mattr, so this is the one worry you do get). So, not using @target("feature") right now; I figured it would be helpful for runtime dispatch, but that means littering the code with __traits(targetHasFeature).
Re: LLVM asm with constraints, and 2 operands
On Monday, 19 July 2021 at 10:49:56 UTC, kinke wrote: This workaround is actually missing the clobber constraint for `%2`, which might be problematic after inlining. An unrelated other issue with asm/__asm is that it doesn't follow consistent VEX encoding compared to normal compiler output. Sometimes you might want `paddq x, y`, at other times `vpaddq x, y, z`, but rarely both in the same program. So this can easily nullify any gain obtained with VEX transition costs (if they are still a thing).
Re: LLVM asm with constraints, and 2 operands
On Monday, 19 July 2021 at 16:05:57 UTC, kinke wrote: Is LDC still compatible with GDC/GCC inline asm? I remember Johan saying they will break compatibilty in the near future... I'm not aware of any of that; who'd be 'they'? GCC breaking their syntax is IMO unimaginable. LDC supporting it (to some extent) is pretty recent, was introduced with v1.21. It went under my radar. Thanks for the tips in this thread.
Re: LLVM asm with constraints, and 2 operands
On Monday, 19 July 2021 at 10:21:58 UTC, kinke wrote: What works reliably is a manual mov: OK that's what I feared. It's very easy to get that wrong. Thankfully I haven't used __asm a lot.
Re: LLVM asm with constraints, and 2 operands
On Sunday, 18 July 2021 at 18:48:47 UTC, Basile B. wrote: On Sunday, 18 July 2021 at 18:47:50 UTC, Basile B. wrote: On Sunday, 18 July 2021 at 17:45:05 UTC, Guillaume Piolat wrote: On Sunday, 18 July 2021 at 16:32:46 UTC, Basile B. wrote: [...] Thanks. Indeed that seems to work even when inline and optimized. Registers are spilled to stack. A minor concern is what happens when the enclosing function is extern(C) => https://d.godbolt.org/z/s6dM3a3de I need to check that more... I think this should be rejected, just like when you use D arrays in the interface of an `extern(C)` func, as C has no equivalent of __vector (afaik). But in any case there's a bug. I checked, and thankfully it works when the enclosed function is inlined into an extern(C) function that respects the extern(C) ABI.
Re: LLVM asm with constraints, and 2 operands
On Sunday, 18 July 2021 at 16:32:46 UTC, Basile B. wrote: Yeah I can confirm it's awful. Took me hours to understand how to use it a bit (my PL has [an interface](https://styx-lang.gitlab.io/styx/primary_expressions.html#asmexpression) for LLVM asm) You need to add a "x" to the constraint string return __asm!int4("paddd $1,$0","=x,x,x",a, b); - **=x** says "returns in whatever it has to" - **x** (1) is the constraint for input `a`, which is passed as operand **$0** - **x** (2) is the constraint for input `b`, which is passed as operand **$1** So the thing to get is that the output constraint does not consume anything else, it is standalone. Thanks. Indeed that seems to work even when inline and optimized. Registers are spilled to stack. A minor concern is what happens when the enclosing function is extern(C) => https://d.godbolt.org/z/s6dM3a3de I need to check that more...
LLVM asm with constraints, and 2 operands
Is anyone versed in LLVM inline asm? I know how to generate SIMD unary op with: return __asm!int4("pmovsxwd $1,$0","=x,x",a); but I struggle to generate 2-operands SIMD ops like: return __asm!int4("paddd $1,$0","=x,x",a, b); If you know how to do it => https://d.godbolt.org/z/ccM38bfMT it would probably help build speed of SIMD heavy code, also -O0 performance Also generating the right instruction is good but it must resist optimization too, so proper LLVM constraints is needed. It would be really helpful if someone has understood the cryptic rules of LLVM assembly constraints.
Re: Trivial simple OpenGl working example
On Thursday, 8 July 2021 at 14:09:30 UTC, drug wrote: On 08.07.2021 16:51, Виталий Фадеев wrote: Hi! I am searching for a trivially simple D/OpenGL example that works in 2021. It may be a triangle. It may be based on any library: SDL, GLFW, Derelict, etc. Can you help me? https://github.com/drug007/gfm7/tree/master/examples/simpleshader it's not trivial though but it works (tested in linux) just `dub fetch gfm7` then go to `path\to\gfm7\examples\simpleshader` and run `dub`. All kudos to Guillaume Piolat, original author of gfm library. If like me you hate OpenGL :) you can also get software-rendered DPI-aware triangles with the "turtle" package: https://code.dlang.org/packages/turtle (Courtesy of Cerjones for the software renderer.)
Re: How does inheritance and vtables work wrt. C++ and interop with D? Fns w/ Multiple-inheritance args impossible to bind to?
On Monday, 24 May 2021 at 17:39:38 UTC, Gavin Ray wrote: On Sunday, 23 May 2021 at 21:08:06 UTC, Ola Fosheim Grostad wrote: On Sunday, 23 May 2021 at 21:02:31 UTC, Gavin Ray wrote: I don't really know anything at all about compilers or low-level code -- but is there any high-level notion of "inheritance" after it's been compiled? Yes, in the structure of the vtable, which is why the spec is so hard to read. If possible stick to single inheritance in C++... Yeah agreed, multiple inheritance is asking for trouble. But unfortunately when you're binding to existing libraries you don't have control over the API Hence why I was asking how to make D structs/classes that have compatible or identical vtables to multiply inherited objects to pass as arguments to `extern (C++)` functions. Also general explanation of what makes a compiled variable compatible in terms of vtable with what's expected as an argument I'd be grateful for solid information on this AFAIK multiple inheritance is described in this book https://www.amazon.com/Inside-Object-Model-Stanley-Lippman/dp/0201834545 Multiple inheritance is a rare topic here; I doubt many people know how it works internally. Java and COM stuck with single inheritance because it gives you 99% of the bang for the buck; also, v-table dispatch in the case of multiple inheritance is not as straightforward.
Re: DUB doesn't seem to respect my config, am I doing something wrong?
On Saturday, 22 May 2021 at 20:28:56 UTC, rempas wrote: I'm compiling using `dub --config=development` and I'm getting the following line: `Performing "debug" build using /usr/bin/dmd for x86_64`. The same exactly happens when I'm trying to do the release config. If I disable the `targetType` option, it seems that it's creating a library and I can also manually change the compiler and the build-type so I don't know what's going on

Hello, DUB has two separate concepts:

- buildTypes: the default ones are debug, release, release-debug, release-nobounds. You can define custom buildTypes, selected with -b. https://dub.pm/package-format-json.html#build-types The "debug" build type is used by default.

- configurations: more often used to define software options. You can define custom configurations, selected with -c. By default the first one in your file is taken, else it's a default configuration. People use configurations to define example programs or platform builds (probably because buildTypes are limited), but they are primarily intended for enabling or disabling features in software.
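A hedged `dub.json` sketch of the two concepts side by side (the package name, build type name, and version identifiers are invented for illustration):

```json
{
    "name": "myapp",
    "buildTypes": {
        "profiled": {
            "buildOptions": ["optimize", "profile"]
        }
    },
    "configurations": [
        { "name": "development", "versions": ["DevFeatures"] },
        { "name": "production",  "versions": [] }
    ]
}
```

These combine freely at the command line, e.g. `dub -b profiled -c development`: `-b` picks the build type, `-c` picks the configuration.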
Re: running a d compiler on the Mac Mini with an M1 chip
On Friday, 26 March 2021 at 22:41:08 UTC, dan wrote: On Friday, 26 March 2021 at 21:54:20 UTC, rikki cattermole wrote: On 27/03/2021 10:51 AM, dan wrote: Are there any d compilers that run natively on the Mac Mini with an M1 chip? If so, does anybody here have any experience with them that can be shared? If not, and your machine is a mac mini, how would you go about programming in d on it? TIA for any info! Looks like latest ldc has an arm build. But both dmd and ldc should already work due to x86 emulation that takes place. https://github.com/ldc-developers/ldc/releases/tag/v1.25.1 Thanks Rikki! If anybody has any particular experience using d on a mac mini with M1 that they want to share, please do post, but this does look promising. dan (Not M1 but the DTK) Hello, Here are the instructions for setup and building both for arm64 and x86_64: https://forum.dlang.org/post/rtf2j3$2oh1$1...@digitalmars.com In addition to these instructions, you can also use the native LDC for faster build.
Re: How to delete dynamic array ?
On Wednesday, 17 March 2021 at 10:54:10 UTC, jmh530 wrote: This is one of those things that is not explained well enough. Yes. I made this article to clear up that point: https://p0nce.github.io/d-idioms/#Slices-.capacity,-the-mysterious-property "That a slice own or not its memory is purely derived from the pointed area." could perhaps better be said as "A slice is managed by the GC when the memory it points to is in GC memory"?
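A small sketch of that rule (illustrative code, not from the article): the very same slice type is GC-managed or not depending purely on where the pointed-to memory lives, and `.capacity` reflects the difference.

```d
import core.stdc.stdlib : malloc, free;

void main()
{
    // Points into GC memory: the GC tracks and can extend this block.
    int[] gcSlice = new int[8];
    assert(gcSlice.capacity >= 8);

    // Same type, but points into the C heap: not GC-managed,
    // so .capacity reports 0 (appending would reallocate on the GC heap).
    int* p = cast(int*) malloc(8 * int.sizeof);
    int[] cSlice = p[0 .. 8];
    assert(cSlice.capacity == 0);

    free(p); // the C allocation remains our responsibility
}
```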
Re: Is it possible to suppress standard lib and dlang symbols in dylib (macos)
On Sunday, 14 March 2021 at 11:33:00 UTC, David wrote: Anyone else done this? Pointers welcome. Sorry for delay. Just add "dflags-osx-ldc": ["-static"],
Re: Is it possible to suppress standard lib and dlang symbols in dylib (macos)
On Thursday, 11 March 2021 at 08:34:48 UTC, David wrote: I thought it would be fun to convert some old C++/C quant utils to D. I'm starting with a simple library that I call from vba in Excel on macos: module xlutils; import core.stdc.string : strlen, strcpy; //import std.conv : to; //import std.string : toStringz; import core.stdc.stdlib : malloc, free; extern (C) double addDD_D(double a, double b) {return a + b;} ... Is there a way of not exposing the symbols that aren't mine? - I only need a simple C interface. Thx David Create an exports.lst file with: _addDD_D as the only line there. Build with: "lflags-osx-ldc": [ "-exported_symbols_list", "exports.lst", "-dead_strip" ],
Re: Can't I allocate at descontructor?
On Friday, 5 March 2021 at 20:28:58 UTC, Ali Çehreli wrote: To my surprise, even though 'c' is not null below, the destructor is not executed multiple times. Hence why https://p0nce.github.io/d-idioms/#GC-proof-resource-class works as a detector of non-determinism.
Re: Using YMM registers causes an undefined label error
On Saturday, 6 March 2021 at 16:09:03 UTC, Imperatorn wrote: On Saturday, 6 March 2021 at 15:40:56 UTC, Rumbu wrote: On Saturday, 6 March 2021 at 12:15:43 UTC, Mike Parker wrote: [...] Where exactly is documented the extern(D) x86-64 calling convention? Because currently it seems like a mess according to the disassembly. First X parameters on stack from left to right, last 4 in registers. But wait, if you have less than 4 parameters, they are passed in registers. Again, WTF? Reading this, I'm experiencing true fear for the first time in my life. I'm also learning that extern(D) is different across compilers in some cases, but it isn't that bad. The preferred ABI at boundaries between executables is extern(C). If you deal with static libraries, they are likely built with the same compiler anyway. When LDC changes the extern(D) ABI, it is rightfully a minor change as everything will get rebuilt. https://github.com/ldc-developers/ldc/releases/tag/v1.25.0 Besides, such changes are there for efficiency :)
Re: DMD support for Apples new silicon
On Tuesday, 2 March 2021 at 08:01:41 UTC, tastyminerals wrote: On Sunday, 10 January 2021 at 14:50:44 UTC, Guillaume Piolat wrote: On Sunday, 10 January 2021 at 14:22:25 UTC, Christian Köstlin wrote: [...] Hello Christian, [...] I see that there is a ldc2-1.25.1-osx-arm64.tar.xz already among https://github.com/ldc-developers/ldc/releases So, one could use this straight away, right? Yes, it will run faster and you get to avoid the flag to target arm64. On the minus side, you can't target x86_64 with that build IIRC, whereas the x86_64 one cross-compiles to arm64.
Re: Optimizing for SIMD: best practices?(i.e. what features are allowed?)
On Thursday, 25 February 2021 at 14:28:40 UTC, Guillaume Piolat wrote: On Thursday, 25 February 2021 at 11:28:14 UTC, z wrote: How does one optimize code to make full use of the CPU's SIMD capabilities? Is there any way to guarantee that "packed" versions of SIMD instructions will be used?(e.g. vmulps, vsqrtps, etc...) https://code.dlang.org/packages/intel-intrinsics A bit of elaboration on why you might want to prefer intel-intrinsics:
- it supports all D compilers, including DMD with a 32-bit target
- it targets arm32 and arm64 with the same code (LDC only)
- core.simd just gives you the basic operators, but not, say, pmaddwd or any of the complex instructions. Some instructions need very specific work to get them.
- at least with LLVM, the optimizer works reliably across subsequent versions of the compiler.
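As a hedged illustration of the package in use (the intrinsic names mirror Intel's C API; this particular snippet is just a sketch and needs the intel-intrinsics DUB dependency):

```d
import inteli.xmmintrin; // SSE intrinsics, same names as in C

// Multiplies each lane by 0.5 and takes the square root,
// compiling to packed vmulps + vsqrtps on x86 targets.
float[4] scaledSqrt(const float[4] input)
{
    __m128 v = _mm_loadu_ps(input.ptr);
    __m128 r = _mm_sqrt_ps(_mm_mul_ps(v, _mm_set1_ps(0.5f)));
    float[4] result;
    _mm_storeu_ps(result.ptr, r);
    return result;
}
```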
Re: Optimizing for SIMD: best practices?(i.e. what features are allowed?)
On Thursday, 25 February 2021 at 11:28:14 UTC, z wrote: How does one optimize code to make full use of the CPU's SIMD capabilities? Is there any way to guarantee that "packed" versions of SIMD instructions will be used?(e.g. vmulps, vsqrtps, etc...) https://code.dlang.org/packages/intel-intrinsics
Re: Profiling
On Wednesday, 10 February 2021 at 11:52:51 UTC, JG wrote: Thanks for the suggestions. However, I would prefer not to spend time trying to debug d-profile-viewer at the moment. As a follow up question I would like to know what tool people use to profile d programs? Here is what I use as a sampling profiler:
(On Windows) Build with LDC, x86_64, with dub -b release-debug in order to have debug info. Run your program in:
- Intel Amplifier (free with System Studio)
- AMD CodeXL (more lightweight, and very good)
- Very Sleepy
(On Mac) Build with dub -b release-debug. Run your program with Instruments.app, which you can find inside Xcode.app.
(On Linux) I don't know. Though most of the time, to validate an optimization, a comparison program that runs two similar code paths and computes the speed difference can be needed.
Re: D meets GPU: recommendations?
On Friday, 29 January 2021 at 16:34:25 UTC, Bruce Carneal wrote: The project I've been working on for the last few months has a compute backend that is currently written MT+SIMD. I would like to bring up a GPU variant. What you could do is resurrect DerelictCL, port it to BindBC, and write vanilla OpenCL 1.2 + OpenCL C. I'm not up to date on both, but CUDA is messier than OpenCL. I don't really know about the other possibilities, like OpenGL + compute shaders or Vulkan + compute shaders.
Re: Why many programmers don't like GC?
On Friday, 15 January 2021 at 19:49:34 UTC, Ola Fosheim Grøstad wrote: Many open source projects (and also some commercial ones) work ok for small datasets, but tank when you increase the dataset. So "match and mix" basically means use it for prototyping, but do-not-rely-on-it-if-you-can-avoid-it. It's certainly true that in team dynamics, without any reward, efficiency can fall victim to a tragedy of the commons. Well, any software invariant is harder to hold if the shareholders don't care (be it "being fast", "being correct", or other invariants).
Re: Why many programmers don't like GC?
On Friday, 15 January 2021 at 18:55:27 UTC, Ola Fosheim Grøstad wrote: On Friday, 15 January 2021 at 18:43:44 UTC, Guillaume Piolat wrote: Calling collect() isn't very good, it's way better to ensure the GC heap is relatively small, hence easy to traverse. You can use -profile=gc for this (noting that things that can't contain pointers, such as ubyte[], scan way faster than void[]) Ok, so what you basically say is that the number of pointers to trace was small, and perhaps also the render thread was not under GC control? A small GC heap is sufficient. There is this blog post with a quantitative measure of the sub-1ms D GC heap size: http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html 200 KB can be scanned/collected in 1 ms. Since then the D GC has improved in many ways (multicore, precise, faster...) that surprisingly have not been publicized that much; but probably the suggested realtime heap size is in the same order of magnitude. In this 200 KB figure, things that can't contain pointers don't count.
Re: Why many programmers don't like GC?
On Friday, 15 January 2021 at 16:37:46 UTC, Ola Fosheim Grøstad wrote: But when do you call collect? Do you not create more and more long-lived objects? Calling collect() isn't very good, it's way better to ensure the GC heap is relatively small, hence easy to traverse. You can use -profile=gc for this (noting that things that can't contain pointers, such as ubyte[], scan way faster than void[]) How do you structure this? Limit GC to one main thread? But an audio plugin GUI is not used frequently, so... hiccups are less noticeable. For a 3D or animation editor hiccups would be very annoying. Yes, but when a hiccup happens you can often trace it back to garbage generation and target it. It's an optimization task. I think it is better with something simpler like saying one GC per thread But then ownership doesn't cross threads, so it can be tricky to keep objects alive when they cross threads. I think that was a problem in Nim. It really is quite easy to do: build your app normally, eventually optimize later by using manual memory management. I understand what you are saying, but it isn't all that much more work to use explicit ownership if all the libraries have support for it. But sometimes that ownership is just not interesting. If you are writing a hello world program, no one cares who the "hello world" string belongs to. So the GC is that global owner.
Re: Why many programmers don't like GC?
On Friday, 15 January 2021 at 16:21:18 UTC, Ola Fosheim Grøstad wrote: What do you mean by "mix and match"? If it means shutting down the GC after initialization then it can easily backfire for more complicated software that accidentally calls code that relies on the GC. I mean: "using GC, except where it creates problems". Examples below. Until someone can describe a strategy that works for a full application, e.g. an animation-editor or something like that, it is really difficult to understand what is meant by it. Personal examples:
- The game Vibrant uses GC for some long-lived objects, and memory pools for most game entities. The audio thread has the GC disabled.
- Dplug plugins, before runtime removal, used GC in the UI but no GC in whatever was called repeatedly, leading to no GC pause in practice. In case an error was made, it would be a GC pause, but not a leak.
The pain point with the mixed approach is adding GC roots when needed. You need a mental model of traceability. It really is quite easy to do: build your app normally, eventually optimize later by using manual memory management.
Re: Why many programmers don't like GC?
On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote: That's the whole point of being able to mix and match. Anyone avoiding the GC completely is missing it (unless they really, really, must be GC-less). +1 mix and match is a different style versus only having a GC, or only having lifetimes for everything. And it's quite awesome as a style, since half of things don't need a well-identified owner.
Re: Why many programmers don't like GC?
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote: I've always heard programmers complain about Garbage Collector GC. But I never understood why they complain. What's bad about GC? Languages where GC usage is unavoidable (Javascript, Java) have created a lot of situations where there is a GC pause in a realtime program, and the cause is dynamically allocated memory. So a lot of people made their opinion of the GC while using setups where you couldn't really avoid it. For example, in Javascript from 10 years ago, just using a closure or an array literal could make your web game stutter.
Re: writeln and write at CTFE
On Wednesday, 13 January 2021 at 08:35:09 UTC, Andrey wrote: Hello all, Tell me please how can I "writeln" and "write" in function that is used in CTFE? At the moment I get this: import\std\stdio.d(4952,5): Error: variable impl cannot be modified at compile time Or may be exist some other ways to do it? Use pragma(msg, ...).
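A small sketch of the idiom, with a made-up CTFE-able function:

```d
// pragma(msg, ...) prints during compilation, not at runtime.
int triple(int x) pure { return 3 * x; }

enum result = triple(14);             // enum initializer forces CTFE
pragma(msg, "triple(14) = ", result); // printed by the compiler
```

Note this prints the *result* of CTFE; it cannot print from inside a function while CTFE is running.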
Re: DMD support for Apples new silicon
On Sunday, 10 January 2021 at 16:03:53 UTC, Christian Köstlin wrote: Good news! I was hoping for support in ldc, but dmds super fast compile times would be very welcome. I guess it's more work to put an ARM backend there. Kind regards, Christian It is indeed more work and up to the DMD leadership what should happen. You can already switch between compilers with: dub --compiler dmd dub --compiler ldc2 so as to benefit from dmd fast build times, and then release with ldc. Apple Silicon and Rosetta 2 are really quite fast, so you should experience pretty quick build times there anyway.
Re: DMD support for Apples new silicon
On Sunday, 10 January 2021 at 14:22:25 UTC, Christian Köstlin wrote: Hi all, are there any plans on supporting Apples new ARM silicon with DMD or would this be something for ldc? Kind regards, Christian Hello Christian, LDC since 1.24+ supports cross-compiling to Apple Silicon. Here is how to build for it on Big Sur. 1. Download ldc2-1.24.0-osx-x86_64.tar.xz (or later version) from this page: https://github.com/ldc-developers/ldc/releases 2. Unzip where you want, and put the bin/ subdirectory in your PATH envvar This will give you the ldc2 and dub command in your command-line, however they won't work straight away in Catalina/Big Sur because of lacking notarization. 3. (optional) In this case, in Finder, right-click + click "Open" on the bin/dub and bin/ldc2 binaries since it is not notarized software, and macOS will ask for your approval first. Once you've done that, dub and ldc2 can be used from your Terminal normally. 4. Type 'ld' in Terminal, this will install the necessary latest XCode.app if it isn't already. That is a painful 10 GB download in general. You can also install Xcode from the App Store. People target Big Sur arm64 from Catalina or Big Sur usually. 5. You can target normal x86_64 (Rosetta 2) with: ldc2 dub 6. If you want to target arm64, adapt the SDK path in etc/ldc2.conf with your actual Xcode macOS11.0 path, and then use -mtriple=arm64-apple-macos to cross-compile. ldc2 -mtriple=arm64-apple-macos dub -a arm64-apple-macos Debugging and notarization are a whole other topic.
Re: How to Install D on my new MacBook with M1 ARM computer
On Tuesday, 29 December 2020 at 19:04:33 UTC, Dave Chapman wrote: Greetings, Apologies If I have double posted. I received a MacBook pro M1 for Christmas and I would like to install a D compiler on it. After looking at the downloads page I don't see how to install D on a new MacBook. I did not see a precompiled version to download with the possible exception of ldc for macOS with 64 bit ARM support (thanks Guillaume!) Hello, 1. Download ldc2-1.24.0-osx-x86_64.tar.xz (or later version) from this page: https://github.com/ldc-developers/ldc/releases 2. Unzip where you want, and put the bin/ subdirectory in your PATH envvar This will give you the ldc2 and dub command in your command-line, however they won't work straight away... 3. In Finder, right-click + click "Open" on the bin/dub and bin/ldc2 binaries since it is not notarized software, and macOS will ask for your approval first. Once you've done that, dub and ldc2 can be used from your Terminal normally. 4. Type 'ld' in Terminal, this will install the necessary latest XCode.app if it isn't already. That is a painful 10 gb download in general. You can also install Xcode from the App Store. 5. You can target normal x86_64 (Rosetta 2) with: ldc2 dub 6. If you want to target arm64, adapt the SDK path in etc/ldc2.conf with your actual Xcode macOS11.0 path, and then use -mtriple=arm64-apple-macos to cross-compile. ldc2 -mtriple=arm64-apple-macos dub -a arm64-apple-macos Let me know if you want to _distribute_ consumer software for macOS, there are a lot more complications with signing and notarization.
Re: How to resize an image ? 樂
On Friday, 25 December 2020 at 20:59:03 UTC, vnr wrote: Hello For a small "script" that generates printable files, I would need to change the size of an image (which is loaded into memory as an array of bytes) to shrink it to scale if it exceeds the A4 page size. To load the images into memory and generate a PDF, I use the "printed" package. It is not very provided but is sufficient for my use, I just need the resize option... Is there a relatively simple way to do this? Thank you. Hello, I've updated `printed` to v1.0.1, you can now call: /// Draws an image at the given position, with the given width and height. /// Both `width` and `height` must be provided. void drawImage(Image image, float x, float y, float width, float height); http://printed.dpldocs.info/printed.canvas.irenderer.IRenderingContext2D.html
Re: How to resize an image ? 樂
On Friday, 25 December 2020 at 20:59:03 UTC, vnr wrote: Hello For a small "script" that generates printable files, I would need to change the size of an image (which is loaded into memory as an array of bytes) to shrink it to scale if it exceeds the A4 page size. To load the images into memory and generate a PDF, I use the "printed" package. It is not very provided but is sufficient for my use, I just need the resize option... Is there a relatively simple way to do this? Thank you. printed uses the DPI information in your image to set the target size. You do not necessarily need to change the pixels. Save your PNG / JPEG with proper DPI information.
Re: C++ or D?
On Tuesday, 10 November 2020 at 01:00:50 UTC, Mark wrote: Hi all, Anyone have any thoughts how C++ and D compare? C++ has a bit more mathematical feeling, everything has been sorted out in the spec, even if the rules are crazy difficult. D feels like it's up to _you_ to write the spec as you discover things in the compiler. C++ code feels a bit more cast in stone than any other language; you can't move things around as quickly, and you won't be willing to. But as you write the lines more slowly, you are likely a bit more careful too as a side-effect. If you write a small command-line tool, using D vs C++ will be appreciably more productive. Just std.process will speed things up by a lot; for this kind of work Phobos really shines. I don't think it makes the same difference for large projects. Learning D is something that can be almost finished, whereas with C++ you have to aggressively conquer new features from the standard one by one, and unfortunately C++ evolves faster than you can assimilate it. Generally when you meet a C++ programmer, you are meeting someone who has given up the hope of having a full understanding of the language and instead stays strategically on a useful, codebase-specific subset (eg: if you learn about std::unique_ptr, you can avoid learning most of move semantics, so that's a good learning investment). D lets you think more about your problem domain, and less about language things. Don't know precisely why. If you are deeply immersed in C++ every day, you won't see that problem, but it's there. It's as if the culture of C++ was "complexity is free"; there is little attempt to contain it. And it shows in the small things, for example:
- D atomics (core.atomic) has 11 public functions and defines 5 memory models.
- C++ atomics has 29 functions and 6 memory models.
It doesn't seem like much, but there is a bit _more of everything_ you can count.
All in all as a D replacement C++ seems a bit lacking, unless you want a particular domain-specific library that only exists in C++. I'm sure with a bit more effort, it could be a bit more attractive to the vast masses of D programmers.
Re: Docs generation example
On Saturday, 10 October 2020 at 02:07:02 UTC, Виталий Фадеев wrote: Wanted! Docs generation example. I have dub project, sources/*.d. I want html-index with all classes/functions. Is exists simple, hi-level, one-line command line solution ? Alternatively: 1. Publish the 'blablah' package on the DUB registry. 2. Navigate to the https://blablah.dpldocs.info/index.html URL
Re: Problem with gfm.math.matrix (some gamedevs out there ?)
On Thursday, 3 September 2020 at 12:36:35 UTC, Thomas wrote: - import std.stdio; int main() { import gfm.math.matrix; const int width = 800; const int height = 600; auto projectionMatrix = mat4!(float).identity(); Note that instead of `mat4!(float)` you can just use `mat4f`. auto ratio = cast(float)width / cast(float)height; projectionMatrix = mat4!(float).perspective( 45.0f, ratio, 0.0f, 100.0f ); As others said, zNear is zero so your matrix is not invertible. I guess perspective should warn about that.
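If that diagnosis is right, the fix is simply a strictly positive near plane; 0.1f below is an arbitrary but common choice, adapted from the snippet in the question:

```d
// zNear must be > 0, otherwise the perspective matrix degenerates
projectionMatrix = mat4!(float).perspective( 45.0f, ratio, 0.1f, 100.0f );
```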
Re: Lack of asm volatile qualifier (explicitly) again.
On Tuesday, 28 July 2020 at 06:57:36 UTC, Cecil Ward wrote: What do others think? If others agree, how could a very small DIP be set in motion ? Hello, LDC lets you do optimizable assembly with ldc.llvmasm.__asm Better yet, you can also create IR directly with ldc.llvmasm.__ir_pure This will yield more portable results with optimal efficiency in a lot of cases. GDC lets you do optimizable assembly if you can understand its arcane syntax! But all this isn't very useful since writing assembly directly rarely leads to the fastest results, provided you use a modern backend.
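For reference, a sketch of what LDC's optimizable assembly looks like (LDC-only; the constraint string follows LLVM's inline-asm syntax, and this exact snippet is illustrative, not tested):

```d
version(LDC)
{
    import ldc.llvmasm;

    // "=r,0": result in any register, input tied to that same register.
    // Unlike a DMD-style asm{} block, this can be inlined and optimized.
    uint byteSwap(uint x)
    {
        return __asm!uint("bswap $0", "=r,0", x);
    }
}
```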
Re: How DerelictCL works
On Tuesday, 21 July 2020 at 12:00:03 UTC, bioinfornatics wrote: Dear, I would like to use OpenCL in D. Thus I try to use DerelictCL. But I fail to use it I encounter this error message: Hello, I don't have time at all at the moment for maintaining DerelictCL, can you provide a fully working PR that fix your problem? I will then make a git tag. Guillaume
Re: Windows + LDC/DMD installation nightmare when changing VS versions
On Friday, 12 June 2020 at 19:21:46 UTC, kinke wrote: On Friday, 12 June 2020 at 15:21:12 UTC, Guillaume Piolat wrote: Any idea what could be causing this? Mentioning at least the used LDC version would be helpful; especially since the MSVC detection was completely overhauled with the v1.22 betas (and I think the previous non-existing-LDC_VSDIR hack wouldn't work anymore). LDC doesn't need a reinstall when tampering the VS installations (there's no setup process, MSVC auto-detection runs each time). - Assuming you are using an LDC version < 1.22, you can manually check the auto-detection result by invoking `bin\msvcEnv.bat ` (e.g., by checking the env variables afterwards via `set`). Some leftovers from uninstalled VS installations might be problematic, but probably hardly the reason for a 32-bit libcmt.lib to be linked with a 64-bit target. But I'd start first with checking whether LDC/dub works in a naked command prompt, to rule out that VisualD is interfering. [And adding -v to the LDC commandline is useful for debugging linking problems.] Thanks a lot. I was trying with LDC 1.17.0 and LDC 1.20.1 64-bit linking works within a VSvars shell. It also seems I have disk-related problems, so a faulty VS installation might be at fault. Anyway, thanks everyone for the help. I'm doing a chkdsk while installing on another laptop. ^^
Re: Windows + LDC/DMD installation nightmare when changing VS versions
On Friday, 12 June 2020 at 16:16:18 UTC, mw wrote: --arch=x86_64 ? check where this config is set? you said it’s for 32 bit Indeed it's the other way around, it's with -a x86_64
Windows + LDC/DMD installation nightmare when changing VS versions
Originally I installed VisualD and LDC and DMD with the VisualD installer on top of VS2019 and life was good. Then because VS2019 is very slow, I uninstalled VS2019 and installed VS2015 instead. This broke both DMD+64-bit and LDC despite having LDC_VSDIR set at "invalid-path". Isn't it supposed to auto-detect? Well it wasn't anymore and LINK.EXE would not get found. I then reinstalled stuff with the VisualD installer, which fixed DMD + 64-bit (the linking stage was never finishing) but not LDC + 32-bit. Now:
- With DMD + 64-bit it works.
- With DMD + 32-bit it works.
- With LDC + 64-bit it works.
- With LDC + 32-bit it still fails with: libcmt.lib(chkstk.obj) : fatal error LNK1112: module machine type 'X86' conflicts with target machine type 'x64' Error: C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\link.exe failed with status: 1112 ldc2 failed with exit code 1. error: Command 'dub build --build=debug --arch=x86_64 --compiler=ldc2 --config=VST-FULL' returned 2
Any idea what could be causing this? Please help. This was a living nightmare. I just want a working setup...
Re: Objective C protocols
On Saturday, 16 May 2020 at 19:14:51 UTC, John Colvin wrote: What's the best way to implement an Objective C protocol in D? I see mention here https://dlang.org/changelog/2.085.0.html#4_deprecated_objc_interfaces but it's not clear where things are these days. I did it through the Obj-C runtime a while ago: https://github.com/AuburnSounds/Dplug/blob/dda1f80d69e8bfd4af0271721738ce827c2f0eae/au/dplug/au/cocoaviewfactory.d#L99 and the result is brittle: you need to replicate the protocol declaration, add all methods, etc.
Re: XMM Intrinsics
On Friday, 8 May 2020 at 12:38:51 UTC, Marcio Martins wrote: How would I go about calling _mm_* functions in D in a way that is portable between D compilers? Hello, I've made this library for that exact purpose: https://github.com/AuburnSounds/intel-intrinsics Supports every intrinsic listed under MMX/SSE/SSE2/SSE3 in https://software.intel.com/sites/landingpage/IntrinsicsGuide/
Re: How to call 'shared static this()' code of a D shared library?
On Saturday, 18 January 2020 at 03:53:43 UTC, Adam D. Ruppe wrote: Did you already try rt_init? That should trigger it Indeed, this is done by runtime initialization.
Re: Help me decide D or C
On Wednesday, 31 July 2019 at 18:38:02 UTC, Alexandre wrote: Should I go for C and then when I become a better programmer change to D? Should I start with D right now? D and C++ (and probably other languages) inherit features of C such as operator precedence, integer promotion, and a few other things. So learning these specific points of C will pay dividends. However, I don't see any other reason - apart from platform support maybe - to bother with C when D is available.
Re: accuracy of floating point calculations: d vs cpp
On Monday, 22 July 2019 at 13:23:26 UTC, Guillaume Piolat wrote: On Monday, 22 July 2019 at 12:49:24 UTC, drug wrote: I have almost identical (I believe it at least) implementation (D and C++) of the same algorithm that uses Kalman filtering. These implementations though show different results (least significant digits). Before I start investigating I would like to ask if this issue (different results of floating points calculation for D and C++) is well known? May be I can read something about that in web? Does D implementation of floating point types is different than the one of C++? Most of all I'm interesting in equal results to ease comparing outputs of both implementations between each other. The accuracy itself is enough in my case, but this difference is annoying in some cases. Typical floating point operations in single-precision like a simple (a * b) + c will provide a -140dB difference if order is changed. It's likely the order of operations is not the same in your program, so the least significant digit should be different. What I would recommend is to compute the mean relative error, in double, and if it's below -120 dB, not bother. This is an incredibly low relative error of 0.0001%. You will have no difficulty making your D program deterministic, but knowing exactly where the C++ and D versions differ will take long and serve no purpose.
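A sketch of such a comparison, computing the mean relative error in double precision (the helper name is made up):

```d
import std.math : abs, log10;

/// Mean relative error between two result sets, expressed in dB.
/// double.min_normal guards against division by zero and log10(0).
double meanRelativeErrorDb(const(double)[] a, const(double)[] b)
{
    assert(a.length == b.length && a.length > 0);
    double acc = 0.0;
    foreach (i; 0 .. a.length)
        acc += abs(a[i] - b[i]) / (abs(b[i]) + double.min_normal);
    return 20.0 * log10(acc / a.length + double.min_normal);
}
```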
Re: accuracy of floating point calculations: d vs cpp
On Monday, 22 July 2019 at 12:49:24 UTC, drug wrote: I have almost identical (I believe it at least) implementation (D and C++) of the same algorithm that uses Kalman filtering. These implementations though show different results (least significant digits). Before I start investigating I would like to ask if this issue (different results of floating points calculation for D and C++) is well known? May be I can read something about that in web? Does D implementation of floating point types is different than the one of C++? Most of all I'm interesting in equal results to ease comparing outputs of both implementations between each other. The accuracy itself is enough in my case, but this difference is annoying in some cases. Typical floating point operations in single-precision like a simple (a * b) + c will provide a -140dB difference if order is changed. It's likely the order of operations is not the same in your program, so the least significant digit should be different.
Re: OT - Git training Lon/HK and book recommendation on taste in programming
On Wednesday, 1 May 2019 at 09:51:01 UTC, Laeeth Isharc wrote: Second question. Lots of people these days start to program to solve their problems at work but they may never have been shown the basic principles of design, structuring and maintenance of their code. If I could give them one book (and a few YouTube links) what should it be ? Pragmatic Programmer
Re: Recommendations for best JSON lib?
On Sunday, 21 April 2019 at 02:09:29 UTC, evilrat wrote: On Saturday, 20 April 2019 at 20:44:22 UTC, Guillaume Piolat wrote: On Saturday, 20 April 2019 at 18:49:07 UTC, Nick Sabalausky (Abscissa) wrote: I only need to read arbitrary JSON data, no need for writing/(de)serialization. std.json is simple as pie. However IIRC it fails with trailing commas, means that for reading user written JSON's it might be annoying. I also tried experimental std json, asdf and vibe.d. The only one that worked for me is vibe.d JSON subpackage, and adding simple commented lines stripping is simple with phobos, because there is absolutely no libraries that can handle JSON comments yet. (yes, I know it's not standard) I wrote a JSON parser just for this use case https://gitlab.com/AuburnSounds/rub/blob/master/source/permissivejson.d
Re: Recommendations for best JSON lib?
On Saturday, 20 April 2019 at 18:49:07 UTC, Nick Sabalausky (Abscissa) wrote: I only need to read arbitrary JSON data, no need for writing/(de)serialization. std.json is simple as pie. import std.json: parseJSON; import std.file: read; JSONValue dubFile = parseJSON(cast(string)(read("dub.json"))); string name = dubFile["name"].str;
Re: How can I build dynamic library with ldc in termux?
On Sunday, 3 March 2019 at 01:51:49 UTC, Domain wrote: On Sunday, 3 March 2019 at 01:47:50 UTC, Domain wrote: /data/data/com.termux/files/usr/bin/aarch64-linux-android-ld: cannot find -lphobos2-ldc-shared /data/data/com.termux/files/usr/bin/aarch64-linux-android-ld: cannot find -ldruntime-ldc-shared Any dub config example? Perhaps this: add this flag to your dub.json: "dflags-linux-dmd": ["-defaultlib=libphobos2.a"], and if you are using SDLang, convert it to JSON before :)
Re: Handling big FP numbers
On Saturday, 9 February 2019 at 02:54:18 UTC, Adam D. Ruppe wrote: (The `real` thing in D was a massive mistake. It is slow and adds nothing but confusion.) We've had occasional problems with `real` being 80-bit on FPU giving more precision than asked, and effectively hiding 32-bit float precision problems until run on SSE. Not a big deal, but I would argue giving more precision than asked is a form of Postel's law: a bad idea.
Re: Bitwise rotate of integral
On Monday, 7 January 2019 at 14:39:07 UTC, Per Nordlöw wrote: What's the preferred way of doing bitwise rotate of an integral value in D? Are there intrinsics for bitwise rotation available in LDC? Turns out you don't need any: https://d.godbolt.org/z/C_Sk_- Generates ROL instruction.
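For illustration, the usual rotate idiom that LLVM pattern-matches into a single ROL (recent druntime versions also ship core.bitop.rol/ror):

```d
// The masking makes every shift amount well-defined in D,
// yet LDC still lowers the whole expression to one ROL instruction.
uint rotateLeft(uint x, uint n) @safe pure nothrow @nogc
{
    n &= 31;
    return (x << n) | (x >> ((32 - n) & 31));
}
```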
Re: DirectXMath alternative
On Wednesday, 5 December 2018 at 11:43:46 UTC, evilrat wrote: Are you sure you don't confuse lines with columns? Here it says it is row major https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/matrix.d#L17 Yes, sorry I made a mistake. It's indeed row-major in gfm:math. The only real difference is the order of operations. IIRC however gfm tries to hide this difference and uses math notation. Another thing is memory caches - accessing a row in succession will have a better chance of cached access, while accessing columns will in most (if not all) cases fetch more items only to discard them on the next value. Though I haven't ever profiled this myself to be 100% sure. I don't know if there is a definitive answer as to which is preferable.
Re: DirectXMath alternative
On Wednesday, 5 December 2018 at 01:57:53 UTC, evilrat wrote: On Tuesday, 4 December 2018 at 20:41:54 UTC, Guillaume Piolat wrote: On Tuesday, 4 December 2018 at 20:33:07 UTC, John Burton wrote: What is the best alternative for D, assuming there is anything? (I want vector, matrix math for use in D3, things like inverting a matrix, getting perspective matrices etc) I can program something myself if necessary but I'd prefer not to You have the choice between the following packages: - dlib - gfm:math - gl3n I was using gl3n then switched to gfm math. Try gfm, IIRC it should work without much PITA because it stores matrices row-major way, so you don't have to transpose it like with OpenGL. Can't say anything about dlib though, I tried it a bit with dagon engine, but just didn't stick for long. I think you are mistaken: gfm:math also stores matrices row-major, so you _have_ to transpose them. The problem with row-major is it makes matrix literals read transposed vs the math notation.
Re: DirectXMath alternative
On Tuesday, 4 December 2018 at 20:33:07 UTC, John Burton wrote: What is the best alternative for D, assuming there is anything? (I want vector, matrix math for use in D3, things like inverting a matrix, getting perspective matrices etc) I can program something myself if necessary but I'd prefer not to You have the choice between the following packages: - dlib - gfm:math - gl3n
Re: Small or big dub packages
On Monday, 29 October 2018 at 11:31:55 UTC, Igor wrote: The way I see it the advantage of smaller packages is that users can pick and choose, and only have the code they really need in their project, but the con could become managing a lot of dependencies. Also I am not sure how compile time on a clean project and a previously compiled project would be affected. Pros: - Users can pick exactly what they need. - Encourages decoupling instead of too much cohesion. - Less code to build and maintain. - Fewer chances of breakage on upgrade since you depend on less. - Improved build times since only modified sub-packages get rebuilt. - Good for the ecosystem. Cons: - More link-time work when not using --combined, since each sub-package is compiled as a unit. - Too many sub-packages can slow down builds. - Possibly hitting more DUB edge cases (less of an issue now that DUB has tests). - Directory layout may need to change for proper VisualD support. - On the DUB registry, sub-packages are less popular than "big" packages because they are less discoverable, and for some reason some people won't pick a sub-package when there is a top-level package.
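For illustration, splitting a project into sub-packages is done with the `subPackages` field of the package recipe; a minimal dub.json sketch, assuming hypothetical `math/` and `gl/` subdirectories each containing their own recipe:

```json
{
    "name": "mylib",
    "subPackages": [
        "./math",
        "./gl"
    ]
}
```

Users can then depend on just `"mylib:math"` instead of pulling in the whole top-level package.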
Re: Profiling with DUB?
On Monday, 29 October 2018 at 10:14:23 UTC, Dukc wrote: I'm trying to profile my program, built like: dub build --build=profile When I run the program, where is the performance profile file supposed to appear? I can find nothing new in the program/project root directory. This happens regardless whether I compile with dmd or ldc2. If you want to use sampling profilers (like the free Intel Amplifier coming with System Studio) you can also use dub build -b release-debug And then check in your profiler.
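To answer the original question about where the file appears: with the dmd-style instrumenting profiler, the report (trace.log) is written to the current working directory when the program exits normally, not at build time. A sketch (the binary name is hypothetical):

```shell
# build with instrumentation, then run the program to completion
dub build --build=profile
./yourapp            # hypothetical binary name; must exit normally
less trace.log       # per-function call counts and timings appear here
```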
Re: Is there an efficient byte buffer queue?
On Monday, 8 October 2018 at 09:39:55 UTC, John Burton wrote: I would do much better to maintain a fixed size buffer and maintain read and write positions etc. Perhaps https://github.com/AuburnSounds/Dplug/blob/master/core/dplug/core/ringbuf.d#L16
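In the spirit of the Dplug ring buffer linked above, a minimal fixed-capacity byte queue with read/write positions might look like this (an illustrative sketch, not Dplug's actual API; single-threaded, no growth):

```d
// Fixed-capacity FIFO byte queue backed by one allocation.
struct ByteQueue
{
    ubyte[] buf;
    size_t head;   // read position
    size_t count;  // number of queued bytes

    this(size_t capacity) { buf = new ubyte[capacity]; }

    bool push(ubyte b)
    {
        if (count == buf.length) return false;        // full
        buf[(head + count) % buf.length] = b;         // write position wraps
        count++;
        return true;
    }

    bool pop(ref ubyte b)
    {
        if (count == 0) return false;                 // empty
        b = buf[head];
        head = (head + 1) % buf.length;               // read position wraps
        count--;
        return true;
    }
}
```

A power-of-two capacity would let the modulo become a mask, which is what most real ring buffers (including Dplug's) do.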
Re: Is it possible to translate this API's C headers?
On Monday, 17 September 2018 at 03:16:33 UTC, spikespaz wrote: Could one of you give me pointers about how to go about this? I have the dynamic link libraries, the static libraries, and the header includes. Every other language other than C++ will have the same problem as you interacting with this library, so you could follow this plan. Step 1: Chime in at https://github.com/ultralight-ux/ultralight/issues/15 and wait until it is implemented: everyone will need this, since it's a C++ library, hence unusable from any other language. Step 2: Ask for binary releases in dynlib form, or build them yourself. Step 3: Implement a BindBC or Derelict library.
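Once a C API exists (step 1), step 3 is mostly mechanical: a static binding is just `extern(C)` declarations against the dynlib from step 2. A sketch with entirely made-up names (the library exposes no such C API today):

```d
// Hypothetical C shim for the C++ library; every name here is invented
// for illustration. Opaque struct + create/destroy is the usual shape.
extern(C) nothrow @nogc
{
    struct ULRenderer;                       // opaque handle
    ULRenderer* ul_create_renderer();
    void ul_destroy_renderer(ULRenderer*);
}
```

A BindBC-style binding would instead load these symbols at runtime with function pointers, so the app can start even when the dynlib is missing.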
Re: Manual delegates
On Sunday, 16 September 2018 at 14:45:08 UTC, Vladimir Panteleev wrote: On Sunday, 16 September 2018 at 14:12:27 UTC, Guillaume Piolat wrote: Anyone has any information about the ABI of delegates? In particular how to call them with a particular "this"/frame pointer? To solve a hairy problem I need a delegate with a synthesized frame pointer. https://dpaste.dzfl.pl/cf44417c98f9 The problem is that delegate forwarding seems to require GC closures. I want manually-managed closures. Have a look at the implementation of toDelegate, which does exactly this: https://github.com/dlang/phobos/blob/v2.082.0/std/functional.d#L1463 Thanks. I ended up using toDelegate internally, and enclosing the resulting delegate in code returning a struct with `opCall`. The conclusion is that a "struct with `opCall`" is much easier to implement than faking the delegate ABI, is less brittle, and doesn't require keeping a trampoline context alive beyond the lifetime of the input delegate.
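The "struct with `opCall`" shape referred to here is simply a callable value whose context is an ordinary field, so its lifetime is explicit and no GC closure is involved. A minimal sketch with illustrative names:

```d
// A manually-managed "closure": the captured state is a plain field,
// owned and freed by whoever owns the struct, not by the GC.
struct Adder
{
    int n;  // explicit context, instead of a hidden frame pointer
    int opCall(int x) const { return x + n; }
}
```

At a call site it reads exactly like a delegate call, and templated code taking `alias fun` or any callable accepts it unchanged.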
Re: Manual delegates
On Sunday, 16 September 2018 at 14:12:27 UTC, Guillaume Piolat wrote: In particular how to call them with a particular "this"/frame pointer? Related thread: https://forum.dlang.org/post/wjbhpztovxratexao...@forum.dlang.org
Manual delegates
Anyone has any information about the ABI of delegates? In particular how to call them with a particular "this"/frame pointer? To solve a hairy problem I need a delegate with a synthesized frame pointer. https://dpaste.dzfl.pl/cf44417c98f9 The problem is that delegate forwarding seems to require GC closures. I want manually-managed closures.
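On the ABI question itself: a D delegate is a context-pointer/function-pointer pair, and both fields (`.ptr` and `.funcptr`) are assignable, so a delegate with a chosen "this" can be synthesized by hand. This is a sketch only; it relies on the member-function calling convention matching the delegate convention, which holds on common platforms but is exactly the brittleness the thread ends up avoiding.

```d
struct Ctx
{
    int n;
    int method(int x) { return x + n; }
}

// Build a delegate whose "this" is a pointer we supply ourselves.
int delegate(int) synthesize(Ctx* c)
{
    int delegate(int) dg;
    dg.funcptr = &Ctx.method;  // function expecting a Ctx as context
    dg.ptr = c;                // our synthesized "this"/frame pointer
    return dg;
}
```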
Re: C++ GLM(OpenGL Mathematics) D Equivalent.
On Tuesday, 4 September 2018 at 19:23:16 UTC, SrMordred wrote: Most C++ game related projects use GLM as their default math/vector lib (even if not using OpenGL). In D we have (that I found): gfm.math - https://github.com/d-gamedev-team/gfm dlib.math - https://github.com/gecko0307/dlib gl3n - https://github.com/Dav1dde/gl3n But I'm not sure which to pick. Can someone point me out some reasons to use one over the other? (or show some differences) I'm expecting something of equivalent functions and performance as C++ GLM. Thank you! It appears mine is the only one that is @nogc.
Re: Docs for subpackages?
On Thursday, 14 June 2018 at 04:39:16 UTC, 9il wrote: On Wednesday, 13 June 2018 at 14:56:10 UTC, 9il wrote: Hi, I am trying to build a large project that is split into dozen of sub-packages. How I can do it using dub without writing my own doc scripts? --combined does not help here. Best regards, Ilya If your project is public you can use dpldocs. http://mir.dpldocs.info/index.html
Re: Static Array Idiom not working anymore.
On Tuesday, 12 June 2018 at 15:35:42 UTC, Steven Schveighoffer wrote: No, that's not what I mean. What I mean is: int[] arr = [1,2,3].s; int[] arr2 = [4,5,6].s; Legally, the compiler is allowed to reuse the stack memory allocated for arr for arr2. The lifetime of the arr data is over. -Steve https://github.com/p0nce/d-idioms/issues/150 Especially if the stdlib has a way to do this now.
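The stdlib way alluded to here is `std.array.staticArray` (since DMD 2.086), which yields a true fixed-size array; as with the old `.s` idiom, any slice of it is only valid while the variable is in scope. A short sketch:

```d
import std.array : staticArray;

void demo()
{
    auto a = [1, 2, 3].staticArray;  // a real int[3], not a GC slice
    static assert(is(typeof(a) == int[3]));

    int[] view = a[];  // fine here; dangling once demo() returns
    assert(view[1] == 2);
}
```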
Re: Static Array Idiom not working anymore.
On Tuesday, 12 June 2018 at 14:44:12 UTC, Steven Schveighoffer wrote: What you are being told is that your memory is not being kept around. Essentially what you had originally was a memory corruption bug (yes, even before the deprecation happened). Don't do that anymore! And a reminder that this idiom exists because _you can't have static array literals under @nogc_, which is just strange (I know there are reasons, this was debated to death at the time). When D makes a decision that isn't practical, the d-idioms page gets one additional entry.
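To spell out what "can't have static array literals under @nogc" means in code: an array literal assigned to a slice is a GC allocation and is rejected under @nogc, while initializing a stack static array from the same literal allocates nothing, which is exactly what the `.s` idiom sliced. A sketch:

```d
// A slice initialized from a literal needs the GC, so this is rejected:
static assert(!__traits(compiles, () @nogc { int[] a = [1, 2, 3]; }));

// ...but a stack static array initializer is fine under @nogc,
// and slicing it is only valid inside this scope.
void g() @nogc
{
    int[3] tmp = [1, 2, 3];
    int[] view = tmp[];  // slice of stack memory, dies with g()
    assert(view[2] == 3);
}
```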
Re: Static Array Idiom not working anymore.
On Tuesday, 12 June 2018 at 14:44:12 UTC, Steven Schveighoffer wrote: Note to ponce, please update your idioms, this is NOT safe, even within the same function. Just because it does work, doesn't mean it will always work. The language makes no guarantees once the lifetime is over. -Steve I thought it was clear enough because the comment said // Slice that static array __which is on stack__ but now I see how it can be hard to see the unsafety.