D IDE Dexed - v3.9.0
Despite the mini drama last year I've continued developing dexed. The changelog since the last announcement here is a bit long; check https://gitlab.com/basile.b/dexed/-/releases for more information and to get the releases (Linux only).
DDOC generator Harbored-Mod - v0.3.4
Harbored-Mod is a lesser-known documentation generator for the D programming language. Since the last release several old bugs have been fixed. The location [1] has changed too.
- added support for anchors via anchor.js
- enum member attributes are now displayed (they didn't exist when the software was created)
- comments for enum members were not displayed
- fixed a problem when collapsing the TOC in index.html
- the template parameter for typed and templated enums was not displayed
- empty headers were generated, which was revealed by the addition of anchors
[1] https://gitlab.com/basile.b/harbored-mod
Note that I don't have access to the DUB registry to update the location, so Ilya Y., if you read this, maybe you can do that [2] ;).
[2] https://code.dlang.org/packages/harbored-mod
Re: GUI library for DMD 2.090 or DMD 2.091
On Friday, 24 April 2020 at 13:45:22 UTC, Phrozen wrote: I'm too new to DLang and I have a lot to learn. Probably that's why I have a lot of difficulties. Has anyone tried using a GUI library with the latest DMD 2.090 or DMD 2.091? I plan to use this language for a specific thermal calculator application for Windows, but for two days I've been struggling with dub and elementary examples in GUI libraries. I need something simple: a modal window with 3 buttons and two text boxes. So far I have tested DWT, TKD, DFL, and dlangui without success. Can anyone help me with advice or some more recent tutorial? Thank you! You also have GTK-D [1], and there are up-to-date resources to learn it [2]. [1] https://code.dlang.org/packages/gtk-d [2] https://gtkdcoding.com/
Re: How to use import std.algorithm.iteration.permutations?
On Sunday, 19 April 2020 at 20:25:23 UTC, Basile B. wrote: On Sunday, 19 April 2020 at 17:57:21 UTC, David Zaragoza wrote: [...] `permutations()` returns a lazy range (i.e. an iterator). To turn a permutation into a concrete array use .array on each one.
---
void test(int[] array) {}

void main()
{
    int[] a = [1, 1, 2, 2, 3, 3];
    foreach (p; a.permutations)
    {
        test(p.array);
    }
}
---
I forgot to include the imports:
---
import std.algorithm.iteration, std.array;
---
Re: How to use import std.algorithm.iteration.permutations?
On Sunday, 19 April 2020 at 17:57:21 UTC, David Zaragoza wrote: Hi, when I try to build the following:
---
import std.algorithm.iteration;

void test(int[] array);

void main()
{
    int[] a = [1, 1, 2, 2, 3, 3];
    foreach (p; a.permutations)
    {
        test(p);
    }
}
---
I get the error:
.\permutations_example.d(10): Error: function permutations_example.test(int[] array) is not callable using argument types (Indexed!(int[], ulong[]))
.\permutations_example.d(10): cannot pass argument p of type Indexed!(int[], ulong[]) to parameter int[] array
What's the proper way to obtain the array of permutations of a? Kind regards, David
`permutations()` returns a lazy range (i.e. an iterator). To turn a permutation into a concrete array use .array on each one.
---
void test(int[] array) {}

void main()
{
    int[] a = [1, 1, 2, 2, 3, 3];
    foreach (p; a.permutations)
    {
        test(p.array);
    }
}
---
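A complete, compilable version of the answer above, with the imports that the follow-up post notes were missing, might look like this:

```d
import std.algorithm.iteration : permutations;
import std.array : array;
import std.stdio : writeln;

void test(int[] arr)
{
    // arr is a concrete int[], usable anywhere a plain array is expected
    writeln(arr);
}

void main()
{
    int[] a = [1, 1, 2, 2, 3, 3];
    // permutations is lazy; .array materializes each permutation as an int[]
    foreach (p; a.permutations)
        test(p.array);
}
```

Note that `permutations` does not deduplicate, so a 6-element input yields 6! = 720 permutations even when elements repeat.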
Why does a Pegged action not work in this case?
I've started experimenting with Pegged actions. I quickly got blocked by this problem: the action works where I use the rule, but not directly in the rule. Test program, gdb_commander.d:
---
/+dub.sdl:
dependency "pegged" version="~>0.4.4"
versions "dub_run"
+/
module gdb_commander;

import core.stdc.string, std.json;
import pegged.grammar, pegged.peg;

enum gdbMiOutput = `GdbmiOutput:

Output < OutOfBandRecord* ResultRecord? '(gdb)'

#HERE before ResRec OK
ResultRecord < {beginResultRecord} Token? '^' ResultClass (',' Result)* {endResultRecord}

OutOfBandRecord < AsyncRecord / StreamRecord
AsyncRecord < ExecAsyncOutput / StatusAsyncOutput / NotifyAsyncOutput
ExecAsyncOutput < Token? '*' AsyncOutput
StatusAsyncOutput < Token? '+' AsyncOutput
NotifyAsyncOutput < Token? '=' AsyncOutput
AsyncOutput < AsyncClass ( ',' Result )*
ResultClass < 'done' / 'running' / 'connected' / 'error' / 'exit'
AsyncClass < 'stopped'
Result < Variable '=' Value
Variable < String
Value < Const / Object / List
Const < CString
Object < '{}' / '{' Result ( ',' Result )* '}'
List < '[]' / '[' Value ( ',' Value )* ']' / '[' Result ( ',' Result )* ']'
StreamRecord < ConsoleStreamOutput / TargetStreamOutput / LogStreamOutput
ConsoleStreamOutput < '~' CString
TargetStreamOutput < '@' CString
LogStreamOutput < '&' CString
Token <~ [a-zA-Z_][a-zA-Z0-9_]*
CString <~ "\"" (EscapedQuotes / (!"\"" .)
)* :"\"" EscapedQuotes <~ backslash doublequote String <~ [a-zA-Z0-9_\-]*`; T beginResultRecord(T)(T t) { import std.stdio; writeln(__PRETTY_FUNCTION__); return t; } T endResultRecord(T)(T t) { import std.stdio; writeln(t); return t; } mixin(grammar(gdbMiOutput)); version(dub_run) { import std.stdio, std.path, std.file, std.process; import pegged.tohtml; enum testString01 = `^done,path="/usr/bin" (gdb)`; enum testString02 = `^done,threads=[ {id="2",target-id="Thread 0xb7e14b90 (LWP 21257)", frame={level="0",addr="0xe410",func="__kernel_vsyscall", args=[]},state="running"}, {id="1",target-id="Thread 0xb7e156b0 (LWP 21254)", frame={level="0",addr="0x0804891f",func="foo", args=[{name="i",value="10"}], file="/tmp/a.c",fullname="/tmp/a.c",line="158",arch="i386:x86_64"}, state="running"}], current-thread-id="1" (gdb)`; enum testString03 = `^done,new-thread-id="3", frame={level="0",func="vprintf", args=[{name="format",value="0x8048e9c \"%*s%c %d %c\\n\""}, {name="arg",value="0x2"}],file="vprintf.c",line="31",arch="i386:x86_64"} (gdb)`; void exportHTMLandBrowse(T)(auto ref T tree, string name) { string fname = __FILE_FULL_PATH__.dirName ~ "/" ~ name ~ ".html"; if (fname.exists) remove(fname); toHTML(tree, fname); browse(fname); } void main() { GdbmiOutput(testString01).exportHTMLandBrowse("t1"); GdbmiOutput(testString02).exportHTMLandBrowse("t2"); GdbmiOutput(testString03).exportHTMLandBrowse("t3"); } } --- --- $ dub gdb_commander.d --- Also I'd like to report that actions dont work with partially specialized templates: --- T handleResultRecord(bool end, T)(T t); // then you use handleResultRecord!true and handleResultRecord!false in the PEG. ---
Re: Can a lib file be converted to 1 obj file?
On Sunday, 19 April 2020 at 11:33:15 UTC, Andre Pany wrote: On Sunday, 19 April 2020 at 10:53:09 UTC, Basile B. wrote: On Sunday, 19 April 2020 at 10:48:04 UTC, Basile B. wrote: This should work if you pass the static library files to the linker. It is exactly its job to select what's used from the archive. So you would have to pass your stuff, and optionally phobos2 as a static library (but this would also work if linking against phobos2.dll). BTW I have an example here [1], but it's for FreePascal and under Linux, and in the end I've decided to use a dynamic library (but with static linking) [2]. [1] https://gitlab.com/basile.b/link-with-d [2] https://gitlab.com/basile.b/dexed/-/merge_requests/6 The only thing I found so far is that Delphi does not support linking .lib files (the macOS 64-bit compiler, though, seems to support it). I understand from you that FreePascal is able to link .lib files. Was my impression false, and can I link .lib files with Delphi? No (sorry for the false hope). I also found this (‑‑linker-option, for iOS only). Very surprising. Looks like you would have to use a DLL. Note that with static linking of the DLL (`cdecl; external 'mydll';`) you don't have to use a loader.
Re: Can a lib file be converted to 1 obj file?
On Sunday, 19 April 2020 at 10:48:04 UTC, Basile B. wrote: This should work if you pass the static library files to the linker. It is exactly its job to select what's used from the archive. So you would have to pass your stuff, and optionally phobos2 as a static library (but this would also work if linking against phobos2.dll). BTW I have an example here [1], but it's for FreePascal and under Linux, and in the end I've decided to use a dynamic library (but with static linking) [2]. [1] https://gitlab.com/basile.b/link-with-d [2] https://gitlab.com/basile.b/dexed/-/merge_requests/6
Re: Can a lib file be converted to 1 obj file?
On Sunday, 19 April 2020 at 07:50:13 UTC, Andre Pany wrote: Hi, my understanding is that a lib file is a collection of multiple obj files. That is correct. From a Delphi app I want to call D code without using a DLL. Delphi does not know the concept of lib files but can link obj files. Linking all the single obj files of DRuntime, Phobos, and my library might be possible, but I wonder whether there is a better way. Hence the question: if I have a D lib file which contains all the obj files of DRuntime, Phobos, and my custom code, is it possible to convert it to exactly 1 obj file? Or must 1 obj file correspond to exactly 1 D module? Kind regards, Andre This should work if you pass the static library files to the linker. It is exactly its job to select what's used from the archive. So you would have to pass your stuff, and optionally phobos2 as a static library (but this would also work if linking against phobos2.dll).
Re: mir: How to change iterator?
On Thursday, 16 April 2020 at 19:56:21 UTC, Basile B. wrote: On Tuesday, 14 April 2020 at 20:24:05 UTC, jmh530 wrote: [...] `approxEqual` can't work with ranges. If you look at the signature there is a use of the constructor syntax, e.g. `const T maxRelDiff = T(0x1p-20f)`, so when `T` is not a basic FP type that just does not compile (note the error message if you try with .array on both operands). I'd just use zip(...).each!(...), e.g.
---
assert(zip(y, [2.5, 2.5].sliced(2)).each!(a => assert(approxEqual(a[0], a[1]))));
---
And remove the extra assert() BTW... I don't know why this is accepted.
Re: mir: How to change iterator?
On Tuesday, 14 April 2020 at 20:24:05 UTC, jmh530 wrote: In the code below, I multiply some slice by 5 and then check whether it equals another slice. This fails for mir's approxEqual because the two are not the same types (yes, I know that isClose in std.math works). I was trying to convert the y variable below to have the same double* iterator as the term on the right, but without much success. I tried std.conv.to and the as, slice, and sliced functions in mir. I figure I am missing something basic, but I can't quite figure it out...
---
/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.math.common : approxEqual;
import mir.ndslice.slice : sliced;

void main()
{
    auto x = [0.5, 0.5].sliced(2);
    auto y = x * 5.0;
    assert(approxEqual(y, [2.5, 2.5].sliced(2)));
}
---
`approxEqual` can't work with ranges. If you look at the signature there is a use of the constructor syntax, e.g. `const T maxRelDiff = T(0x1p-20f)`, so when `T` is not a basic FP type that just does not compile (note the error message if you try with .array on both operands). I'd just use zip(...).each!(...), e.g.
---
assert(zip(y, [2.5, 2.5].sliced(2)).each!(a => assert(approxEqual(a[0], a[1]))));
---
But I don't know MIR at all.
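The zip-based element-wise comparison suggested above can be sketched with plain Phobos (no mir); here `all` plus std.math.isClose stands in for the each!(... assert ...) form so the result is a testable bool. `allClose` is a name invented for this sketch, not a library function:

```d
import std.algorithm.iteration : map;
import std.algorithm.searching : all;
import std.math : isClose;
import std.range : zip;

// Element-wise approximate comparison of two ranges of doubles.
bool allClose(R1, R2)(R1 a, R2 b)
{
    return zip(a, b).all!(t => isClose(t[0], t[1]));
}

void main()
{
    auto x = [0.5, 0.5];
    auto y = x.map!(v => v * 5.0);   // lazy range, not a double[]

    // Compare element by element instead of comparing the ranges as a whole.
    assert(allClose(y, [2.5, 2.5]));
}
```

Caveat: zip stops at the shorter range by default, so a length check may be wanted in real code.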
Re: __init unresolved external when using C library structs converted with dstep
On Tuesday, 14 April 2020 at 17:51:58 UTC, Robert M. Münch wrote: I use a C library and created D imports with dstep. It translates the C structs to D structs. When I now use them, everything compiles fine but I get an unresolved external error: WindowsApp1.obj : error LNK2019: unresolved external symbol "myCstruct.__init" (_D7myCStruct6__initZ) referenced in function _Dmain Any idea what this is about and how to fix it? I'm wondering why D tries to find the __init function in the C library and does not compile it from the import. One way to prevent the problem is to use void initialization each time you declare a variable of this type.
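Void initialization, as suggested above, sidesteps the default initialization that references the struct's `__init` symbol. A minimal sketch, with a hypothetical struct standing in for whatever dstep generated:

```d
// Hypothetical struct, as dstep might have translated it from a C header.
struct MyCStruct
{
    int a;
    double b;
}

void main()
{
    MyCStruct s = void;   // void initialization: no default init is emitted
    s.a = 1;              // the fields must be assigned before they are read
    s.b = 2.0;
    assert(s.a == 1);
}
```

The trade-off is that the compiler no longer guarantees the fields start in a defined state.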
Re: How can I fully include "libdruntime-ldc.a" and "libphobos2-ldc.a" in my .so lib ?
On Thursday, 16 April 2020 at 12:45:21 UTC, kinke wrote: On Thursday, 16 April 2020 at 10:04:54 UTC, Basile B. wrote: Just got it to work using "libs" : [ "druntime-ldc", "phobos2-ldc" ] $ ldc2 -help | grep -- -link-defaultlib-shared --link-defaultlib-shared - Link with shared versions of default libraries. Defaults to true when generating a shared library (-shared). Boolean options can take an optional value, e.g., -link-defaultlib-shared=. Thanks. The first solution was actually not working; I got SIGABRT when running.
Re: How can I fully include "libdruntime-ldc.a" and "libphobos2-ldc.a" in my .so lib ?
On Thursday, 16 April 2020 at 09:48:21 UTC, Basile B. wrote: My dub recipe includes this: "dflags" : [ "bin/libdruntime-ldc.a", "bin/libphobos2-ldc.a" ] so that ideally I'd get everything in the library, but this does not work. For example, rt_init and rt_term are not visible in the exports: $ nm -D libdexed-d.so | grep rt_init $ and the project that uses the library does not link anyway, unless I instruct ld to use libdruntime-ldc.so. Just got it to work using "libs" : [ "druntime-ldc", "phobos2-ldc" ] instead of what was in the original question.
How can I fully include "libdruntime-ldc.a" and "libphobos2-ldc.a" in my .so lib ?
My dub recipe includes this: "dflags" : [ "bin/libdruntime-ldc.a", "bin/libphobos2-ldc.a" ] so that ideally I'd get everything in the library, but this does not work. For example, rt_init and rt_term are not visible in the exports: $ nm -D libdexed-d.so | grep rt_init $ and the project that uses the library does not link anyway, unless I instruct ld to use libdruntime-ldc.so.
Re: Can I use Dlang in Qt5 instead C++ for develop Android Apps?
On Tuesday, 14 April 2020 at 09:27:35 UTC, Basile B. wrote: On Tuesday, 14 April 2020 at 01:50:22 UTC, evilrat wrote: On Monday, 13 April 2020 at 21:01:50 UTC, Baby Beaker wrote: I want to develop Android apps using Qt5, but C++ is very hard. I want to use Dlang because Dlang is very easy. In theory nothing stops you from doing that. In practice, however, you have to deal with C++ anyway: how the API matches the ABI, and many more low-level underlying things. You also need to know how your OS works. Don't forget that you have to know the tools as well; dealing with Android means you will have to do cross-compilation. You must know how to use compilers, linkers, and debuggers, and shell scripts as a bonus. Oh, and don't forget that you have to make bindings to interface these two domains, and that requires knowledge of both D and C++. So if you're brave enough, go ahead and try. There were some Qt attempts, such as this one: https://code.dlang.org/packages/qte5 But since the last release was in 2016 it probably no longer compiles, and I have no idea if it supports Android. I agree. I think ABI compliance is an easy step, but one will have to take care with memory management; that's the big thing OMO, IMO ʕ •`ᴥ•´ʔ
Re: Can I use Dlang in Qt5 instead C++ for develop Android Apps?
On Tuesday, 14 April 2020 at 01:50:22 UTC, evilrat wrote: On Monday, 13 April 2020 at 21:01:50 UTC, Baby Beaker wrote: I want to develop Android apps using Qt5, but C++ is very hard. I want to use Dlang because Dlang is very easy. In theory nothing stops you from doing that. In practice, however, you have to deal with C++ anyway: how the API matches the ABI, and many more low-level underlying things. You also need to know how your OS works. Don't forget that you have to know the tools as well; dealing with Android means you will have to do cross-compilation. You must know how to use compilers, linkers, and debuggers, and shell scripts as a bonus. Oh, and don't forget that you have to make bindings to interface these two domains, and that requires knowledge of both D and C++. So if you're brave enough, go ahead and try. There were some Qt attempts, such as this one: https://code.dlang.org/packages/qte5 But since the last release was in 2016 it probably no longer compiles, and I have no idea if it supports Android. I agree. I think ABI compliance is an easy step, but one will have to take care with memory management; that's the big thing OMO, i.e. "this thing is allocated in C++ so it must not be mutated by D" or "this thing is allocated by D so it must not be mutated by C++".
Re: Get symbols (and/or UDAs) of subclass from superclass
On Sunday, 15 March 2020 at 20:18:03 UTC, James Blachly wrote: I would like to programmatically retrieve members of a subclass to create a self-documenting interface. I am afraid that my approach is not possible due to the need for compile-time __traits / std.traits and runtime typeinfo. My proposed approach is as follows:
---
class Base
{
    string whatever;
    string toString()
    {
        // loop over __traits(allMembers, typeof(this)) or getSymbolsByUDA, etc.
    }
}

/// There may be a dozen Derived classes
class Derived1 : Base
{
    @Config("a name", "a description") float probability;
}

class Derived2 : Base
{
    @Config("another", "more description") int replicates;
}
---
... Unfortunately, I am afraid this doesn't look possible because of the need for compile-time UDAs and runtime TypeInfo. Is there a way to do this without re-implementing toString in every single derived class? I expect there to be many derived classes. Other ideas welcomed, as I usually write C-style D and am only recently taking a stab at OO inheritance features. Thanks in advance. James
A few years ago I was faced with a similar problem, and I ended up using an array, defined in the base class, which was filled by "hybrid" runtime/compile-time reflection (explanations about this come later). In each derived class the constructor performed reflection, using __traits(derivedMembers) + UDA filtering, and the array from the base class received the new stuff...
More concretely:
---
class Base
{
    Stuff[] stuff;
}

/// There may be a dozen Derived classes
class Derived1 : Base
{
    @Config("a name", "a description") float probability;
    this()
    {
        foreach (dm; __traits(derivedMembers, typeof(this)))
        {
            /* UDA filtering, then add to stuff */
        }
    }
}

class Derived2 : Base
{
    @Config("another", "more description") int replicates;
    this()
    {
        foreach (dm; __traits(derivedMembers, typeof(this)))
        {
            /* UDA filtering, then add to stuff */
        }
    }
}
---
This worked and would work for you, but you'll face several issues. The most obvious is what the type of Stuff is... Another, less immediate, is that because of overridden methods some already existing entries in the array of Stuff may have to be replaced. Otherwise, the thing to get here is that although you use compile-time reflection, the reflection code is only executed at runtime, to fill the array of Stuff.
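A compilable sketch of the pattern described above. The `Config` UDA and the choice of plain strings for "Stuff" are assumptions made for this example; the post deliberately leaves the type of Stuff open:

```d
import std.traits : getUDAs, hasUDA;

struct Config
{
    string name;
    string description;
}

class Base
{
    // "Stuff" here is simply one description string per annotated member.
    string[] stuff;

    override string toString()
    {
        import std.array : join;
        return stuff.join("\n");
    }
}

class Derived1 : Base
{
    @Config("a name", "a description") float probability;

    this()
    {
        // Compile-time reflection, but executed at runtime to fill `stuff`.
        static foreach (dm; __traits(derivedMembers, typeof(this)))
        {{
            static if (hasUDA!(__traits(getMember, typeof(this), dm), Config))
            {
                enum c = getUDAs!(__traits(getMember, typeof(this), dm), Config)[0];
                stuff ~= c.name ~ ": " ~ c.description;
            }
        }}
    }
}

void main()
{
    Base b = new Derived1;
    assert(b.toString() == "a name: a description");
}
```

Each additional derived class repeats the constructor body, which is exactly the duplication the post's caveats are about.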
Re: Cool name for Dub packages?
On Saturday, 7 March 2020 at 10:49:24 UTC, Paolo Invernizzi wrote: On Saturday, 7 March 2020 at 09:31:27 UTC, JN wrote: Do we have any cool name for Dub packages? tapes. Rust has 'crates', Crystal has 'shards', Python has 'wheels', Ruby has 'gems'. Frankly, I simply hate all that shuffle around names... it's so difficult to understand people when they're referring to them... we already have to remember a gazillion of things, including horrible Ubuntu names instead of simple numbers! :-) That's something else; it's more related to the internal codename. Very common for operating systems. I've seen usage for other software. Often they are forgotten quickly. Packages are... well, packages! :-P /P
Re: DMD: Is it possible change compile time errors to runtime errors in Dlang?
On Friday, 6 March 2020 at 04:56:28 UTC, Marcone wrote: Is it possible to change compile-time errors to runtime errors in Dlang? No. If yes, how can I make it? If you deactivate all the errors emitted during the semantic phase, then there is a very good chance that the compiler crashes while generating code.
Re: Idiomatic way to express errors without resorting to exceptions
On Saturday, 29 February 2020 at 12:50:59 UTC, Adnan wrote: I have a struct that has two arrays. Each of those must have the same size. So while constructing the struct, if you pass two arrays of different sizes the constructor must return nothing. In Rust I could easily use Option. D has no answer to Optional types as far as I can tell. Is throwing exceptions the only idiomatic way?
What I already considered:
* Using Nullable!T: okay, but Nullable!T has all the overloads for regular T, which makes the API even more unpredictable. In Rust you don't just add a Some(44) and 34; no overloads between Some and i32 are allowed (purposefully).
* Option!T from the optional package: has an even worse problem IMO. Not only does it allow None + int, but it also returns a `[]`. This API is not to my liking. You could say, well, Haskell has fmap for Optional etc., and I am aware of that, and so does Rust with map etc. But I am talking about basic things: like `+`.
* Value-based error handling like Go and C: well, that works, but the error checking is opt-in. There's no such thing as [[nodiscard]] in D either, so the user of the API might as well forget to check for the error value.
* if specialization: clever workaround, but sometimes the struct may fail for complex reasons, not only for a single condition.
There's no idiomatic way, since D is based on exceptions... However, I'd use one of these systems:
1. return an error, write the result in a ref parameter:
---
alias CheckedValueProto(RT, P...) = bool function(ref RT, P params);
---
2. the same using a special struct and no more ref param. So more like Nullable/Optional, but with a dedicated generic type that contains a single operator overload used to indicate whether there's been an error or not:
---
struct CheckedValue(T)
{
    bool noError;
    T t;

    B opCast(B : bool)() inout pure nothrow @safe
    {
        return noError;
    }
}
---
and you make your functions return CheckedValues...
---
CheckedValue!int strToInt(string input);

if (const CheckedValue!int r = strToInt("a")) {} else {}
---
Although:
- both still require self-discipline or a specialized linter to detect unchecked calls;
- the whole standard library is incompatible.
I have a personal preference for 2., even if it causes problems when T is the same size as a pointer. Now the question is also: which is more costly, try/catch or this non-atomic return?
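A self-contained sketch of option 2 above. The body of `strToInt` is invented for illustration (the post only declares its signature); the CheckedValue struct itself is as proposed:

```d
import std.conv : to, ConvException;

struct CheckedValue(T)
{
    bool noError;
    T t;

    // The single operator overload: `if (checkedValue)` tests for success.
    B opCast(B : bool)() inout pure nothrow @safe
    {
        return noError;
    }
}

// Hypothetical implementation, just to exercise the type.
CheckedValue!int strToInt(string input)
{
    try
        return CheckedValue!int(true, input.to!int);
    catch (ConvException)
        return CheckedValue!int(false, 0);
}

void main()
{
    if (const r = strToInt("a")) assert(false);  // conversion failed
    if (const r = strToInt("42")) assert(r.t == 42);
    else assert(false);
}
```

Nothing forces the caller to test the result before reading `t`, which is the self-discipline problem the post points out.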
Re: Should getSymbolsByUDA work with member variables?
On Friday, 28 February 2020 at 18:34:08 UTC, cc wrote: This compiles:
---
class Foo
{
    int x;
    @(1) void y() {}
    this()
    {
        static foreach (idx, field; getSymbolsByUDA!(Foo, 1)) {}
    }
}
---
This does not:
---
class Foo
{
    @(1) int x;
    void y() {}
    this()
    {
        static foreach (idx, field; getSymbolsByUDA!(Foo, 1)) {}
    }
}
---
Error: value of `this` is not known at compile time
Is there an equivalent of getSymbolsByUDA for member variables, or is this a bug? I don't see a bug; the error message is correct. I'd say that you have two options:
1. drop the `static` before `foreach`, for example to use the symbols at run time;
2. for metaprogramming, adopt another style of loop, e.g.
---
class Foo
{
    @(1) int x;
    void y() {}
    this()
    {
        alias AtOne = getSymbolsByUDA!(Foo, 1);
        static foreach (i; 0 .. AtOne.length)
        {
            pragma(msg, __traits(identifier, AtOne[i]));
        }
    }
}
---
Re: Strange counter-performance in an alternative `decimalLength9` function
On Thursday, 27 February 2020 at 21:46:08 UTC, Bruce Carneal wrote: On Thursday, 27 February 2020 at 19:46:23 UTC, Basile B. wrote: [...] The code below is the test jig that I'm using currently. It is adopted from yours but has added the -d=distribution command line option. [...] Yes, I can finally see the branchless version beating the original with -dedig. Thanks for your time.
Re: Strange counter-performance in an alternative `decimalLength9` function
On Thursday, 27 February 2020 at 17:17:32 UTC, Bruce Carneal wrote: On Thursday, 27 February 2020 at 17:11:48 UTC, Basile B. wrote: On Thursday, 27 February 2020 at 15:29:02 UTC, Bruce Carneal wrote: On Thursday, 27 February 2020 at 08:52:09 UTC, Basile B. wrote: I will post my code if there is any meaningful difference in your subsequent results. Give me something I can compile and verify. I'm not here to steal; if you found something you can still propose it to the repos that would take advantage of the optimization. I'm not at all concerned about theft of trivial code. I am concerned that a simple error in my code will live on in a copy/paste environment. Regardless, I'll post the code once I get home. It may be the only way to communicate the central problem as I see it: imprecision in the test specification (the input specification). Yes please, post the benchmark method. You see, the benchmarks I run with your version are always the slowest. I'm aware that rndGen (and generally any uniform random function) is subject to a bias, but I don't think this bias matters much in the case we're talking about.
Re: Strange counter-performance in an alternative `decimalLength9` function
On Thursday, 27 February 2020 at 15:29:02 UTC, Bruce Carneal wrote: On Thursday, 27 February 2020 at 08:52:09 UTC, Basile B. wrote: I will post my code if there is any meaningful difference in your subsequent results. Give me something I can compile and verify. I'm not here to steal; if you found something you can still propose it to the repos that would take advantage of the optimization.
Re: Strange counter-performance in an alternative `decimalLength9` function
On Thursday, 27 February 2020 at 14:12:35 UTC, Basile B. wrote: On Wednesday, 26 February 2020 at 22:07:30 UTC, Johan wrote: On Wednesday, 26 February 2020 at 00:50:35 UTC, Basile B. wrote: [...] Hi Basile, I recently saw this presentation: https://www.youtube.com/watch?v=Czr5dBfs72U Andrei made a talk about this too a few years ago. It has some ideas that may help you make sure your measurements are good and may give you ideas to find the performance bottleneck or where to optimize. llvm-mca is featured on godbolt.org: https://mca.godbolt.org/z/YWp3yv cheers, Johan https://www.youtube.com/watch?v=Qq_WaiwzOtI
Re: Strange counter-performance in an alternative `decimalLength9` function
On Wednesday, 26 February 2020 at 22:07:30 UTC, Johan wrote: On Wednesday, 26 February 2020 at 00:50:35 UTC, Basile B. wrote: [...] Hi Basile, I recently saw this presentation: https://www.youtube.com/watch?v=Czr5dBfs72U Andrei made a talk about this too a few years ago. It has some ideas that may help you make sure your measurements are good and may give you ideas to find the performance bottleneck or where to optimize. llvm-mca is featured on godbolt.org: https://mca.godbolt.org/z/YWp3yv cheers, Johan
Re: Strange counter-performance in an alternative `decimalLength9` function
On Thursday, 27 February 2020 at 09:41:20 UTC, Basile B. wrote: On Thursday, 27 February 2020 at 09:33:28 UTC, Dennis Cote wrote: [...] Sorry but no. I think that you have missed how this has changed since the first message. 1. The way it was tested initially was wrong, because LLVM was optimizing some stuff in some tests and not others, due to literal constants. 2. Apparently there is a branchless version that's fast when testing with unbiased input (to be verified). This version is:
---
ubyte decimalLength9_4(const uint v) pure nothrow
{
    return 1
        + (v >= 10)
        + (v >= 100)
        + (v >= 1000)
        + (v >= 10000)
        + (v >= 100000)
        + (v >= 1000000)
        + (v >= 10000000)
        + (v >= 100000000);
}
---
but I cannot see the improvement when using `time` on the test program, fed with random numbers. See https://forum.dlang.org/post/ctidwrnxvwwkouprj...@forum.dlang.org for the latest evolution of the discussion. Maybe just add your version to the test program and run
---
time ./declen -c1 -f0 -s137 // original
time ./declen -c1 -f4 -s137 // the 100% branchless
time ./declen -c1 -f5 -s137 // the LUT + branchless for the bit numbers that need attention
time ./declen -c1 -f6 -s137 // assumed to be your version
---
to see if it beats the original. Thing is that I cannot do it right now, but otherwise I will try tomorrow.
Re: Strange counter-performance in an alternative `decimalLength9` function
On Thursday, 27 February 2020 at 09:33:28 UTC, Dennis Cote wrote: On Wednesday, 26 February 2020 at 00:50:35 UTC, Basile B. wrote: So after reading the translation of RYU I was interested to see if the decimalLength() function can be written to be faster, as it cascades up to 8 CMPs. Perhaps you could try something like this:
---
int decimalDigitLength(ulong n)
{
    if (n < 10000)
        if (n < 100)
            return n < 10 ? 1 : 2;
        else
            return n < 1000 ? 3 : 4;
    else if (n < 100000000)
        if (n < 1000000)
            return n < 100000 ? 5 : 6;
        else
            return n < 10000000 ? 7 : 8;
    else if (n < 1000000000000)
        if (n < 10000000000)
            return n < 1000000000 ? 9 : 10;
        else
            return n < 100000000000 ? 11 : 12;
    else if (n < 10000000000000000)
        if (n < 100000000000000)
            return n < 10000000000000 ? 13 : 14;
        else
            return n < 1000000000000000 ? 15 : 16;
    else if (n < 1000000000000000000)
        return n < 100000000000000000 ? 17 : 18;
    else
        return n < 10000000000000000000UL ? 19 : 20;
}
---
This uses at most 6 compares for any 64-bit number, and only 3 for the most common small numbers, those less than 10000. I was glad to see that with ldc at run.dlang.io, using the -O3 optimization, I could change the function signature to match yours and the compiler eliminated all the unreachable dead code for larger values. The compiler produced the following assembler:
---
	.section	.text.ubyte onlineapp.decimalLength9(uint),"axG",@progbits,ubyte onlineapp.decimalLength9(uint),comdat
	.globl	ubyte onlineapp.decimalLength9(uint)
	.p2align	4, 0x90
	.type	ubyte onlineapp.decimalLength9(uint),@function
ubyte onlineapp.decimalLength9(uint):
	.cfi_startproc
	cmpl	$9999, %edi
	ja	.LBB1_5
	cmpl	$99, %edi
	ja	.LBB1_4
	cmpl	$10, %edi
	movb	$2, %al
	sbbb	$0, %al
	retq
.LBB1_5:
	cmpl	$99999999, %edi
	ja	.LBB1_9
	cmpl	$999999, %edi
	ja	.LBB1_8
	cmpl	$100000, %edi
	movb	$6, %al
	sbbb	$0, %al
	retq
.LBB1_4:
	cmpl	$1000, %edi
	movb	$4, %al
	sbbb	$0, %al
	retq
.LBB1_9:
	cmpl	$1000000000, %edi
	movb	$10, %al
	sbbb	$0, %al
	retq
.LBB1_8:
	cmpl	$10000000, %edi
	movb	$8, %al
	sbbb	$0, %al
	retq
.Lfunc_end1:
	.size	ubyte onlineapp.decimalLength9(uint), .Lfunc_end1-ubyte onlineapp.decimalLength9(uint)
	.cfi_endproc
---
for the same body with the signature ubyte decimalLength9(uint n).
This may be faster than your sequential comparison function depending upon the distribution of numbers. In real applications, small numbers are far more common, so the reduced number of compares for those values should be beneficial in most cases. Sorry but no. I think that you have missed how this has changed since the first message. 1. The way it was tested initially was wrong, because LLVM was optimizing some stuff in some tests and not others, due to literal constants. 2. Apparently there is a branchless version that's fast when testing with unbiased input (to be verified). This version is:
---
ubyte decimalLength9_4(const uint v) pure nothrow
{
    return 1
        + (v >= 10)
        + (v >= 100)
        + (v >= 1000)
        + (v >= 10000)
        + (v >= 100000)
        + (v >= 1000000)
        + (v >= 10000000)
        + (v >= 100000000);
}
---
but I cannot see the improvement when using `time` on the test program, fed with random numbers. See https://forum.dlang.org/post/ctidwrnxvwwkouprj...@forum.dlang.org for the latest evolution of the discussion.
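The branchless digit-count trick discussed above is easy to sanity-check against the number's string representation; this standalone sketch reuses the thresholds (powers of ten up to 10^8) from the version quoted in the thread:

```d
import std.conv : to;

// Branchless: each comparison contributes 0 or 1 to the digit count.
// Like the thread's versions, this assumes v < 10^9 (at most 9 digits).
ubyte decimalLength9(const uint v) pure
{
    return 1
        + (v >= 10)
        + (v >= 100)
        + (v >= 1000)
        + (v >= 10000)
        + (v >= 100000)
        + (v >= 1000000)
        + (v >= 10000000)
        + (v >= 100000000);
}

void main()
{
    // Check the boundaries against the decimal string length.
    foreach (uint v; [0u, 1, 9, 10, 99, 100, 9_999, 10_000,
                      99_999_999, 100_000_000, 999_999_999])
        assert(decimalLength9(v) == v.to!string.length);
}
```

The sum of eight bools plus one fits in a ubyte, so value range propagation lets the implicit conversion through without a cast.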
Re: Strange counter-performance in an alternative `decimalLength9` function
On Thursday, 27 February 2020 at 04:44:56 UTC, Basile B. wrote: On Thursday, 27 February 2020 at 03:58:15 UTC, Bruce Carneal wrote: Maybe you talked about another implementation of decimalLength9 ? Yes. It's one I wrote after I saw your post. Pseudo-code here: auto d9_branchless(uint v) { return 1 + (v >= 10) + (v >= 100) ... } Using ldc to target an x86 with the above yields a series of cmpl, seta instruction pairs in the function body followed by a summing and a return. No branching. Let me know if the above is unclear or insufficient. No thanks, it's crystal clear now, although I don't see the performance gain. For me a hybrid version based on a LUT and on the branchless one is now the fastest (decimalLength9_5), although still slower than the original. Updated program, including your branchless version (decimalLength9_4):
---
#!ldmd -boundscheck=off -release -inline -O -mcpu=native -mattr=+sse2,+sse3,+sse4.1,+sse4.2,+fast-lzcnt,+avx,+avx2,+cmov,+bmi,+bmi2
import core.memory;
import core.bitop;
import std.stdio;
import std.range;
import std.algorithm;
import std.getopt;
import std.random;

ubyte decimalLength9_0(const uint v)
{
    if (v >= 1_0000_0000) { return 9; }
    if (v >= 1000_0000) { return 8; }
    if (v >= 100_0000) { return 7; }
    if (v >= 10_0000) { return 6; }
    if (v >= 1_0000) { return 5; }
    if (v >= 1000) { return 4; }
    if (v >= 100) { return 3; }
    if (v >= 10) { return 2; }
    return 1;
}

ubyte decimalLength9_1(const uint v) pure nothrow
{
    if (v == 0) // BSR and LZCNT UB when input is 0
        return 1;
    const ubyte lzc = cast(ubyte) bsr(v);
    ubyte result;
    switch (lzc)
    {
        case 0 : case 1 : case 2 : result = 1; break;
        case 3 : result = v >= 10 ? 2 : 1; break;
        case 4 : case 5 : result = 2; break;
        case 6 : result = v >= 100 ? 3 : 2; break;
        case 7 : case 8 : result = 3; break;
        case 9 : result = v >= 1000 ? 4 : 3; break;
        case 10: case 11: case 12: result = 4; break;
        case 13: result = v >= 1_0000 ? 5 : 4; break;
        case 14: case 15: result = 5; break;
        case 16: result = v >= 10_0000 ? 6 : 5; break;
        case 17: case 18: result = 6; break;
        case 19: result = v >= 100_0000 ? 7 : 6; break;
        case 20: case 21: case 22: result = 7; break;
        case 23: result = v >= 1000_0000 ? 8 : 7; break;
        case 24: case 25: result = 8; break;
        case 26: result = v >= 1_0000_0000 ? 9 : 8; break;
        case 27: case 28: case 29: case 30: case 31: result = 9; break;
        default: assert(false);
    }
    return result;
}

private ubyte decimalLength9_2(const uint v) pure nothrow
{
    if (v == 0) // BSR and LZCNT UB when input is 0
        return 1;
    const ubyte lzc = cast(ubyte) bsr(v);
    static immutable pure nothrow ubyte function(uint)[32] tbl =
    [
        0 : (uint a) => ubyte(1),
        1 : (uint a) => ubyte(1),
        2 : (uint a) => ubyte(1),
        3 : (uint a) => a >= 10 ? ubyte(2) : ubyte(1),
        4 : (uint a) => ubyte(2),
        5 : (uint a) => ubyte(2),
        6 : (uint a) => a >= 100 ? ubyte(3) : ubyte(2),
        7 : (uint a) => ubyte(3),
        8 : (uint a) => ubyte(3),
        9 : (uint a) => a >= 1000 ? ubyte(4) : ubyte(3),
        10: (uint a) => ubyte(4),
        11: (uint a) => ubyte(4),
        12: (uint a) => ubyte(4),
        13: (uint a) => a >= 1_0000 ? ubyte(5) : ubyte(4),
        14: (uint a) => ubyte(5),
        15: (uint a) => ubyte(5),
        16: (uint a) => a >= 10_0000 ? ubyte(6) : ubyte(5),
        17: (uint a) => ubyte(6),
        18: (uint a) => ubyte(6),
        19: (uint a) => a >= 100_0000 ? ubyte(7) : ubyte(6),
        20: (uint a) => ubyte(7),
        21: (uint a) => ubyte(7),
        22: (uint a) => ubyte(7),
        23: (uint a) => a >= 1000_0000 ? ubyte(8) : ubyte(7),
        24: (uint a) => ubyte(8),
        25: (uint a) => ubyte(8),
        26: (uint a) => a >= 1_0000_0000 ? ubyte(9) : ubyte(8),
        27: (uint a) => ubyte(9),
        28: (uint a) => ubyte(9),
        29: (uint a) => ubyte(9),
        30: (uint a) => ubyte(9),
        31: (uint a) => ubyte(9),
    ];
    return tbl[lzc](v);
}

ubyte decimalLength9_3(const uint v) pure nothrow
{
    if (v == 0) // BSR and LZCNT UB when input is 0
        return 1;
    ubyte result;
    enum ubyte doSwitch = ubyte(0);
    const ubyte lzc = cast(ubyte) bsr(v);
    const ubyte[32] decimalLength9LUT =
    [
        0 : ubyte(1),  1 : ubyte(1),  2 : ubyte(1),  3 : doSwitch,
        4 : ubyte(2),  5 : ubyte(2),  6 : doSwitch,  7 : ubyte(3),
        8 : ubyte(3),  9 : doSwitch,  10: ubyte(4),  11: ubyte(4),
        12: ubyte(4),  13: doSwitch,  14: ubyte(5),  15: ubyte(5),
        16: doSwitch,  17: ubyte(6),  18: ubyte(6),  19: doSwitch,
        20: ubyte(7),  21: ubyte(7),  22: ubyte(7),  23: doSwitch,
        24: ubyte(8),  25: ubyte(8),  26: doSwitch,  27: ubyte(9),
        28: ubyte(9),  29:
Re: Strange counter-performance in an alternative `decimalLength9` function
On Thursday, 27 February 2020 at 03:58:15 UTC, Bruce Carneal wrote: Maybe you talked about another implementation of decimalLength9 ? Yes. It's one I wrote after I saw your post. Pseudo-code here: auto d9_branchless(uint v) { return 1 + (v >= 10) + (v >= 100) ... } Using ldc to target an x86 with the above yields a series of cmpl, seta instruction pairs in the function body followed by a summing and a return. No branching. Let me know if the above is unclear or insufficient. No thanks, it's crystal clear now.
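The pseudo-code above can be completed into a runnable D sketch; the `_` digit grouping and the self-test in `main` are my additions, not part of the original post:

```d
// Branchless digit count: each comparison yields a bool (0 or 1)
// that is summed, so the codegen is a series of cmp/seta pairs.
ubyte d9_branchless(const uint v) pure nothrow
{
    return cast(ubyte)
        (1 + (v >= 10) + (v >= 100) + (v >= 1000)
           + (v >= 1_0000) + (v >= 10_0000) + (v >= 100_0000)
           + (v >= 1000_0000) + (v >= 1_0000_0000));
}

void main()
{
    assert(d9_branchless(0) == 1);
    assert(d9_branchless(9) == 1);
    assert(d9_branchless(10) == 2);
    assert(d9_branchless(1_0000_0000) == 9);
    assert(d9_branchless(uint.max) == 9); // caps at 9, like decimalLength9
}
```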
Re: Strange counter-performance in an alternative `decimalLength9` function
On Wednesday, 26 February 2020 at 20:44:31 UTC, Bruce Carneal wrote: The winning function implementation lines up with that distribution. It would not fare as well with higher entropy input. Using sorted equi-probable inputs (N 1 digit numbers, N 2 digit numbers, ...) decimalLength9_0 beats a simple branchless implementation by about 10%. After shuffling the input, branchless wins by 2.4X (240%). I've replaced the input by the front of a rndGen (that pops `count` times, starting with a custom seed) and I never see decimalLength9_3 (which seems to be the closest to the original in terms of performance) doing better. Maybe you talked about another implementation of decimalLength9 ?
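Bruce's input methodology (N numbers per digit count, then shuffled) can be sketched as follows; this is my own reconstruction, not his actual harness, and the `N` constant and use of `std.random.uniform` are assumptions:

```d
// Build an equi-probable input set: N numbers per decimal length,
// then shuffle it to defeat the branch predictor.
import std.random : uniform, randomShuffle;
import std.stdio;

void main()
{
    enum N = 1000;
    uint[] inputs;
    uint lo = 1;
    foreach (digits; 1 .. 10)
    {
        // numbers with `digits` digits live in [lo, lo*10);
        // the 9-digit bucket absorbs everything up to uint.max
        const uint hi = digits == 9 ? uint.max : lo * 10;
        foreach (i; 0 .. N)
            inputs ~= uniform(lo, hi);
        lo *= 10;
    }
    randomShuffle(inputs);
    writeln(inputs.length); // 9 buckets * N inputs
}
```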
Re: Strange counter-performance in an alternative `decimalLength9` function
On Wednesday, 26 February 2020 at 22:07:30 UTC, Johan wrote: On Wednesday, 26 February 2020 at 00:50:35 UTC, Basile B. wrote: [...] Hi Basile, I recently saw this presentation: https://www.youtube.com/watch?v=Czr5dBfs72U It has some ideas that may help you make sure your measurements are good and may give you ideas to find the performance bottleneck or where to optimize. llvm-mca is featured on godbolt.org: https://mca.godbolt.org/z/YWp3yv cheers, Johan Yes, llvm-mca looks excellent, although I don't know if it's worth continuing... You see, this function is certainly not a bottleneck; it's just that I wanted to try to do better than the naive implementation. Fundamentally the problem is that 1. the original is smaller, so faster to decode 2. the alternatives (esp. the 3rd) are conceptually better, but the cost of the jump table + lzcnt cancels that out.
Re: Strange counter-performance in an alternative `decimalLength9` function
On Wednesday, 26 February 2020 at 00:50:35 UTC, Basile B. wrote: How is that possible ? It turns out that there's a problem with the benchmarking method. With command line arguments, the different LLVM optimization passes don't mess with the literal constants. It appears that none of the alternatives based on the "most significant bit trick" are faster (at least when covering a decent range of numbers):
---
#!ldmd -boundscheck=off -release -inline -O -mcpu=native -mattr=+sse2,+sse3,+sse4.1,+sse4.2,+fast-lzcnt,+avx,+avx2,+cmov,+bmi,+bmi2
import core.memory;
import core.bitop;
import std.stdio;
import std.range;
import std.algorithm;
import std.getopt;

ubyte decimalLength9_0(const uint v)
{
    if (v >= 1_0000_0000) { return 9; }
    if (v >= 1000_0000) { return 8; }
    if (v >= 100_0000) { return 7; }
    if (v >= 10_0000) { return 6; }
    if (v >= 1_0000) { return 5; }
    if (v >= 1000) { return 4; }
    if (v >= 100) { return 3; }
    if (v >= 10) { return 2; }
    return 1;
}

ubyte decimalLength9_1(const uint v) pure nothrow
{
    if (v == 0) // BSR and LZCNT UB when input is 0
        return 1;
    const ubyte lzc = cast(ubyte) bsr(v);
    ubyte result;
    switch (lzc)
    {
        case 0 : case 1 : case 2 : result = 1; break;
        case 3 : result = v >= 10 ? 2 : 1; break;
        case 4 : case 5 : result = 2; break;
        case 6 : result = v >= 100 ? 3 : 2; break;
        case 7 : case 8 : result = 3; break;
        case 9 : result = v >= 1000 ? 4 : 3; break;
        case 10: case 11: case 12: result = 4; break;
        case 13: result = v >= 1_0000 ? 5 : 4; break;
        case 14: case 15: result = 5; break;
        case 16: result = v >= 10_0000 ? 6 : 5; break;
        case 17: case 18: result = 6; break;
        case 19: result = v >= 100_0000 ? 7 : 6; break;
        case 20: case 21: case 22: result = 7; break;
        case 23: result = v >= 1000_0000 ? 8 : 7; break;
        case 24: case 25: result = 8; break;
        case 26: result = v >= 1_0000_0000 ? 9 : 8; break;
        case 27: case 28: case 29: case 30: case 31: result = 9; break;
        default: assert(false);
    }
    return result;
}

private ubyte decimalLength9_2(const uint v) pure nothrow
{
    if (v == 0) // BSR and LZCNT UB when input is 0
        return 1;
    const ubyte lzc = cast(ubyte) bsr(v);
    static immutable pure nothrow ubyte function(uint)[32] tbl =
    [
        0 : (uint a) => ubyte(1),
        1 : (uint a) => ubyte(1),
        2 : (uint a) => ubyte(1),
        3 : (uint a) => a >= 10 ? ubyte(2) : ubyte(1),
        4 : (uint a) => ubyte(2),
        5 : (uint a) => ubyte(2),
        6 : (uint a) => a >= 100 ? ubyte(3) : ubyte(2),
        7 : (uint a) => ubyte(3),
        8 : (uint a) => ubyte(3),
        9 : (uint a) => a >= 1000 ? ubyte(4) : ubyte(3),
        10: (uint a) => ubyte(4),
        11: (uint a) => ubyte(4),
        12: (uint a) => ubyte(4),
        13: (uint a) => a >= 1_0000 ? ubyte(5) : ubyte(4),
        14: (uint a) => ubyte(5),
        15: (uint a) => ubyte(5),
        16: (uint a) => a >= 10_0000 ? ubyte(6) : ubyte(5),
        17: (uint a) => ubyte(6),
        18: (uint a) => ubyte(6),
        19: (uint a) => a >= 100_0000 ? ubyte(7) : ubyte(6),
        20: (uint a) => ubyte(7),
        21: (uint a) => ubyte(7),
        22: (uint a) => ubyte(7),
        23: (uint a) => a >= 1000_0000 ? ubyte(8) : ubyte(7),
        24: (uint a) => ubyte(8),
        25: (uint a) => ubyte(8),
        26: (uint a) => a >= 1_0000_0000 ? ubyte(9) : ubyte(8),
        27: (uint a) => ubyte(9),
        28: (uint a) => ubyte(9),
        29: (uint a) => ubyte(9),
        30: (uint a) => ubyte(9),
        31: (uint a) => ubyte(9),
    ];
    return tbl[lzc](v);
}

ubyte decimalLength9_3(const uint v) pure nothrow
{
    if (v == 0) // BSR and LZCNT UB when input is 0
        return 1;
    ubyte result;
    const ubyte lzc = cast(ubyte) bsr(v);
    const ubyte[32] decimalLength9LUT =
    [
        0 : ubyte(1),  1 : ubyte(1),  2 : ubyte(1),  3 : ubyte(10),
        4 : ubyte(2),  5 : ubyte(2),  6 : ubyte(11), 7 : ubyte(3),
        8 : ubyte(3),  9 : ubyte(12), 10: ubyte(4),  11: ubyte(4),
        12: ubyte(4),  13: ubyte(12), 14: ubyte(5),  15: ubyte(5),
        16: ubyte(13), 17: ubyte(6),  18: ubyte(6),  19: ubyte(14),
        20: ubyte(7),  21: ubyte(7),  22: ubyte(7),  23: ubyte(15),
        24: ubyte(8),  25: ubyte(8),  26: ubyte(16), 27: ubyte(9),
        28: ubyte(9),  29: ubyte(9),  30: ubyte(9),  31: ubyte(9),
    ];
    ubyte resultOrSelector = decimalLength9LUT[lzc];
    if (resultOrSelector < 10)
        result = resultOrSelector;
    else switch (lzc)
    {
        case 3 : result = v >= 10 ? 2 : 1; break;
        case 6 : result = v >= 100 ? 3 : 2; break;
        case 9 : result = v >= 1000 ? 4 : 3; break;
        case 13: result = v >= 1_0000 ? 5 : 4; break;
        case 16: result = v >= 10_0000 ? 6 : 5; break;
        case 19: result =
Re: Strange counter-performance in an alternative `decimalLength9` function
On Wednesday, 26 February 2020 at 01:10:07 UTC, H. S. Teoh wrote: On Wed, Feb 26, 2020 at 12:50:35AM +, Basile B. via Digitalmars-d-learn wrote: [...] #!dmd -boundscheck=off -O -release -inline [...] TBH, I'm skeptical of any performance results using dmd. I wouldn't pay attention to performance numbers obtained this way, and rather look at the ldmd/ldc2 numbers. I didn't use DMD. The script line is actually interpreted by the IDE: it drops the compiler name, just parses the args and passes them to a compiler that's defined by non-overridable options. In my case I used LDMD (this is LDC, you see, but with a DMD-like option syntax).
Strange counter-performance in an alternative `decimalLength9` function
So after reading the translation of RYU I was interested to see if the decimalLength() function can be written to be faster, as it cascades up to 8 CMP. After struggling with bad ideas I finally found something that looks nice:
- count the leading zeros of the input
- switch() on this count so that in the worst case only 1 CMP remains (needed when the bit count possibly changes the digit count, e.g 9 -> 10, i.e when bsr(input) == 4)
After writing down all the values allowing to know the cases where a comparison is necessary...
min2 = 0b10
max2 = 0b11
min3 = 0b100
max3 = 0b111
...
...
min32 = 0b100...0
max32 = 0b111...1
...I finally wrote the "nice" thing.
---
#!dmd -boundscheck=off -O -release -inline
import std.stdio;

ubyte decimalLength9(const uint v)
{
    if (v >= 1_0000_0000) { return 9; }
    if (v >= 1000_0000) { return 8; }
    if (v >= 100_0000) { return 7; }
    if (v >= 10_0000) { return 6; }
    if (v >= 1_0000) { return 5; }
    if (v >= 1000) { return 4; }
    if (v >= 100) { return 3; }
    if (v >= 10) { return 2; }
    return 1;
}

ubyte fdecimalLength9(const uint v) pure nothrow
{
    import core.bitop;
    const ubyte lzc = cast(ubyte) (bsr(v) + 1);
    ubyte result;
    switch (lzc)
    {
        case 0 : case 1 : case 2 : case 3 : result = 1; break;
        case 4 : result = v >= 10 ? 2 : 1; break;
        case 5 : case 6 : result = 2; break;
        case 7 : result = v >= 100 ? 3 : 2; break;
        case 8 : case 9 : result = 3; break;
        case 10: result = v >= 1000 ? 4 : 3; break;
        case 11: case 12: case 13: result = 4; break;
        case 14: result = v >= 1_0000 ? 5 : 4; break;
        case 15: case 16: result = 5; break;
        case 17: result = v >= 10_0000 ? 6 : 5; break;
        case 18: case 19: result = 6; break;
        case 20: result = v >= 100_0000 ? 7 : 6; break;
        case 21: case 22: case 23: result = 7; break;
        case 24: result = v >= 1000_0000 ? 8 : 7; break;
        case 25: case 26: result = 8; break;
        case 27: result = v >= 1_0000_0000 ? 9 : 8; break;
        case 28: case 29: case 30: case 31: case 32: result = 9; break;
        default: assert(false);
    }
    return result;
}

private ubyte ffdecimalLength9(const uint v) pure nothrow
{
    import core.bitop;
    const ubyte lzc = cast(ubyte) (bsr(v) + 1);
    static immutable pure nothrow ubyte function(uint)[33] tbl =
    [
        0 : (uint a) => ubyte(1),
        1 : (uint a) => ubyte(1),
        2 : (uint a) => ubyte(1),
        3 : (uint a) => ubyte(1),
        4 : (uint a) => a >= 10 ? ubyte(2) : ubyte(1),
        5 : (uint a) => ubyte(2),
        6 : (uint a) => ubyte(2),
        7 : (uint a) => a >= 100 ? ubyte(3) : ubyte(2),
        8 : (uint a) => ubyte(3),
        9 : (uint a) => ubyte(3),
        10: (uint a) => a >= 1000 ? ubyte(4) : ubyte(3),
        11: (uint a) => ubyte(4),
        12: (uint a) => ubyte(4),
        13: (uint a) => ubyte(4),
        14: (uint a) => a >= 1_0000 ? ubyte(5) : ubyte(4),
        15: (uint a) => ubyte(5),
        16: (uint a) => ubyte(5),
        17: (uint a) => a >= 10_0000 ? ubyte(6) : ubyte(5),
        18: (uint a) => ubyte(6),
        19: (uint a) => ubyte(6),
        20: (uint a) => a >= 100_0000 ? ubyte(7) : ubyte(6),
        21: (uint a) => ubyte(7),
        22: (uint a) => ubyte(7),
        23: (uint a) => ubyte(7),
        24: (uint a) => a >= 1000_0000 ? ubyte(8) : ubyte(7),
        25: (uint a) => ubyte(8),
        26: (uint a) => ubyte(8),
        27: (uint a) => a >= 1_0000_0000 ? ubyte(9) : ubyte(8),
        28: (uint a) => ubyte(9),
        29: (uint a) => ubyte(9),
        30: (uint a) => ubyte(9),
        31: (uint a) => ubyte(9),
        32: (uint a) => ubyte(9),
    ];
    return tbl[lzc](v);
}

void main()
{
    import std.datetime.stopwatch, std.range, std.algorithm;
    int s1, s2, s3;
    benchmark!({ iota(1_0000_0000u).each!(a => s1 += decimalLength9(a+1)); })(10).writeln;
    benchmark!({ iota(1_0000_0000u).each!(a => s2 += fdecimalLength9(a+1));})(10).writeln;
    benchmark!({ iota(1_0000_0000u).each!(a => s3 += ffdecimalLength9(a+1));})(10).writeln;
    assert(s1 == s2);
    assert(s1 == s3);
}
---
Then bad surprise. Even with ldmd (so ldc2 basically) fed with the args from the script line, the fdecimalLength9 version is maybe slightly faster. Only *slightly*. Good news: I've lost my time.
So I tried an alternative version that uses a table of delegates instead of a switch (ffdecimalLength9) and surprise, "tada", it is like **100x** slower than the two others. How is that possible?
Re: DIP 1027---String Interpolation---Format Assessment
On Monday, 24 February 2020 at 10:02:26 UTC, Mike Parker wrote: I mean, people spend a lot of time thinking, making suggestions, etc. and the end result is: we now have nothing. Which, IMO, is the worst result for all. Not at all. In this case, as the DIP author, Walter could have chosen to revise the DIP with a new implementation. He chose not to. He wasn't persuaded by the arguments in the thread. Yeah, not to mention that in the first place I think he made a DIP for named args because the community was unable to formulate a good proposition, while he was himself not "pro-named args"... (maybe; I think I read this in the past)
Re: Custom asset handler messes unit test summary report
On Monday, 24 February 2020 at 00:50:38 UTC, ric maicle wrote: [dmd 2.090.1 linux 64-bit] The following code does not report the correct unit test summary. The report says 1 unit test passed.
~
shared static this()
{
    import core.exception;
    assertHandler(&cah);
}

void cah(string file, ulong line, string msg) nothrow
{
    import core.stdc.stdio: printf;
    printf("==\n");
    printf("Assert error: %s %d: %s\n", file.ptr, line, msg.ptr);
    printf("==\n");
}

unittest
{
    assert(false);
}
~
Looks like you have to throw an Error at the end of your custom handler, e.g:
---
void cah(string file, ulong line, string msg) nothrow
{
    import core.stdc.stdio: printf;
    import core.exception : AssertError;
    printf("==\n");
    printf("Assert error: %s %d: %s\n", file.ptr, line, msg.ptr);
    printf("==\n");
    throw new AssertError("");
}
---
For a betterC program you can use an HLT:
---
void cah(string file, ulong line, string msg) nothrow
{
    import core.stdc.stdio: printf, fflush, stdout;
    printf("==\n");
    printf("Assert error: %s %d: %s\n", file.ptr, line, msg.ptr);
    printf("==\n");
    fflush(stdout);
    asm nothrow { hlt; }
}
---
Re: Auto-generation of online documentation for my open libraries
On Sunday, 23 February 2020 at 17:14:33 UTC, Per Nordlöw wrote: I would like to setup auto-generation of online documentation for my public D libraries residing on Github and Gitlab. What alternatives do I have? For gitlab they have a system of pages that's quite easy to setup, with a .gitlab-ci.yml looking something like:
---
# to get dmd and dub
image: dlang2/dmd-ubuntu

# special section to generate gitlab "pages"
pages:
  before_script:
    - apt-get install -y
    -
  script:
    -
  artifacts:
    paths:
      - public
  only:
    - master
    -
---
and then your doc is online and updated for each push. It can be downloaded from the artifacts too, maybe in an automated way I think, but I haven't used this for now so I can't say.
Re: dscanner and ref parameters
On Sunday, 23 February 2020 at 12:28:41 UTC, mark wrote: On Sunday, 23 February 2020 at 09:35:30 UTC, Jacob Carlborg wrote: On 2020-02-23 10:03, mark wrote: Then this would not only help dscanner, but also make it clear to programmers that the argument could be modified. It's not necessary for dscanner. It should look at the signature of `getKeyval` to see that it takes an argument by `ref`. Just realised that the arg is 'out' not 'ref'; don't know if that makes a difference to dscanner. Anyway, I've made a bug report: https://github.com/dlang-community/D-Scanner/issues/793 This is like https://github.com/dlang-community/D-Scanner/issues/366 or https://github.com/dlang-community/D-Scanner/issues/298, i.e a false positive due to limitations. D-Scanner:
- only works at the module level (i.e it can't see declarations from an import);
- does not perform regular semantic analysis (not even the ones done for DCD);
People who care should just start developing a new linter based on DMD as a library. It's pretty clear (IMO) that these problems will never be fixed.
Re: How to get the name of an object's class at compile time?
On Monday, 17 February 2020 at 22:34:31 UTC, Stefan Koch wrote: Upon seeing this I just implemented typeid(stuff).name; https://github.com/dlang/dmd/pull/10796 With any luck this will be possible in the next release ;) Can this work using `stuff.classinfo.name` too? This is the same as `typeid(stuff).name`.
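The equivalence mentioned above can be checked with a small sketch (the module and class names here are mine, for illustration):

```d
// Both typeid(obj) and obj.classinfo yield the dynamic TypeInfo_Class
// of a class instance, so .name is the same fully qualified name.
module app;

import std.stdio;

class Foo {}

void main()
{
    Object o = new Foo;
    writeln(typeid(o).name);     // e.g "app.Foo"
    writeln(o.classinfo.name);   // same string
    assert(typeid(o).name == o.classinfo.name);
}
```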
operator overload for sh-like scripting ?
e.g
---
Sh(echo) < "meh";

struct Sh
{
    // you see the idea: we have an operator overload for < here
}
---
Re: How to iterate over range two items at a time
On Monday, 17 February 2020 at 09:41:35 UTC, Adnan wrote: On Monday, 17 February 2020 at 07:50:02 UTC, Mitacha wrote: On Monday, 17 February 2020 at 05:04:02 UTC, Adnan wrote: What is the equivalent of Rust's chunks_exact()[1] method in D? I want to iterate over a split string two chunks at a time. [1] https://doc.rust-lang.org/beta/std/primitive.slice.html#method.chunks_exact It sounds similar to `slide` https://dlang.org/phobos/std_range.html#slide The key difference here is that slide seems to overlap, which is a big no-no for my current problem. I have just gone with a classic for loop. Error prone, but just works. Yeah your analysis is right. slide is specifically made to overlap; if overlapping is not needed, slide is not needed.
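For the non-overlapping case, `std.range.chunks` is a closer match to Rust's chunks_exact than slide, with the caveat that the last chunk may be shorter rather than dropped; a small sketch:

```d
// chunks(n) yields consecutive, non-overlapping slices of length n.
import std.range : chunks;
import std.stdio;

void main()
{
    auto data = [1, 2, 3, 4, 5, 6];
    foreach (pair; data.chunks(2))
        writeln(pair); // [1, 2] then [3, 4] then [5, 6]
}
```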
Re: dub / use git branch
On Sunday, 16 February 2020 at 14:01:13 UTC, Robert M. Münch wrote: I want to use a specific branch version of a package. I specified the branch version in a dub.selections.json file. But it seems that dub requires a ZIP file that can be downloaded from code.dlang.org, which of course fails because the branch is only available on github. Fetching rtree ~fix-#3 (getting selected version)... Downloading https://code.dlang.org/packages/rtree/~fix-#3.zip failed with 404 (Not Found). How am I supposed to switch to a branch version of a package to try a bug-fix version for example? We have failed to get the most basic git support for a full year now: https://github.com/dlang/dub/pull/1802 https://github.com/dlang/dub/pull/1798 https://github.com/dlang/dub/pull/1403 So for now you have two options: 1/ in your project recipe, add the dependency as a path to a git submodule representing the DUB package and do the branch checkout manually or using a DUB pregenerate command. 2/ ask the dependency maintainer to add a git tag in the branch you'd like to use; the DUB registry will automatically create a new version of the package, using the right branch. This is done for example for the stx allocator and so far this works.
Re: Dscanner: is it possible to switch off style checks case-by-case?
On Thursday, 13 February 2020 at 17:15:50 UTC, mark wrote: I'm starting out with GtkD and have this function: void main(string[] args) { Main.init(args); auto game = new GameWindow(); Main.run(); } and this method: void quit(Widget widget) { Main.quit(); } When I run dscanner --styleCheck it reports: ./src/app.d(10:10)[warn]: Variable game is never used. ./src/app.d(22:22)[warn]: Parameter widget is never used. These are correct. However, is it possible to switch them off individually? Yes you can, but in the present case the analysis is right, because you can write
---
void main(string[] args)
{
    Main.init(args);
    new GameWindow();
    Main.run();
}
---
and for the other
---
void quit(Widget)
{
    Main.quit();
}
---
or even
---
void quit()
{
    Main.quit();
}
---
Otherwise, here is an example of how you can tune the different checks: https://raw.githubusercontent.com/dlang/phobos/master/.dscanner.ini See also the last section of https://raw.githubusercontent.com/dlang-community/D-Scanner/master/README.md
Re: Strange instruction sequence with DMD while calling functions with float parameters
On Friday, 14 February 2020 at 22:36:20 UTC, PatateVerte wrote: Hello I noticed a strange behaviour of the DMD compiler when it has to call a function with float arguments. I build with the flags "-mcpu=avx2 -O -m64" under windows 64 bits using "DMD32 D Compiler v2.090.1-dirty" I have the following function : float mul_add(float a, float b, float c); //Return a * b + c When I try to call it : float f = d_mul_add(1.0, 2.0, 3.0); I tested with other functions with float parameters, and there is the same problem. Then the following instructions are generated : //Loads the values, as it can be expected vmovss xmm2,dword [rel 0x64830] vmovss xmm1,dword [rel 0x64834] vmovss xmm0,dword [rel 0x64838] //Why ? movq r8,xmm2 movq rdx,xmm1 movq rcx,xmm0 // call 0x400 //0x400 is where the mul_add function is located My questions are : - Is there a reason why the registers xmm0/1/2 are saved in rcx/rdx/r8 before calling ? The calling convention specifies that the floating point parameters have to be put in xmm registers, and not GPR, unless you are using your own calling convention. - Why is it done using non-avx instructions ? Mixing AVX and non-AVX instructions may impact the speed greatly. Any idea ? Thank you in advance. It's simply bad codegen (or rather a missed opportunity to optimize) from DMD; its backend doesn't see that the parameters are already in the right order and in the right registers, so it copies them and puts them in the regs for the inner func call. I had observed this in the past too, i.e unexplained round-tripping from GP to SSE regs. For good FP codegen use LDC2 or GDC, or write iasm (but lose inlining). For other people who'd like to observe the problem: https://godbolt.org/z/gvqEqz. By the way, I had to deactivate AVX2 targeting because otherwise the result is even weirder (https://godbolt.org/z/T9NwMc)
Re: Dexed debugger UI now supports inspection of the variables based on mouse motion
On Thursday, 13 February 2020 at 09:06:26 UTC, Basile B. wrote: I don't know why I haven't implemented this earlier as it was quite simple. It's basically the same as when you evaluate a custom expression, except that you use the mouse position to extract a (more or less, TBH) precise unary expression. https://imgur.com/a/e4urRY9 Only problem is that GDB requires explicit dereferences, which are normally automatic in D semantics, so long chains of identifiers won't work and display '???'. Example: for D: a.d.c.somevar GDB needs: (*a).(*d).(*c).somevar if a, d and c are classes or struct pointers. So far dexed will display
__
exp: a.d.c.somevar
--- (result of -data-evaluate $exp)
--- (result of -data-evaluate *$exp)
__
so only one dereference, and in the a.d.c.somevar example this would not work. Related commit: https://gitlab.com/basile.b/dexed/-/commit/7a049959214db2e8818131b6943724e2b3a15ee5
Re: Is it possible to use DMD as a library to compile strings at runtime?
On Monday, 10 February 2020 at 12:31:03 UTC, Basile B. wrote: On Friday, 31 January 2020 at 14:25:30 UTC, Basile B. wrote: [...] about [1] (llvm) I've made a better binding this weekend: https://gitlab.com/basile.b/llvmd-d Seriously, I can't believe that at some point in the past I translated it by hand. dstep can handle big C libraries. ah ah ah, I meant https://gitlab.com/basile.b/llvm-d
Re: Is it possible to use DMD as a library to compile strings at runtime?
On Friday, 31 January 2020 at 14:25:30 UTC, Basile B. wrote: On Friday, 31 January 2020 at 11:19:37 UTC, Saurabh Das wrote: I see that DUB has DMD as a library package, but I was not able to understand how to use it. Is it possible to use DMD as a library within a D program to compile a string to machine code and run the compiled code at runtime? Thanks, Saurabh Fundamentally DMD as a library is a front-end. Jitting is on the backend side. You'll be able to lex and parse the source to get an AST, to perform the semantic passes on this AST, and that's all. Then to run this code you would need to make an AST visitor that generates the binary code to execute. Even using a specialized library with jitting abilities, such as LLVM-d [1] or libfirm-d [2], this would be *quite* a journey. [1] https://github.com/MoritzMaxeiner/llvm-d [2] https://gitlab.com/basile.b/libfirm-d about [1] (llvm) I've made a better binding this weekend: https://gitlab.com/basile.b/llvmd-d Seriously, I can't believe that at some point in the past I translated it by hand. dstep can handle big C libraries.
Re: total newbie + IDE
On Saturday, 8 February 2020 at 03:59:22 UTC, Borax Man wrote: As linked before, dexed is available here https://github.com/akira13641/dexed and I compiled it just a few days ago with success. It is a fork (check the count of commits). The most recent version is here https://gitlab.com/basile.b/dexed.
Re: Does D have an equvalent of: if (auto = expr; expr)
On Friday, 7 February 2020 at 08:52:44 UTC, mark wrote: Some languages support this kind of thing: if ((var x = expression) > 50) print(x, " is > 50") Is there anything similar in D? Yes, assuming that the expression is bool evaluable. This includes:
- pointers: `if (auto p = giveMeSomePtr()){}`
- class references: `if (auto p = giveMeSomeClasses()){}`
- integers: `if (auto p = giveMeAnInt()){}`
and using the in operator, as you've been answered previously. The problem is that this supports only one variable, and that the if condition must be either a variable or a relational expression, not both. To overcome the limitation of a single variable I've made a little template:
---
/**
 * Encapsulates several variables in a tuple that's usable as a if condition,
 * as a workaround to the single declaration allowed by the language.
 *
 * Params:
 *      a = The expressions giving the variables.
 *          The variables must be evaluable to $(D bool).
 *
 * Returns:
 *      A tuple containing the variables.
 */
auto ifVariables(A...)(auto ref A a)
if (A.length)
{
    static struct IfVariables(A...)
    {
        private A tup;
        alias tup this;

        this() @disable;
        this(this) @disable;

        this(ref A a)
        {
            tup = a;
        }

        bool opCast(T : bool)() const
        {
            static foreach (i; 0 .. A.length)
                if (!tup[i])
                    return false;
            return true;
        }
    }
    return IfVariables!A(a);
}

///
unittest
{
    assert(ifVariables(new Object, true, new Object));
    assert(!ifVariables(new Object, false, new Object));

    // typical usage
    bool isDlangExpressive(){return true;}
    if (auto a = ifVariables(new Object, isDlangExpressive())) {}

    // use the variables
    if (auto a = ifVariables(new Object, new Object))
    {
        assert(a.length == 2);
        assert(a[0] !is a[1]);
    }
}
---
Re: total newbie + IDE
On Friday, 7 February 2020 at 18:10:07 UTC, bachmeier wrote: On Friday, 7 February 2020 at 17:02:18 UTC, solnce wrote: Hi guys, I am total newbie and trying to learn a little bit of programming for personal purposes (web scrapping, small databases for personal use etc.). I've been trying to install any of IDE available, but had no success. I use Manjaro, so for the most task I use its AUR, where: Dexed points to obsolete github (https://github.com/Basile-z/dexed) [...] [...] The new Github link for Dexed is https://github.com/akira13641/dexed No, I've reuploaded the most recent version here: https://gitlab.com/basile.b/dexed. However, 1. no more binaries 2. I don't care about requests anymore. 3. the code for the Windows version is not maintained. So it's not recommended for newbies, or only for those who can manage things by themselves.
Re: Is it possible to use DMD as a library to compile strings at runtime?
On Friday, 31 January 2020 at 11:19:37 UTC, Saurabh Das wrote: I see that DUB has DMD as a library package, but I was not able to understand how to use it. Is it possible to use DMD as a library within a D program to compile a string to machine code and run the compiled code at runtime? Thanks, Saurabh Fundamentally DMD as a library is a front-end. Jitting is on the backend side. You'll be able to lex and parse the source to get an AST, to perform the semantic passes on this AST, and that's all. Then to run this code you would need to make an AST visitor that generates the binary code to execute. Even using a specialized library with jitting abilities, such as LLVM-d [1] or libfirm-d [2], this would be *quite* a journey. [1] https://github.com/MoritzMaxeiner/llvm-d [2] https://gitlab.com/basile.b/libfirm-d
Re: list of all defined items in a D file
On Friday, 24 January 2020 at 14:28:03 UTC, berni44 wrote: On Friday, 24 January 2020 at 12:22:49 UTC, Dennis wrote: You can pass the -X flag to dmd, which makes it generate a .json file describing the compiled file. Great, that's what I was looking for - although it's also good to know the __traits approach! Thanks! A third approach is to use libdparse and make your own AST serializer by writing a visitor (or even the DMD front end as a library). Although if it's a matter of time, the dmd -X flag is great in the sense that the output format is standard and straightforward.
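The `__traits` approach mentioned above can be sketched as follows (the module name `app` and the sample declarations are mine, for illustration):

```d
// List the top-level symbols of the current module at compile time.
module app;

import std.stdio;

int someVar;
void someFunc() {}

void main()
{
    // allMembers yields all declarations in the module,
    // including imported module names and main itself
    foreach (name; __traits(allMembers, app))
        writeln(name);
}
```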
Re: compiler error when trying to get random key from AA
On Saturday, 25 January 2020 at 09:18:01 UTC, Basile B. wrote: On Saturday, 25 January 2020 at 08:35:18 UTC, mark wrote: I have this code: import std.random; import std.stdio; void main() { auto aa = ["one": 1, "two": 2, "three": 3]; writeln(aa); auto rnd = rndGen; auto word = aa.byKey.choice(rnd); writeln(word); } And in the D playground it gives this error: onlineapp.d(8): Error: template std.random.choice cannot deduce function from argument types !()(Result, MersenneTwisterEngine!(uint, 32LU, 624LU, 397LU, 31LU, 2567483615u, 11LU, 4294967295u, 7LU, 2636928640u, 15LU, 4022730752u, 18LU, 1812433253u)), candidates are: /dlang/dmd/linux/bin64/../../src/phobos/std/random.d(2599): choice(Range, RandomGen = Random)(auto ref Range range, ref RandomGen urng) with Range = Result, RandomGen = MersenneTwisterEngine!(uint, 32LU, 624LU, 397LU, 31LU, 2567483615u, 11LU, 4294967295u, 7LU, 2636928640u, 15LU, 4022730752u, 18LU, 1812433253u) must satisfy the following constraint: isRandomAccessRange!Range /dlang/dmd/linux/bin64/../../src/phobos/std/random.d(2609): choice(Range)(auto ref Range range) I am treating aa as a set and want to pick a random word from it. What am I doing wrong? I'm sorry I can't give a link to this code in the D playground but the URL in the web browser is just https://run.dlang.io/ and when I click Shorten to get a URL nothing seems to happen (using Firefox on Linux). So the problem is that byKey is not a ref parameter, so you can use array on it: Well, the explanation for your error is rather that byKey does not verify isRandomAccessRange, i.e it's not indexable by an index, so .array on it solves this. pfff finally ...
Re: compiler error when trying to get random key from AA
On Saturday, 25 January 2020 at 08:35:18 UTC, mark wrote: I have this code: import std.random; import std.stdio; void main() { auto aa = ["one": 1, "two": 2, "three": 3]; writeln(aa); auto rnd = rndGen; auto word = aa.byKey.choice(rnd); writeln(word); } And in the D playground it gives this error: onlineapp.d(8): Error: template std.random.choice cannot deduce function from argument types !()(Result, MersenneTwisterEngine!(uint, 32LU, 624LU, 397LU, 31LU, 2567483615u, 11LU, 4294967295u, 7LU, 2636928640u, 15LU, 4022730752u, 18LU, 1812433253u)), candidates are: /dlang/dmd/linux/bin64/../../src/phobos/std/random.d(2599): choice(Range, RandomGen = Random)(auto ref Range range, ref RandomGen urng) with Range = Result, RandomGen = MersenneTwisterEngine!(uint, 32LU, 624LU, 397LU, 31LU, 2567483615u, 11LU, 4294967295u, 7LU, 2636928640u, 15LU, 4022730752u, 18LU, 1812433253u) must satisfy the following constraint: isRandomAccessRange!Range /dlang/dmd/linux/bin64/../../src/phobos/std/random.d(2609): choice(Range)(auto ref Range range) I am treating aa as a set and want to pick a random word from it. What am I doing wrong? I'm sorry I can't give a link to this code in the D playground but the URL in the web browser is just https://run.dlang.io/ and when I click Shorten to get a URL nothing seems to happen (using Firefox on Linux). So the problem is that byKey is not a ref parameter, so you can use array on it: --- import std.random; import std.stdio; import std.array; void main() { auto aa = ["one": 1, "two": 2, "three": 3]; writeln(aa); Random rnd; auto word = choice(aa.byKey.array, rnd); writeln(word); } --- sorry for the previous noise.
Re: compiler error when trying to get random key from AA
On Saturday, 25 January 2020 at 09:06:53 UTC, mark wrote: On Saturday, 25 January 2020 at 08:59:23 UTC, Basile B. wrote: On Saturday, 25 January 2020 at 08:35:18 UTC, mark wrote: [...] [snip] [...] rndGen is a range. Use `auto word = aa.byKey.choice(rnd.front())` as index instead. Then `rndGen.popFront()` to advance. I tried that. It doesn't solve the problem but does reduce the size of the error output to: onlineapp.d(9): Error: template std.random.choice cannot deduce function from argument types !()(Result, uint), candidates are: /dlang/dmd/linux/bin64/../../src/phobos/std/random.d(2599): choice(Range, RandomGen = Random)(auto ref Range range, ref RandomGen urng) /dlang/dmd/linux/bin64/../../src/phobos/std/random.d(2609): choice(Range)(auto ref Range range) yeah indeed. sorry, I didn't read and thought you needed the index of rndGen, e.g index % upperBound.
Re: compiler error when trying to get random key from AA
On Saturday, 25 January 2020 at 08:59:23 UTC, Basile B. wrote: On Saturday, 25 January 2020 at 08:35:18 UTC, mark wrote: [...] rndGen is a range. Use `auto word = aa.byKey.choice(rnd.front())` as index instead. Then `rndGen.popFront()` to advance. No, sorry, I didn't read properly and thought you needed the index of rndGen.
Re: compiler error when trying to get random key from AA
On Saturday, 25 January 2020 at 08:35:18 UTC, mark wrote: I have this code: import std.random; import std.stdio; void main() { auto aa = ["one": 1, "two": 2, "three": 3]; writeln(aa); auto rnd = rndGen; auto word = aa.byKey.choice(rnd); writeln(word); } And in the D playground it gives this error: onlineapp.d(8): Error: template std.random.choice cannot deduce function from argument types !()(Result, MersenneTwisterEngine!(uint, 32LU, 624LU, 397LU, 31LU, 2567483615u, 11LU, 4294967295u, 7LU, 2636928640u, 15LU, 4022730752u, 18LU, 1812433253u)), candidates are: /dlang/dmd/linux/bin64/../../src/phobos/std/random.d(2599): choice(Range, RandomGen = Random)(auto ref Range range, ref RandomGen urng) with Range = Result, RandomGen = MersenneTwisterEngine!(uint, 32LU, 624LU, 397LU, 31LU, 2567483615u, 11LU, 4294967295u, 7LU, 2636928640u, 15LU, 4022730752u, 18LU, 1812433253u) must satisfy the following constraint: isRandomAccessRange!Range /dlang/dmd/linux/bin64/../../src/phobos/std/random.d(2609): choice(Range)(auto ref Range range) I am treating aa as a set and want to pick a random word from it. What am I doing wrong? I'm sorry I can't give a link to this code in the D playground but the URL in the web browser is just https://run.dlang.io/ and when I click Shorten to get a URL nothing seems to happen (using Firefox on Linux). rndGen is a range. Use `auto word = aa.byKey.choice(rnd.front())` as index instead. Then `rndGen.popFront()` to advance.
Re: Bison 3.5 is released, and features a D backend
On Wednesday, 1 January 2020 at 09:47:11 UTC, Akim Demaille wrote: Hi all! GNU Bison 3.5 was released with a D backend (https://savannah.gnu.org/forum/forum.php?forum_id=9639). This backend is functional, and you can get a sense of its current shape by looking at the shipped example (a calculator, what did you expect?): https://github.com/akimd/bison/blob/master/examples/d/calc.y. Bison is an LR parser generator. It supports not only Yacc's original LALR(1) parsers, but also canonical LR and IELR(1) which are strictly more powerful (meaning: they accept wider classes of languages). It also features Generalized LR, which can even parse ambiguous grammars. The D backend currently does not support the full range of Bison features. We desperately need some skilled D programmer(s) to support this backend. It was first contributed by Oliver Mangold, based on Paolo Bonzini's Java backend. It was cleaned and improved thanks to H. S. Teoh, yet it's certainly not yet fitting perfectly the D spirit. Since the backend is still experimental, there is flexibility: it can be changed and improved until it meets the D community standards. If you would like to contribute, please reach out to us via bison-patc...@gnu.org, or help-bi...@gnu.org. Best wishes for 2020. Cheers! nice, thanks
Re: needing to change the order of things at module level = compiler bug, right?
On Sunday, 8 December 2019 at 18:13:59 UTC, DanielG wrote: On Sunday, 8 December 2019 at 18:01:03 UTC, Steven Schveighoffer wrote: Yes, if it can compile when you move things around, and the result is *correct* (very important characteristic) Indeed, everything's working as intended when rearranged. Thanks! Still worth opening an issue. https://issues.dlang.org/show_bug.cgi?id=20443
Re: Unexpectedly nice case of auto return type
On Wednesday, 4 December 2019 at 12:54:34 UTC, Basile B. wrote: On Wednesday, 4 December 2019 at 03:17:27 UTC, Adam D. Ruppe wrote: On Wednesday, 4 December 2019 at 01:28:00 UTC, H. S. Teoh wrote: typeof(return) is one of the lesser known cool things about D that make it so cool. Somebody should write an article about it to raise awareness of it. :-D you know i probably will write about that next week. so be sure to like, comment, and subscribe so you never miss my next post and then give me all your money on patreon so i can keep bringing you content :P :P I've just done the refactoring using typeof(null) and saved 78 SLOC. The pattern was: void issueError(); and then, in the code, in a dozen functions returning different class types: if (...) { issueError(); return null; } Now this becomes: typeof(null) issueError(); if (...) return issueError(); I wish I had known that trick before. I prefer typeof(null) over auto in case I ever translate the code to another language; I tend to write the static type when I know it anyway. 520 more lines removed today.
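The refactoring pattern described above, as a compilable sketch (issueError, Node and Expr are made-up names):

---
import std.stdio : writeln;

class Node {}
class Expr {}

// typeof(null) converts implicitly to any class or pointer type, so one
// error helper can terminate functions with different return types
typeof(null) issueError(string msg)
{
    writeln("error: ", msg);
    return null;
}

Node parseNode(bool ok)
{
    if (!ok)
        return issueError("bad node");
    return new Node;
}

Expr parseExpr(bool ok)
{
    if (!ok)
        return issueError("bad expr");
    return new Expr;
}

void main()
{
    assert(parseNode(false) is null);
    assert(parseExpr(true) !is null);
}
---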
Re: opCmp with and without const
On Friday, 6 December 2019 at 07:03:45 UTC, berni44 wrote: In std.typecons, in Tuple there are two opCmp functions, that are almost identical; they only differ by one being const and the other not: int opCmp(R)(R rhs) if (areCompatibleTuples!(typeof(this), R, "<")) { static foreach (i; 0 .. Types.length) { if (field[i] != rhs.field[i]) { return field[i] < rhs.field[i] ? -1 : 1; } } return 0; } int opCmp(R)(R rhs) const if (areCompatibleTuples!(typeof(this), R, "<")) { static foreach (i; 0 .. Types.length) { if (field[i] != rhs.field[i]) { return field[i] < rhs.field[i] ? -1 : 1; } } return 0; } What is the reason for having this? (I guess, that it's because the function may indirectly call opCmp of other types which may or may not be const.) My real question is: Can this code duplication be avoided somehow? Usually `inout` is used. I'm pretty sure this is not possible here otherwise it would be done to avoid the dup. (I ask, because I've got a PR running, which increases the size of these functions and it doesn't feel good to have two long, almost identical functions.) Well, the content of the body could be mixed in.
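What "mixed in" could look like, as a sketch: opCmpBody is a hypothetical helper, while field, Types and areCompatibleTuples are assumed to be the existing Tuple internals from std.typecons:

---
// hypothetical shared body, mixed into both overloads
private enum opCmpBody =
q{
    static foreach (i; 0 .. Types.length)
    {
        if (field[i] != rhs.field[i])
            return field[i] < rhs.field[i] ? -1 : 1;
    }
    return 0;
};

int opCmp(R)(R rhs)
if (areCompatibleTuples!(typeof(this), R, "<"))
{
    mixin(opCmpBody);
}

int opCmp(R)(R rhs) const
if (areCompatibleTuples!(typeof(this), R, "<"))
{
    mixin(opCmpBody);
}
---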
Re: confused about template constructors and implicit type conversions
On Wednesday, 4 December 2019 at 23:53:53 UTC, NeeO wrote: Would someone be able to explain this ? I can only seem to call a template constructor in one way, but I can't seem to pass what looks like an accepted type to the template constructor via a function call. /+ main.d +/ import std.stdio ; struct obj_ (T) { int demo ; this (int R,int C)(T[R][C] val) { writeln ("init for type is ",val.init) ; writeln ("num rows ",R ) ; writeln ("num cols ",C ) ; } } void check (obj_!float val) { writeln ("success") ; } int main () { float[3][4] testval ; obj_!float p = testval ; /+ works +/ check (testval) ; /+ not callable using argument types compiler error +/ return 0 ; } Hello, the problem you encounter here is that D, per spec, does not perform implicit construction from function arguments. You have to construct explicitly. Nothing more to explain.
Re: Unexpectedly nice case of auto return type
On Wednesday, 4 December 2019 at 03:17:27 UTC, Adam D. Ruppe wrote: On Wednesday, 4 December 2019 at 01:28:00 UTC, H. S. Teoh wrote: typeof(return) is one of the lesser known cool things about D that make it so cool. Somebody should write an article about it to raise awareness of it. :-D you know i probably will write about that next week. so be sure to like, comment, and subscribe so you never miss my next post and then give me all your money on patreon so i can keep bringing you content :P :P I've just done the refactoring using typeof(null) and saved 78 SLOC. The pattern was: void issueError(); and then, in the code, in a dozen functions returning different class types: if (...) { issueError(); return null; } Now this becomes: typeof(null) issueError(); if (...) return issueError(); I wish I had known that trick before. I prefer typeof(null) over auto in case I ever translate the code to another language; I tend to write the static type when I know it anyway.
Re: Unexpectedly nice case of auto return type
On Tuesday, 3 December 2019 at 10:19:02 UTC, Jonathan M Davis wrote: On Tuesday, December 3, 2019 3:03:22 AM MST Basile B. via Digitalmars-d- learn wrote: On Tuesday, 3 December 2019 at 09:58:36 UTC, Jonathan M Davis wrote: > On Tuesday, December 3, 2019 12:12:18 AM MST Basile B. via > > Digitalmars-d- learn wrote: >> I wish something like this was possible, until I change the >> return type of `alwaysReturnNull` from `void*` to `auto`. >> >> >> --- >> class A {} >> class B {} >> >> auto alwaysReturnNull() // void*, don't compile >> { >> >> writeln(); >> return null; >> >> } >> >> A testA() >> { >> >> return alwaysReturnNull(); >> >> } >> >> B testB() >> { >> >> return alwaysReturnNull(); >> >> } >> >> void main() >> { >> >> assert( testA() is null ); >> assert( testB() is null ); >> >> } >> --- >> >> OMG, isn't it nice that this works ? >> >> I think that this illustrates an non intuitive behavior of >> auto >> return types. >> One would rather expect auto to work depending on the inner >> return type. > > The void* version doesn't work, because void* doesn't > implicitly convert to a class type. It has nothing to do > with null. auto works thanks to the fact that typeof(null) > was added to the language a while back, and since class > references can be null, typeof(null) implicitly converts to > the class type. Before typeof(null) was added to the > language, null by itself had no type, since it's just a > literal representing the null value for any pointer or class > reference. The result was that using null in generic code or > with auto could run into issues. typeof(null) was added to > solve those problems. > > - Jonathan M Davis That's interesting details of D developement. Since you reply to the first message I think you have not followed but in the last reply I told that maybe we should be able to name the type of null. I think this relates to TBottom too a bit. 
There isn't much point in giving the type of null an explicit name given that it doesn't come up very often, and typeof(null) is quite explicit about what the type is. Also, anyone doing much generic programming in D is going to be well versed in typeof. They might not know about typeof(null) explicitly, but they should recognize what it means when they see it, and if someone were trying to get the type of null, it would be the obvious thing to try anyway. And typeof(null) isn't even the prime case where typeof gets used on something other than an object. From what I've seen, typeof(return) gets used far more. As for TBottom, while the DIP does give it a relationship to null, they're still separate things, and giving typeof(null) a name wouldn't affect TBottom at all. - Jonathan M Davis I think that any internal compiler type that also surfaces in user code should be nameable. Tuples, for example, really exist in the compiler (TupleExp, TypeTuple, also Tuple in dtemplate.d, ...), yet we still need a library type for them even though everything is already there in the compiler.
Re: Unexpectedly nice case of auto return type
On Tuesday, 3 December 2019 at 23:44:59 UTC, mipri wrote: On Tuesday, 3 December 2019 at 10:13:30 UTC, mipri wrote: Speaking of nice stuff and aliases, suppose you want to return a nice tuple with named elements? Option 1: auto auto option1() { return tuple!(int, "apples", int, "oranges")(1, 2); } Option 2: redundancy Tuple!(int, "apples", int, "oranges") option2() { return tuple!(int, "apples", int, "oranges")(1, 2); } Option 3: an alias alias BadMath = Tuple!(int, "apples", int, "oranges"); BadMath option3() { return BadMath(1, 2); } Option 4: typeof(return) Tuple!(int, "apples", int, "oranges") option4() { return typeof(return)(1, 2); } aha nice
Re: Floating-Point arithmetic in dlang - Difference to other languages
On Tuesday, 3 December 2019 at 09:22:49 UTC, Jan Hönig wrote: Today I stumbled on Hacker News into: https://0.30000000000000004.com/ I am learning D, that's why I have to ask. Why does writefln("%.17f", .1+.2); not evaluate into: 0.30000000000000004, like C++, but rather to: 0.29999999999999999 Many other languages evaluate to 0.30000000000000004 as well. D lang could have a very good sprintf replacement.
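One variable in this comparison is whether the addition is constant-folded at compile time (possibly at a higher precision than double) or performed at runtime in double; a sketch that forces the runtime double case:

---
import std.stdio : writefln;

void main()
{
    // runtime double arithmetic, no compile-time constant folding
    double a = 0.1;
    double b = 0.2;
    writefln("%.17f", a + b);
}
---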
Re: Unexpectedly nice case of auto return type
On Tuesday, 3 December 2019 at 10:19:02 UTC, Jonathan M Davis wrote: On Tuesday, December 3, 2019 3:03:22 AM MST Basile B. via Digitalmars-d- learn wrote: [...] There isn't much point in giving the type of null an explicit name given that it doesn't come up very often, and typeof(null) is quite explicit about what the type is. Also, anyone doing much generic programming in D is going to be well versed in typeof. They might not know about typeof(null) explicitly, but they should recognize what it means when they see it, and if someone were trying to get the type of null, it would be the obvious thing to try anyway. And typeof(null) isn't even the prime case where typeof gets used on something other than an object. From what I've seen, typeof(return) gets used far more. [...] you're right but I see two cases: - transpiling - header generation
Re: Unexpectedly nice case of auto return type
On Tuesday, 3 December 2019 at 09:58:36 UTC, Jonathan M Davis wrote: On Tuesday, December 3, 2019 12:12:18 AM MST Basile B. via Digitalmars-d- learn wrote: I wish something like this was possible, until I change the return type of `alwaysReturnNull` from `void*` to `auto`. --- class A {} class B {} auto alwaysReturnNull() // void*, don't compile { writeln(); return null; } A testA() { return alwaysReturnNull(); } B testB() { return alwaysReturnNull(); } void main() { assert( testA() is null ); assert( testB() is null ); } --- OMG, isn't it nice that this works ? I think that this illustrates an non intuitive behavior of auto return types. One would rather expect auto to work depending on the inner return type. The void* version doesn't work, because void* doesn't implicitly convert to a class type. It has nothing to do with null. auto works thanks to the fact that typeof(null) was added to the language a while back, and since class references can be null, typeof(null) implicitly converts to the class type. Before typeof(null) was added to the language, null by itself had no type, since it's just a literal representing the null value for any pointer or class reference. The result was that using null in generic code or with auto could run into issues. typeof(null) was added to solve those problems. - Jonathan M Davis Those are interesting details of D development. Since you replied to the first message I think you have not followed the thread, but in my last reply I said that maybe we should be able to name the type of null. I think this relates to TBottom a bit too.
Re: Unexpectedly nice case of auto return type
On Tuesday, 3 December 2019 at 09:44:20 UTC, Basile B. wrote: On Tuesday, 3 December 2019 at 08:47:45 UTC, Andrea Fontana wrote: On Tuesday, 3 December 2019 at 07:24:31 UTC, Basile B. wrote: A testA() { return alwaysReturnNull(); // Tnull can be implicitly converted to A } still nice tho. Why not [1]? [1] typeof(null) alwaysReturnNull() { ... } Andrea Yeah nice, that works instead of auto. That reminds me of the discussion about TBottom. What surprises me here is that we cannot express the special type `TypeNull`, which can only have one value (`null`); instead we have to use `auto` or `typeof(null)`. Making this type public would make the logic clearer: `null` is not just a value, but a value of a special type that implicitly converts to all pointer and reference types.
Re: Unexpectedly nice case of auto return type
On Tuesday, 3 December 2019 at 08:47:45 UTC, Andrea Fontana wrote: On Tuesday, 3 December 2019 at 07:24:31 UTC, Basile B. wrote: A testA() { return alwaysReturnNull(); // Tnull can be implictly converted to A } still nice tho. Why not [1]? [1] typeof(null) alwaysReturnNull() { ... } Andrea Yeah nice, that works instead of auto. That reminds me of the discussion about TBottom.
Re: Unexpectedly nice case of auto return type
On Tuesday, 3 December 2019 at 07:12:18 UTC, Basile B. wrote: I wish something like this was possible, until I changed the return type of `alwaysReturnNull` from `void*` to `auto`. --- class A {} class B {} auto alwaysReturnNull() // with void*, doesn't compile { writeln(); return null; } A testA() { return alwaysReturnNull(); } B testB() { return alwaysReturnNull(); } void main() { assert( testA() is null ); assert( testB() is null ); } --- OMG, isn't it nice that this works? I think that this illustrates a non-intuitive behavior of auto return types. One would rather expect auto to work depending on the inner return type. Actually I think this works because of a Tnull (internal compiler type) inference, which can always be implicitly converted to the static return type (A or B, or even int*): auto alwaysReturnNull() // `auto` is inferred as Tnull because of `return null` then: A testA() { return alwaysReturnNull(); // Tnull can be implicitly converted to A } still nice tho.
Unexpectedly nice case of auto return type
I wished something like this was possible, until I changed the return type of `alwaysReturnNull` from `void*` to `auto`.

---
class A {}
class B {}

auto alwaysReturnNull() // with void*, doesn't compile
{
    writeln();
    return null;
}

A testA()
{
    return alwaysReturnNull();
}

B testB()
{
    return alwaysReturnNull();
}

void main()
{
    assert( testA() is null );
    assert( testB() is null );
}
---

OMG, isn't it nice that this works? I think that this illustrates a non-intuitive behavior of auto return types. One would rather expect auto to work depending on the inner return type.
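The working variant, with the inferred type spelled out as typeof(null), as a self-contained sketch:

---
import std.stdio : writeln;

class A {}
class B {}

// typeof(null) implicitly converts to any class reference,
// so both testA and testB compile
typeof(null) alwaysReturnNull()
{
    writeln("called");
    return null;
}

A testA() { return alwaysReturnNull(); }
B testB() { return alwaysReturnNull(); }

void main()
{
    assert(testA() is null);
    assert(testB() is null);
}
---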
Re: Why same pointer type for GC and manual memory?
On Friday, 15 November 2019 at 10:55:55 UTC, IGotD- wrote: On Friday, 15 November 2019 at 08:58:43 UTC, user1234 wrote: On Wednesday, 13 November 2019 at 11:07:12 UTC, IGotD- wrote: I'm trying to find the rationale why GC pointers (should be names managed pointers) are using the exact same type as any other pointer. Doesn't this limit the ability to change the default GC type? Doesn't this confusion make GC pointers just as unsafe as raw pointers? Has there been any prior discussion about introducing managed pointers in D? One other reason is that special pointers are not good for debugging and have a runtime cost... in D it's not like in a VM. It's true it has a runtime cost but I'm wondering if it isn't worth it. There will be an extra dereference of course but this is only done once per for every piece that use the fat pointer. With a fat pointer you could also provide more information, allocated size for example which makes it possible to safely cast the pointer to anything checking that the boundaries are correct. Also reading a fat pointer (like *ptr) can check the bounds. Size information is also handy for some allocators, buddy allocator for example where storing the size in the allocated block is something that you not always want to as it throws the memory off the natural alignment which is one of the benefits of the buddy allocator. In the fat pointer you could store more information like reference count, type of allocator, a pointer to free function (makes it possible to hotswap allocators). When it comes to debugging, then you need to parse the fat pointer, just like it parses common containers classes. This is not a too big deal I think. TBH I see your point, but D is a systems programming language. Even if there's a GC, you can also do Manual Memory Management (sometimes you'll see "MMM" used to refer to that in the forums) or RC, and you can also write custom machine code in asm blocks.
In most cases, if you just stick to the GC way, it'll be alright, even though pointers are actual machine pointers, i.e. usable in asm blocks ;) Recently some people discovered, however, that an aggressive antivirus scanner could cause problems... Anyway, just to say, D produces machine code and doesn't require a VM... if you need fat pointers for custom memory management, feel free. I don't think this will ever change.
Re: Unexpected aliasing
On Tuesday, 12 November 2019 at 07:59:39 UTC, Bastiaan Veelo wrote: On Monday, 11 November 2019 at 20:05:11 UTC, Antonio Corbi wrote: [...] Thanks, Antonio. My problem is that the length of the array should be a built-in property of WrapIntegerArray (immutable in this case); what I'd actually want is a constructor without arguments. Jonathan's suggestion of using a factory function comes closest to that. Bastiaan. I'm curious what the equivalent in Pascal is that your transpiler fails to translate, since Pascal records don't have constructors at all. Maybe you used an old-school 'Object'?
Re: How would I write a travis-ci file for a Meson Dlang project?
On Tuesday, 16 July 2019 at 15:07:11 UTC, Mike Brockus wrote: If you never seen Meson before then pick up a camera and take a picture: 樂 https://mesonbuild.com/ Hello, everyone. I started adding continues integration as part of my development cycle and I was wondering how would I write a '.travis.yml' file for a D language project using Meson build system. https://github.com/squidfarts/c-executable.git Hello, an example https://github.com/dlang-community/libdparse/blob/master/.travis.yml
Re: Is there any way to define an interface that can implicitly convert to Object?
On Wednesday, 10 July 2019 at 08:03:30 UTC, Nathan S. wrote: I want to be able to do things like: --- bool isSame(Object a, Object b) { return a is b; } interface SomeInterface { int whatever(); } bool failsToCompile(SomeInterface a, SomeInterface b) { return isSame(a, b); } --- Error: function isSame(Object a, Object b) is not callable using argument types (SomeInterface, SomeInterface) Is there a way to declare an interface as explicitly not a COM interface or a C++ interface? Having to add "cast(Object)" everywhere is annoying. Hi, not tested extensively, but:

---
import std.stdio : writeln;

interface Foo
{
    final Object asObject() { return cast(Object) this; }
    alias asObject this;
}

class Bar : Foo {}

void main(string[] args)
{
    Foo foo = new Bar;
    Object o = foo;
    writeln(o);
}
---
Re: Mixin mangled name
On Monday, 1 July 2019 at 23:52:49 UTC, Andrey wrote: Hello, Is it possible to mixin in code a mangled name of some entity so that compiler didn't emit undefined symbol error? For example mangled function name or template parameter? Yes. An example from the DMD test suite itself : https://run.dlang.io/is/ctq77S
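For reference, .mangleof yields a symbol's mangled name as a compile-time string that can then be used inside mixed-in code; a minimal sketch (not the linked test case):

---
import std.stdio : writeln;

int freeFunc(int x) { return x; }

void main()
{
    // .mangleof is evaluated at compile time
    enum mangled = freeFunc.mangleof;
    writeln(mangled);
}
---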
Re: argument parsing into structure
On Wednesday, 26 June 2019 at 23:35:59 UTC, Jesse Phillips wrote: On Wednesday, 26 June 2019 at 14:58:08 UTC, Basile B. wrote: On Wednesday, 26 June 2019 at 09:40:06 UTC, JN wrote: On Wednesday, 26 June 2019 at 05:38:32 UTC, Jesse Phillips wrote: Sometimes a good API isn't the right answer. I like getopt as it is but I wanted a little different control. So I wrote up an article on my work around. https://dev.to/jessekphillips/argument-parsing-into-structure-4p4n I have another technique for sub commands I should write about too. http://code.dlang.org/packages/argsd It seems several of us have written alternative getopt() systems. I made one last year too; it allows writing tools very nicely. Recent example for a device flasher: The thing is, I didn't write an alternative, mine utilizes getopt which was the point of the article. I could build out new functionality onto a standard API which didn't support it. It isn't as robust as it could be. Sorry buddy, I didn't want to ruin your announcement. Hold on ;)
Re: argument parsing into structure
On Wednesday, 26 June 2019 at 09:40:06 UTC, JN wrote: On Wednesday, 26 June 2019 at 05:38:32 UTC, Jesse Phillips wrote: Sometimes a good API isn't the right answer. I like getopt as it is but I wanted a little different control. So I wrote up an article on my work around. https://dev.to/jessekphillips/argument-parsing-into-structure-4p4n I have another technique for sub commands I should write about too. http://code.dlang.org/packages/argsd It seems several of us have written alternative getopt() systems. I made one last year too; it allows writing tools very nicely. Recent example for a device flasher:

---
module app;

import iz.options;

struct Handler
{
private:
public:
    /// Handle the "--help" command.
    @Argument("--help", "prints the command line usage", ArgFlags(ArgFlag.allowShort))
    static void getHelp();

    /// Handle the "--info" command.
    @Argument("--info", "prints the list of connected mods", ArgFlags(ArgFlag.allowShort))
    static void getInfo();

    /// Handle the "--readFlash" command.
    @Argument("--readFlash", "read the firmware from the mod and save it to the specified file")
    static void readFlash(string fname);

    /// Handle the "--writeFlash" command.
    @Argument("--writeFlash", "read the firmware from the specified file and send it to the mod")
    static void saveFlash(string fname);

    /// Handle the "--reset" command.
    @Argument("--reset", "reset to factory state", ArgFlags(ArgFlag.allowShort))
    static void reset();
}

void main(string[] args)
{
    if (!handleArguments!(CanThrow, Handler)(args[1..$]) || args.length == 1)
        writeln(help!Handler);
}
---

https://github.com/Basile-z/iz/blob/master/import/iz/options.d#L379 Though I've noticed several little flaws since it was written; I don't care much about my own stuff these days...
Re: Is it possible to escape a reserved keyword in Import/module?
On Wednesday, 19 June 2019 at 21:21:53 UTC, XavierAP wrote: On Wednesday, 19 June 2019 at 18:56:57 UTC, BoQsc wrote: I would like to make sure that my modules do not interfere with d lang. Is there any way to escape reserved words? The only reason C# allows this is for interop or code generation for other languages that use the same keyword. For example "class" is an HTML attribute. There is no excuse to do this for any other reason -- and C# gurus would also agree. I would like to make sure that my modules do not interfere Then don't name them as keywords :) I have used a similar system (&), which exists in ObjFPC, twice. The context was an RTTI inspector; it allowed enum members to be displayed without a prefix or a translation table. Just to say, it's rarely useful but nice to have. I would have preferred # so much more than the "body" -> "do" change, which was a bad decision because it focused on a detail. You mentioned "class"...
Re: Is it possible to escape a reserved keyword in Import/module?
On Wednesday, 19 June 2019 at 19:07:30 UTC, Jonathan M Davis wrote: On Wednesday, June 19, 2019 12:56:57 PM MDT BoQsc via Digitalmars-d-learn wrote: I would like to make sure that my modules do not interfere with d lang. Is there any way to escape reserved words? https://dlang.org/spec/lex.html#keywords > import alias; C:\Users\Juozas\Desktop\om.d(2): Error: identifier expected following import C:\Users\Juozas\Desktop\om.d(2): Error: ; expected > module abstract; C:\Users\Juozas\Desktop\commands\alias.d(1): Error: identifier expected following module You can never use keywords as identifiers in D (or any language in the C family that I've ever heard of). C# can use them when they are prefixed with "@" [1]. At some point the idea was brought up for D [2]. [1] https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/index [2] https://github.com/dlang/DIPs/pull/52/files
Re: my first kernel in betterC D
On Sunday, 16 June 2019 at 16:14:26 UTC, Laeeth Isharc wrote: https://github.com/kaleidicforks/mkernel-d I spent a few minutes on just turning the C code to betterC D - was curious to see if it would work. It seems to. I didn't try loading with GRUB. The dub.sdl isn't quite right, so best run ./build.sh Cannot push to code.dlang.org - it complains about registering a forked package, even after renaming. IIRC there used to be a protection supposed to prevent hijacking of package names, e.g someone that would register from a fork before the original package, blocking the original author, considered as more legit. But https://github.com/dlang/dub-registry/pull/425/files seems to say that it should work now.
Re: D IDE dexed - v3.7.10 available
On Thursday, 13 June 2019 at 20:12:41 UTC, Machine Code wrote: On Monday, 10 June 2019 at 20:34:14 UTC, Basile B. wrote: A small update of this IDE dedicated to the D languages and its tools [1]. Only some small fixes and adjustments, see [2] for details and pre-compiled binaries. [1] https://github.com/Basile-z/dexed [2] https://github.com/Basile-z/dexed/releases/tag/v3.7.10 You're the author? Thank you very much! When I was working on project on linux, this was the best I could find. It has a issue with crashing but deleting a temporary file (sorry don't remeber name now...) fixed the issue. So you've written this IDE by youtself? I was interesting in building a small IDE. Do you have any books (not sure if exists) on topic? or links/resource that might be helpful? I don't think there's anything written on the topic, because it's not really one technique: UX best practices (the way the toolkit is made should enforce those automatically), the observer pattern, the mediator pattern, processes and standard streams, plus a lot of serialization, and you're good.
Re: D IDE dexed - v3.7.10 available
On Tuesday, 11 June 2019 at 21:05:05 UTC, Kaylan Tussey wrote: On Monday, 10 June 2019 at 20:34:14 UTC, Basile B. wrote: A small update of this IDE dedicated to the D languages and its tools [1]. Only some small fixes and adjustments, see [2] for details and pre-compiled binaries. [1] https://github.com/Basile-z/dexed [2] https://github.com/Basile-z/dexed/releases/tag/v3.7.10 I tried this ide. Best one aside from vd+vs imo. But it has one problem. It's written in a language i'm not familiar with :(. I can't add any functionality I wanted, such as a really nice directory/file manipulator. I found myself getting down and dirty in windows explorer then directing dexed to the folders/files. Unless I missed something? There's not much to add to the mini explorer anymore but shell actions (rename, delete, etc.). It looks easy at first glance but a naive implementation will not allow to undo/redo from the Windows file explorer. You can suggest changes here : https://github.com/Basile-z/dexed/issues. The suggestion for the shell actions is already opened.
Re: D IDE dexed - v3.7.10 available
On Thursday, 13 June 2019 at 05:09:34 UTC, gleb.tsk wrote: On Monday, 10 June 2019 at 20:34:14 UTC, Basile B. wrote: [1] https://github.com/Basile-z/dexed [2] https://github.com/Basile-z/dexed/releases/tag/v3.7.10 Thank you, very interesting. But... lazbuild -B -r dexed.lpi TEditorToolBarOptions.Load: Using old configuration in editortoolbar.xml. Hint: (lazarus) [RunTool] /usr/bin/fpc "-iWTOTP" Hint: (lazarus) [RunTool] /usr/bin/fpc "-va" "-Fr/usr/lib64/fpc/msg/errore.msg" "compilertest.pas" Error: (lazbuild) Broken dependency: DexedDesignControls See https://github.com/Basile-z/dexed/issues/456. Unfortunately the reporter never gave any feedback, so I closed it. All I know is that installing the package in Lazarus, then rebuilding Lazarus, then building the project will work.
Re: D compiler needs a -nogc switch and the library documentation also needs a nogc button
On Tuesday, 11 June 2019 at 08:08:14 UTC, Basile B. wrote: On Tuesday, 11 June 2019 at 08:05:31 UTC, dangbinghoo wrote: hi there, I think that the D compiler needs a -nogc switch to fully disable the GC for a project, and the Phobos documentation also needs a friendly way to list out all the @nogc APIs. We already have -betterC, and betterC disables the GC, so why can't we have a standalone option for doing this? binghoo dang you can do this with a druntime switch or by calling GC.disable in your main(). The druntime switch is "--DRT-gcopt=gc:manual"; you pass it to the compiled program alongside its own args.
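A minimal sketch of the in-program variant; note that GC.disable only suspends collections, allocations still go through the GC heap:

---
import core.memory;

void main(string[] args)
{
    GC.disable();   // suspend collections; allocations still use the GC heap
    // ... program logic ...
}
---

The runtime-switch variant needs no code change: the flag is passed on the command line alongside the program's own arguments, e.g. ./app --DRT-gcopt=gc:manual.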
Re: D compiler needs a -nogc switch and the library documentation also needs a nogc button
On Tuesday, 11 June 2019 at 08:05:31 UTC, dangbinghoo wrote: hi there, I think that the D compiler needs a -nogc switch to fully disable the GC for a project, and the Phobos documentation also needs a friendly way to list out all the @nogc APIs. We already have -betterC, and betterC disables the GC, so why can't we have a standalone option for doing this? binghoo dang you can do this with a druntime switch or by calling GC.disable in your main()
D IDE dexed - v3.7.10 available
A small update of this IDE dedicated to the D language and its tools [1]. Only some small fixes and adjustments, see [2] for details and pre-compiled binaries. [1] https://github.com/Basile-z/dexed [2] https://github.com/Basile-z/dexed/releases/tag/v3.7.10
Re: The D Blog in 2018
On Sunday, 2 June 2019 at 20:08:28 UTC, Murilo wrote: Hi everyone. I don't mean to spam Sure, otherwise you would not have posted this 3 times in a row: - https://forum.dlang.org/post/tjoipokamsvpbemzd...@forum.dlang.org - https://forum.dlang.org/reply/hebsehdcxhlhkzwxh...@forum.dlang.org
Re: Very simple null reference escape
On Sunday, 2 June 2019 at 07:55:27 UTC, Amex wrote: A.B: if A is null, crash. A?.B : writeln("HAHA");: no crash, ignored, equivalent to if (A is null) writeln("HAHA"); else A.B; safeAccess from iz does this: https://github.com/Basile-z/iz/blob/master/import/iz/sugar.d#L1666
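A minimal sketch of how such a helper can be built with opDispatch; this is illustrative only, not the actual iz implementation:

---
// wraps a class reference; member access yields the member's default
// value instead of crashing when the reference is null
struct SafeAccess(T)
if (is(T == class))
{
    T instance;

    auto opDispatch(string member)()
    {
        alias R = typeof(mixin("instance." ~ member));
        return instance is null ? R.init : mixin("instance." ~ member);
    }
}

auto safeAccess(T)(T instance)
if (is(T == class))
{
    return SafeAccess!T(instance);
}
---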
Re: import and call
On Sunday, 2 June 2019 at 19:38:11 UTC, Amex wrote: Tired of having to import a single function to call it. Since mod.foo(x); doesn't work, because mod is not defined, we have to do import mod : foo; foo(x); Why not mod:foo(x)? or mod#foo(x) or mod@foo(x) or whatever? Reduces 50% of the lines and reduces the import symbol. I realize that we could do import m = mod; m.foo(x); but the idea is to only import the single function. I'm not sure if it matters. I thought importing single functions was supposed to be faster. Am I wrong? The idea is to reduce having to litter the code with imports, which I find I'm always having to do, or to make them global... just for a few calls into them. Expression-based imports are possible using a mixin; see https://dlang.org/blog/2017/02/13/a-new-import-idiom/
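The idiom from the linked blog post boils down to a small template mixin:

---
// the "new import idiom": the import is scoped to the template
// instantiation, so no top-level import statement is needed
template from(string moduleName)
{
    mixin("import from = " ~ moduleName ~ ";");
}

void main()
{
    from!"std.stdio".writeln("hello");
}
---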
Re: Let's celebrate Dlang on D day
On Saturday, 25 May 2019 at 03:22:50 UTC, Murilo wrote: On the 6th of June (6/6) we celebrate the D-Day landings in Normandy, but I have decided to turn it into our own holiday to celebrate the D language. So on this day please take the time to tell the world about this language and to invite more people into our community. I will try to give some talks at universities in order to get the attention of the people. I suggest you all do similar stuff. In the Dlang Facebook group https://www.facebook.com/groups/662119670846705/ which has already reached 135 members, we will be doing lots of fun stuff. Please show up and join the group to participate. I will try to turn this into an actual holiday. I hope you can all help me out. I don't think it's a good idea. One of my grandfathers served in the French Navy when the ships were scuttled at Toulon. The other came here from Algeria, from a Spanish colony... This D-Day idea is completely stupid.
Re: Bitfields
On Wednesday, 22 May 2019 at 08:54:45 UTC, Russel Winder wrote: On Tue, 2019-05-21 at 19:14, Era Scarecrow via Digitalmars-d-learn wrote: […] I worked on/with bitfields in the past; the size limit more or less comes from the native int types that D supports. The Rust bitfield crate and its macros are the same: the underlying type for a bitfield must be a primitive integer type. Fortunately, Rust has i128 and u128, which is enough for my 112-bit EIT header. Boris Barboris suggested using BitArray and I will investigate, but the size_t/byte problem would need to go away. However this limitation is kind of arbitrary; for simplicity it relies on shifting bits, and going larger or to any byte size is possible depending on what needs to be stored, but it's the speed that really takes a penalty when you aren't using native types or have to do a lot of shifting to get the job done. What's the layout of what you need? I'll see if I can't make something that would work for you. It would be better if you can use an object that breaks the parts down so you can fully access those parts, then just re-store it into the limited space you want for storage, which would then be faster than bitfields (although not by much). I found an interesting way forward in the source code of dvbsnoop. It basically uses the byte sequence as a backing store and then has a function to do the necessary accesses to treat it as a bit array. If D's BitArray can work with bytes instead of size_t then it is exactly what dvbsnoop does (in C) but adds writing as well as reading. The Rust solution using bitfield with a u128 backing it seems to work, but it is all very clumsy even if it is efficacious. If the type of operations you need on your 128-bit container is simple enough (bitwise ops) you can implement them, backed by two ulong in a custom struct. I did something similar for the tokens of my language[1]. There is probably some wrong stuff in there, but for my usage (include and bt) it works seamlessly.
[1]: https://github.com/Basile-z/styx/blob/master/src/styx/data_structures.d#L40
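A minimal sketch of such a two-ulong container, showing only bit test and set; this is illustrative, not the actual styx code:

---
// 128-bit fixed-size bit container backed by two ulongs
struct Bits128
{
    ulong lo, hi;

    // bt-style bit test
    bool bt(size_t index) const
    {
        return index < 64 ? ((lo >> index) & 1) != 0
                          : ((hi >> (index - 64)) & 1) != 0;
    }

    // set the bit at index
    void bts(size_t index)
    {
        if (index < 64) lo |= 1UL << index;
        else hi |= 1UL << (index - 64);
    }
}
---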
Re: Performance of tables slower than built in?
On Wednesday, 22 May 2019 at 08:25:58 UTC, Basile B. wrote: On Wednesday, 22 May 2019 at 00:22:09 UTC, JS wrote: I am trying to create some fast sin, sinc, and exponential routines to speed up some code by using tables... but it seems it's slower than the function itself?!? [...] Hi, lookup tables ARE faster but the problem you have here, and I'm surprised that nobody noticed it so far, is that YOUR SWITCH LEADS TO A STRING COMPARISON AT RUNTIME. Just replace it with a static if (Method = "Linear") { /*...*/ } else { /*...*/ } Oh no... I meant "==" obviously. Also take care with the type used. With DMD the implicit coercion of float and double can lead to extra conversions. You'll directly see a 15% gain after refactoring the switch.
Re: Performance of tables slower than built in?
On Wednesday, 22 May 2019 at 00:22:09 UTC, JS wrote: I am trying to create some fast sin, sinc, and exponential routines to speed up some code by using tables... but it seems it's slower than the function itself?!? [...] Hi, lookup tables ARE faster but the problem you have here, and I'm surprised that nobody noticed it so far, is that YOUR SWITCH LEADS TO A STRING COMPARISON AT RUNTIME. Just replace it with a static if (Method = "Linear") { /*...*/ } else { /*...*/ } Also take care with the type used. With DMD the implicit coercion of float and double can lead to extra conversions. You'll directly see a 15% gain after refactoring the switch.
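The suggested fix can be sketched like this (names are illustrative, and "==" is the intended operator); the point is that the branch is selected once, at template instantiation, instead of comparing strings on every call:

---
// resolved at compile time: no string comparison at run time
double interp(string Method)(double a, double b, double t)
{
    static if (Method == "Linear")
        return a + (b - a) * t;
    else
        return t < 0.5 ? a : b;  // nearest-neighbour fallback
}

void main()
{
    auto x = interp!"Linear"(0.0, 1.0, 0.25);
}
---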
Re: D GUI Framework (responsive grid teaser)
On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote: Hi, we are currently building up our new technology stack and for this are creating a 2D GUI framework. https://www.dropbox.com/s/iu988snx2lqockb/Bildschirmaufnahme%202019-05-19%20um%2022.32.46.mov?dl=0 The screencast shows a responsive 40x40 grid. Laying out the grid takes about 230ms, drawing it about 10ms. The mouse clicks are handled via a reactive message stream and routed, using a spatial index, to all graphical objects that are hit. The application-code part is about 50 lines of code; the rest is handled by the framework. With all this working now, we have all the necessary building blocks working together. Next steps are to create more widgets and add a visual style system. The widgets themselves are style-free and wire-frame only, for debugging purposes. What kind of layouting? GTK-like? DelphiVCL-like? Flex-like?
Re: D GUI Framework (responsive grid teaser)
On Tuesday, 21 May 2019 at 14:04:29 UTC, Robert M. Münch wrote: On 2019-05-19 21:21:55, Ola Fosheim Grøstad said: Interesting, is each cell a separate item then? So assuming a 3GHz CPU, we get 0.23*3e9/1600 = 431250 cycles per cell? That's a lot of work. Here is a new screencast: https://www.dropbox.com/s/ywywr7dp5v8rfoz/Bildschirmaufnahme%202019-05-21%20um%2015.20.59.mov?dl=0 I optimized the whole thing a bit, so now a complete screen with layouting, hit-testing and drawing takes about 28ms; that's 8x faster than before. Drawing is still around 10ms, layouting around 16ms, spatial-index handling 2ms. So this gives us 36 FPS, which is IMO pretty good for a desktop app target. There might be some 2-3ms speed-up still possible but it's not worth the effort yet. OpenGL backend, I presume?