What is the recommended tool for D linting from CI pipelines? Does such a tool exist at all?
Hi, many companies have started to use CI pipelines, and as part of their pipelines they introduced mandatory linting for source code. There are tools for many languages, especially for C/C++. These tools usually return '0' on success and something else on linting errors. That is pretty much the standard POSIX way to return a result code back to the calling program/shell. The CI pipeline aborts when anything other than '0' is returned. However, for the D language I found only 'dscanner'. It has a report option, but it seems to always return '0'; I have to check the JSON-style output to find out whether any issues were reported. This makes things more complicated than necessary. So, my simple questions are:

* What is the recommended way to do automated linting for D sources in CI pipelines or similar?
* Is there any other tool than dscanner that does serious linting for the D language?

Carsten
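Until a linter with proper exit codes turns up, one workaround is to derive the CI result from dscanner's report yourself. This is only a sketch: the `"message"` key is an assumption about the shape of the JSON report, so check it against the output of the dscanner version you actually run.

```shell
# CI step sketch: fail the pipeline when dscanner reports anything.
# ASSUMPTION: each finding in the JSON report carries a "message" key;
# verify the key name against your dscanner version's actual output.
dscanner --report src/ > report.json
issues=$(grep -c '"message"' report.json || true)
if [ "$issues" -gt 0 ]; then
    echo "dscanner reported $issues issue(s)"
    exit 1
fi
```

The `|| true` matters in pipelines run with `set -e`: a clean report gives grep zero matches and a nonzero grep exit status, which should not abort the step before the count is compared.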
Re: _getmaxstdio / _setmaxstdio
On Thursday, October 10, 2019 5:03:29 PM MDT Damian via Digitalmars-d-learn wrote:
> Missing _getmaxstdio / _setmaxstdio?
> I'd like to try and increase the limit of open files without
> resorting to Windows API, is it possible or will I have to resort
> to the WinAPI to achieve this?

Phobos doesn't have anything like that (and if it did, it would just be a cross-platform wrapper which used the Windows API on Windows and whatever the equivalent would be on other platforms, assuming that there even is an equivalent on other platforms). So, unless there's a library on code.dlang.org which has such a wrapper, you'll need to use the Windows API directly. But it's not like that would be hard. All you'd need to do is write the appropriate function declaration (which would probably be extern(Windows) in this case) and make sure that you're linked against the appropriate library. Given that the header is stdio.h, that would presumably be Microsoft's C runtime library, which probably means that you'd need to tell dmd to use Microsoft's C runtime and not dmc's C runtime. - Jonathan M Davis
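As a concrete sketch of such a hand-rolled binding: the function names come from Microsoft's CRT, but the extern(C) linkage and the little wrapper are my assumptions (these are plain cdecl C runtime functions rather than stdcall WinAPI ones), so verify both against your toolchain.

```d
// Hypothetical binding for the MS C runtime's stdio-limit functions.
// ASSUMPTION: extern(C) (cdecl) linkage; link against Microsoft's CRT.
version (Windows)
{
    extern (C) nothrow @nogc
    {
        int _getmaxstdio();
        int _setmaxstdio(int newMax);
    }

    void raiseStdioLimit(int newMax)
    {
        import std.stdio : writeln;

        writeln("old limit: ", _getmaxstdio());
        if (_setmaxstdio(newMax) == -1)   // -1 signals failure
            writeln("could not raise the limit to ", newMax);
    }
}
```

On non-Windows platforms the version block compiles away to nothing, so the module can still be part of a cross-platform build.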
Re: How Different Are Templates from Generics
On Friday, 11 October 2019 at 17:50:42 UTC, Jonathan M Davis wrote: Generic functions and types operate on Object underneath the hood. If you have Container<Foo> and Container<Bar>, you really just have Container<Object> with some syntactic niceties to avoid explicit casts. You get type checks to ensure that Container<Foo> isn't given a Bar unless Bar is derived from Foo, and the casts to and from Object when giving Container<Foo> a Foo are taken care of for you, but it's still always Container<Object> underneath the hood. In the case of Java, the type of T in Container<T> or foo<T>() is truly only a compile-time thing, so the bytecode only has Container<Object> and no clue what type is actually supposed to be used (the casts are there where the container or function is used, but the container or function has no clue what the type is; it just sees Object). That makes it possible to cheat with reflection and put something not derived from Foo in Container<Foo>, but that will then usually result in runtime failures when the casts the compiler inserted are run. C# doesn't have that kind of type erasure, in that the information that Container<Foo> contains Foo rather than Object is maintained at runtime, but you still have a Container<Object>. It's just a Container<Object> with some metadata which keeps track of the fact that for this particular object of Container<Object>, Object is always supposed to be a Foo. As I'm a lot less familiar with C# than Java, I'm not all that familiar with what the practical benefits of that are, though I'd expect that it would mean that reflection code would catch when you're trying to put a Bar into Container<Foo> and wouldn't let you. Note that for generics to work, they have to have a common base type, and you only ever get one version of a generic class or function even if it gets used with many different types derived from Object.
For a primitive type like int or float (as well as for structs in the case of C#), they have to be put into a type derived from Object in order to be used with generics (as I expect you're aware, C# calls this boxing and unboxing). Templates don't act like this at all. Unlike Java, C# actually does generate different code pieces for different value types [1] and reuses the same generated code for reference types. [1] https://alexandrnikitin.github.io/blog/dotnet-generics-under-the-hood/
Re: GtkD ListG Howto?
On Friday, 11 October 2019 at 20:40:25 UTC, mipri wrote: I get the segfault to go away with ListG list = new ListG(null); which is usage you can find in APILookupGLib.txt Ah! Thanks, mipri. I didn't think to read through the unit tests.
Re: Template mixin + operator overloading question
On Friday, 11 October 2019 at 13:13:46 UTC, Dennis wrote: On Friday, 11 October 2019 at 12:45:59 UTC, Boyan Lazov wrote: Any ideas what I'm doing wrong? Nothing, it's a bug. https://issues.dlang.org/show_bug.cgi?id=19476 Alright, I see. Well, the alias workaround works, so that seems just as good. Thanks!
Re: GtkD ListG Howto?
On Friday, 11 October 2019 at 19:53:33 UTC, Ron Tarrant wrote: Pixbuf airportImage1, airportImage2, airportImage3, airportImage4; void * image1, image2, image3, image4; airportImage1 = new Pixbuf("images/airport_25.png"); airportImage2 = new Pixbuf("images/airport_35.png"); airportImage3 = new Pixbuf("images/airport_60.png"); airportImage4 = new Pixbuf("images/airport_100.png"); image1 = image2 = image3 = image4 = ListG listG = null; I get the segfault to go away with ListG list = new ListG(null); which is usage you can find in APILookupGLib.txt listG = listG.append(image1); listG = listG.append(image2); listG = listG.append(image3); listG = listG.append(image4); setIconList(listG);
GtkD ListG Howto?
Hi all, I'm trying to add an icon list to a GTK Window using setIconList(), but what the function expects is a ListG of Pixbufs. The way I understand it, I have to instantiate the Pixbufs, build a ListG of void pointers to the Pixbufs, and pass that to setIconList(). Here is how I assume this process would play out:

```
Pixbuf airportImage1, airportImage2, airportImage3, airportImage4;
void* image1, image2, image3, image4;

airportImage1 = new Pixbuf("images/airport_25.png");
airportImage2 = new Pixbuf("images/airport_35.png");
airportImage3 = new Pixbuf("images/airport_60.png");
airportImage4 = new Pixbuf("images/airport_100.png");

image1 = image2 = image3 = image4 = null;
ListG listG = null;

listG = listG.append(image1);
listG = listG.append(image2);
listG = listG.append(image3);
listG = listG.append(image4);

setIconList(listG);
```

But this, although it compiles, just dies when it hits all those append() statements. Would someone please tell me where I'm going off track?
Re: How Different Are Templates from Generics
On Friday, October 11, 2019 12:09:20 PM MDT Just Dave via Digitalmars-d-learn wrote:
> Thanks for the thorough explanation. Most of that is how I was
> thinking it worked. However, that leaves me perplexed. If
> templates just generate code then how come:
>
> Wouldn't..
>
> class SomeClass(T) : ISomeInterface!T
>
> and..
>
> class SomeOtherClass(T) : ISomeInterface!T
>
> ...generate two different interfaces? Two interfaces that do the
> same thing, but two interfaces nonetheless? I assume each type in
> D has some form of type id underlying everything, so wouldn't
> that make the following:
>
> if (instance1 is ISomeInterface)
> {
>     Console.WriteLine("Instance1 is interface!");
> }
>
> fail? Or is there some extra magic that is making it work with my
> experiments?

You get a different template instantiation for each set of template arguments. So, if you have ISomeInterface!int, and you use ISomeInterface!int somewhere else, because they're both instantiating ISomeInterface with the same set of template arguments, you only get one instantiation. So, class SomeClass : ISomeInterface!int and class SomeOtherClass : ISomeInterface!int would both be implementing the exact same interface. And if you then have class SomeClass(T) : ISomeInterface!T and class SomeOtherClass(T) : ISomeInterface!T then SomeClass!int and SomeOtherClass!int would both be implementing the same interface, because in both cases, it would be ISomeInterface!int. SomeClass!int and SomeOtherClass!float would not be implementing the same interface, because it would be ISomeInterface!int and ISomeInterface!float, but ISomeInterface!int doesn't result in multiple instantiations even if it's used in different parts of the code. - Jonathan M Davis
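That instantiation sharing can be shown directly in D; a minimal sketch with throwaway names:

```d
interface ISomeInterface(T) { T getValue(); }

class SomeClass(T) : ISomeInterface!T
{
    private T t;
    this(T t) { this.t = t; }
    T getValue() { return t; }
}

class SomeOtherClass(T) : ISomeInterface!T
{
    private T t;
    this(T t) { this.t = t; }
    T getValue() { return t; }
}

void main()
{
    // Both classes implement the single shared instantiation ISomeInterface!int,
    // so their instances can live in one array of that interface type.
    ISomeInterface!int[] xs = [new SomeClass!int(4), new SomeOtherClass!int(2)];
    assert(xs[0].getValue() + xs[1].getValue() == 6);

    // Different template arguments yield genuinely different interface types.
    static assert(!is(ISomeInterface!int == ISomeInterface!float));
}
```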
Re: How Different Are Templates from Generics
Thanks for the thorough explanation. Most of that is how I was thinking it worked. However, that leaves me perplexed. If templates just generate code, then how come... Wouldn't

class SomeClass(T) : ISomeInterface!T

and

class SomeOtherClass(T) : ISomeInterface!T

...generate two different interfaces? Two interfaces that do the same thing, but two interfaces nonetheless? I assume each type in D has some form of type id underlying everything, so wouldn't that make the following:

if (instance1 is ISomeInterface)
{
    Console.WriteLine("Instance1 is interface!");
}

fail? Or is there some extra magic that is making it work with my experiments?
Re: Undefined symbol: _dyld_enumerate_tlv_storage (OSX)
On 2019-10-11 18:48, Robert M. Münch wrote: On 2019-10-10 18:31:25 +, Daniel Kozak said: What dmd version? I think I had an older one like 2.085 or so. I updated to 2.088 and it now seems to work. https://issues.dlang.org/show_bug.cgi?id=20019 I'm on OSX 10.14.6, so this might not be directly related to Catalina but maybe more to the XCode Version installed: | => xcrun --show-sdk-version 10.15 So, it's possible to run 10.14 with SDK version 10.15 which seems to trigger the problem. No, I don't think that's the problem. I have the same setup and I don't have this problem. What result do you get if you run the following command: nm /usr/lib/system/libdyld.dylib | grep _dyld_enumerate_tlv_storage -- /Jacob Carlborg
Re: How Different Are Templates from Generics
On Friday, October 11, 2019 8:43:49 AM MDT Just Dave via Digitalmars-d-learn wrote:
> I come from both a C++ and C# background. Those have been the
> primary languages I have used. In C# you can do something like
> this:
>
> public interface ISomeInterface<T>
> {
>     T Value { get; }
> }
>
> public class SomeClass<T> : ISomeInterface<T>
> {
>     T Value { get; set; }
> }
>
> public class SomeOtherClass<T> : ISomeInterface<T>
> {
>     T Value { get; set; }
> }
>
> public static class Example
> {
>     public static void Foo()
>     {
>         var instance1 = new SomeClass<int>(){ Value = 4; };
>         var instance2 = new SomeClass<int>(){ Value = 2; };
>
>         if (instance1 is ISomeInterface<int>)
>         {
>             Console.WriteLine("Instance1 is interface!");
>         }
>
>         if (instance2 is ISomeInterface<int>)
>         {
>             Console.WriteLine("Instance2 is interface!");
>         }
>     }
> }
>
> Expected output is both WriteLines get hit:
>
> Instance1 is interface!
> Instance2 is interface!
>
> So now the 'D' version:
>
> interface ISomeInterface(T)
> {
>     T getValue();
> }
>
> class SomeClass(T) : ISomeInterface!T
> {
> private:
>     T t;
>
> public:
>     this(T t)
>     {
>         this.t = t;
>     }
>
>     T getValue()
>     {
>         return t;
>     }
> }
>
> class SomeOtherClass(T) : ISomeInterface!T
> {
> private:
>     T t;
>
> public:
>     this(T t)
>     {
>         this.t = t;
>     }
>
>     T getValue()
>     {
>         return t;
>     }
> }
>
> ...which seems to work the same way with preliminary testing. I
> guess my question is...templates are different than generics, but
> can I feel confident continuing forward with such a design in D
> and expect this more or less to behave as I would expect in C#?
> Or are there lots of caveats I should be aware of?

Generics and templates are syntactically similar but are really doing very different things. Generic functions and types operate on Object underneath the hood. If you have Container<Foo> and Container<Bar>, you really just have Container<Object> with some syntactic niceties to avoid explicit casts.
You get type checks to ensure that Container<Foo> isn't given a Bar unless Bar is derived from Foo, and the casts to and from Object when giving Container<Foo> a Foo are taken care of for you, but it's still always Container<Object> underneath the hood. In the case of Java, the type of T in Container<T> or foo<T>() is truly only a compile-time thing, so the bytecode only has Container<Object> and no clue what type is actually supposed to be used (the casts are there where the container or function is used, but the container or function has no clue what the type is; it just sees Object). That makes it possible to cheat with reflection and put something not derived from Foo in Container<Foo>, but that will then usually result in runtime failures when the casts the compiler inserted are run. C# doesn't have that kind of type erasure, in that the information that Container<Foo> contains Foo rather than Object is maintained at runtime, but you still have a Container<Object>. It's just a Container<Object> with some metadata which keeps track of the fact that for this particular object of Container<Object>, Object is always supposed to be a Foo. As I'm a lot less familiar with C# than Java, I'm not all that familiar with what the practical benefits of that are, though I'd expect that it would mean that reflection code would catch when you're trying to put a Bar into Container<Foo> and wouldn't let you. Note that for generics to work, they have to have a common base type, and you only ever get one version of a generic class or function even if it gets used with many different types derived from Object. For a primitive type like int or float (as well as for structs in the case of C#), they have to be put into a type derived from Object in order to be used with generics (as I expect you're aware, C# calls this boxing and unboxing). Templates don't act like this at all. Templates are literally templates for generating code. A template is nothing by itself.
Something like struct Container(T) { T[] data; } or T foo(T)(T t) { return t; } doesn't result in any code being in the binary unless the template is instantiated with a specific type, and when that template is instantiated, code is generated based on the type that it's instantiated with. So, Container!int and Container!Foo result in two different versions of Container being generated and put in the binary: one which operates on int, and one which operates on Foo. There is no conversion to Object going on here. The code literally uses int and Foo directly and is generated specifically for those types. Not only does that mean that the generated code can be optimized for the specific type rather than being for any Object, but it also means that the code itself could do
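The per-type code generation is easy to observe in D itself; a short sketch using throwaway names:

```d
struct Container(T) { T[] data; }

T identity(T)(T t) { return t; }

void main()
{
    // Each set of template arguments stamps out an unrelated type,
    // with code generated specifically for that element type.
    static assert(!is(Container!int == Container!double));

    Container!int c;
    c.data ~= 42;                      // operates directly on int; no boxing
    assert(identity(c.data[0]) == 42); // identity!int, generated on demand
}
```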
Re: How Different Are Templates from Generics
On Friday, 11 October 2019 at 14:43:49 UTC, Just Dave wrote: I come from both a C++ and C# background. Those have been the primary languages I have used. Probably the D templates relate to C# generics the same way that C++ templates do.
Re: Undefined symbol: _dyld_enumerate_tlv_storage (OSX)
On 2019-10-10 18:31:25 +, Daniel Kozak said: What dmd version? I think I had an older one like 2.085 or so. I updated to 2.088 and it now seems to work. https://issues.dlang.org/show_bug.cgi?id=20019 I'm on OSX 10.14.6, so this might not be directly related to Catalina but maybe more to the XCode Version installed: | => xcrun --show-sdk-version 10.15 So, it's possible to run 10.14 with SDK version 10.15 which seems to trigger the problem. Thanks for the hints. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
Re: Fastest way to check if a predicate can take a single parameter of a specific type
On Friday, 11 October 2019 at 09:05:25 UTC, Per Nordlöw wrote: I want to check whether auto _ = pred(T.init); static assert(is(typeof(_) == bool)); compiles or not. Which one __traits(compiles, { auto _ = pred(T.init); static assert(is(typeof(_) == bool)); }) and is(typeof(pred(T.init)) == bool) is preferred compilation performance-wise? Seems like the second one requires strictly less semantic analysis and is shorter and easier to read, so I'd go with that. Note that `pred(T.init)` will fail if pred accepts its argument by ref. If you want to handle that case, you have to do something like is(ReturnType!((T arg) => pred(arg)) == bool)
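Putting Dennis's suggestion into a reusable trait might look like this sketch (the name `isBoolPredicate` is invented here):

```d
import std.traits : ReturnType;

// Does pred accept a single T (possibly by ref) and yield bool?
enum bool isBoolPredicate(alias pred, T) =
    is(ReturnType!((T arg) => pred(arg)) == bool);

void main()
{
    static assert(isBoolPredicate!(x => x > 0, int));
    static assert(!isBoolPredicate!(x => x + 1, int)); // yields int, not bool

    // The lambda wrapper makes an lvalue, so by-ref predicates work too.
    static bool byRef(ref int x) { return x == 0; }
    static assert(isBoolPredicate!(byRef, int));
}
```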
How Different Are Templates from Generics
I come from both a C++ and C# background. Those have been the primary languages I have used. In C# you can do something like this:

```
public interface ISomeInterface<T>
{
    T Value { get; }
}

public class SomeClass<T> : ISomeInterface<T>
{
    T Value { get; set; }
}

public class SomeOtherClass<T> : ISomeInterface<T>
{
    T Value { get; set; }
}

public static class Example
{
    public static void Foo()
    {
        var instance1 = new SomeClass<int>(){ Value = 4; };
        var instance2 = new SomeClass<int>(){ Value = 2; };

        if (instance1 is ISomeInterface<int>)
        {
            Console.WriteLine("Instance1 is interface!");
        }

        if (instance2 is ISomeInterface<int>)
        {
            Console.WriteLine("Instance2 is interface!");
        }
    }
}
```

Expected output is both WriteLines get hit:

Instance1 is interface!
Instance2 is interface!

So now the 'D' version:

```
interface ISomeInterface(T)
{
    T getValue();
}

class SomeClass(T) : ISomeInterface!T
{
private:
    T t;

public:
    this(T t)
    {
        this.t = t;
    }

    T getValue()
    {
        return t;
    }
}

class SomeOtherClass(T) : ISomeInterface!T
{
private:
    T t;

public:
    this(T t)
    {
        this.t = t;
    }

    T getValue()
    {
        return t;
    }
}
```

...which seems to work the same way with preliminary testing. I guess my question is...templates are different than generics, but can I feel confident continuing forward with such a design in D and expect this more or less to behave as I would expect in C#? Or are there lots of caveats I should be aware of?
Blog Post #78: Notebook, Part II
Continuing the series on the GTK Notebook, we look at multiple tabs, reordering tabs, and stuffing images into tabs. Exciting stuff, no? Here it is: https://gtkdcoding.com/2019/10/11/0078-notebook-ii-multiple-tabs.html
Re: Template mixin + operator overloading question
On Friday, 11 October 2019 at 12:45:59 UTC, Boyan Lazov wrote: Any ideas what I'm doing wrong? Nothing, it's a bug. https://issues.dlang.org/show_bug.cgi?id=19476
Template mixin + operator overloading question
Hello, I seem to have a problem when I use a template mixin and then try to overload operators both in the mixin and in a struct where it's instantiated. The idea is that I have a few operators defined in the mixin, then a few more in the struct, and I want to forward all operations not explicitly defined in the struct to the ones in the mixin (and I don't want to use alias this for a variety of unrelated reasons). The simplest code that gives me problems is:

```
import std.stdio;

mixin template Impl(T)
{
    T v;

    int opBinary(string s: "+")(T other)
    {
        writeln("Single +");
        return 0;
    }

    int opBinary(string s: "+")(T[] other)
    {
        writeln("Array +");
        return 0;
    }
}

struct Pt
{
    mixin Impl!float impl;

    int opBinary(string s: "*")(float other)
    {
        writeln("Single *");
        return 0;
    }

    int opBinary(string s, T)(T v)
    {
        // Pt already has opBinary defined, so the operators in the mixin are not visible.
        // Thought that delegating to the mixin should be done this way.
        writeln("Delegate ", s);
        return impl.opBinary!(s)(v);
    }
}

void main()
{
    Pt pt;
    int r = pt + [1f, 2f];
    writeln("R: ", r);
}
```

This results in an infinite loop, though. It seems Pt.opBinary!("+", float[]) is called over and over, which is a bit unintuitive to me; I'm not sure why calling impl.opBinary!(s) can result in endless recursion. The problem appears only when I have two operators in the mixin that I try to forward to, e.g. the two overloads of "+" in this case. If I have just one, it works OK; if I have none, the error message is helpful. Any ideas what I'm doing wrong? Thanks!
Re: Undefined symbol: _dyld_enumerate_tlv_storage (OSX)
On 2019-10-10 20:12, Robert M. Münch wrote:
> I have two project I want to compile and both times get this error:
> Undefined symbols for architecture x86_64:
> "_dyld_enumerate_tlv_storage", referenced from:
> __d_dyld_getTLSRange in libphobos2.a(osx_tls.o)
> I'm wondering where this comes from as I didn't see it in the past. Any idea?

Any D application needs to be compiled with DMD 2.087.1 or later (or the corresponding version of LDC) to be able to run on macOS Catalina. That includes DMD itself. The oldest version of DMD that runs on Catalina is 2.088.0, since any given version of DMD is compiled with the previous version. That means that all D applications out there for macOS need to be recompiled. -- /Jacob Carlborg
Fastest way to check if a predicate can take a single parameter of a specific type
I want to check whether auto _ = pred(T.init); static assert(is(typeof(_) == bool)); compiles or not. Which one is preferred compilation-performance-wise: __traits(compiles, { auto _ = pred(T.init); static assert(is(typeof(_) == bool)); }) or is(typeof(pred(T.init)) == bool)?
Re: Functional Programming in D
On Thursday, 10 October 2019 at 16:05:13 UTC, bachmeier wrote: On Thursday, 10 October 2019 at 08:59:49 UTC, Russel Winder wrote: My impression is that the complaints about Scala are similar to C++: too many features that clash with one another and make the language complicated, plus extremely slow compilation times. I haven't seen a lot of complaints about mixing imperative and functional. Scala compile times are slow because Scala has more compilation phases than C++. I guess that is because of feature bloat in the language as such, and IMHO not necessarily because FP and OOP are mixed into the same language. Scala is just packed with too many language constructs, which are in many cases also quite extensive. Then there is the problem of implicit conversions in Scala not scaling in compilation time; see https://dzone.com/articles/implicits-scala-conversion In Scala3 (due to be released in spring 2020) implicits were replaced by what they call delegates (and extension methods were introduced). Whether that reduces compilation times I don't know. But the compiler in Scala3 is based on a completely new approach to further reduce compilation times. However, Scala3 is effectively a new language. Whether people will make the move from Scala to Scala3 remains to be seen.
Re: D man pages
On Thursday, 10 October 2019 at 19:19:42 UTC, Daniel Kozak wrote: On Thursday, 10 October 2019 at 18:52:32 UTC, Jarek wrote: On Monday, 23 September 2019 at 12:31:16 UTC, Adam D. Ruppe wrote: [...] Hello, thanks for the reply. This is my first dlang work: import std.stdio; import std.conv; import core.sys.posix.dirent; [...] You should use fromStringz: https://dlang.org/phobos/std_string.html#.fromStringz Thanks, now it works.
Re: Undefined symbol: _dyld_enumerate_tlv_storage (OSX)
On Thursday, 10 October 2019 at 18:31:25 UTC, Daniel Kozak wrote: What dmd version? https://issues.dlang.org/show_bug.cgi?id=20019 Ah, I should have read this before replying; that's precisely the issue I had.
Re: How can I make a program which uses all cores and 100% of cpu power?
On Fri, 2019-10-11 at 00:41 +0000, Murilo via Digitalmars-d-learn wrote:
> I have started working with neural networks and for that I need a
> lot of computing power but the programs I make only use around
> 30% of the cpu, or at least that is what Task Manager tells me.
> How can I make it use all 4 cores of my AMD FX-4300 and how can I
> make it use 100% of it?

Why do you want to get CPU utilisation to 100%? I would have thought you'd want to get the neural net to be as fast as possible; this does not necessarily imply that all CPU cycles must be used. A neural net is, at its heart, a set of communicating nodes. This is as much an I/O-bound model as it is a compute-bound one: nodes are generally waiting for input as much as they are computing a value. The obvious solution architecture for a small computer is to create a task per node on a thread pool, with a few more threads in the pool than you have processors, and hope that you can organise the communication between tasks so as to avoid cache misses. This can be tricky when using multi-core processors. It gets even worse when you have hyperthreads: many organisations doing CPU-bound computations switch off hyperthreads, as they cause more problems than they solve. -- Russel. === Dr Russel Winder t: +44 20 7585 2200 m: +44 7770 465 077 41 Buckmaster Road, London SW11 1EN, UK w: www.russel.org.uk
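For the common case of a CPU-bound loop (rather than a graph of communicating nodes), D's std.parallelism already provides such a pool. A minimal sketch with a hypothetical workload:

```d
import std.parallelism : parallel;
import std.range : iota;

// Square the numbers 0 .. n in parallel across all cores.
long[] parallelSquares(size_t n)
{
    auto result = new long[n];

    // parallel() runs the loop body on taskPool's worker threads
    // (totalCPUs - 1 workers plus the calling thread by default),
    // so a CPU-bound body keeps every core busy.
    foreach (i; parallel(iota(n)))
        result[i] = cast(long)(i * i);

    return result;
}

void main()
{
    auto squares = parallelSquares(10_000);
    assert(squares[100] == 10_000);
}
```

As Russel notes, saturating the cores this way only pays off when the body does real computation; for communication-heavy workloads the pool spends its time waiting instead.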
Re: Undefined symbol: _dyld_enumerate_tlv_storage (OSX)
On Thursday, 10 October 2019 at 18:12:51 UTC, Robert M. Münch wrote: I have two project I want to compile and both times get this error: Undefined symbols for architecture x86_64: "_dyld_enumerate_tlv_storage", referenced from: __d_dyld_getTLSRange in libphobos2.a(osx_tls.o) I'm wondering where this comes from as I didn't see it in the past. Any idea? I had the same missing symbol at runtime, when trying to run an already compiled binary (LDC 1.16 I think) after a Catalina update. In that case, recompiling (LDC 1.17 or DMD 2.088.0) was apparently enough to mitigate the issue.
Re: How can I make a program which uses all cores and 100% of cpu power?
On 10/10/2019 05:41 PM, Murilo wrote:
> I have started working with neural networks and for that I need a lot of
> computing power but the programs I make only use around 30% of the cpu,
> or at least that is what Task Manager tells me. How can I make it use
> all 4 cores of my AMD FX-4300 and how can I make it use 100% of it?

Your threads must allocate as little memory as possible, because memory allocation can trigger garbage collection, and garbage collection stops all threads (except the one that's performing the collection). We studied the effects of different allocation schemes during our last local D meetup[1]. The following program has two similar worker threads. One allocates in an inner scope; the other uses a static Appender and clears its state as needed. The program sets 'w' to 'worker' inside main(). Change it to 'worker2' to see a huge difference: on my 4-core laptop it's 100% versus 400% CPU usage.

```
import std.random;
import std.range;
import std.array;
import std.algorithm;
import std.concurrency;
import std.parallelism;

enum inner_N = 100;

void worker() {
    ulong result;
    while (true) {
        int[] arr;
        foreach (j; 0 .. inner_N) {
            arr ~= uniform(0, 2);
        }
        result += arr.sum;
    }
}

void worker2() {
    ulong result;
    static Appender!(int[]) arr;
    while (true) {
        arr.clear();
        foreach (j; 0 .. inner_N) {
            arr ~= uniform(0, 2);
        }
        result += arr.data.sum;
    }
}

void main() {
    // Replace with 'worker2' to see the speedup
    alias w = worker;
    auto workers = totalCPUs.iota.map!(_ => spawn(&w)).array;
    w();
}
```

The static Appender is thread-safe because each thread gets its own copy, due to data being thread-local by default in D. However, that doesn't mean the functions are reentrant: if they get called recursively, perhaps indirectly, then the subsequent executions would corrupt previous executions' Appender states. Ali [1] https://www.meetup.com/D-Lang-Silicon-Valley/events/kmqcvqyzmbzb/ Are you someone in the Bay Area but do not come to our meetups? We've been eating your falafel wraps! ;)