Re: `this` and nested structs
On Thursday, 10 May 2018 at 03:23:50 UTC, Mike Franklin wrote:
> Consider the following code:
>
> ---
> struct S
> {
>     // intentionally not `static`
>     struct SS
>     {
>         int y() { return x; }  // Error: need `this` for `x` of type `int`
>     }
>     int x;
>     SS ss;
> }
>
> void main()
> {
>     S s;
>     s.ss.y();
> }
> ---
>
> If I change `return x;` to `return this.x;` then of course it emits the following error:
>
>     Error: no property `x` for type `SS`
>
> My understanding is that `SS` should have a context pointer to an instance of `S`, but how do I navigate the members of `S` and `SS`? Is this a bug?
>
> My understanding is that nested structs have an implicit context pointer to their containing scope.

Nesting with a hidden context pointer only applies to structs nested inside functions, not structs nested inside other aggregates: https://dlang.org/spec/struct.html#nested

This is a source of confusion, unfortunately.
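Since a struct nested inside another struct gets no context pointer, the usual workaround is an explicit back-pointer wired up by hand. A minimal sketch (my own example, not from the thread; the `outer` field name is my invention):

```d
import std.stdio;

struct S
{
    int x;

    // A struct nested in a struct has no hidden context pointer,
    // so SS cannot reach S.x implicitly; carry a back-pointer instead.
    static struct SS
    {
        S* outer;                   // explicit link to the enclosing instance
        int y() { return outer.x; }
    }

    SS ss;
}

void main()
{
    S s;
    s.x = 42;
    s.ss.outer = &s;                // wired up by hand
    writeln(s.ss.y());              // prints 42
}
```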
`this` and nested structs
Consider the following code:

---
struct S
{
    // intentionally not `static`
    struct SS
    {
        int y() { return x; }  // Error: need `this` for `x` of type `int`
    }
    int x;
    SS ss;
}

void main()
{
    S s;
    s.ss.y();
}
---

If I change `return x;` to `return this.x;` then of course it emits the following error:

    Error: no property `x` for type `SS`

My understanding is that `SS` should have a context pointer to an instance of `S`, but how do I navigate the members of `S` and `SS`? Is this a bug?

Thanks,
Mike

My understanding is that nested structs have an implicit context pointer to their containing scope.
Re: Binderoo additional language support?
On 05/09/2018 03:50 PM, Ethan wrote:
> On Tuesday, 8 May 2018 at 14:28:53 UTC, jmh530 wrote:
>> I don't really understand what to use binderoo for. So rather than fill out the questionnaire, maybe I would just recommend you do some work on the wiki, a blog post, or simple examples.
>
> Been putting that off until the initial proper stable release; it's still in a pre-release phase. But tl;dr - it acts as an intermediary layer between a host application written in C++/.NET and libraries written in D. And as it's designed for rapid iteration, it also supports recompiling the D libraries and reloading them on the fly. Full examples and documentation will be coming.

Would it make sense to build a REPL or Jupyter kernel on top of Binderoo?
Re: D GPU execution module: A survey of requirements.
On Thursday, 10 May 2018 at 00:10:07 UTC, H Paterson wrote:
> Welp... It's not quite what I would have envisioned, but it seems to fill the role. Thanks for pointing Dcompute out to me - I only found it mentioned in a dead link on the D wiki. Time to find a new project...

I'm sure the people who work on Dcompute (or libmir) would appreciate any help you're willing to provide.
Re: Binderoo additional language support?
On Wednesday, 9 May 2018 at 19:50:41 UTC, Ethan wrote:
> Been putting that off until the initial proper stable release; it's still in a pre-release phase. But tl;dr - it acts as an intermediary layer between a host application written in C++/.NET and libraries written in D. And as it's designed for rapid iteration, it also supports recompiling the D libraries and reloading them on the fly. Full examples and documentation will be coming.

Great. Thanks.
Re: D GPU execution module: A survey of requirements.
On Wednesday, 9 May 2018 at 23:37:14 UTC, Henry Gouk wrote:
> On Wednesday, 9 May 2018 at 23:26:19 UTC, H Paterson wrote:
>> Hello, I'm interested in writing a module for executing D code on GPUs. I'd like to bounce some ideas off D community members to see what this module needs to do. [...]
>
> Check out https://github.com/libmir/dcompute

Welp... It's not quite what I would have envisioned, but it seems to fill the role. Thanks for pointing Dcompute out to me - I only found it mentioned in a dead link on the D wiki. Time to find a new project...
[Issue 15609] Populate vtable in debuginfo
https://issues.dlang.org/show_bug.cgi?id=15609

--- Comment #1 from Manu ---
MSVC emits the vptr as "__vfptr", which is a pointer to an array of void*. Please have DMD match this array-of-void* convention, so we can inspect the state of the vtable in D just the same as in C++.

--
Re: D GPU execution module: A survey of requirements.
On Wednesday, 9 May 2018 at 23:26:19 UTC, H Paterson wrote:
> Hello, I'm interested in writing a module for executing D code on GPUs. I'd like to bounce some ideas off D community members to see what this module needs to do.

What about the DCompute project? [...]
Re: D GPU execution module: A survey of requirements.
On Wednesday, 9 May 2018 at 23:26:19 UTC, H Paterson wrote:
> Hello, I'm interested in writing a module for executing D code on GPUs. I'd like to bounce some ideas off D community members to see what this module needs to do. [...]

Check out https://github.com/libmir/dcompute
D GPU execution module: A survey of requirements.
Hello,

I'm interested in writing a module for executing D code on GPUs. I'd like to bounce some ideas off D community members to see what this module needs to do. I'm conducting this survey in case my project becomes sufficiently developed to be contributed to the D standard library. I'd appreciate it if you could take the time to share your thoughts on how a GPU execution module should work, and to critique my ideas.

---

First, the user interface. I haven't thought about what a good (and D-style) API for GPU execution should look like, and I'd appreciate open suggestions. I've got a strong preference for code that encapsulates *all* the details of GPU control, and was thinking about using D's template system to pass individual functions to the GPU executor:

```d
import GPUCompute;

int[] doubleArray(int[] input)
{
    foreach (ref value; input)
    {
        value *= 2;
    }
    return input;
}

auto gpuDoubleArray = generateGPUFunction!doubleArray;
gpuDoubleArray([1, 2, 3]);
```

Can you provide feedback on this idea? What other approaches can you think of? I expect an initial version will only support functions for GPU execution, but I suppose if the module becomes successful I could expand the code to buffer and manipulate entire classes on the GPU. (How important is GPU object execution to you?)

---

Secondly, internal workings. Currently, I'm thinking of using CTFE to generate OpenCL or Vulkan compute kernels for each requested GPU function. The module can encapsulate code to compile the compute kernels for the native GPU at runtime, probably when either the user program opens or the GPU execution module enters scope. The hard part of this will be getting access to the AST that the compiler generates for the functions. I still need to research this, and I'd appreciate being directed to any relevant material.

I'd rather not have to import large parts of DMD into what I hope will eventually be part of the D standard library, but I think importing parts of DMD will be preferable to writing a new parser from scratch. Are there any alternative ideas for the general approach to the module?

---

Thirdly, open submissions. How would you use D GPU execution? What kind of tasks would you run; what would your code look like? How do you want your code to look? What are the minimum requirements the module needs to meet before it becomes useful to you?

---

Thanks for your time, and I apologize that my questions are open-ended and vague: this is all pre-planning work for now, and I don't promise it'll come to anything - even a draft or architecture plan.

Cheers,
H Paterson.
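To make the proposed interface concrete, here is one hypothetical shape for `generateGPUFunction` - everything below is my own assumption, not an existing API: the stub simply runs the wrapped function on the host, where a real implementation would CTFE-generate and dispatch a kernel.

```d
import std.stdio : writeln;

// Hypothetical sketch of the API floated above; the GPUCompute module
// does not exist yet, so this stub just executes on the CPU.
template generateGPUFunction(alias fn)
{
    auto generateGPUFunction(Args...)(Args args)
    {
        // A real implementation would emit an OpenCL/Vulkan kernel for
        // `fn` at compile time and enqueue it here.
        return fn(args);
    }
}

int[] doubleArray(int[] input)
{
    foreach (ref value; input)
        value *= 2;
    return input;
}

void main()
{
    writeln(generateGPUFunction!doubleArray([1, 2, 3])); // prints [2, 4, 6]
}
```

The eponymous-template shape lets callers write `generateGPUFunction!fn(args)` exactly as in the proposal, while leaving room for the kernel-generation machinery behind it.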
Re: Simple web server benchmark - vibe.d is slower than node.js and Go?
On Wednesday, 9 May 2018 at 21:55:15 UTC, Daniel Kozak wrote:
> On which system? AFAIK HTTPServerOption.reusePort works on Linux but maybe not on other OSes. The other question is which event driver is used (libasync, libevent, vibe-core).
>
> On Wed, May 9, 2018 at 9:12 PM, Arun Chandrasekaran via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>> [...]
>>
>> That could be the reason for slowness.

Ubuntu 17.10 64-bit, DMD v2.079.1, E7-4860, 8 cores, 32 GB RAM.

With a slight modification to capture the timestamp of the request on the server:

```d
import std.datetime.systime : Clock;

auto tm = Clock.currTime().toISOExtString();
writeln(tm, " My Thread Id: ", to!string(thisThreadID));
// simulate long running task
Thread.sleep(dur!("seconds")(3));

if (req.path == "/")
    res.writeBody(tm ~ " Hello, World! from " ~ to!string(thisThreadID), "text/plain");
```

Launch two parallel curls, and here is the server log:

```
Master 13284 is running
[vibe-6(5fQI) INF] Listening for requests on http://0.0.0.0:8080/
[vibe-7(xljY) INF] Listening for requests on http://0.0.0.0:8080/
[vibe-2(FVCk) INF] Listening for requests on http://0.0.0.0:8080/
[vibe-3(peZP) INF] Listening for requests on http://0.0.0.0:8080/
[vibe-8(c5pQ) INF] Listening for requests on http://0.0.0.0:8080/
[vibe-4(T/oM) INF] Listening for requests on http://0.0.0.0:8080/
[vibe-5(zc5i) INF] Listening for requests on http://0.0.0.0:8080/
[vibe-1(Rdux) INF] Listening for requests on http://0.0.0.0:8080/
[vibe-0(PNMK) INF] Listening for requests on http://0.0.0.0:8080/
2018-05-09T15:32:41.5424275 My Thread Id: 140129463940864
2018-05-09T15:32:44.5450092 My Thread Id: 140129463940864
2018-05-09T15:32:56.3998322 My Thread Id: 140129463940864
2018-05-09T15:32:59.4022579 My Thread Id: 140129463940864
2018-05-09T15:33:12.4973215 My Thread Id: 140129463940864
2018-05-09T15:33:15.4996923 My Thread Id: 140129463940864
```

PS: Your top posting makes reading your replies difficult.
Re: is the ubuntu sourceforge repository safe?
On Tuesday, 1 August 2017 at 10:01:17 UTC, Michael wrote:
> On Monday, 24 July 2017 at 11:02:55 UTC, Russel Winder wrote:
>> On Sun, 2017-07-23 at 18:23 +, Michael via Digitalmars-d wrote:
>>> I stopped using it. It kept causing error messages in my package manager and I couldn't update it properly, so I've just stuck to downloading the updates on release.
>>
>> If we are talking about D-Apt here http://d-apt.sourceforge.net/ it seems to be working fine for me on Debian Sid. 2.075 just installed this morning.
>
> I stopped using it a while ago as it was constantly causing me problems with being unable to check for new package updates. It was right when sourceforge was issuing security warnings and I couldn't be bothered to try and deal with it.

Just following up on this, because I had the same problem:

1. Use wget or curl to download the .deb right from the archive:

       wget http://downloads.dlang.org/releases/2018/dmd_2.080.0-0_i386.deb

2. Try to install it with dpkg:

       dpkg -i dmd_2.080.0-0_i386.deb

   If you experience errors, add the following steps. If not, skip them.

3. Update your cache:

       sudo apt-get update

4. Download the dependencies, if you need to. In my case, I needed libc6-dev and gcc, which you *would normally* install like so:

       sudo apt-get install libc6-dev gcc

   But I had errors when trying to do that, which were resolved by running:

       sudo apt --fix-broken install

5. Finally, run `dmd --version` to test that it works!
Re: Simple web server benchmark - vibe.d is slower than node.js and Go?
On which system? AFAIK HTTPServerOption.reusePort works on Linux but maybe not on other OSes. The other question is which event driver is used (libasync, libevent, vibe-core).

On Wed, May 9, 2018 at 9:12 PM, Arun Chandrasekaran via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> [...]
>
> That could be the reason for slowness.
[Issue 18845] Extern(C++) class with no virtual functions
https://issues.dlang.org/show_bug.cgi?id=18845

--- Comment #2 from Manu ---
What does "safe casting" mean? You mean that it might require pointer adjustment?

I mean, it's absolutely necessary that when casting extern(C++) classes, some special logic is applied which may need to perform a pointer adjustment, just like when casting C++ classes in C++ ;) We can't escape that. We either need to support it, or it's just broken.

At the very least, we should emit an error when an extern(C++) class is declared with no virtual members, saying it's not supported.

--
[Issue 18846] New: VisualD - show vtable in debugger
https://issues.dlang.org/show_bug.cgi?id=18846

Issue ID: 18846
Summary: VisualD - show vtable in debugger
Product: D
Version: D2
Hardware: All
OS: All
Status: NEW
Severity: enhancement
Priority: P1
Component: visuald
Assignee: nob...@puremagic.com
Reporter: turkey...@gmail.com

Compile and debug this code (using Mago):

C++:
---
#include <stdio.h>

class Something
{
public:
    virtual void x();
    int data = 10;
};

void Something::x() { printf("X"); }

void dfunc(Something*);

void main()
{
    Something s;
    dfunc(&s);
}
---

D:
---
extern(C++) class Something
{
    abstract void x();
    int data;
}

extern(C++) void dfunc(Something s)
{
    s.x();
}
---

Put a breakpoint in dfunc(). If you inspect 's' in C++, you will see "__vfptr ..." and "data = 10". If you inspect 's' from D, you will only see "data = 10".

In C++, __vfptr is an array of "void*" that you can open and see a list of all the virtual functions. I really want to be able to see __vfptr from D's debuginfo too. At the very least, it will help to debug mis-alignments between C++ and extern(C++) vtables. It's very easy to create a vtable mismatch from D.

If the debuginfo doesn't want to emit this member (it should), then I think it might also be possible to fabricate its appearance with the natvis?

--
[Issue 18845] Extern(C++) class with no virtual functions
https://issues.dlang.org/show_bug.cgi?id=18845

Ethan Watson changed:

           What    |Removed    |Added
           CC      |           |goober...@gmail.com

--- Comment #1 from Ethan Watson ---
See: the giant workaround I had to do for Binderoo for this very thing in my DConf 2017 talk.

I believe the reason classes always have a vtable is so safe casting is still guaranteed. But that's just an assumption.

--
[Issue 18845] Extern(C++) class with no virtual functions
https://issues.dlang.org/show_bug.cgi?id=18845

Manu changed:

           What       |Removed    |Added
           Keywords   |           |C++

--
[Issue 18845] New: Extern(C++) class with no virtual functions
https://issues.dlang.org/show_bug.cgi?id=18845

Issue ID: 18845
Summary: Extern(C++) class with no virtual functions
Product: D
Version: D2
Hardware: All
OS: All
Status: NEW
Severity: enhancement
Priority: P1
Component: dmd
Assignee: nob...@puremagic.com
Reporter: turkey...@gmail.com

I just encountered a gotcha where D doesn't support extern(C++) classes with no vtable.

In C++, if a class has no virtual functions, there is no vtable. If a virtual is added to a derived class, the vtable is prepended to the base. Is it possible to replicate this in extern(C++)? We have had a bunch of gotchas related to this.

You can argue that a base class with no virtual functions is a struct, and that may be true, except that its *intent* is that it shall be derived from. D cannot derive a class from a struct, so the base-with-no-virtuals needs to be expressed as a class in D for the arrangement to translate and interact with D code.

--
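The mismatch described in the issue can be observed from the D side. A small sketch (my own example, not from the bug report): a D extern(C++) class always carries a vptr, so its instance is larger than the plain-data layout the matching vtable-less C++ type would have.

```d
import std.stdio : writeln;

// D gives every class a vtable, even extern(C++) ones with no virtuals,
// so the instance layout is vptr + fields...
extern(C++) class NoVirtuals
{
    int data;
}

// ...whereas the vtable-less C++ original is laid out like this struct:
// just the fields.
struct PlainData
{
    int data;
}

void main()
{
    // classInstanceSize includes the D-side vptr, so it exceeds the
    // plain struct layout - which is exactly the ABI mismatch reported.
    writeln(__traits(classInstanceSize, NoVirtuals) > PlainData.sizeof); // prints true
}
```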
Re: Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
On Wednesday, 9 May 2018 at 18:04:40 UTC, Per Nordlöw wrote:
> On Wednesday, 9 May 2018 at 17:52:48 UTC, Meta wrote:
>> I wasn't able to reproduce it on dmd-nightly: https://run.dlang.io/is/9wT8tH
>> What version of the compiler are you using?
>
> Ahh, the struct needs to be in a unittest block for it to happen:
>
> struct R { @disable this(this); int* _ptr; }
>
> unittest
> {
>     struct S { @disable this(this); int* _ptr; }
>     struct T { int* _ptr; }
>
>     pragma(msg, "R: ", typeof(R.tupleof));
>     pragma(msg, "S: ", typeof(S.tupleof));
>     pragma(msg, "T: ", typeof(T.tupleof));
> }
>
> prints
>
>     R: (int*)
>     S: (int*, void*)
>     T: (int*)
>
> Why is that?

It's a context pointer to the enclosing function/object/struct. Mark the struct as `static` to get rid of it.
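A minimal sketch of the mechanism being described (my own example, not from the thread): a function-local struct that touches the enclosing frame carries a hidden context pointer, which shows up as an extra trailing member in `.tupleof`; marking the struct `static` forbids frame access and drops the hidden member.

```d
import std.stdio;

void main()
{
    int captured = 7;

    // Accesses main's frame, so the compiler adds a hidden context
    // pointer that appears as a trailing void* in .tupleof.
    struct S { int* _ptr; int get() { return captured; } }

    // `static` structs cannot touch the frame and get no hidden member.
    static struct T { int* _ptr; }

    writeln(S.tupleof.length, " ", T.tupleof.length); // prints 2 1
}
```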
Re: Why The D Style constants are written in camelCase?
On Wednesday, 9 May 2018 at 15:20:12 UTC, Jonathan M Davis wrote:
> On Wednesday, May 09, 2018 14:12:41 Dmitry Olshansky via Digitalmars-d-learn wrote:
>> [...]
>
> To an extent that's true, but anyone providing a library for use by others in the D community should seriously consider following it with regard to public symbols, so that they're consistent with how stuff is named across the ecosystem. It's not the end of the world to use a library that did something like use PascalCase instead of camelCase for its function names, or which used lowercase and underscores for its type names, or did any number of other things which are perfectly legitimate but don't follow the D style. However, they tend to throw people off when they don't follow the naming style of the rest of the ecosystem and generally cause friction when using 3rd-party libraries.
> [...]

If this issue https://github.com/dlang-community/dfmt/issues/227 is fixed, we could potentially summarize that in an .editorconfig file so that anyone who wishes can easily adopt it.
Re: Need help with the dmd package on NixOS
On Friday, 4 May 2018 at 20:27:33 UTC, Thomas Mader wrote:
> The dmd package on NixOS doesn't work anymore in their master branch.

Since dmd still works correctly on the stable branch, I tried to examine the differences in libphobos2.a. I switched back to version 2.079.0 on master to have the same version as on stable.

First I diffed the outputs of 'ar -t libphobos2.a', but there weren't any differences. But executing 'nm libphobos2.a' results in many errors like:

```
nm: /home/thomad/devel/nixpkgs/result/lib/libphobos2.a(object_1_257.o): no group info for section .text._D6object6Object5opCmpMFCQqZi
nm: /home/thomad/devel/nixpkgs/result/lib/libphobos2.a(object_1_257.o): no group info for section .text._D6object6Object5opCmpMFCQqZi
nm: object_1_257.o: Bad value
nm: /home/thomad/devel/nixpkgs/result/lib/libphobos2.a(object_5_391.o): no group info for section .text._D6object9Interface9__xtoHashFNbNeKxSQBjQBfZm
nm: /home/thomad/devel/nixpkgs/result/lib/libphobos2.a(object_5_391.o): no group info for section .text._D6object9Interface9__xtoHashFNbNeKxSQBjQBfZm
nm: object_5_391.o: Bad value
```

nm on the static lib of the stable branch has just one error:

```
nm: threadasm.o: no symbols
```

Looking into the output of both nm calls shows that a lot of sections are missing in the static phobos lib on the master branch, the first being object_1_257.o. Now I wonder how something like that is possible.
Re: Binderoo additional language support?
On Monday, 7 May 2018 at 17:28:55 UTC, Ethan wrote: 13 responses so far. Cheers to those 13. 4 responses since that post. And all four have listed "Plain old ordinary C" as something they want supported. Classic. Now it's in front of every other option. Supporting C is step one to supporting Java too. So that's cool. And also Python. And Swift. And Rust. And basically everything. So I've gone and done a thing to my branch of Binderoo - I now generate C function wrappers alongside the C++ function wrappers. Environments that can support C++ calling conventions will absolutely want to stick to them, especially on x64 as the default calling conventions use registers extensively rather than pushing everything to the stack. And environments that don't support those conventions can stick to the C function pointer and be merry. I'll need to clean up my minimal C API and work on generating code for those languages before I can say "They're supported!" but at the very least the groundwork is there.
Re: Binderoo additional language support?
On Tuesday, 8 May 2018 at 14:28:53 UTC, jmh530 wrote:
> I don't really understand what to use binderoo for. So rather than fill out the questionnaire, maybe I would just recommend you do some work on the wiki, a blog post, or simple examples.

Been putting that off until the initial proper stable release; it's still in a pre-release phase.

But tl;dr - it acts as an intermediary layer between a host application written in C++/.NET and libraries written in D. And as it's designed for rapid iteration, it also supports recompiling the D libraries and reloading them on the fly.

Full examples and documentation will be coming.
Re: Simple web server benchmark - vibe.d is slower than node.js and Go?
On Monday, 30 October 2017 at 17:23:02 UTC, Daniel Kozak wrote:
> Maybe this one:
>
> ```d
> import vibe.d;
> import std.regex;
> import std.array : appender;
>
> static reg = ctRegex!"^/greeting/([a-z]+)$";
>
> void main()
> {
>     setupWorkerThreads(logicalProcessorCount);
>     runWorkerTaskDist(&runServer);
>     runApplication();
> }
>
> void runServer()
> {
>     auto settings = new HTTPServerSettings;
>     settings.options |= HTTPServerOption.reusePort;
>     settings.port = 3000;
>     settings.serverString = null;
>     listenHTTP(settings, &handleRequest);
> }
>
> void handleRequest(HTTPServerRequest req, HTTPServerResponse res)
> {
>     switch (req.path)
>     {
>     case "/":
>         res.writeBody("Hello World", "text/plain");
>         break;
>     default:
>         auto m = matchFirst(req.path, reg);
>         string message = "Hello, ";
>         auto app = appender(message);
>         app.reserve(32);
>         app ~= m[1];
>         res.writeBody(app.data, "text/plain");
>     }
> }
> ```

On Mon, Oct 30, 2017 at 5:41 PM, ade90036 via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Thursday, 21 September 2017 at 13:09:33 UTC, Daniel Kozak wrote:
>> Wrong version, this is my latest version: https://paste.ofcode.org/qWsQikdhKiAywgBpKwANFR
>>
>> On Thu, Sep 21, 2017 at 3:01 PM, Daniel Kozak wrote:
>>> My version: https://paste.ofcode.org/RLX7GM6SHh3DjBBHd7wshj
>>>
>>> On Thu, Sep 21, 2017 at 2:50 PM, Sönke Ludwig via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>>> Am 21.09.2017 um 14:41 schrieb Vadim Lopatin:
>>>>> [...]
>>>>
>>>> Oh, sorry, I forgot the reusePort option, so that multiple sockets can listen on the same port:
>>>>
>>>>     auto settings = new HTTPServerSettings("0.0.0.0:3000");
>>>>     settings.options |= HTTPServerOption.reusePort;
>>>>     listenHTTP(settings, &handleRequest);
>
> Hi, would it be possible to re-share the example of vibe.d with multithreaded support?
>
> The pastebin link has expired and the pull request doesn't have the latest version.
>
> Thanks
>
> Ade

With vibe.d 0.8.2, even when multiple worker threads are set up, only one thread handles the requests:

```d
import core.thread;
import vibe.d;
import std.experimental.all;

auto reg = ctRegex!"^/greeting/([a-z]+)$";

void main()
{
    writefln("Master %d is running", getpid());
    setupWorkerThreads(logicalProcessorCount + 1);
    runWorkerTaskDist(&runServer);
    runApplication();
}

void runServer()
{
    auto settings = new HTTPServerSettings;
    settings.options |= HTTPServerOption.reusePort;
    settings.port = 8080;
    settings.bindAddresses = ["127.0.0.1"];
    listenHTTP(settings, &handleRequest);
}

void handleRequest(HTTPServerRequest req, HTTPServerResponse res)
{
    writeln("My Thread Id: ", to!string(thisThreadID));
    // simulate long running task
    Thread.sleep(dur!("seconds")(3));

    if (req.path == "/")
        res.writeBody("Hello, World! from " ~ to!string(thisThreadID), "text/plain");
    else if (auto m = matchFirst(req.path, reg))
        res.writeBody("Hello, " ~ m[1] ~ " from " ~ to!string(thisThreadID), "text/plain");
}
```

That could be the reason for slowness.
Re: unit-threaded v0.7.45 - now with more fluency
On 2018-05-08 09:07, Nick Sabalausky (Abscissa) wrote:
> The question is: Why "should.equal" instead of "shouldEqual"? The dot only seems there to be cute.

It scales better. This way only one "should" function is needed, and one "not" function. Otherwise there would be a lot of duplication, i.e. "shouldEqual" and "shouldNotEqual". Hopefully the library can have one generic implementation of "not", where it doesn't matter if the assertion is "equal", "be", or some other assertion.

-- 
/Jacob Carlborg
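A toy sketch of that scaling argument (my own model, not unit-threaded's actual internals; all names here are invented): one generic `not` flips a flag that every assertion consults, so no `shouldNotX` twin is ever needed.

```d
import std.stdio : writeln;

// Toy fluent assertion chain: `not` is written once and composes
// with every assertion, instead of duplicating shouldNotEqual etc.
struct Expectation(T)
{
    T lhs;
    bool negated;

    auto not() { negated = !negated; return this; }

    void equal(U)(U rhs)
    {
        assert((lhs == rhs) != negated, "expectation failed");
    }
}

auto should(T)(T lhs) { return Expectation!T(lhs, false); }

void main()
{
    1.should.equal(1);     // passes
    1.should.not.equal(2); // passes: one `not`, no shouldNotEqual needed
    writeln("ok");
}
```

Adding a new assertion (`be`, `contain`, ...) to the struct automatically gets negation for free, which is the duplication the dot syntax avoids.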
Re: "Start a Minimal web server" example do not work.
On 08/05/2018 21:36, BoQsc wrote:
> On Tuesday, 8 May 2018 at 19:19:26 UTC, Seb wrote:
>> On Tuesday, 8 May 2018 at 18:40:34 UTC, BoQsc wrote:
>>> On Tuesday, 8 May 2018 at 18:38:10 UTC, BoQsc wrote:
>>>> On Tuesday, 8 May 2018 at 17:35:13 UTC, Jesse Phillips wrote:
>>>>> [...]
>>>>
>>>> Tested with these versions so far, and had all the same errors:
>>>>
>>>>     C:\Users\Vaidas>dmd --version
>>>>     DMD32 D Compiler v2.079.1
>>>>     C:\Users\Vaidas>dub --version
>>>>     DUB version 1.8.1, built on Apr 14 2018
>>>>
>>>>     C:\Users\Vaidas>dmd --version
>>>>     DMD32 D Compiler v2.080.0
>>>>     C:\Users\Vaidas>dub --version
>>>>     DUB version 1.9.0, built on May 1 2018
>>>>
>>>>     Linking...
>>>>     C:\D\dmd2\windows\bin\lld-link.exe: warning: eventcore.lib(sockets_106c_952.obj): undefined symbol: SetWindowLongPtrA
>>>>     C:\D\dmd2\windows\bin\lld-link.exe: warning: eventcore.lib(sockets_106c_952.obj): undefined symbol: GetWindowLongPtrA
>>>>     error: link failed
>>>>     Error: linker exited with status 1
>>>>     C:\D\dmd2\windows\bin\dmd.exe failed with exit code 1.

Unfortunately, the MinGW version that the replacement libraries are built from omits this symbol. Please file a bug report. For Win32 (--arch=x86_mscoff) this symbol is aliased to SetWindowLongA, which should be fine.

>> That's with DMD's bundled LLD linker. Have you tried:
>> 1) installing MS Visual Studio (as others have mentioned, their linker works)
>> 2) using LDC (they usually ship a newer version of the LLD linker)
>
> I have installed the one suggested by the dmd-2.080.0.exe installer:
>
>     Microsoft Visual Studio Community 2017
>     Version 15.7.0
>     VisualStudio.15.Release/15.7.0+27703.1
>     Microsoft .NET Framework Version 4.7.02556

What you actually need is Visual C++ (linker and runtime libraries). For the missing symbol above you also need the Windows SDK, which is usually included in the Visual C++ installation.
Re: Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
On Wednesday, 9 May 2018 at 17:52:48 UTC, Meta wrote:
> I wasn't able to reproduce it on dmd-nightly: https://run.dlang.io/is/9wT8tH
> What version of the compiler are you using?

Ahh, the struct needs to be in a unittest block for it to happen:

```d
struct R { @disable this(this); int* _ptr; }

unittest
{
    struct S { @disable this(this); int* _ptr; }
    struct T { int* _ptr; }

    pragma(msg, "R: ", typeof(R.tupleof));
    pragma(msg, "S: ", typeof(S.tupleof));
    pragma(msg, "T: ", typeof(T.tupleof));
}
```

prints

```
R: (int*)
S: (int*, void*)
T: (int*)
```

Why is that?
Re: Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
On Wednesday, 9 May 2018 at 14:07:37 UTC, Per Nordlöw wrote:
> Why (on earth) does
>
> ```d
> struct S
> {
>     @disable this(this);
>     int* _ptr;
> }
> pragma(msg, typeof(S.tupleof));
> ```
>
> print (int*, void*) when
>
> ```d
> struct S
> {
>     int* _ptr;
> }
> pragma(msg, typeof(S.tupleof));
> ```
>
> prints (int*)?!!!

I wasn't able to reproduce it on dmd-nightly: https://run.dlang.io/is/9wT8tH

What version of the compiler are you using?
[Issue 18121] Needleless findSplit* methods
https://issues.dlang.org/show_bug.cgi?id=18121

--- Comment #1 from hst...@quickfur.ath.cx ---
Found a similar need for needleless overloads of findSplit* today too. The context is that I'm trying to tokenize a string, and if it starts with a digit, say "1234abcd", I'd like to be able to split it into "1234" and "abcd". So ideally:

--
auto input = "1234abcd";
auto r = input.findSplitBefore!(ch => !isDigit(ch));
assert(r[0] == "1234" && r[1] == "abcd");
--

The current overloads do not allow this, though.

--
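Until such a predicate overload exists, one workaround (my own sketch, not from the issue) is `countUntil` with the same predicate, then slicing at the returned index:

```d
import std.algorithm.searching : countUntil;
import std.ascii : isDigit;
import std.stdio : writeln;

void main()
{
    auto input = "1234abcd";

    // countUntil accepts a predicate, unlike findSplitBefore today.
    auto i = input.countUntil!(ch => !isDigit(ch));
    if (i < 0)
        i = input.length;   // all digits: split at the end

    auto head = input[0 .. i];
    auto tail = input[i .. $];

    assert(head == "1234" && tail == "abcd");
    writeln(head, " | ", tail); // prints 1234 | abcd
}
```

The downside compared to the requested overload is the manual "not found" handling and the loss of the tuple-like result, which is presumably why a real findSplitBefore!pred would be nicer.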
[Issue 18121] Needleless findSplit* methods
https://issues.dlang.org/show_bug.cgi?id=18121 hst...@quickfur.ath.cx changed: What|Removed |Added CC||hst...@quickfur.ath.cx --
Re: Wait-free MPSC and MPMC implement in D
On Wednesday, May 09, 2018 07:13:08 Shachar Shemesh via Digitalmars-d wrote: > I have a motto in life - if you assume you're great, you most likely > aren't. I have grown out of the "my code has no bugs" phase of my life > quite a while ago. The scary truth is that no matter what you do, you're likely going to have bugs in your software that you've simply never found, and they could be minor issues that will never matter or major issues that could completely blow up in your face later. There are plenty of steps that we can and should take in order to try to write software without bugs, but even if we somehow manage to write a piece of software that has no bugs (which is _highly_ unlikely in any software of any real size), we have no way of knowing that we've reached that point. A lack of bug reports just means that no one has reported a bug, not that we don't have any. > The fact that code is used, to good effect and for a long period of time > does not mean that it doesn't have problems, some of them might be quite > fundamental. A great example of this would be openssl. It's used by huge amounts of software and yet it's terrible software. Stuff like operating systems have bugs reported on them all the time, and those often have thousands of developers and millions of users. > fundamental. "This code is highly tested" is not a good answer to "I > think I found a problem". Yeah. Thorough testing is a great sign (and a lack of testing is a very bad sign), but it's not a silver bullet. It's easy to do what looks like very thorough testing and find that you've missed stuff when you actually look at the code coverage. And 100% code coverage doesn't mean that you caught everything, just that you're actually running tests that touch all of the parts of the code. std.datetime had very thorough test coverage when it was first released, but at least one bug still crept through (and was fortunately found later). I have no clue whether there are any bugs left.
I hope not, but there's no way to know, and any time I touch the code, I risk adding more bugs, even if I'm fixing other problems. dxml had 100% test coverage (or as close as is possible anyway), and yet when Johan passed it through ldc's fuzzer, it found three cases where I'd failed to call empty before calling front which had not been caught by the tests. Maybe it has more such problems, maybe it doesn't. And it may or may not have other bugs. I have no way to know, and on some level, that's a bit unnerving, but it's just how life goes as a software developer. If someone finds a bug in my software, I want to know about it. It's quite possible that what someone is pointing out is invalid (e.g. when Johan first fuzz-tested dxml, he'd used the API incorrectly, violating the pre-condition for a function he was calling, so what he initially found was not a bug), but if we have any hope of even approaching perfect software, we need to know about the problems that are found so that they can be fixed. And to an extent, a lack of bug reports indicates a lack of use, not a lack of bugs. > So, my plea is this. If you find an area worth improving in Mecca, > please don't hesitate to post. If you find you do hesitate, please > please feel free to send me a private message. I second this for any libraries that I make available and would hope that this would be the attitude of anyone releasing software. There are plenty of cases where folks disagree on where things should go and what is or isn't a bug, but we need to know where the problems are if we have any hope of fixing them. - Jonathan M Davis
Re: Documentation for assumeUnique
On Tuesday, 8 May 2018 at 14:35:15 UTC, Seb wrote: On Monday, 7 May 2018 at 15:32:56 UTC, bachmeier wrote: You can see the messed up documentation here: https://dlang.org/library/std/exception/assume_unique.html Fix: https://github.com/dlang/dlang.org/pull/2364 Nice. Note: The [Edit|Run|Open in IDE] buttons that result from that macro render differently from unittest examples.
Re: dxml behavior after exception: continue parsing
On Tuesday, May 08, 2018 16:18:40 Jesse Phillips via Digitalmars-d-learn wrote: > On Monday, 7 May 2018 at 22:24:25 UTC, Jonathan M Davis wrote: > > I've been considering adding more configuration options where > > you say something like you don't care if any invalid characters > > are encountered, in which case, you could cleanly parse past > > something like an unescaped &, but you'd then potentially be > > operating on invalid XML without knowing it and could get > > undesirable results depending on what exactly is wrong with the > > XML. I haven't decided for sure whether I'm going to add any > > such configuration options or how fine-grained they'd be, but > > either way, the current behavior will continue to be the > > default behavior. > > > > - Jonathan M Davis > > I'm not going to ask for that (configuration). I may look into > cloning dxml and changing it to parse the badly formed XML. Well, for the general case at least, being able to configure the parser to not care about certain types of validation is the best that I can think of at the moment for dealing with invalid XML (especially with the issues caused by the fact that only one range actually does the validation, making selective skipping of invalid stuff while parsing a very iffy proposition). dxml was designed with the idea that it would be operating on valid XML, and designing a parser to operate on invalid XML can get very tricky - to the point that it may simply be best for the programmer to design their own solution tailored to their particular use case if they're going to be encountering a lot of invalid XML. If all that's needed is to tell the parser to allow stuff like lone ampersands, then that's quite straightforward, but if you're dealing with anything more wrong than that, then things get hairy fast. It's those sorts of problems that have made html parsers so wildly inconsistent in what they do. 
Personally, I think that we'd have all been better off if the various protocols (particularly those related to the web) had always called for strict validation and rejected anything that didn't follow the spec. Instead, we've got this whole idea of "be strict in what you emit but relax in what you accept," and the result is that we've got a lot of incorrect implementations and a lot of invalid data floating around. And of course, if you don't accept something and someone else does, then your code is considered buggy even if it follows the protocol perfectly and the data is clearly invalid. So, in general, we're all kind of permanently screwed. :( If I can do reasonable things to make dxml better handle bad data, then I'm open to it, but given dxml's design, the options are somewhat limited, and it's just plain a hard problem in general. - Jonathan M Davis
Re: Bugzilla & PR sprint on the first weekend of every month
On Tuesday, 8 May 2018 at 18:48:15 UTC, Seb wrote: What do you guys think about having a dedicated "Bugzilla & PR sprint" at the first weekend of every month? We could organize this a bit by posting the currently "hot" bugs a few days ahead and also make sure that there are plenty of "bootcamp" bugs, s.t. even newcomers can start to get involved. Even if you aren't too much interested in this effort, being a bit more active on Slack/IRC or responsive on GitHub on this weekend would help, s.t. newcomers interested in squashing D bugs get over the initial hurdles pretty quickly and we can finally resolve the long-stalled PRs and find a consensus on them. What do you think? Is this something worth trying? Maybe the DLF could also step in and provide small goodies for all bug hunters of the weekend (e.g. a "D bug hunter" shirt if you got more than X PRs merged). Yeah I think it's worth a try. I would probably emphasize PR reviews but I'd never discourage people from fixing bugs as well. I'd participate so long as I'm available.
Re: sumtype 0.3.0
On Wednesday, 9 May 2018 at 14:56:20 UTC, Paul Backus wrote: [snip] What length actually does, after all the compile-time stuff is expanded, is essentially this: switch(v.tag) { case 0: return sqrt(v.value!Rectangular.x**2 + v.value!Rectangular.y**2); case 1: return v.value!Polar.r; } It's the same thing you'd get if you were implementing a tagged union by hand in C. It's not exactly the same as a function specialized for Rectangular, because the entire point of a sum type or tagged union is to allow runtime dispatch based on the tag. However, the process of choosing which function goes with which tag takes place entirely at compile time. Thanks. That makes sense.
Re: Why The D Style constants are written in camelCase?
On Wednesday, May 09, 2018 14:12:41 Dmitry Olshansky via Digitalmars-d-learn wrote: > On Wednesday, 9 May 2018 at 09:38:14 UTC, BoQsc wrote: > > The D Style suggest to camelCase constants, while Java naming > > conventions always promoted uppercase letter. > > > > Is there an explanation why D Style chose to use camelCase > > instead of all UPPERCASE for constants, was there any technical > > problem that would appear while writing in all UPPERCASE? > > It is D style for standard library. It is mostly arbitrary but in > general sensible. > That’s it. To an extent that's true, but anyone providing a library for use by others in the D community should seriously consider following it with regards to public symbols so that they're consistent with how stuff is named across the ecosystem. It's not the end of the world to use a library that did something like use PascalCase instead of camelCase for its function names, or which used lowercase and underscores for its type names, or did any number of other things which are perfectly legitimate but don't follow the D style. However, they tend to throw people off when they don't follow the naming style of the rest of the ecosystem and generally cause friction when using 3rd party libraries. Stuff like how code is formatted or how internal symbols are named are completely irrelevant to that, but there's a reason that the D style guide provides naming conventions separately from saying anything about how Phobos code should look. The D ecosystem at large is better off if libraries in general follow the same naming conventions for their public symbols. Obviously, not everyone is going to choose to follow the official naming conventions, but IMHO, their use should be actively encouraged with regards to public symbols in libraries that are made publicly available. - Jonathan M Davis
Re: auto: useful, annoying or bad practice?
On Wednesday, May 09, 2018 14:31:00 Jesse Phillips via Digitalmars-d wrote: > On Wednesday, 9 May 2018 at 13:22:56 UTC, bauss wrote: > > Using "auto" you can also have multiple return types. > > > > auto foo(T)(T value) > > { > > > > static if (is(T == int)) return "int: " ~ to!string(value); > > else return value; > > > > } > > > > You cannot give that function a specific return type as it's > > either T or it's string. It's not a single type. > > Its funny, because you example makes this look like a very bad > feature. But there are legitimate cases which doesn't actually > chance the api of the returned type. Basically, it falls into the same situation as ranges and Voldemort types. If you have a situation where it doesn't make sense to restrict the return type to a specific type, where the return type depends on the template arguments, or where the return type should be hidden, then auto is great. But in all of those situations, the API has to be well-known, or the caller can't do anything useful. Worst case, you end up with a base API that's common to all of the return types (e.g. forward range) but where you have to test whether a particular set of template arguments results in a return type which supports more (e.g. a random-access range). Ultimately, the key is that the user of the function needs to be able to know how to use the return type. In some cases, that means returning a specific type, whereas in others, it means using auto and being clear in the documentation about what kind of API the return type has. As long as the API is clear, then auto can be fantastic, but if the documentation is poorly written (or non-existant), then it can be a serious problem. - Jonathan M Davis
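The capability-testing pattern described above (require a common base API, then probe with `static if` for anything extra) can be sketched like this; `consume` is a hypothetical function, and the range traits come from `std.range.primitives`:

```d
import std.range.primitives : isForwardRange, isRandomAccessRange;

void consume(R)(R r)
    if (isForwardRange!R) // the base API every return type must support
{
    static if (isRandomAccessRange!R)
    {
        // Only compiled when the auto-returned range happens to
        // support random access as well.
        auto first = r[0];
    }
    else
    {
        // Fallback that uses nothing beyond the forward-range API.
        foreach (e; r) {}
    }
}
```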
Re: sumtype 0.3.0
On Wednesday, 9 May 2018 at 13:33:44 UTC, jmh530 wrote: On Sunday, 6 May 2018 at 19:18:02 UTC, Paul Backus wrote: [snip] - Zero runtime overhead compared to hand-written C Just to clarify, would the run-time performance of the length function in the example be equivalent to if it had been specialized for the Rectangular types (e.g. double length(Rectacular r) { ... })? It looks like the match is using compile-time functionality to choose the right function to call, but I wanted to be sure. What length actually does, after all the compile-time stuff is expanded, is essentially this: switch(v.tag) { case 0: return sqrt(v.value!Rectangular.x**2 + v.value!Rectangular.y**2); case 1: return v.value!Polar.r; } It's the same thing you'd get if you were implementing a tagged union by hand in C. It's not exactly the same as a function specialized for Rectangular, because the entire point of a sum type or tagged union is to allow runtime dispatch based on the tag. However, the process of choosing which function goes with which tag takes place entirely at compile time.
Re: Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
On Wednesday, 9 May 2018 at 14:36:38 UTC, Per Nordlöw wrote: On Wednesday, 9 May 2018 at 14:34:02 UTC, Per Nordlöw wrote: On Wednesday, 9 May 2018 at 14:20:41 UTC, Per Nordlöw wrote: If so, we can temporarily modify the trait to exclude the last `void*` member of the `S.tuple`. Given that it's always added as the last member. Also note that pragma(msg, __traits(isDisabled, S.this(this))); fails to compile as Error: identifier expected following `.`, not `this` Ahh, but both pragma(msg, __traits(isDisabled, S.__postblit)); pragma(msg, __traits(isDisabled, S.__xpostblit)); prints true for a struct with `@disable this(this);` Which one should I pick to check if last element of `S.tupleof` should be discarded? Managed to put together the hack private template mustAddGCRangeOfStructOrUnion(T) if (is(T == struct) || is(T == union)) { import std.traits : hasUDA; import std.meta : anySatisfy; static if (__traits(hasMember, T, "__postblit")) { static if (__traits(isDisabled, T.__postblit)) { enum mustAddGCRangeOfStructOrUnion = anySatisfy!(mustAddGCRangeOfMember, T.tupleof[0 .. $ - 1]); } else { enum mustAddGCRangeOfStructOrUnion = anySatisfy!(mustAddGCRangeOfMember, T.tupleof); } } else { enum mustAddGCRangeOfStructOrUnion = anySatisfy!(mustAddGCRangeOfMember, T.tupleof); } } defined here https://github.com/nordlow/phobos-next/blob/master/src/gc_traits.d#L81 Destroy.
Re: serialport v1.0.0
On Sunday, 6 May 2018 at 22:02:05 UTC, Oleg B wrote: Stable version of serialport package * Blocking `SerialPortBlk` for classic usage * Non-blocking `SerialPortNonBlk` and `SerialPortFR` for usage in fibers or in vibe-d * Variative initialization and configuration * Hardware flow control config flag Doc: http://serialport.dpldocs.info/v1.0.0/serialport.html Dub: http://code.dlang.org/packages/serialport Git: https://github.com/deviator/serialport I wonder if someone can benchmark serialport lib against this test: http://codeandlife.com/2012/07/03/benchmarking-raspberry-pi-gpio-speed/
Re: Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
On Wednesday, 9 May 2018 at 14:34:02 UTC, Per Nordlöw wrote: On Wednesday, 9 May 2018 at 14:20:41 UTC, Per Nordlöw wrote: If so, we can temporarily modify the trait to exclude the last `void*` member of the `S.tuple`. Given that it's always added as the last member. Also note that pragma(msg, __traits(isDisabled, S.this(this))); fails to compile as Error: identifier expected following `.`, not `this` Ahh, but both pragma(msg, __traits(isDisabled, S.__postblit)); pragma(msg, __traits(isDisabled, S.__xpostblit)); prints true for a struct with `@disable this(this);` Which one should I pick to check if last element of `S.tupleof` should be discarded?
Re: auto: useful, annoying or bad practice?
On Wednesday, 9 May 2018 at 13:22:56 UTC, bauss wrote: Using "auto" you can also have multiple return types. auto foo(T)(T value) { static if (is(T == int)) return "int: " ~ to!string(value); else return value; } You cannot give that function a specific return type as it's either T or it's string. It's not a single type. It's funny, because your example makes this look like a very bad feature. But there are legitimate cases which don't actually change the API of the returned type.
Re: Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
On Wednesday, 9 May 2018 at 14:20:41 UTC, Per Nordlöw wrote: If so, we can temporarily modify the trait to exclude the last `void*` member of the `S.tuple`. Given that it's always added as the last member. Also note that pragma(msg, __traits(isDisabled, S.this(this))); fails to compile as Error: identifier expected following `.`, not `this`
Re: Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
On Wednesday, 9 May 2018 at 14:20:41 UTC, Per Nordlöw wrote: If so, we can temporarily modify the trait to exclude the last `void*` member of the `S.tuple`. Given that it's always added as the last member. Note that `std.traits.isCopyable!S` cannot be used, because it will return true when `S` has uncopyable members regardless of whether `S.tupleof` has any extra void* element or not (because of S's disabled postblit).
Re: Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
On Wednesday, 9 May 2018 at 14:07:37 UTC, Per Nordlöw wrote: This prevents the trait `mustAddGCRangeOfStructOrUnion` [1] from detecting when a container with manual memory management doesn't have to be scanned by the GC as in, for instance, enum NoGc; struct S { @disable this(this); // disable S postlib @NoGc int* _ptr; } static assert(!mustAddGCRangeOfStructOrUnion!S); // is false when postblit of `S` is disabled [1] https://github.com/nordlow/phobos-next/blob/master/src/gc_traits.d#L81 Can we statically check if the postblit has been disabled via @disable this(this); ? If so, we can temporarily modify the trait to exclude the last `void*` member of the `S.tuple`. Given that it's always added as the last member.
Re: Why The D Style constants are written in camelCase?
On Wednesday, 9 May 2018 at 09:38:14 UTC, BoQsc wrote: The D Style suggest to camelCase constants, while Java naming conventions always promoted uppercase letter. Is there an explanation why D Style chose to use camelCase instead of all UPPERCASE for constants, was there any technical problem that would appear while writing in all UPPERCASE? It is D style for standard library. It is mostly arbitrary but in general sensible. That’s it.
Extra .tupleof field in structs with disabled postblit blocks non-GC-allocation trait
Why (on earth) does struct S { @disable this(this); int* _ptr; } pragma(msg, typeof(S.tupleof)); print (int*, void*) when struct S { int* _ptr; } pragma(msg, typeof(S.tupleof)); prints (int*) ?!!! This prevents the trait `mustAddGCRangeOfStructOrUnion` [1] from detecting when a container with manual memory management doesn't have to be scanned by the GC as in, for instance, enum NoGc; struct S { @disable this(this); // disable S postblit @NoGc int* _ptr; } static assert(!mustAddGCRangeOfStructOrUnion!S); // is false when postblit of `S` is disabled [1] https://github.com/nordlow/phobos-next/blob/master/src/gc_traits.d#L81
Re: sumtype 0.3.0
On Sunday, 6 May 2018 at 19:18:02 UTC, Paul Backus wrote: [snip] - Zero runtime overhead compared to hand-written C Just to clarify, would the run-time performance of the length function in the example be equivalent to if it had been specialized for the Rectangular types (e.g. double length(Rectangular r) { ... })? It looks like the match is using compile-time functionality to choose the right function to call, but I wanted to be sure.
Re: auto: useful, annoying or bad practice?
On Wednesday, 9 May 2018 at 12:44:34 UTC, Jonathan M Davis wrote: On Monday, April 30, 2018 21:11:07 Gerald via Digitalmars-d wrote: [...] I think that the overall consensus is that it's great but that you do have to be careful about using it when it reduces clarity without adding other benefits. [...] Using "auto" you can also have multiple return types. auto foo(T)(T value) { static if (is(T == int)) return "int: " ~ to!string(value); else return value; } You cannot give that function a specific return type as it's either T or it's string. It's not a single type.
Re: sumtype 0.3.0
On Monday, 7 May 2018 at 21:35:44 UTC, Paul Backus wrote: On Monday, 7 May 2018 at 19:28:16 UTC, Sönke Ludwig wrote: Another similar project: http://taggedalgebraic.dub.pm/ There's also tagged_union and minivariant on dub, that I've found. I'm definitely far from the first person to be dissatisfied with `Algebraic`, or to try my hand at writing a replacement. The main difference between all of those and sumtype is that sumtype has pattern matching. Personally, I consider that an essential feature--arguably *the* essential feature--which is why I went ahead with Yet Another implementation anyway. I agree - it's the same reason I was going to write one. But now I don't have to. :) Atila
Re: auto: useful, annoying or bad practice?
On Monday, April 30, 2018 21:11:07 Gerald via Digitalmars-d wrote: > I'll freely admit I haven't put a ton of thought into this post > (never a good start), however I'm genuinely curious what people's > feelings are with regards to the auto keyword. > > Speaking for myself, I dislike the auto keyword. Some of this is > because I have a preference for static languages and I find auto > adds ambiguity with little benefit. Additionally, I find it > annoying that the phobos documentation relies heavily on auto > obscuring return types and making it a bit more difficult to > follow what is happening which gives me a bad taste for it. > > Having said that, the thing that really started my thinking about this > was this post I made: > > https://forum.dlang.org/thread/fytefnejxqdgotjkp...@forum.dlang.org > > Where in order to declare a public variable for the RedBlackTree > lowerBound/upperBound methods I had to fall back on using the > ReturnType template to declare a variable. Jonathan was nice > enough to point me in the right direction and maybe there's a way > to do this without having to fall back on ReturnType. However > this made me wonder if reliance on auto could discourage API > writers from having sane return types. > > So I'm curious, what's the consensus on auto? I think that the overall consensus is that it's great but that you do have to be careful about using it when it reduces clarity without adding other benefits. I remember when std.algorithm didn't use auto in any of its function signatures, because there was a bug in ddoc that made it so that functions that returned auto didn't show up in the docs. It was terrible. Seeing it would have scared off _way_ more people than any concerns over auto returns being confusing. You really, really, really don't want to know what many auto return types look like - especially if ranges are involved. You end up with templates wrapping templates, and seemingly simple stuff ends up looking ugly fast - e.g.
until returns something like Until!("a == b", string, char). Having auto in the documentation is _much_ nicer. Really, auto return functions make things possible that would never be possible without it simply because the code would be too hard to read. In the cases where all you care about is what API a return type has and not what the return type is, auto is a true enabler. Voldemort types then take that a step further by removing the possibility of referring to the type by name and forcing you to go off of its API, which improves maintenance from the perspective of the person maintaining the function (because then they can change the return type as much as they like so long as its API is the same). But even without Voldemort types, auto return types simplify the signature by removing all of the details of the type that you don't care about. Of course, that can be taken too far and/or handled badly. The user of a function doesn't necessarily have to care what the return type is, but they _do_ have to know enough to know what its API is. And that means that whenever a function returns auto, it needs to be clear in its documentation about what it's returning. If it's not, then obviously, the use of auto becomes a problem. At least an explicit return type would have then made it possible for the user to look up the return type, whereas with auto and bad documentation, they're forced to use stuff like typeof and pragma(msg, ...) to figure out what the type is or to go look at the source code. So, while auto is awesome, anyone writing such a function needs to do a good job with documentation and try to put themselves in the shoes of whoever is using the function. Another thing that can be taken from that is that if a function is designed to return something specific as opposed to an arbitrary type with a specific API, then it's generally better to put the type in the signature so that it's clear rather than use auto.
As for auto inside functions, I'd argue that it should be used heavily, and I think that most of us do. The cases where there's really no debate are functions that return auto, and when using the type's name would just be duplicating information. e.g. it's just bad practice to do Type foo = new Type(42); instead of auto foo = new Type(42); Avoiding auto buys you nothing and just increases your code maintenance if you change the type later. It's when you start using auto all over the place that it gets more debatable. e.g. auto foo = someFunc(); foo.doSomething(); auto bar = someOtherFunc(foo); Depending on what the code's doing, heavy use of auto can sometimes make such functions hard to read, and some common sense should be used. However, by using auto, you're frequently making it easier to refactor code when types change, as they sometimes do. In that snippet, so long as I can call doSomething on foo (be it as a member function or via UFCS) and so long as someOtherFunc accepts whatever type foo is, I don't necessarily care if the type of foo
Re: Generating a method using a UDA
On Wednesday, 9 May 2018 at 10:16:22 UTC, Melvin wrote: I'm trying to find a friendly syntax for defining things in a framework. For context, I've been looking into finding a solution for this problem (https://github.com/GodotNativeTools/godot-d/issues/1) on the Godot-D project. I've done some investigating already, and it looks like I can only achieve what I want with a mixin, but I'd like to get a second opinion. Say we have a class that defines a custom Signal (an event). In an ideal world, the syntax would work similarly to this: class SomeNode : GodotScript!Node { @Signal void testSignal(float a, long b); // The declaration above would trigger the generation of this line void testSignal(float a, long b) { owner.emitSignal("testSignal", a, b); } @Method emitTest() { testSignal(3.1415, 42); } } The reason I want to use a UDA is to stay consistent with the other UDAs already defined for Properties and Methods. It also looks friendlier than using a mixin. Does anyone here have any thoughts as to how this could work? My main issue is injecting that generated method without resorting to using a mixin. I was hoping that any code I needed could be generated in the template that SomeNode inherits, but that doesn't look possible because I can't inspect the subclass (for good reason). hi, i actually have something like that, which i should put on github. i used it to learn about D's introspection, so it's more of a prototype and will need some more work. it looks like this: class Test { mixin signalsOf!SigList; interface SigList { @Signal void someFun(int); } void someFunHandler(int){} } signalsOf takes a type/template or function list, introspects them then generates the actual signal functions. the additional api is similar to qt's api.
void main() { Test t = new Test; t.connect!"someFun"(); t.someFun(4); // emit the signal t.disconnect!"someFun"(); } you can have different connection types and i also have string based connection and auto connection based on a naming convention like signalname: someSig and slotname: onSomeSig.
Re: Generating a method using a UDA
On Wednesday, 9 May 2018 at 10:16:22 UTC, Melvin wrote: class SomeNode : GodotScript!Node { @Signal void testSignal(float a, long b); // The declaration above would trigger the generation of this line void testSignal(float a, long b) { owner.emitSignal("testSignal", a, b); } @Method emitTest() { testSignal(3.1415, 42); } } The reason I want to use a UDA is to stay consistent with the other UDAs already defined for Properties and Methods. It also looks friendlier than using a mixin. Does anyone here have any thoughts as to how this could work? My main issue is injecting that generated method without resorting to using a mixin. I was hoping that any code I needed could be generated in the template that SomeNode inherits, but that doesn't look possible because I can't inspect the subclass (for good reason). I'm afraid a mixin is the only real solution. You could also mark testSignal as abstract, and have a template generate an implementation class, but that would pollute every place where you want to use a SomeNode, with something like SomeNode n = implement!SomeNode(); -- Simen
Re: Why The D Style constants are written in camelCase?
On Wednesday, May 09, 2018 09:38:14 BoQsc via Digitalmars-d-learn wrote: > The D Style suggest to camelCase constants, while Java naming > conventions always promoted uppercase letter. > > Is there an explanation why D Style chose to use camelCase > instead of all UPPERCASE for constants, was there any technical > problem that would appear while writing in all UPPERCASE? > > > > Java language references: > https://en.wikipedia.org/wiki/Naming_convention_(programming)#Java > https://www.javatpoint.com/java-naming-conventions > http://www.oracle.com/technetwork/java/codeconventions-135099.html > https://medium.com/modernnerd-code/java-for-humans-naming-conventions-6353 > a1cd21a1 > > > > D lang reference: > https://dlang.org/dstyle.html#naming_constants Every language makes its own choices with regards to how it goes about things, some of which are purely subjective. As I understand it, the idea of having constants being all uppercase comes from C, where it was done to avoid problems with the preprocessor. Typically, in C, macro names are in all uppercase so that symbols which aren't intended to involve macros don't end up with code being replaced by macros accidentally. Because constants in C are typically macros, the style of using all uppercase for constants has then gotten into some languages which are descendants of C, even if they don't have macros and don't have the same technical reasons as to why all uppercase would be desired (Java would be one such language). Ultimately, the fact that Java uses all uppercase letters for constants is a convention and not good or bad from a technical perspective. Ultimately, the reason that D does not follow that convention is that Andrei Alexandrescu didn't like it, and it's arguably a highly subjective choice, but there are reasons why it can matter. "Constants" are used so frequently in D and with so many different constructs (templates, enums, static const, etc.) 
that having them be all uppercase would have a tendency to result in a _lot_ of symbols which were all uppercase. Code is often written in such a way that you don't have to care whether a symbol is an enum, a function, or a const/immutable static variable. e.g. in this code enum a = foo; foo has to be known at compile-time. However, foo could be a number of different kinds of symbols, and I don't necessarily care which it is. Right now, it could be another enum, but maybe tomorrow, it makes more sense for me to refactor my code so that it's a function. If I named enums in all caps, then I would have had enum A = FOO; and then when I changed FOO to a function, I would have had to have changed it to enum A = foo; By having the coding style make everything that could be used as a value be camelCase, you don't have to worry about changing the casing of symbol names just because the symbol was changed. You then only have to change the use of the symbol if the change to the symbol actually makes it act differently enough to require that the code be changed. If the code continues to work as-is, you don't have to change anything. Obviously, different kinds of symbols aren't always interchangeable, but the fact that they frequently are can be quite valuable and can reduce code maintenance, whereas having those kinds of symbols be named with different casing would just increase code maintenance. So, for D, using camelCase is advantageous from a code maintenance perspective, and I'd argue that the result is that using all uppercase for constants is just making your life harder for no real benefit. That's not true for Java, because Java has a lot fewer constructs, and they're rarely interchangeable. So, using all uppercase doesn't really cause any problems in Java, but D is not Java, so its situation is different. All that being said, you're obviously free to do whatever you want in your own code.
I'd just ask that any public APIs that you make available in places like code.dlang.org follow the D naming conventions, because that will cause fewer problems for other people using your code. - Jonathan M Davis
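The refactoring argument in the post above can be made concrete with a short runnable sketch (the names `foo` and `a` come from the post; everything else is illustrative):

```d
// Sketch of the interchangeability point: `foo` is an enum today; swap
// in the commented-out function tomorrow and the use site below still
// compiles unchanged, because the call is evaluated via CTFE.
enum foo = 42;
// int foo() { return 42; }  // the refactored version: no renaming needed

enum a = foo;  // foo must be known at compile time either way

void main()
{
    static assert(a == 42);  // holds for either definition of foo
}
```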
Re: unit-threaded v0.7.45 - now with more fluency
On Wednesday, 9 May 2018 at 10:37:52 UTC, Cym13 wrote: On Wednesday, 9 May 2018 at 04:40:37 UTC, Nick Sabalausky (Abscissa) wrote: On 05/08/2018 05:05 AM, Cym13 wrote: [...] No, it really doesn't mean the same thing at all. Not when you look away from the unimportant implementation details and towards the big picture: [...] With UFCS I find that in my code a dot means "function composition" more often than "is a member of". Maybe it's just that I like writing in a functional style, but UFCS chains are very much endorsed by the language, so I wouldn't call it a stretch. I agree with this in this case.
Re: Geany editor: Dlang code autocomplete
On Tuesday, 8 May 2018 at 22:27:19 UTC, Alexibu wrote: On Tuesday, 8 May 2018 at 07:47:26 UTC, Denis Feklushkin wrote: Hi! Does anyone else use Geany as a Dlang code editor? I use Geany for D. It already performs autocomplete; I am not sure how good it is. It isn't something I'm that interested in, but I do seem to use it. I suspect that if I configured the paths it scans for files to find symbols in, it would work better. I use Geany for D too, but the generic autocompletion works well enough for me. There used to be issues with DCD crashing randomly when I used it with Textadept, but I suppose that is fixed now. Please keep us updated on the issue.
[Issue 18844] std.utf.decode skips valid character on invalid multibyte sequence
https://issues.dlang.org/show_bug.cgi?id=18844

--- Comment #1 from FeepingCreature ---
Repro:

import std.stdio : writefln;
import std.utf : decode, UseReplacementDchar;

void main()
{
    string s = cast(string) [cast(ubyte) 'ä', 't'];
    size_t i = 0;
    auto ch = decode!(UseReplacementDchar.yes, string)(s, i);
    writefln("ch = %s, i = %s, should be 1", ch, i);
}

Output: ch = �, i = 2, should be 1.

--
[Issue 18844] New: std.utf.decode skips valid character on invalid multibyte sequence
https://issues.dlang.org/show_bug.cgi?id=18844

          Issue ID: 18844
           Summary: std.utf.decode skips valid character on invalid
                    multibyte sequence
           Product: D
           Version: D2
          Hardware: x86_64
                OS: Linux
            Status: NEW
          Severity: enhancement
          Priority: P1
         Component: phobos
          Assignee: nob...@puremagic.com
          Reporter: default_357-l...@yahoo.de

When decoding an invalid UTF-8 string, like cast(string) [cast(ubyte) 'ä', 't'], with Yes.useReplacementDchar, std.utf.decode will advance the cursor past the letter where the multibyte sequence hit an error, even if that letter is in itself a valid start of a new byte sequence. As a result, decode will advance the index to 2, leading the string to decode as "�" when it should decode as "�t".

--
Re: unit-threaded v0.7.45 - now with more fluency
On Wednesday, 9 May 2018 at 04:40:37 UTC, Nick Sabalausky (Abscissa) wrote: On 05/08/2018 05:05 AM, Cym13 wrote: [...] No, it really doesn't mean the same thing at all. Not when you look away from the unimportant implementation details and towards the big picture: [...] With UFCS I find that in my code a dot means "function composition" more often than "is a member of". Maybe it's just that I like writing in a functional style, but UFCS chains are very much endorsed by the language, so I wouldn't call it a stretch.
Generating a method using a UDA
I'm trying to find a friendly syntax for defining things in a framework. For context, I've been looking into finding a solution for this problem (https://github.com/GodotNativeTools/godot-d/issues/1) on the Godot-D project. I've done some investigating already, and it looks like I can only achieve what I want with a mixin, but I'd like to get a second opinion.

Say we have a class that defines a custom Signal (an event). In an ideal world, the syntax would work similarly to this:

class SomeNode : GodotScript!Node {
    @Signal void testSignal(float a, long b);

    // The declaration above would trigger the generation of this line
    void testSignal(float a, long b) { owner.emitSignal("testSignal", a, b); }

    @Method emitTest() { testSignal(3.1415, 42); }
}

The reason I want to use a UDA is to stay consistent with the other UDAs already defined for Properties and Methods. It also looks friendlier than using a mixin. Does anyone here have any thoughts as to how this could work? My main issue is injecting that generated method without resorting to using a mixin. I was hoping that any code I needed could be generated in the template that SomeNode inherits, but that doesn't look possible because I can't inspect the subclass (for good reason).
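For reference, here is a minimal, self-contained sketch of the mixin route being weighed above. It is not Godot-D's actual API: the `Owner` struct and its `emitSignal` are stand-ins so the snippet runs; only the `owner.emitSignal(name, args)` shape and the `testSignal` example come from the post.

```d
// Stand-in for the engine side, just so the sketch is runnable.
struct Owner
{
    string lastEmitted;
    void emitSignal(Args...)(string name, Args args) { lastEmitted = name; }
}

// Generates a forwarding method with the given name and parameter types.
// The string mixin is what lets the method take its name from a value.
mixin template SignalEmitter(string name, Params...)
{
    mixin("void " ~ name ~ "(Params p) { owner.emitSignal(name, p); }");
}

class SomeNode
{
    Owner owner;
    mixin SignalEmitter!("testSignal", float, long);

    void emitTest() { testSignal(3.1415f, 42); }
}

void main()
{
    auto node = new SomeNode;
    node.emitTest();
    assert(node.owner.lastEmitted == "testSignal");
}
```

This still requires a `mixin` at the declaration site, which is exactly what the post hopes to avoid; it is shown only as the fallback being discussed.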
Re: Why The D Style constants are written in camelCase?
On Wednesday, 9 May 2018 at 09:38:14 UTC, BoQsc wrote: The D Style suggest to camelCase constants, while Java naming conventions always promoted uppercase letter. Is there an explanation why D Style chose to use camelCase instead of all UPPERCASE for constants, was there any technical problem that would appear while writing in all UPPERCASE? Java language references: https://en.wikipedia.org/wiki/Naming_convention_(programming)#Java https://www.javatpoint.com/java-naming-conventions http://www.oracle.com/technetwork/java/codeconventions-135099.html https://medium.com/modernnerd-code/java-for-humans-naming-conventions-6353a1cd21a1 D lang reference: https://dlang.org/dstyle.html#naming_constants Just because.
Re: Why The D Style constants are written in camelCase?
On Wednesday, 9 May 2018 at 09:51:37 UTC, bauss wrote: Just because. To add on to this: D is not Java, it's not C++, it's not C#, etc. D is D, and D has its own conventions. You're free to write your constants in all uppercase if you want. I guess if I should come up with an actual reason, it would be that constants are so common in D not just as constant values, but as "variables" passed to compile-time functions that are evaluated, which is different from, e.g., Java, where you only have constant values.
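A small illustration of the point above, with hypothetical names: in D a camelCase constant routinely feeds compile-time machinery (array bounds, template arguments, CTFE), not just value lookups as in Java.

```d
// maxLen is an ordinary camelCase constant, but it also serves as an
// array bound and a value fed into compile-time evaluation.
enum maxLen = 8;

char[maxLen] fixedBuffer;  // array length taken from the constant

bool fits(size_t n)() { return n <= maxLen; }  // constant used in CTFE

void main()
{
    static assert(fixedBuffer.length == 8);
    static assert(fits!4());
}
```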
Re: Geany editor: Dlang code autocomplete
On Wednesday, 9 May 2018 at 09:44:07 UTC, Basile B. wrote: On Wednesday, 9 May 2018 at 08:48:41 UTC, Denis Feklushkin wrote: On Tuesday, 8 May 2018 at 22:52:26 UTC, Basile B. wrote: On Tuesday, 8 May 2018 at 07:47:26 UTC, Denis Feklushkin wrote: Unlike it, we have almost everything ready: https://github.com/denizzzka/geany_dlang Hello, have a look here: https://github.com/denizzzka/geany_dlang. I can't tell if it's good or not. Personally I'm on another editor. denizzzka is me :-) Oops, sorry, I don't know how I managed to miss this, especially since your name is on your profile. And the link to the project was there in the first post too. Definitely a big fail on my part, lol.
Re: Geany editor: Dlang code autocomplete
On Wednesday, 9 May 2018 at 08:48:41 UTC, Denis Feklushkin wrote: On Tuesday, 8 May 2018 at 22:52:26 UTC, Basile B. wrote: On Tuesday, 8 May 2018 at 07:47:26 UTC, Denis Feklushkin wrote: Unlike it, we have almost everything ready: https://github.com/denizzzka/geany_dlang Hello, have a look here: https://github.com/denizzzka/geany_dlang. I can't tell if it's good or not. Personally I'm on another editor. denizzzka is me :-) Oops, sorry, I don't know how I managed to miss this, especially since your name is on your profile.
Why The D Style constants are written in camelCase?
The D Style suggests camelCase for constants, while Java naming conventions have always promoted uppercase letters. Is there an explanation for why The D Style chose camelCase instead of all UPPERCASE for constants? Was there a technical problem that would appear when writing in all UPPERCASE?

Java language references:
https://en.wikipedia.org/wiki/Naming_convention_(programming)#Java
https://www.javatpoint.com/java-naming-conventions
http://www.oracle.com/technetwork/java/codeconventions-135099.html
https://medium.com/modernnerd-code/java-for-humans-naming-conventions-6353a1cd21a1

D lang reference:
https://dlang.org/dstyle.html#naming_constants
Re: Geany editor: Dlang code autocomplete
On Tuesday, 8 May 2018 at 19:23:44 UTC, Seb wrote: On Tuesday, 8 May 2018 at 07:47:26 UTC, Denis Feklushkin wrote: Hi! Does anyone else use Geany as a Dlang code editor? I'm looking at the ongoing fundraising for another editor. Unlike it, we have almost everything ready: https://github.com/denizzzka/geany_dlang BTW I quickly skimmed your linked issue and it seems that Geany already supports autocompletion for plugins, just not re-using the naive default completion. Currently it can be supported only via a dirty hack like the one linked above.
Re: Geany editor: Dlang code autocomplete
On Tuesday, 8 May 2018 at 22:27:19 UTC, Alexibu wrote: On Tuesday, 8 May 2018 at 07:47:26 UTC, Denis Feklushkin wrote: Hi! Does anyone else use Geany as a Dlang code editor? I use Geany for D. It already performs autocomplete. I am not sure how good it is. DCD does this job better than Geany's internal autocompletion.
Re: unit-threaded v0.7.45 - now with more fluency
On Saturday, 5 May 2018 at 15:51:11 UTC, Johannes Loher wrote: On Saturday, 5 May 2018 at 13:28:41 UTC, Atila Neves wrote: For those not in the know, unit-threaded is an advanced testing library for D that runs tests in threads by default. It has a lot of features: http://code.dlang.org/packages/unit-threaded

New:
* Bug fixes
* Better integration testing
* unitThreadedLight mode also runs tests in threads
* More DDoc documentation (peer pressure from Adam's site)
* Sorta kinda fluent-like asserts

On the new asserts (should and should.be are interchangeable):

1.should == 1
1.should.not == 2
1.should.be in [1, 2, 3]
4.should.not.be in [1, 2, 3]

More controversially (due to a lack of available operators to overload):

// same as .shouldApproxEqual
1.0.should ~ 1.0001;
1.0.should.not ~ 2.0;
// same as .shouldBeSameSetAs
[1, 2, 3].should ~ [3, 2, 1];
[1, 2, 3].should.not ~ [1, 2, 2];

I also considered adding `.should ~=`. I think it even reads better, but apparently some people don't. Let me know? The operator overloads are completely optional.

Atila

Personally, I don't like that kind of "abuse" of operators at all. I think it looks really unusual and it kind of breaks your "flow" when reading the code.

I agree with this. If the comments weren't added, nobody reading the code would have any idea what it actually does except for whoever wrote it.
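For readers wondering how syntax like `1.should == 1` can work at all: here is a guess at the mechanism, not unit-threaded's actual implementation. `should` returns a wrapper struct whose overloaded operators perform the check; all names below are hypothetical.

```d
// Hypothetical fluent-assert wrapper: `should` returns Should!T, and
// comparisons against it run an assertion instead of a plain compare.
struct Should(T)
{
    T lhs;
    bool negated;

    // `.not` flips the sense of the next comparison.
    Should!T not() { return Should!T(lhs, !negated); }

    // `wrapper == rhs` asserts the (possibly negated) comparison holds.
    bool opEquals(U)(U rhs)
    {
        assert((lhs == rhs) != negated, "should-assertion failed");
        return true;
    }
}

Should!T should(T)(T value) { return Should!T(value); }

void main()
{
    assert(1.should == 1);      // fluent style would be: 1.should == 1;
    assert(1.should.not == 2);
}
```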
Re: Geany editor: Dlang code autocomplete
On Tuesday, 8 May 2018 at 22:52:26 UTC, Basile B. wrote: On Tuesday, 8 May 2018 at 07:47:26 UTC, Denis Feklushkin wrote: Unlike it, we have almost everything ready: https://github.com/denizzzka/geany_dlang Hello, have a look here: https://github.com/denizzzka/geany_dlang. I can't tell if it's good or not. Personally I'm on another editor. denizzzka is me :-)
Re: Wait-free MPSC and MPMC implement in D
On Wednesday, 9 May 2018 at 04:13:08 UTC, Shachar Shemesh wrote: On 09/05/18 03:20, Andy Smith wrote: [...] Let me start off by saying that it is great that people appreciate and enjoy Mecca. With that said, I would be wary of the direction this thread is threatening to take. [...] Well said, and re-reading, apologies if the tone seemed a little hostile in my last comment. That certainly wasn't the intention. Cheers, A.
[Issue 18828] [-betterC] helpless error in object.d
https://issues.dlang.org/show_bug.cgi?id=18828

--- Comment #9 from Mike Franklin ---
A feeble attempt at a fix: https://github.com/dlang/druntime/pull/2178

--
[Issue 15388] extern(C++) - typeof(null) should mangle as nullptr_t
https://issues.dlang.org/show_bug.cgi?id=15388

github-bugzi...@puremagic.com changed:

           What    |Removed     |Added
----------------------------------------------------------------
         Status    |NEW         |RESOLVED
     Resolution    |---         |FIXED

--
[Issue 15388] extern(C++) - typeof(null) should mangle as nullptr_t
https://issues.dlang.org/show_bug.cgi?id=15388 --- Comment #3 from github-bugzi...@puremagic.com --- Commit pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/c38ae836e3ab171a0d543fb3eaf7ff94b199cee4 Fix issue 15388 - extern(C++) - typeof(null) should mangle as nullptr_t --