Re: ESR on post-C landscape
On Tuesday, 14 November 2017 at 04:31:43 UTC, Laeeth Isharc wrote: He mentions D, a bit dismissively. http://esr.ibiblio.org/?p=7724=1#comment-1912717 Couldn't read that without cringing.
Re: ESR on post-C landscape
On Tuesday, 14 November 2017 at 04:31:43 UTC, Laeeth Isharc wrote: He mentions D, a bit dismissively. http://esr.ibiblio.org/?p=7724=1#comment-1912717 I think that the date he mentions in that paragraph (2001) speaks a lot for his argument, i.e. completely outdated.
Re: ESR on post-C landscape
On Tuesday, 14 November 2017 at 16:38:58 UTC, Ola Fosheim Grostad wrote: It [C] is flawed... ESR got that right, not sure how anyone can disagree. Well, I 'can' disagree ;-) Is a scalpel flawed because someone tried to use it to screw in a screw? Languages are just part of an evolutionary chain. No part of the chain should be considered flawed unless it actually was flawed, in that it didn't meet the demands of the environment in which it was initially conceived. In that circumstance it must be considered flawed, and evolutionary forces will quickly take care of that. But a programming language is not flawed simply because people use it in an environment where it was not designed to operate. If I take the average Joe Blow out of his comfy house and put him in the middle of a raging battlefield, is Joe Blow flawed because he quickly got shot down? What's flawed there is the decision to take Joe Blow and put him in the battlefield. Corporate needs/strategy skews one's view of the larger environment, and infects language design. I think it's infected Go, from the get-go. I am glad D is not being designed by a corporation, otherwise D would be something very different, and far less interesting. The idea that C is flawed also skews one's view of the larger environment, and so it too infects language design. This is where Eric got it wrong, in my opinion. He's looking for the language that can best fix the flaws of C. In fact C has barely had to evolve (which is not a sign of something that is flawed), because it works just fine in the environments for which it was designed to work. And those environments still exist today. They will still exist tomorrow... and the next day... and the next... and... So language designers, please stop the senseless bashing of C. Why does anyone need array index validation anyway? I don't get it. If you're indexing incorrectly into an array, you're a fool. btw. The conditions under which C evolved are well documented here.
It's a fascinating read. https://www.bell-labs.com/usr/dmr/www/chist.pdf
Re: BinaryHeap as member
On Tuesday, 14 November 2017 at 04:13:16 UTC, Era Scarecrow wrote: On Monday, 13 November 2017 at 16:26:20 UTC, balddenimhero wrote: In the course of writing a minimal example I removed more than necessary in the previous pastebin (the passed IntOrder has not even been used). Thus here is the corrected one: https://pastebin.com/SKae08GT. I'm trying to port this to D. Throwing together a sample involves wrapping the value in a new value. Still, the idea is put across... Not sure if this is the best way to do this, but it only takes a little dereferencing to access the value. Thanks for actually implementing the workaround you suggested on IRC for my minimal example. I'm using this for now, as it seems that using delegates as predicates is problematic in my use case. It still feels kind of dirty though, and maybe I'll eventually just implement a standard min-heap that is templated with a comparator (as in C++).
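For what it's worth, Phobos already gives you a comparator-templated heap: std.container.binaryheap's heapify takes the ordering predicate as a template argument, and flipping the default "a < b" to "a > b" turns the max-heap into a min-heap. A minimal sketch (the values are illustrative, not taken from the pastebin):

```d
import std.container.binaryheap : heapify;

void main()
{
    int[] data = [5, 1, 4, 2, 3];

    // The default predicate "a < b" yields a max-heap;
    // "a > b" flips the ordering, so front is the minimum.
    auto minHeap = heapify!"a > b"(data);

    assert(minHeap.front == 1);
    minHeap.removeFront();
    assert(minHeap.front == 2);
}
```

As with the C++ approach, the comparator is fixed at compile time, which sidesteps the delegate-as-predicate issue.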
Re: NIO+Multithreaded TCPSocket listener, very low cpu utilisation
On Tuesday, 14 November 2017 at 19:57:54 UTC, ade90036 wrote: while(true) { listeningSet.add(listener); if (Socket.select(listeningSet, null, null, dur!"nsecs"(150)) > 0) { Why do you time out at all? This loop consumes 100% (a single core) when idle on my machine.
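To illustrate the point: a 150 ns timeout makes the loop spin almost continuously, whereas omitting the timeout argument makes Socket.select block in the kernel until the listener is actually ready, which is what keeps an idle accept loop at ~0% CPU. A small self-contained sketch (port and timeout values are illustrative):

```d
import std.socket : InternetAddress, Socket, SocketSet, TcpSocket;
import core.time : dur;

void main()
{
    auto listener = new TcpSocket();
    listener.bind(new InternetAddress(cast(ushort) 0)); // ephemeral port
    listener.listen(10);

    auto set = new SocketSet();
    set.add(listener);

    // With a finite timeout, select returns 0 when nothing is ready;
    // calling Socket.select(set, null, null) with no timeout instead
    // blocks until a connection arrives, so the thread sleeps rather
    // than spinning.
    auto n = Socket.select(set, null, null, dur!"msecs"(50));
    assert(n == 0); // no pending connection, so select timed out
}
```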
Re: NIO+Multithreaded TCPSocket listener, very low cpu utilisation
On Tuesday, 14 November 2017 at 19:57:54 UTC, ade90036 wrote: socket.send("HTTP/1.1 200 OK Server: dland:v2.076.1 Date: Tue, 11 Nov 2017 15:56:02 GMT Content-Type: text/plain; charset=UTF-8 Content-Length: 32 Hello World!"); } Some cosmetic changes: Is it possible that your HTTP client gets confused by the data sent?
```
socket.send("HTTP/1.1 200 OK
Server: dland:v2.076.1
Date: Tue, 11 Nov 2017 15:56:02 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 51

Hello World!".to_retlf);

string to_retlf(string s)
{
    import std.algorithm;
    import std.string;

    return s
        .lineSplitter
        .map!(a => chomp(a))
        .join("\r\n");
}
```
The Content-Length given was too short. The Content-Type also was wrong.
NIO+Multithreaded TCPSocket listener, very low cpu utilisation
Hi Forum,

Let's cut to the chase: I'm a newbie in Dlang. I have 15+ years of experience in Java and 7+ years in C++. I find D very fascinating, and the sugar-coated syntax is very appealing to my style of coding (Groovy-like).

I've been trying to learn D and put it through the motions by creating a very simple and basic TCP socket listener that responds to each request with an HTTP response over a specific port (localhost:4445). I have the code working on a worker thread, and when the socket accept()s, it defers the processing of the request (socket) to a different thread backed by a TaskPool(8). I have 8 logical cores (a MacBook Pro Retina, 16 GB, i7).

What I'm expecting to see is the CPU of my 8-core i7 go through the roof and nearly melt (hope not), or at least have the fan on at a sustained level and obtain full CPU utilisation. What I'm benchmarking it against is a Java NIO2 implementation. That implementation achieves very high CPU utilisation and high throughput: the process utilisation averages 400% at times and the fan is really searching for cold air.

However, when I run the D program I see very poor CPU utilisation. The fan is always in silent mode. Not sure if you are familiar with macOS CPU metrics, but they are reported per core. So the D program reports 100% under the process monitor (which equates to one core), and the overall system CPU utilisation is 13%. I would have expected much higher CPU utilisation, but it is not happening. I have been trying different variations of the same implementation, but no luck. I'm starting to suspect that this is a bug related to macOS, but I would like to confirm, or at least have a second pair of eyes take a look.
```d
import std.algorithm : remove;
import std.conv : to;
import core.thread : Thread;
import std.socket : InternetAddress, Socket, SocketException, SocketSet,
    TcpSocket, SocketShutdown;
import core.time : Duration, dur;
import std.stdio : writeln, writefln;
import std.parallelism : task, TaskPool;

void main(string[] args)
{
    ushort port;
    if (args.length >= 2)
        port = to!ushort(args[1]);
    else
        port = 4447;

    auto listener = new TcpSocket();
    assert(listener.isAlive);
    listener.blocking = false;
    listener.bind(new InternetAddress(port));
    listener.listen(100);
    writefln("Listening on port %d.", port);

    auto taskPool = new TaskPool(8);

    new Thread({
        auto listeningSet = new SocketSet();
        while (true)
        {
            listeningSet.add(listener);
            if (Socket.select(listeningSet, null, null, dur!"nsecs"(150)) > 0)
            {
                if (listeningSet.isSet(listener)) // connection request
                {
                    Socket socket = null;
                    scope (failure)
                    {
                        writefln("Error accepting");
                        if (socket)
                            socket.close();
                    }
                    socket = listener.accept();
                    assert(socket.isAlive);
                    assert(listener.isAlive);
                    //writefln("Connection from %s established.", socket.remoteAddress().toString());
                    auto task = task!handle_socket(socket);
                    taskPool.put(task);
                }
            }
            listeningSet.reset();
        }
    }).start();
}

void handle_socket(Socket socket)
{
    auto socketSet = new SocketSet();
    while (true)
    {
        socketSet.add(socket);
        if (Socket.select(socketSet, null, null, dur!"nsecs"(150)) > 0)
        {
            char[1024] buf;
            auto datLength = socket.receive(buf[]);
            if (datLength == Socket.ERROR)
                writeln("Connection error.");
            else if (datLength != 0)
            {
                //writefln("Received %d bytes from %s: \"%s\"", datLength, socket.remoteAddress().toString(), buf[0 .. datLength]);
                //writefln("Writing response");
                socket.send("HTTP/1.1 200 OK
Server: dland:v2.076.1
Date: Tue, 11 Nov 2017 15:56:02 GMT
Content-Type: text/plain; charset=UTF-8
Content-Length: 32

Hello World!");
            }
            // release socket resources now
            socket.shutdown(SocketShutdown.BOTH);
            socket.close();
            break;
        }
        socketSet.reset();
    }
}
```
Your help in understanding this matter is extremely helpful. Regards
Re: ESR on post-C landscape
On Tuesday, 14 November 2017 at 04:31:43 UTC, Laeeth Isharc wrote: He mentions D, a bit dismissively. http://esr.ibiblio.org/?p=7724=1#comment-1912717 Eh, he parrots decade-old anti-D talking points about non-technical, organizational issues and doesn't say anything about the language itself; who knows if he's even tried it. As for the rest, the usual bunk from him: a fair amount of random theorizing only to reach conclusions that many others reached years ago. C has serious problems and more memory-safe languages are aiming to replace it, while C++ doesn't have a chance for the same reason it took off: it bakes in all of C's problems and adds more on top. He's basically just jumping on the same bandwagon that a lot of people are already on, as it starts to pick up speed. Good for him that he sees it picking up momentum and has jumped in instead of being left behind clinging to the old tech, but no big deal if he didn't.
Re: ESR on post-C landscape
On Tuesday, 14 November 2017 at 06:32:55 UTC, lobo wrote: And I fixed it all right – took me two weeks of struggle. After which I swore a mighty oath never to go near C++ again. ...[snip]" Reminds me of the last time I touched C++. A friend wanted help with the Unreal Engine. While I was skeptical, the actual headers and code I was going into were... really straightforward: #ifdefs to encapsulate and control whether something was used, and simple C syntax with no overrides or special code otherwise. But it was ugly... it was verbose... it was still hard to find my way around. And I still don't want to ever touch C++ if I can avoid it.
Re: ESR on post-C landscape
On Tuesday, 14 November 2017 at 11:55:17 UTC, codephantom wrote: The reason he can dismiss D, so easily, is because of his starting premise that C is flawed. As soon as you begin with that premise, you justify searching for C's replacement, which makes it difficult to envision something like D. Well, in another thread he talked about the Tango split, so not sure where he is coming from. That's why we got C++, instead of D. Because the starting point for C++, was the idea that C was flawed. No, the starting point for C++ was that Simula is better for a specific kind of modelling than C. C is not flawed. It doesn't need a new language to replace it. It is flawed... ESR got that right, not sure how anyone can disagree. The only thing C has going for it is that CPU designs have been adapted to C for decades. But that is changing. C no longer models the hardware in a reasonable manner. If that was the starting point for Go and Rust, then it is ill conceived. It wasn't really. The starting point for Go was just as much a language used to implement Plan 9. Don't know about Rust, but it looks like an ML spinoff. One should also not make the same error, by starting with the premise that we need a simpler language to replace the complexity of the C++ language. Why not? Much of the evolved complexity of C++ can be removed by streamlining. If that was the starting point for Go and Rust, then it is ill conceived. It was the starting point for D... What we need, is a language that provides you with the flexibility to model your solution to a problem, *as you see fit*. If that were my starting point, then it's unlikely I'd end up designing Go or Rust. Only something like D can result from that starting point. Or C++, or ML, or BETA, or Scala, or etc etc... Because then, it's unlikely he would get away with being so dismissive of D. If he is dismissive of C++ and Rust, then he most likely will remain dismissive of D as well?
Re: string version of array
On Tuesday, 14 November 2017 at 14:00:54 UTC, Dr. Assembly wrote: On Tuesday, 14 November 2017 at 08:21:59 UTC, Tony wrote: On Tuesday, 14 November 2017 at 07:56:06 UTC, rikki cattermole wrote: Thanks. That flipped function calling syntax definitely takes some getting used to. If you consider it as a property, the var.propName syntax makes a lot of sense. To give it a name, I suppose it's this: UFCS: https://dlang.org/spec/function.html#pseudo-member
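For illustration, a minimal sketch of the rewrite UFCS performs with the earlier .text example (the array contents are illustrative):

```d
import std.conv : text;

void main()
{
    int[] data = [1, 2, 3];

    // UFCS: the compiler rewrites data.text as text(data),
    // so a free function can be called with member syntax.
    assert(data.text == text(data));
    assert(data.text == "[1, 2, 3]");
}
```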
Re: string version of array
On Tuesday, 14 November 2017 at 08:21:59 UTC, Tony wrote: On Tuesday, 14 November 2017 at 07:56:06 UTC, rikki cattermole wrote: On 14/11/2017 7:54 AM, Tony wrote: Is there an easy way to get the string representation of an array, as would be printed by writeln(), but captured in a string? struct Foo { int x; } void main() { Foo[] data = [Foo(1), Foo(2), Foo(3)]; import std.conv : text; import std.stdio; writeln(data.text); } --- [Foo(1), Foo(2), Foo(3)] Thanks. That flipped function calling syntax definitely takes some getting used to. If you consider it as a property, the var.propName syntax makes a lot of sense.
Re: ESR on post-C landscape
On Tuesday, 14 November 2017 at 04:31:43 UTC, Laeeth Isharc wrote: He mentions D, a bit dismissively. http://esr.ibiblio.org/?p=7724=1#comment-1912717 The reason he can dismiss D, so easily, is because of his starting premise that C is flawed. As soon as you begin with that premise, you justify searching for C's replacement, which makes it difficult to envision something like D. That's why we got C++, instead of D. Because the starting point for C++, was the idea that C was flawed. C is not flawed. It doesn't need a new language to replace it. If that was the starting point for Go and Rust, then it is ill conceived. One should also not make the same error, by starting with the premise that we need a simpler language to replace the complexity of the C++ language. If that was the starting point for Go and Rust, then it is ill conceived. What we need, is a language that provides you with the flexibility to model your solution to a problem, *as you see fit*. If that were my starting point, then it's unlikely I'd end up designing Go or Rust. Only something like D can result from that starting point. I'd like Eric to go write a new article, with that being the starting point. Because then, it's unlikely he would get away with being so dismissive of D.
Re: ESR on post-C landscape
On Tuesday, 14 November 2017 at 06:32:55 UTC, lobo wrote: "[snip]...Then came the day we discovered that a person we incautiously gave commit privileges to had fucked up the games’s AI core. It became apparent that I was the only dev on the team not too frightened of that code to go in. And I fixed it all right – took me two weeks of struggle. After which I swore a mighty oath never to go near C++ again. ...[snip]" Either no one manages SW in his team, so that this "bad" dev could run off and build a monster architecture, which would take weeks, or this guy has no idea how to revert a commit. ESR got famous for his cathedral vs bazaar piece, which IMO was basically just a not very insightful allegory for waterfall vs evolutionary development models, but since many software developers don't know the basics of software development he managed to become infamous for it… But I think embracing emergence has hurt open source projects more than it has helped them. D bears signs of too much emergence too, and is still trying to correct those «random moves» with DIPs. ESR states: «C is flawed, but it does have one immensely valuable property that C++ didn’t keep – if you can mentally model the hardware it’s running on, you can easily see all the way down. If C++ had actually eliminated C’s flaws (that is, been type-safe and memory-safe) giving away that transparency might be a trade worth making. As it is, nope.» I don't think this is true; you can reduce C++ down to the level where it is just like C. If he cannot mentally model the hardware in C++, that basically just means he has never tried to get there… I also think he is in denial if he does not see that C++ is taking over C. Starting a big project in C today sounds like a very bad idea to me. Actually, one could say that one of the weaknesses of C++ is that it is limited by a relatively direct mapping to the underlying hardware, and therefore makes some types of optimization and convenient programming harder. *shrug*
Re: minElement on array of const objects
On Monday, 13 November 2017 at 14:28:27 UTC, vit wrote: On Monday, 13 November 2017 at 12:15:26 UTC, Nathan S. wrote: [...] Is Unqual necessary here?: https://github.com/dlang/phobos/blob/master/std/algorithm/searching.d#L1284 https://github.com/dlang/phobos/blob/master/std/algorithm/searching.d#L1340 And here https://github.com/dlang/phobos/blob/master/std/algorithm/searching.d#L1301 https://github.com/dlang/phobos/blob/master/std/algorithm/searching.d#L1355 is something like RebindableOrUnqual necessary instead of Unqual:
```
template RebindableOrUnqual(T)
{
    static if (is(T == class) || is(T == interface)
            || isDynamicArray!T || isAssociativeArray!T)
        alias RebindableOrUnqual = Rebindable!T;
    else
        alias RebindableOrUnqual = Unqual!T;
}
```
Thanks, I filed https://issues.dlang.org/show_bug.cgi?id=17982 to track the issue.
Re: string version of array
On Tuesday, 14 November 2017 at 07:56:06 UTC, rikki cattermole wrote: On 14/11/2017 7:54 AM, Tony wrote: Is there an easy way to get the string representation of an array, as would be printed by writeln(), but captured in a string? struct Foo { int x; } void main() { Foo[] data = [Foo(1), Foo(2), Foo(3)]; import std.conv : text; import std.stdio; writeln(data.text); } --- [Foo(1), Foo(2), Foo(3)] Thanks. That flipped function calling syntax definitely takes some getting used to.
Re: string version of array
On Tuesday, 14 November 2017 at 07:56:06 UTC, rikki cattermole wrote: On 14/11/2017 7:54 AM, Tony wrote: Is there an easy way to get the string representation of an array, as would be printed by writeln(), but captured in a string? struct Foo { int x; } void main() { Foo[] data = [Foo(1), Foo(2), Foo(3)]; import std.conv : text; import std.stdio; writeln(data.text); } --- [Foo(1), Foo(2), Foo(3)] Why not import std.conv : to; writeln(data.to!string); ?
Re: string version of array
On 14/11/2017 8:16 AM, Andrea Fontana wrote: On Tuesday, 14 November 2017 at 07:56:06 UTC, rikki cattermole wrote: On 14/11/2017 7:54 AM, Tony wrote: Is there an easy way to get the string representation of an array, as would be printed by writeln(), but captured in a string? struct Foo { int x; } void main() { Foo[] data = [Foo(1), Foo(2), Foo(3)]; import std.conv : text; import std.stdio; writeln(data.text); } --- [Foo(1), Foo(2), Foo(3)] Why not import std.conv : to; writeln(data.to!string); ? .text is essentially shorthand, that's all. I use it as it is more descriptive of my intention.
Re: string version of array
On 14/11/2017 7:54 AM, Tony wrote: Is there an easy way to get the string representation of an array, as would be printed by writeln(), but captured in a string?
```
struct Foo
{
    int x;
}

void main()
{
    Foo[] data = [Foo(1), Foo(2), Foo(3)];
    import std.conv : text;
    import std.stdio;
    writeln(data.text);
}
```
---
[Foo(1), Foo(2), Foo(3)]