Error: template cannot deduce function from argument types.
Hi, I've been trying to reduce a bug in the containers (issue 8824). From the example below it seems the dup method is passing the constructor an array of dchars and the template is failing. Is this a compiler bug, or something else?

```d
import std.range, std.traits;

struct Array2(T)
{
    private T[] _payload;

    size_t insertAt(R)(size_t index, R rng)
        if (isInputRange!R && isImplicitlyConvertible!(ElementType!R, T))
    {
        return 0;
    }

    this(U)(U[] values...)
        if (isImplicitlyConvertible!(U, T))
    {
        insertAt(0, values);
    }

    @property Array2 dup()
    {
        return Array2(_payload);
    }
}

unittest
{
    Array2!int a;  // passes
    Array2!char b; // fails
}
```
Re: What hashing algorithm is used for the D implementation of associative arrays?
On Saturday, 9 August 2014 at 09:33:12 UTC, Gary Willoughby wrote: What hashing algorithm is used for the D implementation of associative arrays? Where in the D source does the AA code live?

https://github.com/D-Programming-Language/druntime/blob/master/src/rt/aaA.d

I think it uses the object's generic TypeInfo getHash function - see line 170.
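For illustration, the per-key hash the AA runtime obtains through TypeInfo can be reproduced directly in user code (a minimal sketch, not taken from aaA.d itself):

```d
import std.stdio;

void main()
{
    string key = "hello";

    // The AA implementation asks the key's TypeInfo for its hash,
    // which is roughly equivalent to this call:
    size_t h = typeid(key).getHash(&key);
    writeln(h);
}
```

Because getHash is a TypeInfo virtual, each type (including user-defined types with toHash) supplies its own hashing, and the AA code stays fully generic.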
Building phobos documentation
I'm having some trouble building the Phobos documentation locally on Win32. I've been referring to this guide: http://wiki.dlang.org/Building_DMD#Building_the_Docs

I don't want to pull it from github, and I don't really need to build the tools either. My make command is:

```
make -f win32.mak DMD=..\dmd\src\dmd\src html
```

I keep getting this error:

```
don't know how to make '../dlang.org/std.ddoc'
```
Re: Is there any way to differentiate between a type and an alias?
On 25/05/2014 12:04, Rene Zwanenburg wrote: Given

```d
alias GLenum = uint;
void glSomeFunction(GLenum, uint);
```

is there some way to differentiate between GLenum and uint when using ParameterTypeTuple!glSomeFunction? I'm writing a function which shows the arguments a GL function was called with when an error occurs. The GLenum needs to be printed as a stringified version of the constant's name, while the uint is just a uint.

Apparently this is deprecated, even though alias cannot replicate it:

```d
typedef uint GLenum;
writeln(GLenum.stringof);
```
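Since typedef is deprecated, one common workaround (a sketch of my own, not from this thread) is a thin wrapper struct: it is a genuinely distinct type, so introspection can tell it apart from uint, while alias this keeps it usable wherever a uint is expected:

```d
import std.stdio;

struct GLenum
{
    uint value;
    alias value this; // implicit conversion *to* uint where needed
}

// The wrapper survives in the parameter list as its own type.
void glSomeFunction(GLenum e, uint x) { }

void main()
{
    writeln(GLenum.stringof); // prints the wrapper's own name
    glSomeFunction(GLenum(0x0DE1), 42);
}
```

std.typecons.Typedef provides a ready-made version of this pattern; either way, ParameterTypeTuple would then see GLenum and uint as different types.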
Array!T and find are slow
While benchmarking my program, I've found that std.algorithm.find is very slow on Array!T, due to the fact that it iterates over a range instead of a plain array. I've written some search functions which are many times faster; is it worth making a pull request? http://dpaste.dzfl.pl/63b54aa27f35#
Re: Array!T and find are slow
On Wednesday, 14 May 2014 at 14:54:57 UTC, David Nadlinger wrote: Could you post a short benchmark snippet explicitly showing the problem? Benchmark found here: http://dpaste.dzfl.pl/0058fc8341830
Re: Array!T and find are slow
On Wednesday, 14 May 2014 at 17:57:30 UTC, monarch_dodra wrote: BTW, this is a more general issue: given a generic algorithm std.foo, how can I write my own (better optimized) object.foo, and make sure *that* is called instead?

I initially filed the issue for retro, while indeed mentioning that find was also open to the same improvement: https://issues.dlang.org/show_bug.cgi?id=12583

This spawned the thread: http://forum.dlang.org/thread/op.xeuot6g2eav7ka@stevens-macbook-pro-2.local

Unfortunately, nothing came of it. It's an interesting problem to solve. Would having a specialized container range, which has search, removal, etc. primitives, work better?
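One mechanism that partly answers the "object.foo vs std.foo" question: with dot-call (UFCS) syntax, a member function is preferred over a free function, so a container or its range can supply an optimized member and r.find(x) will pick it up. A minimal sketch with hypothetical names (MyArray is mine, not from the thread):

```d
import std.stdio;

struct MyArray
{
    int[] payload;

    // Optimized search over the raw array. With dot-call syntax,
    // this member shadows std.algorithm.find for MyArray.
    int[] find(int needle)
    {
        foreach (i, v; payload)
            if (v == needle)
                return payload[i .. $];
        return payload[$ .. $]; // empty slice: not found
    }
}

void main()
{
    auto a = MyArray([1, 2, 3, 4]);
    writeln(a.find(3)); // calls the member, not std.algorithm.find
}
```

The catch, as the thread notes, is that this only helps call sites using the dot syntax on the container itself; generic code that takes an arbitrary range and calls std.algorithm.find internally still goes through the slow path.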
Re: Socket server + thread: cpu usage
On Tuesday, 29 April 2014 at 17:44:21 UTC, Tim wrote:
On Tuesday, 29 April 2014 at 17:35:08 UTC, Tim wrote:
On Tuesday, 29 April 2014 at 17:19:41 UTC, Adam D. Ruppe wrote:
On Tuesday, 29 April 2014 at 17:16:33 UTC, Tim wrote: Is there anything I'm doing wrong?

You should be using a blocking socket. With them, the operating system will put your thread on hold until a new connection comes in. Without them, it will endlessly loop doing absolutely nothing except checking whether a new connection is there yet. Horribly, horribly inefficient.

Alright, this would solve the server CPU usage problem. But what about incoming connections? When I create a new thread and use a non-blocking socket I have exactly the same problem. I can also solve this problem by using blocking sockets, but what happens if I do the following:

```d
while (oMyBlockingSocket.isAlive)
{
    oMyBlockingSocket.receive(...);
}
```

...and then close the connection on the client side? Would I never receive anything, with receive() never coming back, or would this throw an exception or similar?

Sorry... I totally forgot... I can use SocketSets as I already asked: http://forum.dlang.org/thread/ogghezngzrvvoqaod...@forum.dlang.org#post-kivp3e:24jif:241:40digitalmars.com ...but I thought it was also possible using non-blocking sockets with Thread.yield().

You want to add something like this to your while loop:

```d
Thread.sleep(dur!"msecs"(20));
```

That will give back some time to the CPU.
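To block until something is actually readable instead of sleeping and polling, std.socket's SocketSet with Socket.select can be used. A minimal accept-loop sketch (my own outline, not code from the thread):

```d
import std.socket;

void serve(Socket listener)
{
    auto readSet = new SocketSet();

    while (true)
    {
        // select() consumes the set, so rebuild it each iteration.
        readSet.reset();
        readSet.add(listener);

        // Blocks here until at least one socket is readable;
        // no CPU is burned while waiting.
        Socket.select(readSet, null, null);

        if (readSet.isSet(listener))
        {
            auto client = listener.accept();
            // hand client off to a worker, add it to the set, etc.
        }
    }
}
```

Client sockets can be added to the same set, so one thread can wait on the listener and all live connections at once.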
Can I circumvent nothrow?
So I have this procedure:

```d
extern (C) void signal_proc(int sn) @system nothrow
```

which can call this:

```d
void shutdown_system() @system nothrow
```

The problem I have is that inside shutdown_system() I have code that can't possibly be nothrow, because there are a lot of subsystems to shut down. What I've done for now is just strip the nothrow attribute off the C function in core.stdc.signal. It feels very hackish, but it works.
Re: Can I circumvent nothrow?
On Sunday, 27 April 2014 at 01:53:15 UTC, Ali Çehreli wrote: On 04/26/2014 06:39 PM, Damian Day wrote: The problem I have is that inside shutdown_system() I have code that can't possibly be nothrow, because there are a lot of subsystems to shut down.

You can wrap the contents of shutdown_system() in a try-catch block and swallow all exceptions that way. Ali

Oh right, didn't realize it would be that simple. Thank you!!
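For concreteness, the suggested wrapping looks roughly like this (a sketch; the subsystem calls are placeholders):

```d
extern (C) void signal_proc(int sn) @system nothrow
{
    shutdown_system();
}

void shutdown_system() @system nothrow
{
    try
    {
        // Potentially throwing shutdown work goes here,
        // e.g. closing files, stopping subsystems...
    }
    catch (Exception e)
    {
        // Swallow: there is nothing sensible to do with an
        // exception inside a signal handler anyway.
    }
}
```

Catching Exception is enough to satisfy the compiler: nothrow only promises that no Exception escapes, while Errors (assertion failures, etc.) may still propagate.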