Re: Issues getting DCD to work on Windows
On Friday, 17 June 2016 at 16:58:42 UTC, OpenJelly wrote: Trying to set up an IDE on Windows 7 with code completion but my issues keep coming back to DCD. The tests failed the one time I could get them past the point where it waits for another instance of DCD to close. The path is added to my PATH variable, I've rebuilt it from source with the .bat file to install it on Windows and rebuilt it with dub as per the GitHub instructions... I even changed one of its source files to import winsock2 instead of winsock because the build output kept warning me about a deprecation... probably made it worse as I've now got prompts telling me about Socket OS Exceptions. Anyone got a clue what I could be doing wrong and how I can fix it? I compiled it with dub and started using it with VS Code just fine. Now and again a Socket exception pops up, but it works. You say you're running the tests? That means you compiled it, right? dub build --build=release You might want to add --arch=x86_64 to that as well if you are running 64-bit Windows.
Re: ARSD PNG memory usage
On Saturday, 18 June 2016 at 01:57:49 UTC, Joerg Joergonson wrote: Ok. Also, maybe the GC hasn't freed some of those temporaries yet. The way GC works in general is it allows allocations to just continue until it considers itself under memory pressure. Then, it tries to do a collection. Since collections are expensive, it puts them off as long as it can and tries to do them as infrequently as reasonable. (some GCs do smaller collections to spread the cost out though, so the details always differ based on implementation) So, you'd normally see it go up to some threshold then stabilize there, even if it is doing a lot of little garbage allocations. However, once the initialization is done here, it shouldn't be allocating any more. The event loop itself doesn't when all is running normally.
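To make the "allocate until under pressure, then collect" behaviour concrete, here is a minimal sketch using `core.memory.GC` (the `stats`/`collect` API is in current druntime; the allocation counts are arbitrary, just enough to show the heap growing under small garbage allocations rather than being collected after each one):

```d
import core.memory : GC;
import std.stdio;

void main() {
    auto before = GC.stats();
    // Lots of small garbage allocations: the GC lets the heap grow
    // toward a threshold instead of collecting after every one.
    foreach (i; 0 .. 100_000)
        cast(void) new ubyte[](64);
    auto after = GC.stats();
    writeln("used before: ", before.usedSize);
    writeln("used after:  ", after.usedSize);
    GC.collect(); // collections are otherwise deferred as long as possible
}
```

Watching `usedSize` across runs shows exactly the "rise to a threshold, then stabilize" shape described above.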
Templated class defaults and inheritance
I have something like class X; class subfoo : X; class subbaz : X; class foo : X { subfoo bar; } class baz : X; which I have modified so that class subbaz : subfoo; class baz : foo; (essentially baz is now a derivation of foo, while before it was of X). The problem is that subbaz uses subfoo bar; when it also needs to use a derived type (so it is a full derivation of foo and subfoo). To accomplish that I parameterized foo so I can write class foo(T) : X { T bar; } and I can now do class baz : foo!subbaz; There are two problems with this though: 1. How can I create a default foo!(T = subfoo) so I can just instantiate classes like new foo() and it is the same as foo!subfoo()? I tried creating a class like class foo : foo!subfoo; but I get a collision. I guess an alias will work here just fine though? (just thought of it) 2. The real problem is that baz isn't really a true derivation of foo like it should be. foo!subfoo and foo!subbaz are different types. I want the compiler to realize that foo!subbaz (and hence baz) is really a derived foo!subfoo and ultimately X. I'm pretty sure D can do this, I just haven't figured out how. Thanks.
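A minimal sketch of question 1 (names capitalized here; `FooT` is a made-up name to avoid colliding with the alias): D takes the template parameter list in parentheses in the declaration, a default argument covers the `T = subfoo` case, and an alias gives you the bare name. It also demonstrates why question 2 is harder — different instantiations of the same class template are unrelated types:

```d
class X {}
class SubFoo : X {}
class SubBaz : SubFoo {}

// Declaration syntax is class FooT(T), not class FooT!T.
// T defaults to SubFoo, so FooT!() means FooT!SubFoo.
class FooT(T = SubFoo) : X {
    T bar;
}

// An alias avoids the collision you get from declaring a second class Foo.
alias Foo = FooT!();

void main() {
    Foo f = new Foo;            // same type as new FooT!SubFoo
    auto b = new FooT!SubBaz;
    // FooT!SubBaz is NOT implicitly convertible to FooT!SubFoo: distinct
    // instantiations share only the explicit base class X. Getting that
    // subtyping (question 2) needs a common base class or interface.
    static assert(!is(FooT!SubBaz : FooT!SubFoo));
    static assert(is(FooT!SubBaz : X));
}
```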
Re: ARSD PNG memory usage
On Saturday, 18 June 2016 at 01:20:16 UTC, Joerg Joergonson wrote: Error: undefined identifier 'sleep', did you mean function 'Sleep'? "import core.thread; sleep(10);" It is `Thread.sleep(10.msecs)` or whatever time - `sleep` is a static member of the Thread class. They mention to use PeekMessage and I don't see you doing that, not sure if it would change things though? I am using MsgWaitForMultipleObjectsEx which blocks until something happens. That something can be a timer, input event, other message, or an I/O thing... it doesn't eat CPU unless *something* is happening.
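As a tiny self-contained check of the corrected call (using `MonoTime` just to demonstrate that the sleep actually happens):

```d
import core.thread : Thread;
import core.time : MonoTime, msecs;
import std.stdio;

void main() {
    auto start = MonoTime.currTime;
    Thread.sleep(10.msecs); // static member of Thread; takes a Duration
    auto elapsed = MonoTime.currTime - start;
    writeln(elapsed >= 10.msecs); // true
}
```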
Re: ARSD PNG memory usage
On Saturday, 18 June 2016 at 01:46:32 UTC, Adam D. Ruppe wrote: On Saturday, 18 June 2016 at 01:44:28 UTC, Joerg Joergonson wrote: I simply removed your nextpowerof2 code (so the width and height weren't being enlarged) and saw no memory change. Obviously because they are temporary buffers, I guess? right, the new code free()s them right at scope exit. If this is the case, then maybe there is one odd temporary still hanging around in png? Could be, though the png itself has relatively small overhead, and the opengl texture adds to it still. I'm not sure if video memory is counted by task manager or not... but it could be loading up the whole ogl driver that accounts for some of it. I don't know. Ok. Also, maybe the GC hasn't freed some of those temporaries yet. What's strange is that when the app is run, it seems to do a lot of small allocations around 64kB or something for about 10 seconds (I watch the memory increase in TM), then it stabilizes. Not a big deal, just seems a bit weird (maybe some type of lazy allocation going on). Anyways, I'm much happier now ;) Thanks!
Re: ARSD PNG memory usage
On Saturday, 18 June 2016 at 01:44:28 UTC, Joerg Joergonson wrote: I simply removed your nextpowerof2 code(so the width and height wasn't being enlarged) and saw no memory change). Obviously because they are temporary buffers, I guess? right, the new code free() them right at scope exit. If this is the case, then maybe there is one odd temporary still hanging around in png? Could be, though the png itself has relatively small overhead, and the opengl texture adds to it still. I'm not sure if video memory is counted by task manager or not... but it could be loading up the whole ogl driver that accounts for some of it. I don't know.
Re: ARSD PNG memory usage
On Friday, 17 June 2016 at 14:39:32 UTC, kinke wrote: On Friday, 17 June 2016 at 04:54:27 UTC, Joerg Joergonson wrote: LDC x64 uses about 250MB and 13% cpu. I couldn't check on x86 because of the error phobos2-ldc.lib(gzlib.c.obj) : fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'X86' not sure what that means with gzlib.c.obj. Must be another bug in the ldc alpha ;/ It looks like you're trying to link 32-bit objects to a 64-bit Phobos. The only pre-built LDC for Windows capable of linking both 32-bit and 64-bit code is the multilib CI release, see https://github.com/ldc-developers/ldc/releases/tag/LDC-Win64-master. Yes, it looks that way, but I believe that's not the case (I did check when this error first came up). I'm using the Phobos libs from ldc that are x86. I could be mistaken, but phobos2-ldc.lib(gzlib.c.obj) suggests that the problem isn't with the entire Phobos lib but with gzlib.c.obj, and that it is the only object marked incorrectly; since the error doesn't appear for all the other imports, it seems something got marked wrong in that specific case?
Re: ARSD PNG memory usage
On Saturday, 18 June 2016 at 00:56:57 UTC, Joerg Joergonson wrote: On Friday, 17 June 2016 at 14:48:22 UTC, Adam D. Ruppe wrote: [...] Yes, same here! Great! It runs around 122MB in x86 and 107MB x64. Much better! [...] Yeah, strange but good catch! It now works in x64! I modified it to to!wstring(title).dup simply to have the same title and classname. [...] I have the opposite on memory but not a big deal. [...] I will investigate this soon and report back anything. It probably is something straightforward. [...] I found this on non-power-of-2 textures: https://www.opengl.org/wiki/NPOT_Texture https://www.opengl.org/registry/specs/ARB/texture_non_power_of_two.txt It seems like it's probably a quick and easy add-on, and you already have the padding code; it could easily be optional (set a flag or pass a bool or whatever). It could definitely save some serious memory for large textures. E.g., a 3000x3000x4 texture takes about 36MB, or 2^25.1 bytes. Since this has to be rounded up to 2^26 = 67MB, we have almost doubled the memory used. Hence, allowing non-power-of-two textures would probably reduce the memory footprint of my code to near 50MB (around 40MB being the minimum using uncompressed textures). I might try to get a working version of that at some point. Going to deal with the cpu thing now though. Thanks again. Never mind about this. I wasn't keeping in mind that these textures are ultimately going to end up in the video card's memory. I simply removed your nextpowerof2 code (so the width and height weren't being enlarged) and saw no memory change. Obviously because they are temporary buffers, I guess? If this is the case, then maybe there is one odd temporary still hanging around in png?
Re: ARSD PNG memory usage
The CPU usage is consistently very low on my computer. I still don't know what could be causing it for you, but maybe it is the temporary garbage... let us know if the new patches make a difference there. Ok, I tried the break-at-random method and I always ended up in system code with no stack trace... seems it was an alternate thread (maybe GC?). I did a sampling profile and got this:

Function Name          Inclusive  Exclusive  Inclusive %  Exclusive %
_DispatchMessageW@4       10,361          5        88.32         0.04
[nvoglv32.dll]             7,874        745        67.12         6.35
_GetExitCodeThread@8       5,745      5,745        48.97        48.97
_SwitchToThread@0          2,166      2,166        18.46        18.46

So possibly it is simply my system and graphics card. For some reason NVidia might be using a lot of cpu here for no apparent reason? DispatchMessage is still taking quite a bit of that though? Seems like someone else has a similar issue: https://devtalk.nvidia.com/default/topic/832506/opengl/nvoglv32-consuming-a-ton-of-cpu/ https://github.com/mpv-player/mpv/issues/152 BTW, trying sleep in the MSG loop: Error: undefined identifier 'sleep', did you mean function 'Sleep'? "import core.thread; sleep(10);" ;) Adding a Sleep(10); to the loop dropped the cpu usage down to 0-1% cpu! http://stackoverflow.com/questions/33948837/win32-application-with-high-cpu-usage/33948865 Not sure if that's the best approach though, but it does work. They mention using PeekMessage and I don't see you doing that; not sure if it would change things though?
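For reference, a hedged sketch of what a PeekMessage-based pump with the Sleep workaround might look like (Windows-only, guarded with version(Windows); `pumpMessages` is a made-up name, and this is simplified illustration, not simpledisplay's actual loop):

```d
import core.thread : Thread;
import core.time : msecs;

version (Windows) {
    import core.sys.windows.windows;

    void pumpMessages() {
        MSG msg;
        for (;;) {
            // PeekMessage never blocks: drain everything pending...
            while (PeekMessageW(&msg, null, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT) return;
                TranslateMessage(&msg);
                DispatchMessageW(&msg);
            }
            // ...then sleep briefly instead of spinning at 100% CPU.
            Thread.sleep(10.msecs);
        }
    }
}

void main() {} // the pump would be driven from the real event loop
```

Note that a blocking wait such as MsgWaitForMultipleObjectsEx (which the library reportedly uses) should already avoid the spin; the Sleep trick only papers over a loop that polls without blocking.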
Re: ARSD PNG memory usage
On Friday, 17 June 2016 at 14:48:22 UTC, Adam D. Ruppe wrote: On Friday, 17 June 2016 at 04:54:27 UTC, Joerg Joergonson wrote: ok, then it's somewhere in TrueColorImage or the loading of the png. So, opengltexture actually does reallocate if the size isn't right for the texture... and your image was one of those sizes. The texture pixel size needs to be a power of two, so 3000 gets rounded up to 4096, which means an internal allocation. But it can be a temporary one! So ketmar tackled png.d's loaders' temporaries and I took care of gamehelper.d's... And the test program went down to about 1/3 of its memory usage. Try grabbing the new ones from github now and see if it works for you too. Yes, same here! Great! It runs around 122MB in x86 and 107MB x64. Much better! Well, it works on LDC x64 again ;) This seems like an issue with DMD x64? I was thinking maybe it has to do with the layout of the struct or something, but I'm not sure. I have a fix for this too, though I don't understand why it works. I just .dup'd the string literal before passing it to Windows. I think dmd is putting the literal in a bad place for these functions (they do bit tests to see if it is a pointer or an atom, so maybe it is at an address where the wrong bits are set). Yeah, strange but good catch! It now works in x64! I modified it to to!wstring(title).dup simply to have the same title and classname. In any case, the .dup seems to fix it, so all should work on 32 or 64 bit now. In my tests, now that the big temporary arrays are manually freed, the memory usage is actually slightly lower on 32 bit, but it isn't bad on 64 bit either. I have the opposite on memory but not a big deal. The CPU usage is consistently very low on my computer. I still don't know what could be causing it for you, but maybe it is the temporary garbage... let us know if the new patches make a difference there. I will investigate this soon and report back anything. It probably is something straightforward.
Anyways, we'll figure it all out at some point ;) I'm really liking your lib by the way. It's let me build a gui and get a lot done and just "work". Not sure if it will work on X11 with just a recompile, but I hope ;) It often will! If you aren't using any of the native event handler functions or any of the impl.* members, most things just work (the exception being the Windows hotkey functions, but those are marked Windows anyway!). The basic opengl stuff is all done for both platforms. Advanced opengl isn't implemented on Windows yet though (I don't know it; my opengl knowledge stops in like 1998 with opengl 1.1, so yeah, I depend on people's contributions for that, and someone did Linux for me, but not Windows yet. I think.) I found this on non-power-of-2 textures: https://www.opengl.org/wiki/NPOT_Texture https://www.opengl.org/registry/specs/ARB/texture_non_power_of_two.txt It seems like it's probably a quick and easy add-on, and you already have the padding code; it could easily be optional (set a flag or pass a bool or whatever). It could definitely save some serious memory for large textures. E.g., a 3000x3000x4 texture takes about 36MB, or 2^25.1 bytes. Since this has to be rounded up to 2^26 = 67MB, we have almost doubled the memory used. Hence, allowing non-power-of-two textures would probably reduce the memory footprint of my code to near 50MB (around 40MB being the minimum using uncompressed textures). I might try to get a working version of that at some point. Going to deal with the cpu thing now though. Thanks again.
Re: ARSD PNG memory usage
On Friday, 17 June 2016 at 14:48:22 UTC, Adam D. Ruppe wrote: On Friday, 17 June 2016 at 04:54:27 UTC, Joerg Joergonson wrote: [...] So, opengltexture actually does reallocate if the size isn't right for the texture... and your image was one of those sizes. [...] Cool, I'll check all this out and report back. I'll look into the cpu issue too. Thanks!
Re: Variadic function with parameters all of a specific type
On Friday, 17 June 2016 at 21:20:01 UTC, Timon Gehr wrote: On 17.06.2016 23:00, Nordlöw wrote: I want to create a function that takes a variadic number of arguments all of a specific type, say T, without having to create a GC-allocated heap array. Is there a better way than: f(Args...)(Args args) if (allSameType!(Args, T)) in terms of template bloat? alias T=int; void foo(T[] a...)@nogc{} void bar()@nogc{ foo(1,2,3); } this. the compiler is smart, it is creating a slice of stack memory here. note that you can't just assign `a` to, for example, a global or member: when execution of `foo` ends, `a` will still point to stack memory. if you need to store `a` somewhere, be sure to `.dup` it.
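A runnable sketch of the typesafe variadic plus the escape caveat ketmar mentions (`keep` and `stored` are made-up names for illustration):

```d
__gshared int[] stored;

void foo(int[] a...) @nogc {
    // `a` slices stack memory at the call site: no GC allocation,
    // but it must not escape this function.
}

void keep(int[] a...) {
    stored = a.dup; // copy before storing, as advised above
}

void main() {
    foo(1, 2, 3);   // legal even from a @nogc caller
    keep(4, 5, 6);
    assert(stored == [4, 5, 6]);
}
```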
Re: Variadic function with parameters all of a specific type
On 17.06.2016 23:00, Nordlöw wrote: I want to create a function that takes a variadic number of arguments all of a specific type, say T, without having to create GC-allocated heap array. Is there a better way than: f(Args...)(Args args) if (allSameType!(Args, T); in terms of template bloat? alias T=int; void foo(T[] a...)@nogc{} void bar()@nogc{ foo(1,2,3); }
Re: How to get access to Voldemort / private thingies
On Friday, 17 June 2016 at 19:49:18 UTC, Johan Engelen wrote: Hi all, Is there another way to get access to Voldemort class methods, or private class members, other than using Voldemort data is pretty well protected though. Because unlike protection attributes, modularizing stuff in functions actually means something. I mean, D doesn't exactly make it easy. You can't normally define a function in a different file from the one it's declared in. But if you use extern(C) to avoid mangling getObject, you can pretty much provide interface.d and secrets.o, and without analyzing the binary machine code there's no way to tell the size or nature of what getObject returns, aside from the fact that it claims to have pointers to functions that match the interface. interface.d: interface Object { ... }; extern(C) Object getObject(); secrets.d: class Vold : Object { ... }; extern(C) Object getObject() { ... return new Vold(...); ... } secrets.o: Because of the guarantee that you can link to opaque .o files, there's no general way to introspect into just what a function does, because that function might not have any source at all. (I suppose you could instrument "new" itself in the raw runtime, to at least get the size of it. Assuming it wasn't malloc'd, or static...)
Variadic function with parameters all of a specific type
I want to create a function that takes a variadic number of arguments all of a specific type, say T, without having to create a GC-allocated heap array. Is there a better way than: f(Args...)(Args args) if (allSameType!(Args, T)) in terms of template bloat?
Re: How to get access to Voldemort / private thingies
On Friday, 17 June 2016 at 20:12:53 UTC, cy wrote: writeln("see ",wow," for any equipment you need."); Oh, and as you can see it's important to automate that, so you don't make any mistakes while copying.
Re: How to get access to Voldemort / private thingies
On Friday, 17 June 2016 at 19:49:18 UTC, Johan Engelen wrote: Hi all, Is there another way to get access to Voldemort class methods, or private class members, other than using "pragma(mangle, ...)" on user symbols? Well, I'm sure you know that's a horrible idea. Anyway, a trick I use in C++ is to copy and paste the class's source into my own file, change everything to public, then just cast to my hijacking class type. So, in D...
```
import std.stdio;

final class SeriousBusiness {
private:
    int soconfidential;
    char wow;

    void thisIsTotallyProtectedHonest() {
        // import underlying.implementation;
        // import dependency.hell;
        // import kitchen.sink;
        writeln("We can totally put sensitive data in here");
    }

    this(int s, char w) {
        this.soconfidential = s;
        this.wow = w;
    }
}

final class ProtectionAttributesSuck {
    int soconfidential;
    char wow;

    void thisIsTotallyProtectedHonest() {
        writeln("We can totally put sensitive data in here");
        writeln("see ", wow, " for any equipment you need.");
    }
}

void main() {
    SeriousBusiness srs = new SeriousBusiness(42, 'Q');
    ProtectionAttributesSuck* hack = cast(ProtectionAttributesSuck*)&srs;
    writeln("the answer is ", hack.soconfidential);
    hack.thisIsTotallyProtectedHonest();
}
```
How to get access to Voldemort / private thingies
Hi all, Is there another way to get access to Voldemort class methods, or private class members, other than using "pragma(mangle, ...)" on user symbols? Example code: In the library, and _should not_ be changed:
```
Object getObject() {
    class Vold : Object {
        int store;
        this(int i) { store = i; }
    }
    return new Vold(2);
}

class Thread {
private:
    static struct AAA { int i; }
    __gshared AAA* sm_cbeg;
}
```
And then the code seeking access:
```
pragma(mangle, "mangled name of Vold's constructor")
private extern(C) void defaultTraceInfoCtor(Object) @nogc;

struct A { int i; }
__gshared extern {
    pragma(mangle, "the mangled symbol name")
    A* sm_cbeg;
}
```
Go mad on the "code seeking access" :-) The problem is that the types of `defaultTraceInfoCtor` and `sm_cbeg` are incorrect. If I can get the types to match (internally in the compiler), I'm happy. Thanks, Johan
Re: std.parallelism.taskPool daemon threads not terminating
On Friday, 17 June 2016 at 14:29:57 UTC, Russel Winder wrote: A priori, assuming I am not missing anything, this behaviour seems entirely reasonable. I agree that when using non-daemon threads (and I personally think that should be the default) it is. But I cannot bring that into accord with the documentation of taskPool (the property)[1]: Returns a lazily initialized global instantiation of TaskPool. [...] The worker threads in this pool are daemon threads, meaning that it is not necessary to call TaskPool.stop or TaskPool.finish before terminating the main thread. A daemon thread is automatically terminated when all non-daemon threads have terminated. A non-daemon thread will prevent a program from terminating as long as it has not terminated. The above - while not explicitly stating that daemon threads do not prevent a program from terminating - strongly suggests it to me (and if they do indeed not, then I would ask how daemon threads are different from non-daemon threads in the context of TaskPool, since I'm unable to make that out from the documentation). The task is an infinite loop so it never terminates. This means the threadpool does not stop working, which means the program does not terminate. Yes, that example is intentionally chosen that way to make my point. I initially discovered this while putting a synchronous read of STDIN in a loop, but that example might have diverted attention to something other than I intended. I suspect that daemon may not mean what you think it means. At least not with respect to the threadpool. I do, too, which is why I asked here: after having read the relevant documentation several times, with significant time in between, I still cannot make out how else to interpret it (and I got no reply in #dlang IRC). [1] https://dlang.org/library/std/parallelism/task_pool.html
Issues getting DCD to work on Windows
Trying to set up an IDE on Windows 7 with code completion but my issues keep coming back to DCD. The tests failed the one time I could get them past the point where it waits for another instance of DCD to close. The path is added to my PATH variable, I've rebuilt it from source with the .bat file to install it on Windows and rebuilt it with dub as per the GitHub instructions... I even changed one of its source files to import winsock2 instead of winsock because the build output kept warning me about a deprecation... probably made it worse as I've now got prompts telling me about Socket OS Exceptions. Anyone got a clue what I could be doing wrong and how I can fix it?
Re: Different struct sizeof between linux and windows
On Friday, June 17, 2016 13:21:04 Vladimir Panteleev via Digitalmars-d-learn wrote: > On Friday, 17 June 2016 at 13:11:35 UTC, Kagamin wrote: > > time_t is 64-bit on windows: > > https://msdn.microsoft.com/en-us/library/1f4c8f33.aspx > > Windows does not have the concept of "time_t". The C runtime in > use does. > > We use the DigitalMars C runtime for the 32-bit model, which is > the default one. The Microsoft one is used for 64-bit and 32-bit > COFF. I'm not sure how the MS C library deals with time_t, > however the time() function (as exported from the library file / > DLL) is the 32-bit version. If I were to guess, the C headers > define a macro which redirects time() calls to the 64-bit version > when appropriate. The D bindings don't copy that behavior. The VS C runtime uses a macro to indicate whether time_t should be treated as 32-bit or 64-bit on 32-bit systems. I thought that the default was 32-bit, but it looks like it's actually 64-bit, with the macro being _USE_32BIT_TIME_T. https://msdn.microsoft.com/en-us/library/1f4c8f33(v=vs.140).aspx I guess that the correct way to handle that would be to make it so that druntime defines it as 64-bit by default and then has a version identifier to change the behavior, but I don't know how that sort of thing has been handled with the Win32 stuff in general. In the case of the stupid unicode-related macros, IIRC, the solution is to just force you to use either the A or W functions explicitly (preferably the W functions) rather than making either of them the default or using a version identifier. That approach really isn't an option here though, since the names don't change but rather the types. - Jonathan M Davis
Re: Different struct sizeof between linux and windows
On Friday, 17 June 2016 at 16:16:48 UTC, Kagamin wrote: On Friday, 17 June 2016 at 13:21:04 UTC, Vladimir Panteleev wrote: Windows does not have the concept of "time_t". The C runtime in use does. The D bindings don't copy that behavior. D defining the C runtime's type differently from the C runtime is what causes this error. If I were to import the time() function from MSVCR*.dll, what size would its return value be?
Re: Different struct sizeof between linux and windows
On Friday, 17 June 2016 at 13:21:04 UTC, Vladimir Panteleev wrote: Windows does not have the concept of "time_t". The C runtime in use does. The D bindings don't copy that behavior. D defining the C runtime's type differently from the C runtime is what causes this error.
Re: ARSD PNG memory usage
On Friday, 17 June 2016 at 04:54:27 UTC, Joerg Joergonson wrote: ok, then it's somewhere in TrueColorImage or the loading of the png. So, opengltexture actually does reallocate if the size isn't right for the texture... and your image was one of those sizes. The texture pixel size needs to be a power of two, so 3000 gets rounded up to 4096, which means an internal allocation. But it can be a temporary one! So ketmar tackled png.d's loaders' temporaries and I took care of gamehelper.d's... And the test program went down to about 1/3 of its memory usage. Try grabbing the new ones from github now and see if it works for you too. Well, it works on LDC x64 again ;) This seems like an issue with DMD x64? I was thinking maybe it has to do with the layout of the struct or something, but I'm not sure. I have a fix for this too, though I don't understand why it works. I just .dup'd the string literal before passing it to Windows. I think dmd is putting the literal in a bad place for these functions (they do bit tests to see if it is a pointer or an atom, so maybe it is at an address where the wrong bits are set). In any case, the .dup seems to fix it, so all should work on 32 or 64 bit now. In my tests, now that the big temporary arrays are manually freed, the memory usage is actually slightly lower on 32 bit, but it isn't bad on 64 bit either. The CPU usage is consistently very low on my computer. I still don't know what could be causing it for you, but maybe it is the temporary garbage... let us know if the new patches make a difference there. Anyways, we'll figure it all out at some point ;) I'm really liking your lib by the way. It's let me build a gui and get a lot done and just "work". Not sure if it will work on X11 with just a recompile, but I hope ;) It often will! If you aren't using any of the native event handler functions or any of the impl.* members, most things just work (the exception being the Windows hotkey functions, but those are marked Windows anyway!).
The basic opengl stuff is all done for both platforms. Advanced opengl isn't implemented on Windows yet though (I don't know it; my opengl knowledge stops in like 1998 with opengl 1.1 so yeah, I depend on people's contributions for that and someone did Linux for me, but not Windows yet. I think.)
Re: ARSD PNG memory usage
On Friday, 17 June 2016 at 04:54:27 UTC, Joerg Joergonson wrote: LDC x64 uses about 250MB and 13% cpu. I couldn't check on x86 because of the error phobos2-ldc.lib(gzlib.c.obj) : fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'X86' not sure what that means with gzlib.c.obj. Must be another bug in the ldc alpha ;/ It looks like you're trying to link 32-bit objects to a 64-bit Phobos. The only pre-built LDC for Windows capable of linking both 32-bit and 64-bit code is the multilib CI release, see https://github.com/ldc-developers/ldc/releases/tag/LDC-Win64-master.
Re: std.parallelism.taskPool daemon threads not terminating
On Fri, 2016-06-17 at 00:14 +0000, Moritz Maxeiner via Digitalmars-d-learn wrote: > So, I am probably overlooking something obvious, but here goes: > According to my understanding of daemon threads and what is > documented here[1], this following program should terminate once > the druntime shuts down, as the thread working on the task is > supposed to be a daemon thread: > > import std.parallelism; > > void main() > { >     taskPool.put(task({ while(true) {} })); > } > > The actual behaviour (with dmd 2.071 and ldc2 1.0.0), however, is > that the program keeps running. A priori, assuming I am not missing anything, this behaviour seems entirely reasonable. The task is an infinite loop so it never terminates. This means the threadpool does not stop working, which means the program does not terminate. > In contrast, this behaves as expected: > > import core.thread; > > void main() > { >     with (new Thread({ while(true) {} })) { >         isDaemon = true; >         start(); >     } > } > > Commenting out setting the isDaemon property will achieve the > same behaviour as the taskPool example. Is this the intended > behaviour of taskPool (because it does have isDaemon set)? I suspect that daemon may not mean what you think it means. At least not with respect to the threadpool. > [1] https://dlang.org/library/std/parallelism/task_pool.html -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
vibe.d - asynchronously wait() for process to exit
std.process.wait() will wait for a child process to exit and return its exit code. How can this be done in Vibe.d, without blocking other fibers and without creating a new thread? In my library I did it like this: https://github.com/CyberShadow/ae/blob/master/sys/process.d (register a SIGCHLD signal handler, which pings the main thread via a socket). Geod24 on IRC suggested signalfd + createFileDescriptorEvent. I think this would work, but isn't it possible to wrap the fd returned by signalfd into a Vibe.d stream and read it directly? I'm just not sure how. I noticed libasync also provides notification for POSIX signals, but I've no idea where to start with using that in a Vibe.d program.
Re: Default initialization of structs?
On Friday, 17 June 2016 at 12:31:33 UTC, David Nadlinger wrote: Structs cannot have a default constructor; .init is required to be a valid state (unless you @disable default construction). Except for nested structs :) They have a default constructor, and their .init is not a valid state: it has a null context pointer.
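A small sketch of that exception: a struct declared inside a function is nested and carries a hidden context pointer, which `.init` leaves null (the struct name `S` and the method are made up for illustration):

```d
void main() {
    int x = 42;
    struct S {
        int get() { return x; } // needs the enclosing function's frame
    }
    static assert(__traits(isNested, S));

    S good;             // plain declaration sets the context pointer
    assert(good.get() == 42);

    S bad = S.init;     // .init has a null context pointer
    // bad.get() would dereference that null context and crash,
    // which is why S.init is not a valid state here.
}
```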
Re: Different struct sizeof between linux and windows
On Friday, 17 June 2016 at 13:11:35 UTC, Kagamin wrote: time_t is 64-bit on windows: https://msdn.microsoft.com/en-us/library/1f4c8f33.aspx Windows does not have the concept of "time_t". The C runtime in use does. We use the DigitalMars C runtime for the 32-bit model, which is the default one. The Microsoft one is used for 64-bit and 32-bit COFF. I'm not sure how the MS C library deals with time_t, however the time() function (as exported from the library file / DLL) is the 32-bit version. If I were to guess, the C headers define a macro which redirects time() calls to the 64-bit version when appropriate. The D bindings don't copy that behavior.
Re: Different struct sizeof between linux and windows
time_t is 64-bit on windows: https://msdn.microsoft.com/en-us/library/1f4c8f33.aspx
Re: Default initialization of structs?
On Friday, 17 June 2016 at 11:10:12 UTC, Gary Willoughby wrote: Thanks, I forgot to mention I'm also doing lots of other stuff in the constructor to private fields too. struct Foo(T) { private int _bar; private void* _baz; this(int bar = 8) { this._bar = bar; this._baz = malloc(this._bar); } } So I have to at least run a constructor. Structs cannot have a default constructor; .init is required to be a valid state (unless you @disable default construction). This is a deliberate language restriction, although you can argue about its value. What you can do as a workaround is to provide a public static factory method while disabling default construction. — David
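A sketch of that workaround, based on the snippet upthread (the fields and malloc body are the original poster's; `create` is a made-up factory name, and a destructor is added here so the malloc'd buffer is released):

```d
import core.stdc.stdlib : malloc, free;

struct Foo(T) {
    private int _bar;
    private void* _baz;

    @disable this(); // forbid default construction: force the factory

    private this(int bar) {
        _bar = bar;
        _baz = malloc(bar);
    }

    // Public static factory method carrying the default argument
    // that a constructor is not allowed to default to.
    static Foo create(int bar = 8) {
        return Foo(bar);
    }

    ~this() { free(_baz); }
}

void main() {
    auto f = Foo!string.create();   // default bar = 8
    auto g = Foo!string.create(16);
    assert(f._bar == 8 && g._bar == 16);
}
```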
Re: Default initialization of structs?
On Friday, 17 June 2016 at 10:50:55 UTC, Gary Willoughby wrote: I have a struct where I need to perform default initialization of some members but the compiler doesn't allow to define a default constructor which allow optional arguments. This is a fairly recent change (2.068->2.069 or 2.070), so if you browse the release notes you may be able to figure out exactly why this is not allowed. -Johan
Re: OpenGL Setup?
I have been using Textadept ( http://foicica.com/textadept/ ) with textadept-d ( https://github.com/Hackerpilot/textadept-d ). I use it mostly on Linux for development, but I've recently spent two or three days on Windows and things worked well enough for me. (Coming from someone who has used Emacs for everything in the last 15+ years, this must mean something :-) ) Hope this helps, LMB On Fri, Jun 17, 2016 at 8:51 AM, OpenJelly via Digitalmars-d-learn <digitalmars-d-learn@puremagic.com> wrote: > On Thursday, 16 June 2016 at 19:52:58 UTC, OpenJelly wrote: > Trying to get VS Code to work with code-d... can't get dcd to work with it. It says it's failed to kill the dcd server when I try to reload it. It wasn't appearing in task manager (but dcd-client was) and manually starting it up didn't make it work in vs code. Trying to restart it in cmd freezes that window and the task refuses to kill. I'm trying to fix it now but I don't even know why it's not working... > The arsd stuff just gives me a thick list of internal errors when I try to import it through dub. But I might almost have GLFW working... I can't really tell yet, but I did finally find the right dll to link to (files in the Windows x64 binaries kept giving me an error, but the one x64 dll in the x86 download ended up working). > Not keen to try vim if it doesn't have the features I need to compensate for being a shitty programmer.
Re: OpenGL Setup?
On Thursday, 16 June 2016 at 19:52:58 UTC, OpenJelly wrote: Trying to get VS Code to work with code-d... can't get dcd to work with it. It says it's failed to kill the dcd server when I try to reload it. It wasn't appearing in task manager (but dcd-client was) and manually starting it up didn't make it work in vs code. Trying to restart it in cmd freezes that window and the task refuses to kill. I'm trying to fix it now but I don't even know why it's not working... The arsd stuff just gives me a thick list of internal errors when I try to import it through dub. But I might almost have GLFW working... I can't really tell yet, but I did finally find the right dll to link to (files in the Windows x64 binaries kept giving me an error, but the one x64 dll in the x86 download ended up working). Not keen to try vim if it doesn't have the features I need to compensate for being a shitty programmer.
Re: Default initialization of structs?
The factory pattern would be a good idea.
Re: Default initialization of structs?
On Friday, 17 June 2016 at 10:53:40 UTC, Lodovico Giaretta wrote: struct Foo(T) { private int _bar = 1; this(int bar) { this._bar = bar; } } auto foo = Foo!(string)(); This should do the trick. Thanks, I forgot to mention I'm also doing lots of other stuff in the constructor to private fields too. struct Foo(T) { private int _bar; private void* _baz; this(int bar = 8) { this._bar = bar; this._baz = malloc(this._bar); } } So I have to at least run a constructor.
Re: Default initialization of structs?
On Friday, 17 June 2016 at 10:50:55 UTC, Gary Willoughby wrote: I have a struct where I need to perform default initialization of some members but the compiler doesn't allow to define a default constructor which allow optional arguments. struct Foo(T) { private int _bar; this(int bar = 1) { this._bar = bar; } } auto foo = Foo!(string) // error Are there any patterns or idioms I could use to get the desired result? Or is it just the case if I use a constructor I have to pass values to it? struct Foo(T) { private int _bar = 1; this(int bar) { this._bar = bar; } } auto foo = Foo!(string)(); This should do the trick.
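Spelled out as a runnable sketch (the `bar` accessor is added here only to observe the field; it is not part of the original snippet):

```d
struct Foo(T)
{
    // The default value lives in the member initializer, not in a
    // constructor, so .init already carries it.
    private int _bar = 1;

    this(int bar)
    {
        _bar = bar;
    }

    int bar() const { return _bar; }
}
```

With this shape, `Foo!string()` yields the `.init` state (`_bar == 1`) and `Foo!string(42)` runs the constructor; the compile error only appears when the constructor's parameters all have defaults, because that would amount to a default constructor.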
Default initialization of structs?
I have a struct where I need to perform default initialization of some members but the compiler doesn't allow to define a default constructor which allow optional arguments. struct Foo(T) { private int _bar; this(int bar = 1) { this._bar = bar; } } auto foo = Foo!(string) // error Are there any patterns or idioms I could use to get the desired result? Or is it just the case if I use a constructor I have to pass values to it?
Re: Different struct sizeof between linux and windows
On Friday, 17 June 2016 at 07:11:28 UTC, Vladimir Panteleev wrote: On Friday, 17 June 2016 at 06:54:36 UTC, Andre Pany wrote: Is this behavior correct? Yes. time_t is defined as C long on Linux (meaning it'll be 64-bit in 64-bit programs), however it's always 32-bit on the Windows C runtimes we use. Thanks for clarification. Kind regards André
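A quick way to see the difference for yourself (the `Timestamped` struct is illustrative, not from the thread; the sizes are printed rather than asserted because they depend on the platform and C runtime in use):

```d
import core.stdc.time : time_t;
import std.stdio : writefln;

// An illustrative struct whose overall size shifts with time_t:
// time_t is a C long on Linux (8 bytes in 64-bit builds) but only
// 4 bytes on the old Windows C runtimes.
struct Timestamped
{
    time_t created;
    int    id;
}

void printSizes()
{
    writefln("time_t.sizeof      = %s", time_t.sizeof);
    writefln("Timestamped.sizeof = %s", Timestamped.sizeof);
}
```

Because of padding/alignment, `Timestamped.sizeof` differs between the platforms even though its declaration is identical, which is exactly the symptom in the original question.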
Dub generate visuald, multiple configurations?
Is there a way to generate a single visuald project file for all dub configurations, selecting the configuration from the visual studio configuration manager? Or do I have to generate a separate project for each configuration?
Re: Accessing COM Objects
On Wednesday, 15 June 2016 at 21:06:01 UTC, Joerg Joergonson wrote: My thinking is that CoCreateInstance is supposed to give us a pointer to the interface so we can use it; if all this stuff is crashing, does that mean the interface is invalid or not being assigned properly, or is there far more to it than this? The problem is Photoshop hasn't provided an interface with methods that can be called directly. They don't exist on the interface, hence them being commented out. It's a mechanism known as late binding (everything is done at runtime rather than compile time). You need to ask the interface for the method's ID, marshal the parameters into a specific format, and then "invoke" the method using that ID. And you're not going to like it. Here's an example just to call the "Load" method: // Initialize the Photoshop class instance IDispatch psApp; auto iid = IID__Application; auto clsid = CLSID_Application; assert(SUCCEEDED(CoCreateInstance(&clsid, null, CLSCTX_ALL, &iid, cast(void**)&psApp))); scope(exit) psApp.Release(); // Get the ID of the Load method auto methodName = "Load"w.ptr; auto dispId = DISPID_UNKNOWN; iid = IID_NULL; assert(SUCCEEDED(psApp.GetIDsOfNames(&iid, &methodName, 1, 0, &dispId))); // Put the parameters into the expected format VARIANT fileName = { vt: VARENUM.VT_BSTR, bstrVal: SysAllocString("ps.psd"w.ptr) }; scope(exit) VariantClear(&fileName); DISPPARAMS params = { rgvarg: &fileName, cArgs: 1 }; // Finally call the method assert(SUCCEEDED(psApp.Invoke(dispId, &iid, 0, DISPATCH_METHOD, &params, null, null, null))); tlb2d only outputs the late-bound methods as a hint to the user so they know the names of the methods and the expected parameters (well, it saves looking them up in OleView). Had Photoshop supplied a compile-time binding, you could have just called psApp.Load(fileName) like you tried.
It's possible to wrap that ugly mess above in less verbose code using native D types, and the Juno COM library mentioned earlier enabled that, but the code is quite ancient (and is part of and depends on a larger library). I've been slowly working on a more modern library. You'd be able to just write this: auto psApp = makeReference!"Photoshop.Application"(); psApp.Load("ps.psd"); But I don't know when it'll be ready.
Re: GTKD - Application crashes - or not? [Coedit]
On Friday, 17 June 2016 at 06:18:59 UTC, Basile B. wrote: On Thursday, 16 June 2016 at 09:18:54 UTC, TheDGuy wrote: On Thursday, 16 June 2016 at 08:20:00 UTC, Basile B. wrote: Yes it's "WorkingDirectory" (and not current...). But otherwise you can use args[0]. Actually using the cwd in a program is often an error because there is no guarantee that the cwd is the path to the application ;) People often forget that (generally speaking). If I use args[0] as workingDirectory I still get the same error. I created a custom tool like this: https://picload.org/image/rgwapdac/coedit_run_options.png If I execute it via "Custom Tools" -> "Run this project" nothing happens. There was a bug I fixed yesterday. There's a workaround: it works when the tool option "clearMessages" is checked. Thanks a lot!
Re: Different struct sizeof between linux and windows
On Friday, 17 June 2016 at 06:54:36 UTC, Andre Pany wrote: Is this behavior correct? Yes. time_t is defined as C long on Linux (meaning it'll be 64-bit in 64-bit programs), however it's always 32-bit on the Windows C runtimes we use.
Re: ARSD PNG memory usage
On Friday, 17 June 2016 at 03:41:02 UTC, Adam D. Ruppe wrote: It actually has been on my todo list for a while to change the decoder to generate less garbage. I have had trouble in the past with temporary arrays being pinned by false pointers and the memory use ballooning from that, and the lifetime is really easy to manage so just malloc/freeing it would be an easy solution, just like you said, std.zlib basically sucks so I have to use the underlying C functions and I just haven't gotten around to it. did that. decoding still sux, but now it should suck less. ;-) encoder is still using "std.zlib", though. next time, maybe.
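The change Adam describes, dropping std.zlib for the underlying C functions plus manual malloc/free so the temporary buffers never touch the GC heap, can be sketched roughly like this (the `inflateRaw` helper is invented for illustration; the real arsd decoder differs):

```d
import core.stdc.stdlib : malloc;
import etc.c.zlib;

// Inflate a zlib-compressed chunk into a malloc'd buffer that the
// caller must free(), keeping the temporary out of the GC heap so it
// can neither be pinned by false pointers nor linger until a collection.
// The caller must know the decompressed size up front (a PNG decoder
// can compute it from the IHDR dimensions).
ubyte[] inflateRaw(const(ubyte)[] compressed, size_t expectedSize)
{
    auto buf = cast(ubyte*) malloc(expectedSize);
    assert(buf !is null);

    z_stream zs;
    zs.next_in   = cast(ubyte*) compressed.ptr;
    zs.avail_in  = cast(uint) compressed.length;
    zs.next_out  = buf;
    zs.avail_out = cast(uint) expectedSize;

    inflateInit(&zs);
    scope(exit) inflateEnd(&zs);

    auto res = inflate(&zs, Z_FINISH);
    assert(res == Z_STREAM_END); // entire stream fit in the buffer

    return buf[0 .. zs.total_out];
}
```

The lifetime really is easy to manage here, as the post says: the decompressed scanline buffer lives exactly from allocation to the end of decoding one image, so pairing `malloc` with a `free` at that point needs no GC involvement at all.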