Re: Speed of synchronized
On Monday, 17 October 2016 at 06:38:08 UTC, Daniel Kozak wrote:
> On 17.10.2016 at 07:55, Christian Köstlin via Digitalmars-d-learn wrote:
> [...]
> I am still unable to get your java code working:
>
> [kozak@dajinka threads]$ ./gradlew clean build
> :clean
> :compileJava
> :processResources UP-TO-DATE
> :classes
> :jar
> :assemble
> :compileTestJava
> :processTestResources UP-TO-DATE
> :testClasses
> :test
> :check
> :build
> BUILD SUCCESSFUL
> Total time: 3.726 secs
>
> How can I run it?

I have it, it is in build/test-results/test/TEST-counter.CounterTest.xml
Re: Speed of synchronized
On 17.10.2016 at 07:55, Christian Köstlin via Digitalmars-d-learn wrote:
> to run java call ./gradlew clean build ->
> counter.AtomicIntCounter@25992ae3 expected: 200 got: 100 in: 22ms
> counter.AtomicLongCounter@2539f946 expected: 200 got: 100 in: 17ms
> counter.ThreadSafe2Counter@527d56c2 expected: 200 got: 100 in: 33ms
> counter.ThreadSafe1Counter@6fd8b1a expected: 200 got: 100 in: 173ms
> counter.ThreadUnsafeCounter@6bb33878 expected: 200 got: 562858 in: 10ms

I am still unable to get your java code working:

[kozak@dajinka threads]$ ./gradlew clean build
:clean
:compileJava
:processResources UP-TO-DATE
:classes
:jar
:assemble
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test
:check
:build
BUILD SUCCESSFUL
Total time: 3.726 secs

How can I run it?
Re: Speed of synchronized
On 17/10/16 06:55, Daniel Kozak via Digitalmars-d-learn wrote:
> On 16.10.2016 at 10:41, Christian Köstlin via Digitalmars-d-learn wrote:
>
>> My question now is, why is each mutex based thread safe variant so slow
>> compared to a similar java program? The only hint could be something
>> like:
>> https://blogs.oracle.com/dave/entry/java_util_concurrent_reentrantlock_vs
>> which mentions that there is some magic going on underneath.
>> For the atomic and the non thread safe variant, the D solution seems to
>> be twice as fast as my java program; for the locked variant, the java
>> program seems to be 40 times faster?
>>
>> btw. I run the code with dub run --build=release
>>
>> Thanks in advance,
>> Christian
> Can you post your timings (both D and Java)? And can you post your java
> code?

Hi, thanks for asking. I attached my java and d sources. Both try to do more or less the same thing. They spawn 100 threads that call increment on a counter object 1 times. The implementation of the counter object is exchanged between an obviously broken thread unsafe implementation, some with atomic operations, and some mutex-based implementations.

to run java call ./gradlew clean build ->

counter.AtomicIntCounter@25992ae3 expected: 200 got: 100 in: 22ms
counter.AtomicLongCounter@2539f946 expected: 200 got: 100 in: 17ms
counter.ThreadSafe2Counter@527d56c2 expected: 200 got: 100 in: 33ms
counter.ThreadSafe1Counter@6fd8b1a expected: 200 got: 100 in: 173ms
counter.ThreadUnsafeCounter@6bb33878 expected: 200 got: 562858 in: 10ms

obviously the unsafe implementation is fastest, followed by atomics. the version with reentrant locks performs very well, whereas the implementation with synchronized is the slowest.

to run d call dub test (please note that the dub test build is configured like this:

buildType "unittest" {
    buildOptions "releaseMode" "optimize" "inline" "unittests" "debugInfo"
}

so it should be release code speed and quality).
->

app.AtomicCounter: got: 100 expected: 100 in 23 ms, 852 μs, and 6 hnsecs
app.ThreadSafe1Counter: got: 100 expected: 100 in 3 secs, 673 ms, 232 μs, and 6 hnsecs
app.ThreadSafe2Counter: got: 100 expected: 100 in 3 secs, 684 ms, 416 μs, and 2 hnsecs
app.ThreadUnsafeCounter: got: 690073 expected: 100 in 8 ms and 540 μs
from example got: 3 secs, 806 ms, and 258 μs

here again, the unsafe implementation is the fastest, and atomic performs in the same ballpark as java. only the thread safe variants are far off.

thanks for looking into this,
best regards,
christian

Attachment: threads.tar.gz
Re: Speed of synchronized
On 16/10/16 19:50, tcak wrote:
> On Sunday, 16 October 2016 at 08:41:26 UTC, Christian Köstlin wrote:
>> Hi,
>>
>> for an exercise I had to implement a thread safe counter. This is what
>> I came up with:
>>
>> [...]
>
> Could you try that:
>
> class ThreadSafe3Counter : Counter {
>     private long counter;
>     private core.sync.mutex.Mutex mtx;
>
>     public this() shared {
>         mtx = cast(shared)( new core.sync.mutex.Mutex );
>     }
>
>     void increment() shared {
>         (cast()mtx).lock();
>         scope(exit){ (cast()mtx).unlock(); }
>
>         core.atomic.atomicOp!"+="(this.counter, 1);
>     }
>
>     long get() shared {
>         return counter;
>     }
> }
>
> Unfortunately, there are some stupid design decisions in D about
> "shared", and some people do not want to accept them.
>
> For example, while you are using a mutex you shouldn't be forced to use
> atomicOp there. As the programmer, you know that it will be protected
> already. That is a loss of performance in the long run.

Thanks for the implementation. I think this is nicer than using __gshared. I think using atomic operations and mutexes at the same time does not make any sense; one or the other.

Thanks,
Christian
Re: Cannot link with libphobos2.a with GCC 6.2 on Ubuntu 16.10
On Sunday, 16 October 2016 at 22:36:15 UTC, Nordlöw wrote:
> On Sunday, 16 October 2016 at 22:00:48 UTC, Nordlöw wrote:
>> Which flag(s) in `src/posix.mak` did you change?
> Does make -f posix.mak MODEL_FLAG=-fPIC work? I'm sitting on a 16.04 system right now (which I don't dare to upgrade until this is fixed) so I'm just guessing.

Well, I haven't made any changes anywhere at all. I always download the deb file and install it. My program was compiling on 16.04, and wasn't compiling on 16.10. So, I added -defaultlib=libphobos2.so -fPIC while compiling. That's it. But as you can guess, now I have to copy libphobos to other computers as well as the executable. (libphobos2.so.0.71 is 9 MiB)
Re: Speed of synchronized
On 16.10.2016 at 10:41, Christian Köstlin via Digitalmars-d-learn wrote:
> My question now is, why is each mutex based thread safe variant so slow compared to a similar java program? The only hint could be something like: https://blogs.oracle.com/dave/entry/java_util_concurrent_reentrantlock_vs which mentions that there is some magic going on underneath. For the atomic and the non thread safe variant, the D solution seems to be twice as fast as my java program; for the locked variant, the java program seems to be 40 times faster?
>
> btw. I run the code with dub run --build=release
>
> Thanks in advance,
> Christian

Can you post your timings (both D and Java)? And can you post your java code?
Re: Render SVG To Display And Update Periodically
On 17/10/2016 2:20 PM, Jason C. Wells wrote:
> I have in mind a project to render instruments (speed, pressure, position) to a screen using SVG. I am able to produce the SVG easily enough. What I am looking for is a library/canvas/toolkit that I can use in D inside of a loop to update the instrument readouts. This whole project is a vehicle for me to expand my mechanical engineering abilities into instrumentation and control. I've decided that D is the language I want to learn to do this. I see things like Cairo and librsvg. Maybe a full blown GUI toolkit has the bits I need. The thing I have a hard time with is assessing whether or not these things are suitable for what I hope to do. Thanks in advance, Jason C. Wells

You're going to need to find an SVG rasterizer in some form or another. I don't think we have one in D, and certainly not a complete one. Otherwise you could always just make it a web application and let the web browser do all the rendering, which seems easier.
Re: Is this should work?
On 10/16/2016 03:22 PM, markov wrote:
> void test(string line){ ... };
>
> void main(){
>     string[] list;
>     foreach (line; list)
>         new Thread({test(line);}).start();
>     ...
> }

It works in an unexpected way: in D, all those threads would close over the same 'line' loop variable, which happens to point to the last string, so they all see the last string:

import std.stdio;
import core.thread;

void test(size_t i, string line){
    writefln("Thread %s: %s", i, line);
}

void main(){
    string[] list = [ "hello", "world" ];

    foreach (i, line; list)
        new Thread({test(i, line);}).start();
}

Prints:

Thread 1: world    // <-- Expected 0 and "hello"
Thread 1: world

What needs to happen is to give the thread separate variables. An easy solution is to start the thread with the arguments of a function:

foreach (i, line; list) {
    void startThread(size_t j, string l) {
        new Thread({test(j, l);}).start();    // <-- Uses function arguments
    }
    startThread(i, line);
}

I happened to define a nested function. You can define it anywhere else.

Ali
Render SVG To Display And Update Periodically
I have in mind a project to render instruments (speed, pressure, position) to a screen using SVG. I am able to produce the SVG easily enough. What I am looking for is a library/canvas/toolkit that I can use in D inside of a loop to update the instrument readouts. This whole project is a vehicle for me to expand my mechanical engineering abilities into instrumentation and control. I've decided that D is the language I want to learn to do this. I see things like Cairo and librsvg. Maybe a full blown GUI toolkit has the bits I need. The thing I have a hard time with is assessing whether or not these things are suitable for what I hope to do. Thanks in advance, Jason C. Wells
Re: Cannot link with libphobos2.a with GCC 6.2 on Ubuntu 16.10
On Sunday, 16 October 2016 at 22:00:48 UTC, Nordlöw wrote: Which flag(s) in `src/posix.mak` did you change? Does make -f posix.mak MODEL_FLAG=-fPIC work? I'm sitting on a 16.04 system right now (which I don't dare to upgrade until this is fixed) so I'm just guessing.
Is this should work?
void test(string line){ ... };

void main(){
    string[] list;
    foreach (line; list)
        new Thread({test(line);}).start();
    ...
}
Re: Cannot link with libphobos2.a with GCC 6.2 on Ubuntu 16.10
On Sunday, 16 October 2016 at 20:01:21 UTC, tcak wrote:
> Hmm. As the error message says, I compiled the program by adding "-fPIC", and it really has stopped giving error messages. That seemed weird to me.

Which flag(s) in `src/posix.mak` did you change?
Re: Building DMD with DMD or LDC
On Saturday, 15 October 2016 at 07:39:31 UTC, ketmar wrote: On Friday, 14 October 2016 at 15:13:58 UTC, Jonathan M Davis wrote: On Thursday, October 13, 2016 19:07:44 Nordlöw via Digitalmars-d-learn wrote: Is there a large speed difference in compilation time depending on whether the DMD used is built using DMD or LDC? I would be shocked if there weren't. i did that out of curiosity some time ago, but with gdc, and then tested my projects, and phobos rebuilding. speed difference was so small that it can be a usual random deviation. This topic came up at the start of the year, and Iain pointed out that the compiler code overrides the default memory management, which increases performance enormously. But, that malloc override was _only enabled when built with DMD_. https://forum.dlang.org/post/vqjzqadpxwfzvlptp...@forum.dlang.org This was fixed for LDC here: https://github.com/dlang/dmd/pull/5631/files It resulted in a massive speed gain when the front-end is built with LDC. I no longer have the numbers, but DMD built with LDC is definitely faster. About 10% according to the old thread. Same for LDC built with LDC. (self promotion: when you compile the same code over and over, you gain another ~7% when using PGO: https://johanengelen.github.io/ldc/2016/04/13/PGO-in-LDC-virtual-calls.html) -Johan
Speed of synchronized
Hi, for an exercise I had to implement a thread safe counter. This is what I came up with:

---SNIP---
import std.stdio;
import core.thread;
import std.conv;
import std.datetime;
static import core.atomic;
import core.sync.mutex;

int NR_OF_THREADS = 100;
int NR_OF_INCREMENTS = 1;

interface Counter {
    void increment() shared;
    long get() shared;
}

class ThreadUnsafeCounter : Counter {
    long counter;
    void increment() shared {
        counter++;
    }
    long get() shared {
        return counter;
    }
}

class ThreadSafe1Counter : Counter {
    private long counter;
    synchronized void increment() shared {
        counter++;
    }
    long get() shared {
        return counter;
    }
}

class ThreadSafe2Counter : Counter {
    private long counter;
    __gshared Mutex lock; // http://forum.dlang.org/post/rzyooanimrynpmqly...@forum.dlang.org
    this() shared {
        lock = new Mutex;
    }
    void increment() shared {
        synchronized (lock) {
            counter++;
        }
    }
    long get() shared {
        return counter;
    }
}

class AtomicCounter : Counter {
    private long counter;
    void increment() shared {
        core.atomic.atomicOp!"+="(this.counter, 1);
    }
    long get() shared {
        return counter;
    }
}

void main() {
    void runWith(Counter)() {
        shared Counter counter = new shared Counter();
        void doIt() {
            Thread[] threads;
            for (int i = 0; i < NR_OF_THREADS; i++) {
                ...
            }
            ...
        }
        ...
    }
}
---SNIP---

(the mutex used above is https://dlang.org/phobos/core_sync_mutex.html)

My question now is, why is each mutex based thread safe variant so slow compared to a similar java program? The only hint could be something like: https://blogs.oracle.com/dave/entry/java_util_concurrent_reentrantlock_vs which mentions that there is some magic going on underneath. For the atomic and the non thread safe variant, the D solution seems to be twice as fast as my java program; for the locked variant, the java program seems to be 40 times faster?

btw. I run the code with dub run --build=release

Thanks in advance,
Christian
Re: Cannot link with libphobos2.a with GCC 6.2 on Ubuntu 16.10
On Sunday, 16 October 2016 at 17:42:44 UTC, tcak wrote:
> On Thursday, 13 October 2016 at 17:02:32 UTC, Nordlöw wrote:
>> [...]
>
> I have upgraded my Ubuntu to 16.10 yesterday as well, and I am getting the following error:
>
> /usr/bin/ld: obj/Debug/program.o: relocation R_X86_64_32 against symbol `_D9Exception7__ClassZ' can not be used when making a shared object; recompile with -fPIC
> /usr/bin/ld: /usr/lib/x86_64-linux-gnu/libphobos2.a(object_1_257.o): relocation R_X86_64_32 against symbol `__dmd_personality_v0' can not be used when making a shared object; recompile with -fPIC
> ...
>
> I guess the problem is the same. Even though I have added "-defaultlib=libphobos2.so" to the compiler options, the problem persists.

Hmm. As the error message says, I compiled the program by adding "-fPIC", and it really has stopped giving error messages. That seemed weird to me.
Re: Speed of synchronized
On Sunday, 16 October 2016 at 08:41:26 UTC, Christian Köstlin wrote:
> Hi,
>
> for an exercise I had to implement a thread safe counter. This is what I came up with:
>
> [...]

Could you try that:

class ThreadSafe3Counter : Counter {
    private long counter;
    private core.sync.mutex.Mutex mtx;

    public this() shared {
        mtx = cast(shared)( new core.sync.mutex.Mutex );
    }

    void increment() shared {
        (cast()mtx).lock();
        scope(exit){ (cast()mtx).unlock(); }

        core.atomic.atomicOp!"+="(this.counter, 1);
    }

    long get() shared {
        return counter;
    }
}

Unfortunately, there are some stupid design decisions in D about "shared", and some people do not want to accept them.

For example, while you are using a mutex you shouldn't be forced to use atomicOp there. As the programmer, you know that it will be protected already. That is a loss of performance in the long run.
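For what it's worth, here is a minimal self-contained sketch of the same cast-away-shared pattern, but with the atomicOp replaced by a plain increment guarded by the mutex, which is the point being argued above. The class name, thread count, and iteration count are illustrative, not taken from the original benchmark:

```d
import core.thread;
import core.sync.mutex;

class SafeCounter {
    private long counter;
    private Mutex mtx;

    this() shared {
        mtx = cast(shared) new Mutex;
    }

    void increment() shared {
        (cast() mtx).lock();
        scope (exit) (cast() mtx).unlock();
        // the mutex already serializes access, so a plain
        // increment (with shared cast away) is enough here
        ++*(cast(long*) &counter);
    }

    long get() shared {
        (cast() mtx).lock();
        scope (exit) (cast() mtx).unlock();
        return *(cast(long*) &counter);
    }
}

void main() {
    auto c = new shared SafeCounter();
    Thread[] threads;
    foreach (i; 0 .. 4) {
        threads ~= new Thread({
            foreach (j; 0 .. 10_000)
                c.increment();
        }).start();
    }
    foreach (t; threads)
        t.join();
    assert(c.get() == 40_000);
}
```

Note the get() here also takes the lock; without it the read would race with the writers just like the unsafe counter does.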
Re: Cannot link with libphobos2.a with GCC 6.2 on Ubuntu 16.10
On Thursday, 13 October 2016 at 17:02:32 UTC, Nordlöw wrote:
> I just upgraded my Ubuntu to 16.10 and now my rebuilding of dmd from git master fails as
>
> /usr/bin/ld: idgen.o: relocation R_X86_64_32 against symbol `__dmd_personality_v0' can not be used when making a shared object; recompile with -fPIC
> /usr/bin/ld: /usr/lib/x86_64-linux-gnu/libphobos2.a(object_a_66e.o): relocation R_X86_64_32 against symbol `__dmd_personality_v0' can not be used when making a shared object; recompile with -fPIC
>
> What's wrong? Am I using the wrong GCC version? Should I use GCC 5 instead? GCC 6.2 is default on 16.10.

I have upgraded my Ubuntu to 16.10 yesterday as well, and I am getting the following error:

/usr/bin/ld: obj/Debug/program.o: relocation R_X86_64_32 against symbol `_D9Exception7__ClassZ' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/lib/x86_64-linux-gnu/libphobos2.a(object_1_257.o): relocation R_X86_64_32 against symbol `__dmd_personality_v0' can not be used when making a shared object; recompile with -fPIC
...

I guess the problem is the same. Even though I have added "-defaultlib=libphobos2.so" to the compiler options, the problem persists.
Re: Using dub.json parameters in code
On Saturday, 15 October 2016 at 17:36:10 UTC, Gustav wrote:
> Hi, I want to use the variables from dub.json. For example, use the parameter "name" to display an information message. Now I read dub.json. Is there a way to import them? Please excuse any mistakes as English is my second language.

I don't know whether there's a dub-specific API, but you could use string imports for that:

auto dubFile = import("dub.json");
// then parse it with your favourite json lib

You have to use the -J flag for dmd to know where to find dub.json, though.
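To make the string-import suggestion concrete, here is a small sketch using std.json from Phobos; it assumes dub.json sits in the compiler's string-import path and has a top-level "name" field, as dub requires:

```d
// Bake dub.json into the binary at compile time via a string import.
// Build with something like:  dmd -J. app.d
// (with dub itself, "stringImportPaths": ["."] in dub.json has the same effect).
import std.json;
import std.stdio;

// the file's contents become a compile-time string constant
enum dubJson = import("dub.json");

void main() {
    auto j = parseJSON(dubJson);
    // read the "name" parameter the question asked about
    writeln("This is ", j["name"].str);
}
```

Since the import happens at compile time, the binary no longer needs dub.json next to it at run time.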
Using dub.json parameters in code
Hi, I want to use the variables from dub.json. For example, use the parameter "name" to display an information message. Now I read dub.json. Is there a way to import them? Please excuse any mistakes as English is my second language.
Re: Variables with scoped destruction in closures
On Saturday, 15 October 2016 at 07:55:30 UTC, ketmar wrote: p.s. compiler doesn't complain each time, only in some circumstances. i don't remember the exact code now, but some of it has nothing to do with closures at all -- no std.algo, no templates with lambda args, etc.
Re: Variables with scoped destruction in closures
On Saturday, 15 October 2016 at 05:41:05 UTC, Walter Bright wrote:
> The problem is the closure is generated when it is expected that the delegate will survive past the end of the scope (it's the whole point of a closure). But with a destructor that runs at the end of the scope, it cannot survive, and so the user of the closure will be referring to a destroyed object.

while this is generally true, it bites me every time i'm using my refcounted objects. i have a std.stdio.File replacement, which is just an rc-struct for the outside world, and i can't write generalized templates with it, as i'm hitting this "scoped destruction" issue often enough for it to be annoying. like, for example, when i want to accept a generalized i/o stream (i have template checkers for that) in a write[ln] replacement, so i have to write these abominations:

public void write(A...) (VFile fl, A args) {
    writeImpl!(false)(fl, args);
}

public void write(ST, A...) (auto ref ST fl, A args)
if (!is(ST == VFile) && isRorWStream!ST) {
    writeImpl!(false, ST)(fl, args);
}

VFile is a perfectly valid "OutputStream", but i can't use the general template, 'cause VFile has a dtor which decrements the rc, and the compiler complains. i.e. that should be just:

public void write(ST, A...) (auto ref ST fl, A args)
if (isRorWStream!ST) {
    writeImpl!(false, ST)(fl, args);
}

can we, please, have some way to tell the compiler that it is perfectly ok for some structs to have scoped destruction *and* be a member of a closure? also, what kind of *closure* does the compiler try to build here? it is a simple templated function, which calls another templated function, nothing fancy.
Re: Building DMD with DMD or LDC
On Saturday, 15 October 2016 at 07:39:31 UTC, ketmar wrote: p.s. this is all about GNU/Linux on x86 arch. for other OS/arch it may be completely different.
Re: Building DMD with DMD or LDC
On Friday, 14 October 2016 at 15:13:58 UTC, Jonathan M Davis wrote:
> On Thursday, October 13, 2016 19:07:44 Nordlöw via Digitalmars-d-learn wrote:
>> Is there a large speed difference in compilation time depending on whether the DMD used is built using DMD or LDC?
> I would be shocked if there weren't.

i did that out of curiosity some time ago, but with gdc, and then tested my projects, and phobos rebuilding. the speed difference was so small that it could be a usual random deviation. yet i didn't try ldc, maybe ldc does better. but dmd itself doesn't use ranges extensively, so i don't think that it really matters. that is, maybe in some corner cases the difference will be noticeable, but most of the time building dmd with ldc/gdc means only "let's wait significantly longer for nothing".
Re: Escaping ref to stack mem sometimes allowed in @safe?
On 10/14/16 4:49 PM, Nick Sabalausky wrote:
> This compiles. Is it supposed to?
>
> @safe ubyte[] foo() {
>     ubyte[512] buf;
>     auto slice = buf[0..$]; // Escaping reference to stack memory, right?
>     return slice;
> }

Yes, this still is a problem, but Walter has a fix in the works. https://issues.dlang.org/show_bug.cgi?id=8838

This one is also fun:

ubyte[512] foo();
ubyte[] x = foo(); // compiles, but never is correct. Ever.

https://issues.dlang.org/show_bug.cgi?id=12625

-Steve
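For completeness, here is a sketch (not from Nick's or Steve's code) of two ways to return the buffer's contents without escaping stack memory: copy the buffer to the GC heap, or return the static array by value:

```d
@safe ubyte[] fooHeap() {
    ubyte[512] buf;
    buf[0] = 42;
    return buf.dup;   // .dup copies to the GC heap; that slice may escape
}

@safe ubyte[512] fooByValue() {
    ubyte[512] buf;
    buf[0] = 42;
    return buf;       // static arrays are value types: returned by copy
}

void main() @safe {
    assert(fooHeap().length == 512);
    assert(fooByValue()[0] == 42);
}
```

fooByValue avoids a GC allocation at the cost of copying all 512 bytes on return; fooHeap gives you an ordinary slice you can store anywhere.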
Escaping ref to stack mem sometimes allowed in @safe?
This compiles. Is it supposed to?

@safe ubyte[] foo() {
    ubyte[512] buf;
    auto slice = buf[0..$]; // Escaping reference to stack memory, right?
    return slice;
}

Or is the escaping reference detection not intended to be 100%? Or something else I'm missing? Should I file @ bugzilla?