Re: Example usage of the core.sync classes

2015-01-03 Thread Matt via Digitalmars-d-learn

On Friday, 2 January 2015 at 18:52:11 UTC, Benjamin Thaut wrote:

On 02.01.2015 at 19:45, Vlad Levenfeld wrote:


My personal favorite method is to use the primitives in 
core.atomic with a double or triple buffer. To double buffer, 
keep two buffers in an array (data[][] or something) and an 
integer index that points to the active buffer, then use 
atomicStore to update the active buffer index when you want to 
swap.
Triple buffering can improve runtime performance at the cost of 
more memory, and in this case you will need two indices and two 
swap methods, one for the producer and another for the 
consumer. I use cas with a timed sleep to update the indices in 
this case.

Take it with a grain of salt, I'm far from a pro, but this has 
worked well for me in the past. I can post some example code if 
you need it.


I have to agree with Vlad here. The usual way is to do double 
buffering. So you have two buffers which hold all the data that 
the physics simulation passes to the game logic. While the 
physics simulation is computing the next frame, the game logic 
can use the results from the last frame. The only point where 
synchronisation is necessary is when swapping the buffers. You 
have to ensure that both the game logic and the physics thread 
are done with their current buffer and then swap them. The only 
downside of this approach is that the data you pass this way 
arrives with a small delay (usually the time taken for one 
frame). Sometimes this approach is called the extractor 
pattern. I use it to pass data from the game simulation to the 
renderer, so the renderer can render in parallel with the game 
simulation computing the next frame. You can find an example of 
my double buffered solution here:

https://github.com/Ingrater/Spacecraft/blob/master/game/sources/renderer/extractor.d

I had triple buffering up and running at some point, but I 
went back to double buffering, because triple buffering can 
cause micro lags and you don't want that.


Kind Regards
Benjamin Thaut


Right, I've been looking at core.atomic, but it has very little 
documentation, and it's territory I haven't explored, yet. Any 
chance of some pointers along the way?


Re: Example usage of the core.sync classes

2015-01-03 Thread Vlad Levenfeld via Digitalmars-d-learn

On Saturday, 3 January 2015 at 15:44:16 UTC, Matt wrote:
Right, I've been looking at core.atomic, but it has very little 
documentation, and it's territory I haven't explored, yet. Any 
chance of some pointers along the way?


Could you be more specific about what you need help 
understanding? You can implement a very simple double buffer like 
so:


  import core.atomic : atomicStore;

  synchronized final class DoubleBuffer (T) {
    private T[][2] data;
    private uint write_index;

    shared void swap () {
      atomicStore (write_index, (write_index + 1) % 2);
      (cast()this).data[write_index].length = 0;  // recycle the buffer we are about to write into
    }

    // appended to by the writing thread
    shared @property ref T[] write_buffer () {
      return (cast()this).data[write_index];
    }
    // read by the reading thread
    shared @property T[] read_buffer () {
      return (cast()this).data[(write_index + 1) % 2];
    }
  }

Just append to write_buffer, call swap, then read from 
read_buffer. Make sure you declare the instance shared. 
Benjamin's post links to a more robust and realistic 
implementation.
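
For completeness, a rough usage sketch (untested; the producer 
function, the float payload and the thread setup are made up 
purely for illustration):

  import std.concurrency : spawn;

  void producer (shared DoubleBuffer!float buf) {
    buf.write_buffer ~= 3.14f;   // fill the write-side buffer
    buf.swap;                    // publish it for the reading thread
  }

  void main () {
    auto buf = new shared(DoubleBuffer!float);
    spawn (&producer, buf);
    // meanwhile, in the reading thread, consume the last published frame:
    auto last_frame = buf.read_buffer;
  }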


Re: Example usage of the core.sync classes

2015-01-03 Thread Matt via Digitalmars-d-learn

On Saturday, 3 January 2015 at 22:10:49 UTC, Vlad Levenfeld wrote:

On Saturday, 3 January 2015 at 15:44:16 UTC, Matt wrote:
Right, I've been looking at core.atomic, but it has very 
little documentation, and it's territory I haven't explored, 
yet. Any chance of some pointers along the way?


Could you be more specific about what you need help 
understanding? You can implement a very simple double buffer 
like so:


  import core.atomic : atomicStore;

  synchronized final class DoubleBuffer (T) {
    private T[][2] data;
    private uint write_index;

    shared void swap () {
      atomicStore (write_index, (write_index + 1) % 2);
      (cast()this).data[write_index].length = 0;  // recycle the buffer we are about to write into
    }

    // appended to by the writing thread
    shared @property ref T[] write_buffer () {
      return (cast()this).data[write_index];
    }
    // read by the reading thread
    shared @property T[] read_buffer () {
      return (cast()this).data[(write_index + 1) % 2];
    }
  }

Just append to write_buffer, call swap, then read from 
read_buffer. Make sure you declare the instance shared. 
Benjamin's post links to a more robust and realistic 
implementation.


What I mean is that I don't understand what atomicStore, 
atomicLoad, etc. actually DO, although in the case of the two 
mentioned, I can hazard a pretty good guess. The documentation 
doesn't exist to tell me how to use the functions found in the 
module. I've never used any atomic functions in any language 
before.


However, thank you for the simple double buffered example, I do 
appreciate it. How would another thread gain access to the 
instance in use, though? Am I able to pass references to 
instances of this class via events? 'synchronized' is also 
something I'm clueless about, and again the D documentation 
doesn't help, as the section in the class documentation basically 
says synchronized classes are made of synchronized functions, 
without explaining what a synchronized function IS. The function 
section makes no mention of the synchronized keyword, either.


Sorry, it feels like there's a load of stuff in the library and 
language to make multithreading easier, but with little to no 
explanation, so it feels like you're expected to already know it, 
unless I'm looking in the wrong places for my explanations. If I 
am, please do give me a kick, and point me in the right direction.


Re: Example usage of the core.sync classes

2015-01-03 Thread Vlad Levenfeld via Digitalmars-d-learn

On Sunday, 4 January 2015 at 01:02:07 UTC, Matt wrote:
What I mean is that I don't understand what atomicStore, 
atomicLoad, etc. actually DO, although in the case of the two 
mentioned, I can hazard a pretty good guess. The documentation 
doesn't exist to tell me how to use the functions found in the 
module. I've never used any atomic functions in any language 
before.


Atomic operations are guaranteed to happen as one indivisible 
step. For example, x += 1 will load the value of x into a 
register, increment it by 1, then write the value back to 
memory, and another thread can sneak in between those steps. 
atomicOp!"+=" (x, 1) performs all three steps as if they were 
instantaneous, while atomicLoad and atomicStore give you single 
reads and writes that other threads can never observe 
half-done. This prevents a thread writing to an address that 
another thread just read from, silently making that thread's 
view of the data outdated (a race condition, because the final 
value of x depends on which thread writes last).
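
A minimal sketch of those calls (the shared counter and the 
thread setup are only for illustration, and nothing here waits 
for the workers to finish):

  import core.atomic : atomicLoad, atomicOp, atomicStore;
  import std.concurrency : spawn;

  shared uint counter;

  void worker () {
    atomicOp!"+=" (counter, 1);            // the whole read-modify-write is one indivisible step
  }

  void main () {
    foreach (_; 0 .. 4)
      spawn (&worker);
    auto snapshot = atomicLoad (counter);  // a read that can't see a half-written value
    atomicStore (counter, 0u);             // a write that other threads see all at once
  }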


However, thank you for the simple double buffered example, I do 
appreciate it. How would another thread gain access to the 
instance in use, though? Am I able to pass references to 
instances of this class via events?


You can if they're shared. You can pass the reference via 
message, or have the reference exist at a location both threads 
have access to. The two @property functions in the example are 
meant to be used by the writing and reading thread, respectively.


I go about the swap by letting the reader thread message the 
writer to inform it that it is ready for more data, at which 
point the writer thread is free to call .swap () once it is done 
writing (though you can do it the opposite way if you want).
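
With std.concurrency that handshake might look roughly like 
this (a sketch only; the Ready message type and the loop bodies 
are placeholders, and DoubleBuffer is the class from my earlier 
post):

  import std.concurrency : receive, send, spawn, thisTid, Tid;

  struct Ready {}   // reader -> writer: "I'm done with the read buffer"

  void reader (Tid writer, shared DoubleBuffer!float buf) {
    foreach (frame; 0 .. 100) {
      // ... use buf.read_buffer ...
      writer.send (Ready ());            // ask for the next frame
    }
  }

  void writer_loop (shared DoubleBuffer!float buf) {
    spawn (&reader, thisTid, buf);
    foreach (frame; 0 .. 100) {
      // ... fill buf.write_buffer ...
      receive ((Ready _) { buf.swap; }); // swap only once the reader is ready
    }
  }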


'synchronized' is also something I'm clueless about, and again 
the D documentation doesn't help, as the section in the class 
documentation basically says synchronized classes are made of 
synchronized functions, without explaining what a synchronized 
function IS. The function section makes no mention of the 
synchronized keyword, either.


A synchronized class has a hidden per-instance mutex, and any 
thread that calls a member method acquires that lock for the 
duration of the call, so only one thread can be inside the 
object's methods at a time. The synchronized keyword isn't 
limited to classes, though - you can find a detailed 
description of synchronized's various uses here: 
http://ddili.org/ders/d.en/concurrency_shared.html
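
Roughly, the two forms look like this (a sketch only; Counter, 
gate and add are made-up names):

  import core.sync.mutex : Mutex;

  // 1) synchronized class: every member method locks a hidden per-instance mutex
  synchronized class Counter {
    private int count;
    void increment () { ++count; }   // only one thread can be in here at a time
    int get () { return count; }
  }

  // 2) synchronized statement: lock an explicit object (here a core.sync Mutex) around a block
  __gshared Mutex gate;
  __gshared int total;

  shared static this () { gate = new Mutex; }

  void add (int n) {
    synchronized (gate) {            // acquires gate's monitor, released on scope exit
      total += n;
    }
  }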


Sorry, it feels like there's a load of stuff in the library and 
language to make multithreading easier, but with little to no 
explanation, so it feels like you're expected to already know 
it, unless I'm looking in the wrong places for my explanations. 
If I am, please do give me a kick, and point me in the right 
direction.


Yeah, I think the docs are meant to be a reference if you already 
have in mind what you're looking for, and just need to know how 
to wire it up.


I recommend Ali's book (linked in the previous paragraph) in 
general as it's the most comprehensive and up-to-date explanation 
of the language as a whole that I've yet seen (and it's free).


Re: Example usage of the core.sync classes

2015-01-02 Thread Benjamin Thaut via Digitalmars-d-learn

On 02.01.2015 at 19:45, Vlad Levenfeld wrote:


My personal favorite method is to use the primitives in core.atomic with
a double or triple buffer. To double buffer, keep two buffers in an
array (data[][] or something) and an integer index that points to the
active buffer, then use atomicStore to update the active buffer index when
you want to swap.
Triple buffering can improve runtime performance at the cost of more
memory, and in this case you will need two indices and two swap methods,
one for the producer and another for the consumer. I use cas with a
timed sleep to update the indices in this case.

Take it with a grain of salt, I'm far from a pro, but this has worked
well for me in the past. I can post some example code if you need it.


I have to agree with Vlad here. The usual way is to do double buffering. 
So you have two buffers which hold all the data that the physics 
simulation passes to the game logic. While the physics simulation is 
computing the next frame, the game logic can use the results from the 
last frame. The only point where synchronisation is necessary is when 
swapping the buffers. You have to ensure that both the game logic and 
the physics thread are done with their current buffer and then swap 
them. The only downside of this approach is that the data you pass this 
way arrives with a small delay (usually the time taken for one frame). 
Sometimes this approach is called the extractor pattern. I use it 
to pass data from the game simulation to the renderer, so the renderer 
can render in parallel with the game simulation computing the next 
frame. You can find an example of my double buffered solution here:

https://github.com/Ingrater/Spacecraft/blob/master/game/sources/renderer/extractor.d

I had triple buffering up and running at some point, but I went back to 
double buffering, because triple buffering can cause micro lags and you 
don't want that.
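
For reference, since the thread started out asking about the 
core.sync classes: one way that swap point could be 
synchronised is with core.sync.barrier - both threads finish 
their frame, meet at the barrier, exactly one of them performs 
the swap, and both continue. A rough sketch only (do_work and 
swap_buffers are placeholder delegates supplied by the caller):

  import core.sync.barrier : Barrier;

  __gshared Barrier frame_barrier;

  shared static this () { frame_barrier = new Barrier (2); }  // physics thread + game-logic thread

  // both threads run this frame skeleton; only one of them performs the swap
  void frame (bool does_swap, void delegate () do_work, void delegate () swap_buffers) {
    do_work ();               // produce (physics) or consume (game logic) the current buffer
    frame_barrier.wait ();    // both threads are now done with their buffers
    if (does_swap)
      swap_buffers ();        // exactly one thread swaps
    frame_barrier.wait ();    // nobody continues until the swap has happened
  }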


Kind Regards
Benjamin Thaut


Re: Example usage of the core.sync classes

2015-01-02 Thread Vlad Levenfeld via Digitalmars-d-learn

On Friday, 2 January 2015 at 17:39:19 UTC, Matt wrote:
I'm trying to write a small 3D engine, and wanted to place the 
physics in a separate thread to the graphics, using events, 
possibly std.concurrency, to communicate between them.


How, then, do I pass large amounts of data between threads? I'm 
thinking the physics world state (matrix for each object, 
object hierarchies, etc) being passed to the graphics thread 
for rendering.


I'd assumed that I would use Mutex, or ReadWriteMutex, but I 
have no idea how to build code using these classes, if this is 
even the right way to go about this.


I would really appreciate any pointers you can give.


Many thanks


My personal favorite method is to use the primitives in 
core.atomic with a double or triple buffer. To double buffer, 
keep two buffers in an array (data[][] or something) and an 
integer index that points to the active buffer, then use 
atomicStore to update the active buffer index when you want to swap.
Triple buffering can improve runtime performance at the cost of 
more memory, and in this case you will need two indices and two 
swap methods, one for the producer and another for the consumer. 
I use cas with a timed sleep to update the indices in this case.


Take it with a grain of salt, I'm far from a pro, but this has 
worked well for me in the past. I can post some example code if 
you need it.
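
For the triple-buffered case, the cas-plus-timed-sleep idea 
might be sketched like this (this isn't the exact scheme I use, 
just an illustration of the pattern; publish, take, the slot 
numbering and the back-off time are all made up):

  import core.atomic : atomicLoad, cas;
  import core.thread : Thread;
  import core.time : msecs;

  shared uint ready_slot = uint.max;   // index of the buffer waiting to be consumed, or uint.max if none

  // producer: publish `finished` as the next slot to read, but only once the
  // consumer has taken the previous one (it resets ready_slot to uint.max)
  void publish (uint finished) {
    while (!cas (&ready_slot, uint.max, finished))
      Thread.sleep (1.msecs);          // consumer hasn't taken the last slot yet; back off briefly
  }

  // consumer: claim whatever slot is ready, or return uint.max if nothing is
  uint take () {
    auto slot = atomicLoad (ready_slot);
    if (slot != uint.max && cas (&ready_slot, slot, uint.max))
      return slot;
    return uint.max;
  }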


Example usage of the core.sync classes

2015-01-02 Thread Matt via Digitalmars-d-learn
I'm trying to write a small 3D engine, and wanted to place the 
physics in a separate thread to the graphics, using events, 
possibly std.concurrency, to communicate between them.


How, then, do I pass large amounts of data between threads? I'm 
thinking the physics world state (matrix for each object, object 
hierarchies, etc) being passed to the graphics thread for 
rendering.


I'd assumed that I would use Mutex, or ReadWriteMutex, but I have 
no idea how to build code using these classes, if this is even 
the right way to go about this.


I would really appreciate any pointers you can give.


Many thanks