Hi,

On Tue, 2011-12-20 at 11:24 +0100, "Christoph Fünfzig" wrote:
> Hi Gerrit,
>
> On 19.12.2011 18:33, Gerrit Voß wrote:
> >
> > Hi,
> >
> >>>
> >> So this is in the GIT already or where do you change it?
> >
> > yes, I pushed it this afternoon commit
> > fa087cbe8e252ee14558aeeb7f33b8a59ad6d60b
> >
> ok, will have a look. Btw. do you have a test program for OSG::Barrier
> on windows (WinThreadBarrierBase)?
not a special one, but all the parallel drawing examples use a barrier
for synchronization. There is nothing more to it. The only choice you
have is how you set the number of threads to wait for: you can either
pass it on enter (like in your example) or you can set the number
separately and then just call enter() from the threads.

> >>> So far all tests including your testprogram work. Only for your
> >>> testprogram I get only a black result in the parallel windows but
> >>> this might be something else.
> >>
> >> Which of the two test programs are you referring to
> >> (2 windows or 3 windows) ?
> >
> > your latest with 3 windows from Friday (testStatisticsGerrit.cpp)
>
> Here it works modulo some artifacts to find out about.
> You have to add skyBoxFront.jpg/skyBoxBack.jpg/..
> and you are looking at the scene box from the outside ;)
>
> Coming back to aspect synchronization:
> The synchronization between
> singleton->syncBarrier->enter(singleton->numChannels+1);
> singleton->appThread->getChangeList()->applyNoClear();
> singleton->syncBarrier->enter(singleton->numChannels+1);
> is read only with respect to "singleton->appThread->getChangeList()".

yes.

> It writes to the aspect assigned to the thread. So this does not
> interfere with others ..

There is still one point where the sync operation touches all copies
involved. The MFields store flags to know between which aspects their
contents are shared. So within the MField you find

    for(UInt32 i = 0; i < oOffsets.size(); ++i)
    {
        pOther = reinterpret_cast<Self *>(pOtherMem + oOffsets[i]);

This will walk pOther through all aspect copies of a particular MField.
Let me check if by using atomic ops for the operations needed we can
get this part lock-free.

> Why do you insist on doing it sequentially? :)

It is not sequential as before, where only one of the threads could
sync at a time. Now only access to a single container is sequential,
which means that, seen from the top level, the synchronizing threads
run overlapped and the sequential parts are mainly at the beginning and
the end (depending on the number of threads and the changelist size).

kind regards
  gerrit
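For reference, a minimal sketch of the barrier/changelist pattern
discussed above, assuming the OpenSG 2.x API: enter(UInt32) and
applyNoClear() are taken from the mail itself, while the separate
setter is shown as setNumWaitFor(), an assumed name that may differ in
the actual Barrier class.

    #include <OpenSG/OSGBaseTypes.h>
    #include <OpenSG/OSGBarrier.h>
    #include <OpenSG/OSGChangeList.h>

    // run by each channel/render thread during sync
    void syncFromAppThread(OSG::Barrier    *syncBarrier,
                           OSG::ChangeList *appChangeList,
                           OSG::UInt32      numChannels)
    {
        // variant 1: pass the participant count directly to enter()
        syncBarrier->enter(numChannels + 1);   // app thread + channels

        // apply the app thread's changes to this thread's aspect copy,
        // without clearing the list so every channel can apply it
        appChangeList->applyNoClear();

        syncBarrier->enter(numChannels + 1);   // everybody done applying
    }

    // variant 2: set the count once up front, threads then just call enter()
    void setupBarrier(OSG::Barrier *syncBarrier, OSG::UInt32 numChannels)
    {
        syncBarrier->setNumWaitFor(numChannels + 1);   // assumed setter name
    }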
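And a generic C++11 illustration, not OpenSG code, of the kind of
atomic operation mentioned above for the per-MField sharing flags: a
bit mask updated with a single atomic read-modify-write instead of a
lock held while the aspect copies are walked.

    #include <atomic>
    #include <cstdint>

    struct AspectSharingMask
    {
        std::atomic<std::uint32_t> mask{0};   // bit i set => shared with aspect i

        void markShared(unsigned aspect)      // lock-free: one atomic RMW
        {
            mask.fetch_or(1u << aspect, std::memory_order_acq_rel);
        }

        void markUnshared(unsigned aspect)
        {
            mask.fetch_and(~(1u << aspect), std::memory_order_acq_rel);
        }

        bool sharedWith(unsigned aspect) const
        {
            return (mask.load(std::memory_order_acquire) >> aspect) & 1u;
        }
    };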