(continued...)

I forgot to mention something about interpolation. Yes, interpolating (or 
extrapolating) the data could be a nice idea, but I think it is a significant 
risk for something that may not be very useful in common physics usage. I 
believe real-time engines generally run at something like 50 or 100 Hz (a given 
engine can of course be much slower or much faster), so you may lose a lot of 
time trying to interpolate; moreover, you may introduce inconsistencies (such 
as objects interpenetrating). So I guess simply reading the last matrix is 
enough.
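Just to illustrate what I mean by "reading the last matrix", here is a minimal 
sketch (the class name is mine, nothing of this exists in OSG or PVLE; I use 
std::mutex, but OpenThreads::Mutex with a ScopedLock would be the OSG-style 
equivalent):

#include <mutex>
#include <osg/Matrixd>

// Single "latest transform" slot: the physics thread overwrites it after
// each step, and the update traversal copies whatever value is current.
class LatestTransform
{
public:
    // Called from the physics thread after each physics step.
    void set(const osg::Matrixd& m)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _matrix = m;
    }

    // Called from the update traversal; returns the most recent matrix.
    osg::Matrixd get() const
    {
        std::lock_guard<std::mutex> lock(_mutex);
        return _matrix;
    }

private:
    mutable std::mutex _mutex;
    osg::Matrixd       _matrix;
};

The update traversal would then simply do something like 
transform->setMatrix(slot.get()); the lock is only held for the duration of a 
matrix copy, so neither thread blocks the other for long.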
That is how I handle it in my engine (not multithreaded yet), and it works 
without lagging: I simply wrote a main loop that takes care of running the 
physics or the display whenever necessary, roughly as sketched below.
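In case it helps, the loop looks roughly like this (stepPhysics(), 
renderFrame() and keepRunning() are placeholders for the engine's own calls, 
not PVLE or OSG API; only the scheduling skeleton matters here):

#include <chrono>

// Placeholders for the engine's own calls.
void stepPhysics(double dt) { /* advance the physics world by dt seconds */ (void)dt; }
void renderFrame()          { /* update, cull and draw one frame */ }
bool keepRunning()          { static int frames = 0; return ++frames < 1000; }

int main()
{
    using clock = std::chrono::steady_clock;
    const double physicsStep = 1.0 / 100.0;   // e.g. 100 Hz physics
    double accumulator = 0.0;
    auto previous = clock::now();

    while (keepRunning())
    {
        auto current = clock::now();
        accumulator += std::chrono::duration<double>(current - previous).count();
        previous = current;

        // Run as many fixed physics steps as needed to catch up...
        while (accumulator >= physicsStep)
        {
            stepPhysics(physicsStep);
            accumulator -= physicsStep;
        }

        // ...then draw one frame using the latest physics results.
        renderFrame();
    }
    return 0;
}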

Maybe you have a different point of view on that? If so, I'd be happy to read it!

And what about other types of engines (non-real-time ones)? I suppose 
interpolation could be possible there, but I still think it would introduce 
simulation errors.

Sukender
PVLE - Lightweight cross-platform game engine - http://pvle.sourceforge.net/


On Thu, 15 Jan 2009 11:48:48 +0100, Sukender <suky0...@free.fr> wrote:

> Hi Robert,
>
> I like your idea. I'm not a threading expert, but won't this cost a lot? I 
> mean, locking/unlocking is quite expensive, as far as I know, and depending on 
> the number of matrices to lock, this could cost a lot. Anyway, much less than 
> waiting for a complete traversal to end, of course! :)
> Maybe our implementation could be a "ThreadSafeMatrixTransform"...
>
> Sukender
> PVLE - Lightweight cross-platform game engine - http://pvle.sourceforge.net/
>
>
> On Thu, 15 Jan 2009 10:10:05 +0100, Robert Osfield <robert.osfi...@gmail.com> 
> wrote:
>
>> Hi Sukender,
>>
>>
>> On Wed, Jan 14, 2009 at 8:59 PM, Sukender <suky0...@free.fr> wrote:
>>> Well, multithreading *is* an issue, but we need to address the problem.
>>> I guess the order of steps would be:
>>> - When it is time to update the physics, run the "physics update traversal" 
>>> (say "PUT") in a physics thread
>>> - PUT
>>> - PUT
>>> - ...
>>> - When the time comes to update the display, lock the physics thread so 
>>> that no PUT runs
>>> - Run the "display update traversal" ("DUT") in a display thread. During 
>>> this, copy physics positions/orientations to transforms.
>>> - Unlock the physics.
>>> - PUT (physics thread), cull and draw (display thread)
>>> - and so on.
>>>
>>> Am I right? I'm not sure about the threading, because here the PUT can't run 
>>> while the DUT runs, so the benefit is limited, even if the cull and draw 
>>> steps would run in parallel with a PUT. Maybe there could be improvements. 
>>> Any idea?
>>
>>
>> I would suggest decoupling the rendering and physics threads as much as
>> possible, without either one blocking the other. The way I would tackle
>> this would be via thread-safe data buffers that can be read by the
>> rendering thread and written to by the physics thread. Such a buffer
>> would typically hold a matrix (or similar transform representation) with
>> a time stamp; new entries would be pushed to the end of the buffer, and
>> the rendering thread would pull at the time interval required - perhaps
>> interpolating between two entries, or even estimating the position beyond
>> the last entry if the physics is lagging behind. This type of buffer need
>> only have one mutex, and would only be locked when the physics thread
>> pushes data into it and when the rendering thread pulls data from it.
>> This way neither thread should be halted, and they will run happily
>> decoupled.
>>
>> Robert.
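For what it's worth, here is how I picture the timestamped buffer you describe 
(all names are my own, nothing of this exists in OSG as far as I know; scale is 
ignored, only translation and rotation are blended when interpolating):

#include <deque>
#include <mutex>
#include <osg/Matrixd>
#include <osg/Quat>
#include <osg/Vec3d>

// Sketch of a single-mutex, timestamped transform buffer: the physics thread
// pushes (time, matrix) pairs, the rendering thread samples at its own rate.
class TimedTransformBuffer
{
public:
    // Physics thread: append the latest result with its simulation time.
    void push(double time, const osg::Matrixd& matrix)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _samples.push_back(Sample{time, matrix});
        while (_samples.size() > 64) _samples.pop_front(); // keep the buffer bounded
    }

    // Rendering thread: sample the transform at the requested time,
    // interpolating between the two surrounding entries when possible.
    osg::Matrixd sample(double time) const
    {
        std::lock_guard<std::mutex> lock(_mutex);
        if (_samples.empty()) return osg::Matrixd::identity();
        if (time <= _samples.front().time) return _samples.front().matrix;
        if (time >= _samples.back().time)  return _samples.back().matrix; // physics lagging: clamp

        for (std::size_t i = 1; i < _samples.size(); ++i)
        {
            if (time <= _samples[i].time)
            {
                const Sample& a = _samples[i - 1];
                const Sample& b = _samples[i];
                const double t = (time - a.time) / (b.time - a.time);

                // Blend rotation (slerp) and translation (lerp) separately.
                osg::Quat q;
                q.slerp(t, a.matrix.getRotate(), b.matrix.getRotate());
                const osg::Vec3d p = a.matrix.getTrans() * (1.0 - t) + b.matrix.getTrans() * t;

                osg::Matrixd result = osg::Matrixd::rotate(q);
                result.setTrans(p);
                return result;
            }
        }
        return _samples.back().matrix; // not reached
    }

private:
    struct Sample { double time; osg::Matrixd matrix; };

    mutable std::mutex _mutex;
    std::deque<Sample> _samples;
};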

_______________________________________________
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
