Hi Benjamin,

Could you possibly repost the text below, but broken down into shorter
paragraphs, as long paragraphs are really hard to follow.  Try to keep
one major point per paragraph.

You are tackling a big topic in terms of performance optimization, so
it's worth taking it bit by bit.

Robert.

On Sat, Mar 1, 2008 at 2:26 PM, Benjamin Eikel <[EMAIL PROTECTED]> wrote:
> Hello Robert,
>
>
>  On Saturday 01 March 2008 at 13:22:35, Robert Osfield wrote:
>  > Hi Benjamin,
>  >
>  > On Sat, Mar 1, 2008 at 12:24 AM, Benjamin Eikel <[EMAIL PROTECTED]> wrote:
>  > >  > Have you explored multiple CPU/GPU set ups on a single machine?
>  > >
>  > >  We have two CPUs but only one GPU on each node in the cluster.
>  > >
>  > >  > Have you fully explored scene graph/OpenGL optimization?
>  > >
>  > >  I think you mean that we do not need to use a cluster at all, because
>  > > the scenes can be rendered on one computer. I think I can answer that
>  > > with no. We are trying to create a parallel rendering algorithm and want
>  > > to check how well it scales with the number of cluster nodes used. In the
>  > > end it will hopefully be able to render really massive scenes (several
>  > > hundred million polygons). You can see it as some kind of research.
>  >
>  > Ahh, understood, it's research rather than just trying to solve a
>  > specific performance issue.
>  exactly.
>
> >
>  > BTW, modern GPUs can do hundreds of millions of polygons per second
>  > right now ;-)
>  Okay, but we do not have such modern GPUs, and furthermore we could simply
>  increase the complexity of the scene until a single GPU can no longer cope
>  with it.
>
> >
>  > As it's research, might I suggest benchmarking different types of scenes
>  > on a single workstation with a single GPU, with multiple GPUs, and on the
>  > cluster.  I strongly suspect it's only a small set of scenes that will
>  > favour a cluster.
>  We want to visualize scenes which are generated by a simulation of a factory.
>  The simulation is running on the cluster too. We want to do the rendering on
>  the cluster because we can get messages from the simulation faster (10 Gbit/s
>  InfiniBand, with lower latency than Ethernet and DMA transfers from one node
>  to another) and are able to cope with big scenes by just using more rendering
>  nodes (at least that is our goal). For example, the models of some machines
>  are very complex because they are generated by CAD programs, and we do not
>  want to do extensive preprocessing because the models may change over time.
>
>
> >
>  > > So using only a single big workstation is not an option. We do not want
>  > > to use the rendering of OpenSceneGraph (or Chromium, which was suggested
>  > > in another e-mail) because we want to implement some algorithms ourselves
>  > > to be able to handle these massive scenes (dynamic LOD with loose octrees
>  > > for big models, for example). By writing this rendering ourselves, we
>  > > have more freedom in doing so.
>  > >
>  > >  Of course we could write the whole system without using OpenSceneGraph.
>  > > But as stated before, it would be very nice to use some of its features
>  > > on our central computer so we do not have to implement things like
>  > > picking, dragging, culling and so on.
>  > >
>  > >  We have already begun the implementation, and now the question is, as
>  > > stated in my first mail, how we can use the power of OpenSceneGraph on
>  > > the front end to ease our lives. I have already used OSG in my Bachelor's
>  > > thesis to implement adaptive animation algorithms and was very pleased by
>  > > its features. But I am not quite sure whether the approach I described in
>  > > my first mail is the right one to follow, or whether it might be better
>  > > to put the interface between this front-end node and the rendering
>  > > slaves somewhere else. So further hints are greatly appreciated.
>  >
>  > There are different levels of distributed rendering: low level, like
>  > distributed GL with Chromium, through to high-level distributed IG,
>  > where all nodes have a local copy of all the data and high-level
>  > state like camera matrices is sync'd from the master.  What level
>  > of granularity are you aiming for?  Will you have the opportunity to
>  > test/develop various levels?
>  We get ids for the machine models from the simulation. We then want to load
>  the meshes for each machine into the scene graph (for this we can use the
>  already existing reader/writer plugins of OSG and do not have to write our
>  own). The scene graph then "knows" the bounding boxes and models and can do
>  culling, picking, moving machines with grabbers, and so on. Furthermore, we
>  want to use the animation capabilities for some special models (e.g.
>  forklifts, packages on a conveyor).
>
>  When a frame is to be rendered, we want to know which nodes (e.g. Geodes)
>  are visible and their transformation matrices (which should be known to OSG
>  after its update and culling traversals). There is one central node with a
>  scene graph for every user connected to our simulation/rendering system, so
>  we are able to have different views into the scene and can distribute the
>  culling for each user to a dedicated node. After that we know the visible
>  machines (and are able to estimate the cost of rendering them).
>
>  We then do load balancing between the different users viewing the simulation
>  and the different rendering nodes to create a partitioning of the set of
>  render nodes. Each subset of render nodes is assigned to one of the central
>  nodes. Once each central node knows which rendering nodes it can use, it
>  instructs those nodes by distributing the mesh ids, the transformations of
>  the meshes, and the camera transformation.
>
>  At that point we want to exploit the fact that the same machine may appear
>  multiple times inside the factory. So we could load the mesh of such a
>  machine as a VBO on a rendering node and let it render the machine multiple
>  times with different transformations.
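The cost-based load balancing described here could be sketched roughly like this in plain C++. All names are hypothetical, this is not OSG API or the actual system, just one possible greedy strategy that assigns render nodes to users in proportion to their estimated render cost:

```cpp
#include <vector>

// Hypothetical sketch, not OSG API: partition `totalRenderNodes` among
// the users in proportion to each user's estimated render cost (e.g. the
// visible polygon count), giving every user at least one render node.
std::vector<int> partitionRenderNodes(const std::vector<double>& userCosts,
                                      int totalRenderNodes)
{
    const int users = static_cast<int>(userCosts.size());
    std::vector<int> assigned(users, 1);       // every user gets one node
    int remaining = totalRenderNodes - users;

    // Hand out the remaining nodes one at a time to the user with the
    // highest cost per already-assigned node (greedy balancing).
    while (remaining-- > 0) {
        int best = 0;
        double bestRatio = -1.0;
        for (int i = 0; i < users; ++i) {
            const double ratio = userCosts[i] / assigned[i];
            if (ratio > bestRatio) { bestRatio = ratio; best = i; }
        }
        ++assigned[best];
    }
    return assigned;
}
```

For example, with two users whose estimated costs are 300 and 100 and four render nodes in total, this greedy scheme gives the heavier user three of the four nodes.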
>
>  So I hope this makes the system a bit clearer. I am now searching for an
>  interface where I can grab these visible nodes and put them into a list or
>  something similar. Then I can analyse them (perhaps storing the mesh id and
>  the number of polygons inside the user data), run the balancing algorithm,
>  and send the data to the rendering nodes. As stated before, we do not want
>  to use an already existing rendering system but to develop our own. So my
>  question is, as before, whether using our own DrawImplementation is the
>  right way to get the visible nodes from a scene graph, or whether there is
>  a better one.
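The "grab the visible nodes and put them into a list" step, reduced to a self-contained toy: the sketch below uses made-up types rather than real OSG classes (in OSG itself this information would come out of the cull traversal), and a crude distance test stands in for proper frustum culling. The point is only the shape of the idea: record visible instances instead of drawing them.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for scene-graph leaves, not real OSG types.
struct Vec3 { double x, y, z; };

struct MachineInstance {
    std::uint32_t meshId;   // id received from the simulation
    Vec3 position;          // world-space placement of this machine
    double boundingRadius;  // radius of the machine's bounding sphere
};

// One visible instance, ready to be sent to a render node.
struct VisibleEntry {
    std::uint32_t meshId;
    Vec3 position;
};

// Crude stand-in for a cull traversal: treat the "frustum" as a sphere of
// radius `viewRange` around the camera and collect every instance whose
// bounding sphere intersects it, recording it instead of drawing it.
std::vector<VisibleEntry> collectVisible(const std::vector<MachineInstance>& scene,
                                         const Vec3& camera, double viewRange)
{
    std::vector<VisibleEntry> visible;
    for (const MachineInstance& m : scene) {
        const double dx = m.position.x - camera.x;
        const double dy = m.position.y - camera.y;
        const double dz = m.position.z - camera.z;
        const double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (dist <= viewRange + m.boundingRadius)
            visible.push_back({m.meshId, m.position});
    }
    return visible;
}
```

Each entry in the resulting list would then carry exactly what a render node needs: the mesh id to instantiate and the transformation to render it with.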
>
>  Regards,
>  Benjamin
>  >
>  > Robert.
>
>
> > _______________________________________________
>  > osg-users mailing list
>  > osg-users@lists.openscenegraph.org
>  > http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
>
>
