Dirk, thank you for the reply,

You're not late. Actually, you're just on time...
As I've said, we were at the analysis stage of the project, so we were still
gathering some basic information about OpenSG. I wrote a technical report
about OpenSG (I found a lot of information on the basics of OpenSG in your
PhD thesis).

But now we're thinking about how we're going to deal with some basic
problems, like the VTK+OpenSG integration. I think the way to solve this is
pretty much as you said: making a new OpenSG node that uses VTK for
rendering.

I've checked out OpenSG version 2.0 and found that a VTK-to-OpenSG
integration is natively available in this version (as a contribution to
OpenSG). I've compiled a demo of it, but I don't know whether it is ready to
work on a cluster yet, because the strategy adopted by this, say,
"vtk+opensg extension feature" is, as far as I know, to somehow wrap a
vtkActor in an OpenSG field container. It doesn't seem to translate the data
from VTK to OpenSG (it doesn't store the VTK data in an OpenSG node); it
only stores raw pointers to the vtkActor in a FieldContainer, so when OpenSG
needs to render something, the data is fetched through those pointers...

But I don't know whether that strategy will work on a cluster (raw pointers
are not distributable without special care). Plus, it's a different strategy
from the one you suggested. Let me check whether I really understood your
suggestion:

- The idea is to create a special kind of FieldContainer that somehow
stores the VTK input data (using a FieldContainer editor?).
- These special FieldContainer nodes can easily be distributed over the
network using OpenSG's native clustering feature.
- When the node arrives at a server and is sent for rendering, the VTK
input data is extracted and handed to the VTK engine, which takes care of
rendering everything...

Is that right? If not, how should this special OpenSG node look?

Thank you very much


2007/5/20, Dirk Reiners <[EMAIL PROTECTED]>:


        Hi Pablo,

Pablo Carneiro Elias wrote:
>
> Well, my project is a bit complex. It actually goes much farther than the
> scene graph. We're working on a BIG system that basically consists of a
> browser that can render many kinds of things: engineering objects (such
> as ships and big constructs), scientific objects (such as seismic data)
> and many, many other things, all together within the same scene (e.g. a
> ship at the ocean with all the engineering info, and the soil below with
> seismic info... and many other things.)

That sounds pretty big indeed. ;)

> Apart from the VTK-OpenSG integration problem (which is not solved yet,
> but I'm assuming for now that everything has already been placed into
> OpenSG correctly), as the VTK data will possibly be very large (the
> engineering data will be very large too, but it will not change, so it is
> not a problem) and is going to change many times due to numerical
> calculations, we need a way to handle this very efficiently using
> clusters.

To do that I would probably try to integrate VTK and OpenSG in the sense
that I would split the data, put it in a new kind of OpenSG node that
calls VTK for rendering, and distribute those across the cluster. That
way the actual visualization calculations are done on the cluster and in
parallel.

> But I don't have much information yet about how fields are serialized
> over the network (is there any magic strategy for sending them, or is it
> a normal bit transfer over the network?)

No magic involved, plain data transfer. ;)

> That's it... the scene-graph-related part of the project is basically
> that, and we've chosen OpenSG ;) Now I'm searching for info until tomorrow
> night in order to finish the report so we can start developing things
> soon (after a good analysis of the report) ;)

I hope I'm not too late then; sorry about the delay, but I'm traveling
and away from regular email right now.

Pablo Carneiro Elias wrote:
>
> I think I have almost everything in mind by now. I'm just missing some
> details about how data is sent over the network (whether there's compression,

Not right now. We've tried compression for images a few times, but in
our tests, unless you have an extremely fast compressor, sending
uncompressed data was faster on today's networks (GBit and up).

> how data is packed,

It's a simple Tag/Length/Data format for the data that changed (as
recorded in the ChangeList).

> if there´s introspection and stuff like that...)

Yup.

> If anyone can help me with this missing info, I'd be grateful...
>
> ... and soon I'll be glad to bring up more interesting and complex
> issues like the one you posted about...

Hope it helps

        Dirk


-------------------------------------------------------------------------
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users

