Hi Gerrit

Something I forgot to ask/mention is the ratio between points and
properties. One point per property might be overkill, as the overhead
comes from the property, not the point itself: a 3s point should consume
the 6 bytes you mentioned. Per property you get an overhead of 52 bytes
per aspect (there should be 2 by default, so this counts 104), plus 8
bytes for bean counting, plus 8 bytes for each multithread pointer you
keep. So each property should consume at least 120 bytes (excluding any
data and counting only 1 pointer). That is different from the 580 you
report, but I expect the reallocation pattern to add to the problem (see
below).
I just tried it with a tree of 23977 nodes total. The levels have sizes like this:

Positions:
Nodes in Level 0: 1
Nodes in Level 1: 2
Nodes in Level 2: 4
Nodes in Level 3: 8
Nodes in Level 4: 16
Nodes in Level 5: 32
Nodes in Level 6: 64
Nodes in Level 7: 128
Nodes in Level 8: 256
Nodes in Level 9: 512
Nodes in Level 10: 1024
Nodes in Level 11: 2046
Nodes in Level 12: 4092
Nodes in Level 13: 7810
Nodes in Level 14: 7072
Nodes in Level 15: 894
Nodes in Level 16: 16
Nodes Together: 23977

it is a nearly-balanced ;-) binary tree, as it ought to be.

How to do it better without a good estimation of how much memory will be
used beforehand is hard to say, especially for me without knowing what
exactly your algorithm is doing ;-)). First I would try a better memory
tool to see if the overhead really comes from the allocation patterns
(and not that I overlooked something ;-)). Unfortunately I don't know one
off the top of my head; valgrind might be suited (I haven't had the time
to try it so far). Or somebody else could suggest a good memory profiler
;-). Unfortunately the current version of OpenSG does not allow you to
mess with the memory system ;-(, so building your own might be tricky ;-)

One possibility could be to estimate the maximum sizes for a given
testcase and preallocate all the arrays to that size, then look at
osview and see if the observed value fits the expected size better.
If I come up with something I let you know.
I used the binary export of OpenSG to get the size of my nodes. With this data the resulting (of course uncompressed) file had a size of 728276 bytes, that is an average of 30.3 bytes per node!

So these 'highlevel' tools can fool you ;-))
Yes, this seems to happen here. Looks like I wrote worst-case code from the memory-management point of view :'( .

Any idea?

My only one for now is to do an initial build that only counts the sizes of the levels without storing anything, then allocate the memory, then do the real build. But that's not very nice.



Lars


_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
