Hi,

On Fri, 2005-01-21 at 18:27, Lars Hillebrand wrote:
> Hi Gerrit
> 
> 
> >this is quite optimistic neglecting at least the three pointers
> >used per Multfield stored inside a property plus the aspect
> >overhead, plus multiple vtable, plus probably some things I 
> >forgot ;-) 
> >  
> >
> 
> >Another issue might be the order of allocation, the push_back
> >constantly reallocates you array leaving you with fragmented
> >memory depending on the exact order. Could you try to preallocate
> >your array and see if this changes your memory footprint. 
> >
> Hm, that will be difficult, because I use a recursive function to build 
> the tree.
>
> So I don't know the exact size of each level before the build is 
> finished. Is there another way of storing my data, with less overhead?

Something I forgot to ask / mention is the ratio between points
and properties. One point per property might be overkill, as the overhead
comes from the property, not the point itself: a Pnt3s point should
consume the 6 bytes you mentioned. Per property you get an overhead of
52 bytes per aspect (there should be 2 by default, so that counts as 104),
plus 8 bytes for bean counting, plus 8 bytes for each multithread pointer
you keep. So each property should consume at least 120 bytes (excluding
any data and counting only 1 pointer). That is different from the 580 you
report, but I expect the reallocation pattern adds to the problem (see
below).

How to do it better without a good estimate of how much memory will be
used beforehand is hard to say, especially for me without knowing 
what exactly your algorithm is doing ;-)). 
First I would try a better memory tool to see if the overhead really
comes from the allocation patterns (and not something I 
overlooked ;-)). Unfortunately I don't know one off the top of my head;
valgrind might be suited (I haven't had the time to try it so far).
Or somebody else could suggest a good memory profiler ;-). 
Unfortunately the current version of OpenSG does not allow you to mess
with the memory system ;-(, so building your own might be tricky ;-)

One possibility could be to estimate the maximum sizes for a given
test case, preallocate all the arrays to that size, and then check in
osview whether the observed value fits the expected size better. 

If I come up with something I'll let you know.

> >BTW how
> >did you measure your memory consumption  ?
> >  
> >
> Well, I watched it grow with xosview ;-)   .
> Before starting the program I had a use of about 400MB RAM and 0MB SWAP, 
> while running I had 1.5GB RAM and a full swap-partition of 512MB.

hmm again, IIRC the osview number reflects memory allocated by the
application, yes, but if the application reallocates memory and the new 
block does not overlap the old one, the consumption grows differently
than you would expect. Some time back somebody came up with
this nice example:

#include <iostream>
using namespace std;

int *foo = new int[1024];

cout << "test" << endl;   // first use of cout allocates internally

delete [] foo;

foo = new int[1025];      // one int too big to fit in the freed block

the observed memory footprint (IIRC he used top) was a little bit
more than 2049 ints. Why? Because the call to cout was the first 
time cout was used, and some memory allocated during this call prevented
the system from reusing the memory of the first 1024 ints while
allocating the second 1025 ;-) So he ended up with twice
the memory he expected ;-). Allocating anything of 1024 ints
or less would have caused the system to reuse the first chunk of
memory, showing no increase in the application's memory usage.

So these 'highlevel' tools can fool you ;-))

regards,
  gerrit




_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
