On 02/02/2013 07:24 AM, Arash Fard wrote:
> I don't know about imho! My Python task is not yet finished after 11
> hours. I forgot to mention that we halved the number of incoming and
> outgoing edges. Actually, we have defined a function which returns a
> pair of random integers between 0 and 80 for incoming and outgoing
> edges, so we expect the average degree of vertices to be 80.

Don't you mean 40? randint(0, 80) is uniform on 0..80, so the expected
in-degree and out-degree are each 40.

> Monitoring the processes on our system using top, I can see that all
> RAM and 80GB of swap are used, and the Python program's CPU usage is
> usually higher than 90%. So, it seems that perhaps memory is not the
> main issue for the run time at this moment. I am not sure how much the
> two randint(0,80) calls we make for the number of edges are
> responsible for this CPU load!

If the memory usage has stabilized, then it's possible the graph has
already been created and is now being randomly rewired, in which case
randint() is no longer being called.
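
For reference, here is a minimal sketch of this kind of generation,
assuming you are using graph_tool.generation.random_graph with a degree
sampler like the one you describe (N below is just a placeholder):

    from random import randint
    from graph_tool.generation import random_graph

    N = 1000  # placeholder; your actual graphs are much larger

    def deg_sampler():
        # each call draws a fresh (in-degree, out-degree) pair;
        # randint(0, 80) is uniform on 0..80, so each has mean 40
        return randint(0, 80), randint(0, 80)

    # for directed graphs the sampler must return an (in, out) pair
    g = random_graph(N, deg_sampler, directed=True)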

You should not expect any sort of decent performance if you are using
swap. I guess the best approach for you is either to work with smaller
graphs or, if you can't do that, to implement your own data structure
which uses less memory. You can use the Boost Graph Library itself,
which provides several different graph implementations and allows you
to create your own. If you are not generating graphs, but reading them
from disk, a very compact representation is the compressed sparse row
format:

   http://www.boost.org/doc/libs/1_52_0/libs/graph/doc/compressed_sparse_row.html
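
To give a feeling for why CSR is so compact, here is a hand-rolled
sketch of the layout in plain Python/numpy (just an illustration, not
the BGL API linked above): the entire structure is two integer arrays,
one of length V+1 and one of length E.

    import numpy as np

    def to_csr(num_vertices, edges):
        """Build CSR arrays from an iterable of (source, target) pairs."""
        edges = sorted(edges)                  # group edges by source
        indptr = np.zeros(num_vertices + 1, dtype=np.int64)
        indices = np.empty(len(edges), dtype=np.int64)
        for i, (s, t) in enumerate(edges):
            indptr[s + 1] += 1                 # count out-degree of s
            indices[i] = t
        np.cumsum(indptr, out=indptr)          # offsets into indices
        return indptr, indices

    def out_neighbors(indptr, indices, v):
        return indices[indptr[v]:indptr[v + 1]]

    indptr, indices = to_csr(4, [(0, 1), (0, 2), (2, 3)])
    print(out_neighbors(indptr, indices, 0))   # -> [1 2]

With 64-bit integers that is roughly 8*(V + E) bytes, far less than an
adjacency structure built out of per-edge Python objects.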

You may also look at what the web graph people are doing, since they
are used to working with huge graphs:

   http://webgraph.di.unimi.it/

Cheers,
Tiago

-- 
Tiago de Paula Peixoto <[email protected]>


