I've heard that Neo4j is the de facto standard tool for working with graph databases, though I've never used it myself.
T.

On Mon, Dec 19, 2016 at 12:32 PM, Ruchika Salwan <[email protected]> wrote:

> Hi,
> That's true. I have developed the basic version with igraph. Can you tell
> me about any other library that I can use to implement the algorithm for
> massive graphs?
>
> Thanks,
> Ruchika
>
> On 15 Dec 2016 18:12, "Tamas Nepusz" <[email protected]> wrote:
>
>>> I am following this research paper whose findings I have to replicate.
>>> And one of their graphs has 5 million nodes and 69 million edges. That's
>>> the smallest dataset they are using.
>>
>> igraph has no problems with a graph of that size on a decent machine.
>> (Mine has 8 GB of RAM, and an Erdos-Renyi random graph of that size fits
>> easily.) Larger graphs can become problematic -- but anyway, working with
>> in-memory graphs and on-disk graphs is radically different, and igraph was
>> designed for the former use case, so it won't be of any help to you if
>> your graph does not fit into RAM. The problem is that igraph makes
>> assumptions about the cost of certain operations; for instance, it assumes
>> that looking up the neighbors of a vertex can be done in constant time.
>> These assumptions do not hold if the graph is on disk, because the
>> operations become much more costly. So, in that case, you are better off
>> either using another library that stores the graph in a database, or
>> implementing your algorithm from scratch.
>>
>> T.
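[Editor's note: a minimal sketch, not part of the original thread, of how one might reproduce the back-of-the-envelope test quoted above with python-igraph. It assumes python-igraph is installed; the node and edge counts come from the quoted message, and the resource-based memory check only works on Unix-like systems.]

    # Sketch only: generate an Erdos-Renyi graph with the quoted node/edge
    # counts and exercise the in-memory neighbor lookup that igraph relies on.
    import resource

    from igraph import Graph

    # 5 million vertices, 69 million edges, as in the paper mentioned above.
    g = Graph.Erdos_Renyi(n=5_000_000, m=69_000_000)

    print(g.vcount(), "vertices,", g.ecount(), "edges")

    # Neighbor lookup is cheap because the whole graph lives in RAM.
    print("first neighbors of vertex 0:", g.neighbors(0)[:10])

    # Peak resident set size (kilobytes on Linux); a rough gauge of whether
    # the graph actually fits into memory on your machine.
    print("peak RSS:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)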
_______________________________________________
igraph-help mailing list
[email protected]
https://lists.nongnu.org/mailman/listinfo/igraph-help
