Oh yes, one more thing about your benchmark: try to avoid calling "rand"
methods inside the timed code, especially when you are testing such
low-level methods, because the rand() calls themselves often account for a
significant part of the measured time.

The best approach would be to compute all the random numbers you need in a
first phase, then run %timeit on the add_edge part alone.
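Here is a minimal sketch of what I mean (the pair count of 10000 and the
helper name add_del_only are just placeholders; it assumes a graph object
with add_edge/delete_edge methods, like Sage's Graph):

```python
import random

# Phase 1: precompute all the random endpoints up front,
# outside of anything you intend to time.
n = 500
pairs = [(random.randrange(n), random.randrange(n)) for _ in range(10000)]

def add_del_only(G, pairs):
    # Phase 2: only graph operations here -- no random() calls
    # pollute the timed path, so %timeit add_del_only(g, pairs)
    # measures add_edge/delete_edge and nothing else.
    for i, j in pairs:
        G.add_edge(i, j)
        G.delete_edge(i, j)
```

You would then run %timeit on add_del_only(g, pairs) instead of a function
that draws its own random numbers on every call.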

Admittedly, this will probably not reflect well on Sage, since it should
increase the differences between the libraries, but I think it is very
important for your benchmark. To give you an idea:

sage: from numpy import random as rnd
sage:
sage: g = Graph(500)
sage: def rand_entry(G):
....:     n = G.order()
....:     i = rnd.randint(0,n-1)
....:     j = rnd.randint(0,n-1)
....:     G.add_edge(i,j)
....:     G.delete_edge(i,j)
....:
sage: def just_rand(G):
....:     n = G.order()
....:     i = rnd.randint(0,n-1)
....:     j = rnd.randint(0,n-1)
....:     return i*j
....:
sage: %timeit rand_entry(g)
625 loops, best of 3: 20.4 µs per loop
sage: %timeit just_rand(g)
625 loops, best of 3: 4.93 µs per loop

So roughly a quarter (4.93/20.4 ≈ 24%) of the time used by this test
method is actually spent in calls to "random()" :-)

Nathann
