On Wednesday, August 9, 2023 at 8:49:51 AM UTC-5 Edward K. Ream wrote:

> When writing a .leo file, Leo must scan *all* nodes to find dirty @<file> 
> nodes.

> Using *c.all_unique_nodes* will be *slightly* more efficient than 
> *c.all_unique_positions*. But not enough to matter.

The timeit script below shows that c.all_unique_nodes is about three times 
faster than c.all_unique_positions. On my machine the output is something 
like this:

positions  0.012876
vnodes     0.004999

Imo this 3x speedup is no big deal. It's a gain of less than 0.01 seconds 
on a large outline like LeoPyRef.leo. Note that the time is linear in the 
number of positions in the outline.

Edward

P.S. Here is the timeit script:

import timeit

def positions():
    # Iterate all unique positions. The loop body is intentionally empty:
    # we are timing the traversal itself.
    for p in c.all_unique_positions():
        pass

def vnodes():
    # Iterate all unique vnodes.
    for v in c.all_unique_nodes():
        pass

for f in ('positions', 'vnodes'):
    s = timeit.timeit(f"{f}()", number=1, globals=globals())
    print(f"{f:<10} {s:.6f}")
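For anyone without a Leo outline handy, here is a self-contained sketch of 
why the position traversal costs more. The VNode and Position classes below 
are simplified stand-ins, not Leo's actual classes: the key assumption is 
that iterating positions allocates a fresh wrapper object (position plus 
traversal stack) per node, while iterating vnodes just visits existing 
objects.

```python
import timeit

class VNode:
    """Stand-in for Leo's vnode: the underlying data node."""
    __slots__ = ('children',)
    def __init__(self):
        self.children = []

class Position:
    """Stand-in for Leo's position: a vnode plus traversal state."""
    __slots__ = ('v', 'stack')
    def __init__(self, v, stack):
        self.v = v
        self.stack = stack

# A flat list stands in for the outline's unique vnodes.
vnodes = [VNode() for _ in range(10_000)]

def iter_vnodes():
    # Visiting existing objects: no per-node allocation.
    for v in vnodes:
        pass

def iter_positions():
    # Allocating a Position (and its stack) per node mimics the
    # extra work a position-based traversal does at each step.
    for i, v in enumerate(vnodes):
        p = Position(v, [(v, i)])

t_v = timeit.timeit(iter_vnodes, number=100)
t_p = timeit.timeit(iter_positions, number=100)
print(f"positions  {t_p:.6f}")
print(f"vnodes     {t_v:.6f}")
```

Both loops are linear in the number of nodes; the constant factor from the 
per-node allocation is what separates them, which matches the rough 3x gap 
measured above.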

