Hi Thomas,

I think we can cut down the time needed to slice a model.
Actually, it's simple to parallelize the task over a couple of processors.

25.6454157829 seconds to slice with 1 processor(s).
5.20338606834 seconds to slice with 8 processor(s).

Pretty good, right? That's a nice ~5x speed-up (25.6 s / 5.2 s ≈ 4.9), and it could be even faster with an asynchronous map. Still, the example is pretty trivial: I haven't figured out how to memory-map / share a shape between processes, so I'm recreating a sphere for every slice (duhhh...); see the sketch after the listing for one way around that. So it's a bit of an oversimplified example, but it proves the point of how simple it really is to do this kind of thing.

Oh, it's still 33 lines ;')

Cheers,

-jelle

from OCC.gp import *
from OCC.BRepPrimAPI import *
from OCC.BRepAlgoAPI import *
from OCC.TopOpeBRepTool import *
from OCC.BRepBuilderAPI import *
from OCC.Geom import *
from OCC.TopoDS import *
import time, processing, numpy

def slice( z ):
    import os
    print 'slicing index:', z, 'sliced by process:', os.getpid()
    # Create the shape to slice (recreated for every slice -- see note above)
    shape = BRepPrimAPI_MakeSphere( 60. ).Shape()
    # Create the slicing plane, defined by a point and the perpendicular direction
    plane = gp_Pln( gp_Pnt( 0., 0., z ), gp_Dir( 0., 0., 1. ) )
    face = BRepBuilderAPI_MakeFace( plane ).Shape()
    # Compute the shape/plane intersection
    section = BRepAlgoAPI_Section( shape, face )
    if section.IsDone():
        return section
    else:
        return None

def run( n_procs ):
    Zmin, Zmax, deltaZ = -100., 100., 0.001
    #shape = BRepPrimAPI_MakeSphere( 60. ).Shape() # Create the shape to slice once, if only it could be shared
    P = processing.Pool( n_procs )
    init_time = time.time() # for total time computation
    result = P.map( slice, numpy.arange( Zmin, Zmax + 0.01, deltaZ ).tolist() )
    total_time = time.time() - init_time
    print "%s seconds to slice with %s processor(s)." % ( total_time, n_procs )
    time.sleep( 5 )

run( 1 ); run( 8 )
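
P.S. On the shape-sharing point above, here is a minimal sketch of one workaround, untested with pythonOCC: it uses the multiprocessing module (the standard-library successor of processing) and its Pool initializer argument so the sphere is built once per worker process instead of once per slice.

from OCC.gp import gp_Pln, gp_Pnt, gp_Dir
from OCC.BRepPrimAPI import BRepPrimAPI_MakeSphere
from OCC.BRepAlgoAPI import BRepAlgoAPI_Section
from OCC.BRepBuilderAPI import BRepBuilderAPI_MakeFace
import multiprocessing, numpy

shape = None # per-process global, set once by the pool initializer

def init_worker():
    # Runs once in each worker process when the pool starts up
    global shape
    shape = BRepPrimAPI_MakeSphere( 60. ).Shape()

def slice_shared( z ):
    # Reuses the per-process shape; only the cutting plane changes per call
    plane = gp_Pln( gp_Pnt( 0., 0., z ), gp_Dir( 0., 0., 1. ) )
    face = BRepBuilderAPI_MakeFace( plane ).Shape()
    section = BRepAlgoAPI_Section( shape, face )
    if section.IsDone():
        return section
    else:
        return None

if __name__ == '__main__':
    P = multiprocessing.Pool( 8, initializer=init_worker )
    result = P.map( slice_shared, numpy.arange( -100., 100.01, 0.001 ).tolist() )

That way the sphere gets constructed n_procs times in total rather than once per slice, so the construction cost is paid only once per worker.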

