Hey Eric,
I must confess that I am quite confused by this thread as I can't figure
out what this is really about...
* performance of the Array property?
>>> env_weights = envelope.Weights.Array # 7.18513235319
This is something we don't really have control over... well,
seems like we're doomed with that =( ...
* fastest way to turn ((D1P1, ..., D1Pn), ..., (DnP1, ..., DnPn)) into
[P1D1, ..., P1Dn, ..., PnD1, ..., PnDn]?
Timed on the XSI man with local sub refinement (81331 vtx, 126 defs):
not much better...
>>> w1 = [weights[j][i] for i in range(len(weights[0])) for j in
range(len(weights))] # 8.70612062111
>>> w2 = list(itertools.chain.from_iterable(itertools.izip(*weights)))
# 5.21891196087
>>> assert(w1 == w2)
>>> w3 = itertools.chain.from_iterable(itertools.izip(*weights))
# 0.0
...way better! (of course, w3 times at 0.0 only because the generator hasn't
been consumed yet). Actually, timing these kinds of meaningless (no-context)
operations is quite boring and useless...
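To see what each variant actually does outside XSI, here is a tiny standalone sketch (Python 3 syntax, so zip/range instead of izip/xrange; the toy weights array is made up):

```python
import itertools

# Toy deformer-major weights: 2 deformers x 3 points (made-up numbers).
weights = ((0.1, 0.2, 0.3),   # deformer 1: weight on P1, P2, P3
           (0.9, 0.8, 0.7))   # deformer 2: weight on P1, P2, P3

# Nested list comprehension: transpose, then flatten point-major.
w1 = [weights[j][i] for i in range(len(weights[0]))
                    for j in range(len(weights))]

# Same result via zip + chain, consumed eagerly by list().
w2 = list(itertools.chain.from_iterable(zip(*weights)))
assert w1 == w2 == [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]

# The "0.0 seconds" variant: no list() call, so nothing is computed yet.
w3 = itertools.chain.from_iterable(zip(*weights))
print(list(w3))  # the work only happens here, when the iterator is consumed
```

So the 0.0 timing just measures building the lazy iterator, not doing the transpose.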
As I don't see the point of those alone:
>>> [list(x) for x in weights] # turning a tuple of tuples into a list
of lists?
>>> map(list, weights)
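Spelled out on a made-up tuple of tuples (Python 3 syntax, where map returns an iterator, hence the list() around it):

```python
weights = ((0.1, 0.2), (0.9, 0.8))  # made-up tuple of tuples

rows_a = [list(x) for x in weights]
rows_b = list(map(list, weights))   # map() is lazy in Python 3

# Both just convert the inner tuples to lists; no transpose, no flatten.
assert rows_a == rows_b == [[0.1, 0.2], [0.9, 0.8]]
```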
Timing the algorithms is much funnier...
* using numpy?
Considering the Array property is responsible for half of the slowness, I'm
not sure how numpy would help... Even though the XSI man example could, I
think, be considered a worst-case scenario, as it is not common to see such
dense meshes rigged. But even if we meet those cases once in a while, I think
a few extra seconds are a smaller price than a dependency on an external
library such as numpy; it is quite inappropriate to bring it up in this
context (we are not talking about gigabytes of data...).
In 90% of the cases, just refactoring the code will give you the boost you
need! Before considering numpy or other languages, try to refactor the
code...
Using the code Jeremie posted above and the XSI man example:
I am getting (Jeremie's):
-- all points
envelope.Weights.Array: 7.24001892257 # half of get_weights
get_weights: 14.6793240571
average_weights: 97.3035831834
list(weights): 1.64222230682
total: 113.65285342
-- xrange(200, 5000) points
envelope.Weights.Array: 7.13410343853 # half of get_weights
get_weights: 16.3542267175
average_weights: 9.32082265267
list(weights): 1.5987448828
total: 27.322880867
refactored:
-- all points
envelope.Weights.Array: 7.17315390541
get_weights: 7.17847566392 # no more waste here
average_weights: 12.0701456704
list(weights): 4.16878212485
total: 23.4396892319 ~5X faster
-- xrange(200, 5000) points
envelope.Weights.Array: 7.03594689191
get_weights: 7.16215152683 # no more waste here
average_weights: 5.33362318853
list(weights): 3.63788132247
total: 16.1539661472 ~2X faster
...with my configuration! This is just to demonstrate my point, not trying
to figure out who has the longest... =)
In Jeremie's version, the design is quite limited and performance will be
worse in a library where the function may be re-used many times within the
same process.
You basically just get the weights, do the operation and write out
(every time).
In the code below, you're only keeping a generator on the stack (no actual
data -> less memory) which is passed along between functions (in this case,
though, we had to unroll it to compute the averages), and the design lets
you apply multiple operations to the weights before even setting the
envelope.Weights.Array property back (kind of an ICE pattern) without
hurting performance, and even improving it...
import itertools

def get_weights(envelope):
    """Get envelope weights.
    ((D1P1, ..., D1Pn), ..., (DnP1, ..., DnPn))
        -> [P1D1, ..., P1Dn, ..., PnD1, ..., PnDn]
    """
    weights = envelope.Weights.Array
    return itertools.chain.from_iterable(itertools.izip(*weights))
def average_weights(envelope, weights, points=all):
    """Compute average of given points.
    takes and returns:
    [P1D1, ..., P1Dn, ..., PnD1, ..., PnDn]
    """
    deformer_count = envelope.Deformers.Count
    if points is all:
        points_count = envelope.Parent3DObject.ActivePrimitive.Geometry.Points.Count
        points = xrange(points_count)
    else:
        points_count = len(points)
        points = frozenset(points)
    # Compute the average by summing up all weights of points.
    weights = tuple(weights)  # flatten it (we have to in this case)
    averages = [sum(weights[p * deformer_count + d] for p in points) / points_count
                for d in xrange(deformer_count)]
    # groupby pattern
    weights_per_point = itertools.izip(*([iter(weights)] * deformer_count))
    enum = enumerate(weights_per_point)
    it = ((i in points and averages or w) for (i, w) in enum)
    return itertools.chain.from_iterable(it)
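For anyone who wants to check the averaging logic without an envelope at hand, here is a stripped-down, standalone sketch of the same groupby idea (Python 3 syntax; average_weights_flat and its explicit deformer_count/points arguments are my stand-ins for the XSI bits, not the real API; I also used a conditional expression instead of the and/or trick, which would misbehave if averages were ever empty):

```python
import itertools

def average_weights_flat(weights, deformer_count, points):
    """weights is point-major flat: [P1D1, ..., P1Dn, ..., PnD1, ..., PnDn].
    Replace each selected point's weights by the per-deformer average over
    the selected points; other points pass through unchanged."""
    weights = tuple(weights)            # flatten (we need random access)
    points = frozenset(points)
    count = len(points)
    averages = [sum(weights[p * deformer_count + d] for p in points) / count
                for d in range(deformer_count)]
    # groupby pattern: slice the flat sequence into per-point groups
    per_point = zip(*([iter(weights)] * deformer_count))
    it = (averages if i in points else w for i, w in enumerate(per_point))
    return itertools.chain.from_iterable(it)

# 3 points x 2 deformers, point-major (made-up values).
flat = [1.0, 0.0,   # P1
        0.0, 1.0,   # P2
        0.5, 0.5]   # P3
out = list(average_weights_flat(flat, 2, points=[0, 1]))
# P1 and P2 averaged to (0.5, 0.5); P3 untouched.
assert out == [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```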
print_ = lambda x, *args: Application.LogMessage(str(x).format(*args))

siobject = Application.Selection(0)
envelope = siobject.Envelopes(0)
cls_property = envelope.Weights
weights = cls_property.Array

# average all points
weights = get_weights(envelope)
weights = average_weights(envelope, weights)
weights = list(weights)

# average only points 200..4999
weights_p = get_weights(envelope)
weights_p = average_weights(envelope, weights_p, xrange(200, 5000))
weights_p = list(weights_p)
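The stacking-operations point can be shown without XSI at all: each pass takes and returns an iterator, and nothing is computed until the final list() (the single Array write-back in the real code). scale_weights and clamp_weights below are illustrative, made-up passes:

```python
def scale_weights(weights, factor):
    # one lazy pass: scale every weight
    return (w * factor for w in weights)

def clamp_weights(weights, lo=0.0, hi=1.0):
    # another lazy pass, stacked on top of the first
    return (min(max(w, lo), hi) for w in weights)

flat = [0.2, 0.4, 0.6, 0.8]          # point-major flat weights (made up)
pipeline = clamp_weights(scale_weights(flat, 2.0))

# Nothing has been computed yet; one final consumption does all the work,
# exactly like list(weights) before the single Array write-back.
assert list(pipeline) == [0.4, 0.8, 1.0, 1.0]
```

Stacking a third operation costs nothing extra per pass until that one consumption, which is why adding passes doesn't multiply the Array read/write overhead.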
Anyway, just saying that the algorithm and the code matter more than using
numpy or a JIT version of Python (which are, imho, the easy answers... hey
dude! use C, why bother!). And yes, you're right, envelope.Weights.Array is
slow, probably because it rebuilds the returned array on every access (just
a guess).
--jon
2013/6/1 Bartosz Opatowiecki <[email protected]>
> On 2013-05-31 15:29, Eric Thivierge wrote:
>
> It really seems the slowness is simply accessing envelopeOp.Weights.Array.
>
> Indeed, you are right.
>
>
> Bartek Opatowiecki
>