Matt, is the test case you outlined also your use case?
Reparametrization, even outside of ICE, is non-trivial: if you want
equidistant samples you are basically facing a minimization problem, which
is why I assume you went for the forward-walking technique (repeat with a
bouncing or decreasing increment until the lowest possible U and V value is
found that returns a distance within tolerance of the discrete interval).
I tried that, and it was prohibitively expensive, as it involves whiles and
repeats that degrade the graph's threading and inflate memory use
enormously.
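
For reference, this is roughly what I mean by forward walking, sketched in
plain Python rather than ICE nodes (eval_curve is just a stand-in for
evaluating a position at a given U; the real thing would be your surface
evaluation):

import numpy as np

def eval_curve(u):
    # Stand-in for "evaluate position at U"; swap in your actual curve
    # or surface evaluation here.
    return np.array([u, u * u, 0.0])

def walk_to_distance(u_start, target_dist, tol=1e-4, step=0.01, u_max=1.0):
    # Walk forward in U, and every time the chord distance crosses the
    # target, bounce back with a halved increment until we're within
    # tolerance. This is the while/repeat pattern ICE threads so poorly.
    p0 = eval_curve(u_start)
    u, direction, iters = u_start, 1.0, 0
    while iters < 10000:
        iters += 1
        u_next = min(max(u + direction * step, u_start), u_max)
        d = np.linalg.norm(eval_curve(u_next) - p0)
        if abs(d - target_dist) <= tol:
            return u_next
        if (d > target_dist) == (direction > 0):
            # Crossed the target going this way: bounce and refine.
            direction, step = -direction, step * 0.5
        u = u_next
    return u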

What I found, to my surprise, the first time I tackled the problem at its
lowest dimensionality, is that using a ton of get-closest-location calls and
a single repeat (and then ridding myself of even that in favour of a set of
samples run through a hard-wired, fixed number of iterations) had
practically no cost by comparison, and threaded more efficiently across
all cores at all times.
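
And this is the shape of the fixed-cost version: scatter candidate samples,
keep the best, shrink the bracket, run a hard-wired number of passes. Again
just a Python sketch (brute-force closest point stands in for get closest
location), but the key property is that every element does exactly the same
amount of work:

import numpy as np

def closest_u(point, eval_curve, lo=0.0, hi=1.0, samples=64, passes=3):
    # Fixed-cost search: no data-dependent loop, every element runs the
    # same number of evaluations, which is what ICE threads well.
    for _ in range(passes):
        us = np.linspace(lo, hi, samples)
        pts = np.array([eval_curve(u) for u in us])
        d = np.linalg.norm(pts - point, axis=1)
        best = us[np.argmin(d)]
        # Tighten the bracket to roughly one sample spacing around the best hit.
        half = (hi - lo) / samples
        lo, hi = max(best - half, 0.0), min(best + half, 1.0)
    return best

The bracket shrinks by roughly samples/2 every pass, so a handful of passes
already gets you well below any practical tolerance.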

Get closest location on its own will of course return data you want to
filter, especially in areas with considerable discontinuity (a high rate of
change in the first-order derivative), but nothing that filtering by a
ruleset won't deal with excellently (exclude the precedent location > filter
in range > filter by lowest U or V to avoid skipping the entire
discontinuity, then a further get closest, resized and filtered again).
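
Boiled down, the ruleset is something like this (hypothetical names;
candidates being an array of U/distance pairs coming back from the batch of
get closest locations):

import numpy as np

def pick_next_location(candidates, prev_u, min_d, max_d, u_eps=1e-5):
    c = np.asarray(candidates, dtype=float)
    c = c[np.abs(c[:, 0] - prev_u) > u_eps]          # 1) exclude the precedent location
    c = c[(c[:, 1] >= min_d) & (c[:, 1] <= max_d)]   # 2) filter in range
    if len(c) == 0:
        return None                                  # nothing survived: resize and retry
    return c[np.argmin(c[:, 0])][0]                  # 3) lowest U wins, so we don't skip the discontinuity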

If you are literally limited to cases with only a few control vertices and
you can guarantee the discontinuity isn't too brutal (i.e. the first-order
derivative between subsequent nodes doesn't change by more than 90 degrees
minus iota), the problem is a great deal simpler than if you have many knots
and the domain of the surface has practically no boundaries other than
those of the function. That's why I was asking about the case.
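
If you want a quick way to check whether you're in the easy case, something
like this, assuming you can sample positions at or near the knots, will tell
you how hard the first-order derivative turns (needs at least three samples
to mean anything):

import numpy as np

def max_tangent_turn(points):
    # Worst angle, in degrees, between successive difference vectors.
    # Comfortably under 90 means the simple search is safe; above that,
    # expect to need the full filtering ruleset.
    p = np.asarray(points, dtype=float)
    t = np.diff(p, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    cos = np.clip(np.sum(t[:-1] * t[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.max(np.arccos(cos)))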

Playing with the arrays to filter them in a safe and fast way was also key;
it's counter-intuitive compared to how you would deal with arrays in
traditional programming, especially performance-wise, but possible (again,
Stephen and Julian's blogs have many gems).
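
Roughly, the mindset shift is the difference between looping and branching
per element and building one mask that gets applied to every array in
lockstep. In numpy terms, purely as an analogy for the ICE filter /
select-in-array style:

import numpy as np

# dists and us would be the arrays coming back from the closest-location queries.
dists = np.random.rand(100000)
us = np.random.rand(100000)

keep = (dists > 0.01) & (dists < 0.1)        # one mask...
us_kept, dists_kept = us[keep], dists[keep]  # ...applied to all arrays at once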

If this is an on-off tool, I would also consider using a very dense poly or
point cloud conversion of the NURBS plane, with data samples from the
surface, rather than the surface itself, but that might or might not be
possible.
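
If you can afford the bake, the closest-location query then degenerates into
a plain nearest-neighbour lookup on the sampled cloud, carrying the UV of
each sample as per-point data. A sketch, with scipy's kd-tree standing in
for whatever acceleration structure you'd actually use, and
eval_surface(u, v) -> xyz being a hypothetical stand-in for the NURBS
evaluation:

import numpy as np
from scipy.spatial import cKDTree

def build_sample_cloud(eval_surface, res=256):
    # Pre-bake a dense point cloud from the surface, keeping the UV of
    # each sample alongside its position.
    u, v = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    uv = np.stack([u.ravel(), v.ravel()], axis=1)
    pts = np.array([eval_surface(a, b) for a, b in uv])
    return cKDTree(pts), uv

def closest_uv(tree, uv, query_point):
    # Closest location becomes a nearest-neighbour lookup, accurate to
    # the sampling density of the bake.
    _, idx = tree.query(query_point)
    return uv[idx]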

I still don't know what your performance target is. If it's dozens of
frames per second, or 60 Hz across multiple setups, I'd say you're better
off dropping this like a dead rat and instantly exploring other avenues.
If it's a conforming tool used in a session with clear entry and exit
points, then the 15-20 Hz average that still feels smooth when operating a
tool is more achievable.

Lastly, you always have the option of dealing with the parametrization in
your own OP and writing a transform per discrete element, to be used in ICE
for the rest from there, which is probably the sane thing to do if you have
dense surfaces and the problem has an unbounded domain. ICE just isn't well
suited to a lot of fringe-case handling if you want performance to scale
(it does best when the same operation, no matter how big, is run as widely
as possible many times over, rather than at variable depth), whereas in an
OP that kind of optimization always works well.
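
The kind of thing I'd hand over from the OP is one transform per discrete
element, position plus a frame built from the local tangents, and let ICE do
the rest. Sketched here with finite differences and the same hypothetical
eval_surface(u, v) -> xyz as above:

import numpy as np

def frame_at(eval_surface, u, v, eps=1e-4):
    # Build a 4x4 transform from the position and an orthonormal frame
    # derived from finite-difference tangents along U and V.
    p = np.asarray(eval_surface(u, v))
    du = np.asarray(eval_surface(u + eps, v)) - p
    dv = np.asarray(eval_surface(u, v + eps)) - p
    n = np.cross(du, dv)
    x = du / np.linalg.norm(du)
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2], m[:3, 3] = x, y, z, p
    return m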
