Jeff, first, let me say I'm very pleased to hear you are contributing this
work. I'm sure it will be useful to many people, and probably to me sooner
than I think. So the following is not a critique, just exploring the bounds
of your algorithm.

Does your routine work "only" in the case where you properly define the
original regular grid to your routine? If so, I presume one acquires the
special knowledge of this original grid in some way from the researcher
(experience shows THEY never know what it is, because some grad student did
the actual work three years ago :-). So in some sense, I predict I will end
up trying the following until they miraculously remember what the correct
grid was...

Does it work in the following cases:

1) original grid was, say, origin 0,0,0; delta 1,1,1; counts 20,20,20,
and you define the output grid as origin 0,0,0; delta 2,2,2; counts 10,10,10
(the output points should fall exactly on every other input point, but does
your routine average in the in-between values too, or discard them? the toy
sketch after case 3 spells out the two options I mean)

2) original grid was, say, origin 0,0,0; delta 1,1,1; counts 20,20,20,
and you define the output grid as origin 0,0,0; delta 1.5,1.5,1.5; counts
13,12,11 (most output points fall between input points, so interpolation or
bin assignment would be involved)

3) most of the input data falls on a regular grid, but some schmutz around
the edges (or wherever) does not? (i.e., do you still get the speedup when
things work as you expect and merely fall back to the slower, general Regrid
behavior otherwise, or does your routine become godawful slow when its
expectations are not met?)
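
To be concrete about what I'm asking in cases 1 and 2, here is a toy numpy
sketch. This is obviously not your routine, just the behaviours I have in
mind; all the names (subsample, box_average, nearest_bin) and the cheap
nearest-point trick for case 2 are mine, made up purely for illustration:

    # Toy sketch only: the two behaviours case 1 asks about, plus a cheap
    # bin-assignment option for case 2. Not the real routine.
    import numpy as np

    # "Original" data on a regular grid: origin 0,0,0; delta 1,1,1; counts 20,20,20
    data = np.random.rand(20, 20, 20)

    # Case 1a: output grid origin 0,0,0; delta 2,2,2; counts 10,10,10,
    # keeping only the values that land exactly on output points
    # (the in-between values are simply discarded).
    subsample = data[::2, ::2, ::2]              # shape (10, 10, 10)

    # Case 1b: same output grid, but each output value is the average of the
    # 2x2x2 block of input values it covers (in-between values contribute).
    box_average = data.reshape(10, 2, 10, 2, 10, 2).mean(axis=(1, 3, 5))

    # Case 2 (one cheap possibility): output grid origin 0,0,0;
    # delta 1.5,1.5,1.5; counts 13,12,11. Output points fall between input
    # points, so here each output point just gets the value of the *nearest*
    # input point (bin assignment); a smarter routine would interpolate.
    out_x = np.arange(13) * 1.5
    out_y = np.arange(12) * 1.5
    out_z = np.arange(11) * 1.5
    ix = np.clip(np.rint(out_x).astype(int), 0, 19)
    iy = np.clip(np.rint(out_y).astype(int), 0, 19)
    iz = np.clip(np.rint(out_z).astype(int), 0, 19)
    nearest_bin = data[np.ix_(ix, iy, iz)]       # shape (13, 12, 11)
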

I think adding your stuff as a method to Regrid sounds elegant: just be
sure to warn the prospective user that he'll get the incredible speedup
only if he plays by the rules (unless, for example, you also get the
speedup over Regrid in case 2 above, which I suspect you may not).

Chris Pelkie
Vice President/Scientific Visualization Producer
Conceptual Reality Presentations, Inc.
30 West Meadow Drive
Ithaca, NY 14850
[EMAIL PROTECTED]
