> I believe that the "concatenate array" function makes a new, slightly
> larger, copy of the original array to add your new data on to the end, then
> deletes the original. It's a little smarter in that it adds more space than
> you need to reduce the number of times it has to do this, but it's still
> slow and memory hogging. It may be one of your problems. Initialise the
> array at the start and replace null entries with your data as you go. You
> could also use a more elegant design that doesn't buffer all the data before
> starting to write to Excel- a queue to a separate Excel writer for instance,
> unless the machine overhead isn't acceptable.
> 
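The queued-writer pattern suggested above can be sketched in plain Python (LabVIEW queues and a G writer loop would be the real implementation; the Excel side here is stubbed with a list, and all names are illustrative):

```python
# Sketch of the queued-writer pattern: a producer hands rows to a
# dedicated writer thread through a bounded queue, so data acquisition
# never blocks on the slow write side.
import queue
import threading

SENTINEL = None  # tells the writer thread to shut down

def writer(q, sink):
    """Consume rows from the queue and 'write' them (here: append to sink)."""
    while True:
        row = q.get()
        if row is SENTINEL:
            break
        sink.append(row)  # real code would write the row to Excel here

q = queue.Queue(maxsize=100)  # bounded, so the producer can't run away
written = []
t = threading.Thread(target=writer, args=(q, written))
t.start()

for i in range(10):           # producer: generate data as it arrives
    q.put([i, i * i])

q.put(SENTINEL)
t.join()
```

A bounded queue also gives the back-pressure the quoted poster alludes to with "machine overhead": if the writer falls behind, the producer stalls instead of buffering without limit.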


I'm not sure I follow everything here, but if the "concatenate array" 
function is really the Build Array function on the LV diagram, then it 
has several different modes of operation.

It currently takes the output datatype, and if any of the inputs match 
in type and dimensionality, LV will try to modify that array in place 
instead of allocating a new one.  In cases where that array isn't 
modifiable, I think it will allocate a new one, though it could go 
further down the inputs looking for candidates.  Anyway, in the typical 
case of appending or prepending data, it works fine and the output is 
the input grown slightly.  And the handle the array is stored in may be 
somewhat oversized, either by LV, the LV memory manager, or the 
underlying OS memory manager, meaning that a resize often just means 
updating the size, not reallocating to a new location with a bigger 
buffer.
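The effect of an oversized handle can be sketched with a toy dynamic array (this is not LabVIEW's actual allocator, just an illustration of why most resizes are cheap when capacity grows geometrically):

```python
# Minimal sketch: when the backing buffer is oversized and doubles on
# overflow, a "resize" usually just bumps the length field; an actual
# reallocation only happens on the rare occasions the buffer fills up.
class GrowArray:
    def __init__(self):
        self.buf = [None] * 4   # oversized backing buffer
        self.length = 0
        self.reallocations = 0

    def append(self, x):
        if self.length == len(self.buf):                  # buffer full
            self.buf = self.buf + [None] * len(self.buf)  # double capacity
            self.reallocations += 1
        self.buf[self.length] = x   # usually only this line runs
        self.length += 1

a = GrowArray()
for i in range(100):
    a.append(i)
# 100 appends, but only 5 reallocations (capacity 4->8->16->32->64->128)
```

With doubling, the copying cost amortizes to a constant per append, which is why growing "slightly" each call is usually tolerable.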

Anyway, even though Build Array is more efficient than it used to be and 
is reasonably intelligent, it still doesn't have as much information as 
you do, and it is possible to write a better diagram using this 
knowledge by presizing and replacing.  On the other hand, it takes a 
decent amount of code to do so, and the built-in behavior is pretty good 
as long as the array isn't growing millions of times.
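The presize-and-replace idea (Initialize Array followed by Replace Array Subset in a loop, rather than Build Array) can be contrasted in Python, which has the same trade-off between concatenating and writing into a preallocated buffer:

```python
# Hedged sketch of the two approaches; Python lists stand in for LV arrays.
def grow_by_concat(n):
    out = []
    for i in range(n):
        out = out + [i * 2]  # copies the whole array each iteration: O(n^2)
    return out

def presize_and_replace(n):
    out = [0] * n            # allocate once up front (Initialize Array)
    for i in range(n):
        out[i] = i * 2       # replace in place (Replace Array Subset): O(n)
    return out

assert grow_by_concat(1000) == presize_and_replace(1000)
```

Both produce the same result; the preallocated version simply avoids the repeated copies, at the cost of knowing (or bounding) the final size in advance, which is exactly the extra information you have and Build Array doesn't.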

Greg McKaskle

