RE: Array Wish list
Subject: RE: Array Wish list
From: Jim Kring [EMAIL PROTECTED]
Date: Tue, 20 Jan 2004 12:40:17 -0800

Using the Show Buffer Allocations tool (I forget where I got it on NI's website), there is no buffer-allocation dot on the Transpose 2D Array function. This seems to imply that it is an in-place operation. -Jim

Jim, you piqued my interest, so I searched the NI web site and found that tool in a KnowledgeBase document entitled "Determining When and Where LabVIEW Creates a New Buffer", URL:
http://digital.ni.com/public.nsf/3efedde4322fef19862567740067f3cc/c18189e84e2e415286256d330072364a?OpenDocument

Thanks for the tip,

Mark Watson
Staff Engineer
Philip Morris
[EMAIL PROTECTED]
(804) 752-5631
(804) 752-5600 (fax)
(804) 215-5631 (pager)
Re: Array Wish list
Urs, you're dead right. I get the same results here (on Windows/LV 7.0). My apologies to anyone I misled yesterday with my assertion that Transpose 2D Array is a memory hog. Clearly, it isn't. Jason, you're right too - of course it's possible to transpose an array without making a complete copy. I'll turn my brain to ON before cooking up these wild hypotheses in the future.

I'm still interested to learn why the Transpose 2D Array wired to a front-panel indicator in my initial experiment (and Urs' first example below) consumes so much extra memory. Intuitively it should require no more memory than the same VI without the transpose function.

All the best, Simon

On Tue, 20 Jan 2004 22:45:04 +0100 Urs Lauterburg wrote:

Dear wireworkers and data transposers, I made a short test on my LV7/MacOS-10.3.2 by creating a one-million-element DBL 2D array (1000 x 1000) from random numbers in two nested For loops. Transposing the array before displaying the values in a plain 2D-array indicator adds 8 MB and 1 block to the 16 MB in 7 blocks of memory consumption. Execution time rises from 758 ms to 972.7 ms when the transpose function runs before the values are displayed. However - and this is the point - only a marginal overhead is present if no front-panel display occurs and the created 2D array is instead just disassembled by autoindexing in two nested For loops. In this case the VI consumes 8 MB of memory in 3 blocks in both cases, with the transpose function adding only a marginal overhead of 0.3 kB. Execution time here rises from 672 ms to 686 ms. So the test shows that it is fair to state that "Transpose Array" is an in-place function, at least under MacOS X. Don't let yourself be fooled by front-panel buffering, which puts a much larger strain on LabVIEW's memory manager than plain block-diagram calculations. In fact, judging by the results, when front-panel display takes place it easily consumes multiples of what the block diagram consumes. Are the results duplicatable on the politically correct platform too?
Anyway, happy wireworks as usual... Urs

Urs Lauterburg
Physics demonstrator
LabVIEW wireworker
University of Bern
Switzerland

The conclusion: Well, I was too lazy to do the test you performed, and I didn't find anything on NI's web site. But I'm nearly certain that the LV developers have said transpose is an in-place function at several of the NIWeek sessions I've attended. Maybe that's no longer true, or maybe my memory has turned to mush (fairly probable given my advancing years). It's certainly possible to transpose an array while reusing the same buffer; you just have to move the elements in the right order, and you need a tiny scratch buffer. But I believe your test results. Maybe someone with inside knowledge will chime in here...

Jason Dunham
SF Industrial Software, Inc.

-----Original Message-----
From: Simon Whitaker [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 20, 2004 2:03 AM
To: Info-LabVIEW List
Subject: Re: Array Wish list

On Mon, 19 Jan 2004 22:42:22 -0800 Jason Dunham wrote:

As far as I've ever heard, the transpose arrays don't use any extra memory. The transpose function is done in place. I'm sure a few extra bytes are needed for temporary storage, but supposedly the same array buffer is reused. I would guess that the graph transpose option is also not a memory hog.

Although a transposed array will consume the same amount of memory as the original array, the transpose function involves creating a new array, populating it with data from the original array, then deleting the original. This can have a significant effect on memory usage with a large array. LabVIEW stores a multi-dimensional array as a flat list of data items plus 4 bytes per dimension to store that dimension's size. You can therefore work out how much memory an array will use:

size of array item in bytes x no. of items + 4 x no. of dimensions

The no.-of-dimensions term becomes insignificant for large arrays.
So, an array of a million 16-bit integers will consume around an extra 2 million bytes (about 1.9 MB) during transposition. You can confirm this by profiling some sample VIs, one with transposition and one without. I created two VIs: the first creates a 2D array containing 10,000,000 32-bit integers (10 x 1,000,000) and writes it to an indicator. The second is a copy of the first, with a transpose node just before the indicator. The first VI consumes almost exactly 40,000K less than the second (84,000.94K vs 124,000.98K). According to the above formula, the array consumes c. 10,000,000 x 4 bytes = c. 39,000K.

You may not need to worry about this memory hit if you're transposing small arrays, but if you're transposing arrays with many millions of elements, your memory usage will increase significantly during transposition - especially if you're using a large data type like an extended-precision float.

All the best
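Simon's sizing formula lends itself to a quick sanity check. The sketch below (Python, used here only because LabVIEW diagrams can't be shown in plain text) encodes the formula from his message - element size times item count, plus 4 bytes per dimension - and reproduces both of his estimates:

```python
def lv_array_bytes(element_size, dims):
    """Approximate memory for a LabVIEW multi-dimensional array:
    (size of array item in bytes x no. of items) + (4 x no. of dimensions),
    per the formula in Simon's message."""
    n_items = 1
    for d in dims:
        n_items *= d
    return element_size * n_items + 4 * len(dims)

# A million 16-bit integers: about 2 million bytes (~1.9 MB)
print(lv_array_bytes(2, (1000, 1000)))            # 2000008
# 10 x 1,000,000 array of 32-bit integers, in K: c. 39,000K
print(lv_array_bytes(4, (10, 1_000_000)) / 1024)  # 39062.5078125
```

As he notes, the 4-bytes-per-dimension term is lost in the noise for any array large enough to matter.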
RE: Array Wish list
Urs, I'm glad you keep the flag of the most important politically (in-)correct platform high ;-)) You wrote on Tue, 20 Jan 2004 22:45:04 +0100:

Dear wireworkers and data transposers, I made a short test on my LV7/MacOS-10.3.2 by creating a one-million-element DBL 2D array (1000 x 1000) from random numbers in two nested For loops. Transposing the array before displaying the values in a plain 2D-array indicator adds 8 MB and 1 block to the 16 MB in 7 blocks of memory consumption. Execution time rises from 758 ms to 972.7 ms when the transpose function runs before the values are displayed. However - and this is the point - only a marginal overhead is present if no front-panel display occurs and the created 2D array is instead just disassembled by autoindexing in two nested For loops. In this case the VI consumes 8 MB of memory in 3 blocks in both cases, with the transpose function adding only a marginal overhead of 0.3 kB. Execution time here rises from 672 ms to 686 ms. So the test shows that it is fair to state that "Transpose Array" is an in-place function, at least under MacOS X. Don't let yourself be fooled by front-panel buffering, which puts a much larger strain on LabVIEW's memory manager than plain block-diagram calculations. In fact, judging by the results, when front-panel display takes place it easily consumes multiples of what the block diagram consumes. Are the results duplicatable on the politically correct platform too?

They are! I found comparable results, although the PC where I tested this is not so fast (it took ~1.3 s to generate the data you mentioned). I generated the same data and sent it to an empty sequence structure - it used 8,009 kB. I replaced the sequence with an indicator - that took 16,009 kB. I inserted a transpose - it used 24,009 kB. I replaced the indicator with a sequence again - it took 8,009 kB once more. So I'd assume transposing 2D arrays is in-place. I was surprised, however, about where the third copy of the data occurred or where it went.
No extra buffer allocation (black dot) was visible anywhere. Maybe Greg or someone else from NI can shed some light?

Greetings from Germany!

--
Dr. Uwe Frenz
Entwicklung
getemed Medizin- und Informationtechnik AG
Oderstr. 59
D-14513 Teltow
Tel. +49 3328 39 42 0
Fax +49 3328 39 42 99
[EMAIL PROTECTED]
WWW.Getemed.de
RE: Array Wish list
On Mon, 19 Jan 2004 22:42:22 -0800 Jason Dunham wrote:

As far as I've ever heard, the transpose arrays don't use any extra memory. The transpose function is done in place. I'm sure a few extra bytes are needed for temporary storage, but supposedly the same array buffer is reused. I would guess that the graph transpose option is also not a memory hog.

Although a transposed array will consume the same amount of memory as the original array, the transpose function involves creating a new array, populating it with data from the original array, then deleting the original. This can have a significant effect on memory usage with a large array.

I'm not really sure whether it really does, but it is conceivable that the transpose function actually operates in place with just a single temporary storage value, swapping pairs of elements between corresponding positions in the buffer. It certainly did not do so in the beginning, but as LabVIEW has evolved, a lot of array-manipulation functions have been improved. Tests do show an additional buffer being used compared to when no Transpose is present. Also, using the unofficial INI-file setting showInplaceMenuItem=True, you can verify that the Transpose function does not appear to operate in place.

Rolf Kalbermatter
CIT Engineering Nederland BV    tel: +31 (070) 415 9190
Treubstraat 7H                  fax: +31 (070) 415 9191
2288 EG Rijswijk                http://www.citengineering.com
Netherlands                     mailto:[EMAIL PROTECTED]
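For square matrices, at least, Rolf's single-temporary-value idea is easy to demonstrate: the classic in-place transpose walks the elements above the diagonal and swaps each with its mirror below, using one scratch variable and no second array. A minimal sketch in Python (an illustration of the technique, not of LabVIEW's actual internals):

```python
def transpose_square_inplace(a):
    """Transpose a square matrix (list of lists) in place by swapping
    each element above the diagonal with its mirror below, through a
    single temporary value."""
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            tmp = a[i][j]      # the one scratch value
            a[i][j] = a[j][i]
            a[j][i] = tmp
    return a

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
transpose_square_inplace(m)    # m is now [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```

Non-square arrays can also be transposed in the same buffer by following index cycles, as Jason notes elsewhere in the thread, though the bookkeeping is considerably messier.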
Re: Array Wish list
So I'd assume transposing 2D arrays is in-place. I was surprised, however, about where the third copy of the data occurred or where it went. No extra buffer allocation (black dot) was visible anywhere. Maybe Greg or someone else from NI can shed some light?

I wasn't around when the original Transpose was written, but I believe it was one of the first nodes made to be in-place. It does use one extra element and is in-place. I suspect that the results showing an extra copy being made were affected by constant folding. If you build a VI that has nothing but constants wired to Build Array, that is equivalent to wiring up a constant array. No problem yet - but constants cannot be overwritten, so even though Transpose is in-place, it modifies the data, and a copy must be made to avoid modifying the constant. If you make one of the inputs to Build Array a control, the data will be generated each time, and you should see that no additional data buffers are needed when a Transpose is inserted. Of course, you may be wondering why the Transpose node doesn't simply run at constant-folding time, requiring no additional runtime data and no runtime execution time. I believe it soon will, along with lots of other nodes.

Greg McKaskle
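Greg's constant-versus-control distinction has a loose analogy in ordinary Python (purely an illustration, not LabVIEW internals): an immutable value cannot be overwritten, so an in-place-style operation must first copy it into a writable buffer, whereas freshly generated data can be modified directly:

```python
def negate_inplace(buf):
    """Overwrite a list's elements in place, the way an in-place
    node reuses the buffer arriving on its input wire."""
    for i in range(len(buf)):
        buf[i] = -buf[i]
    return buf

constant = (1, 2, 3)        # immutable, like a folded diagram constant
work = list(constant)       # a copy is forced before any modification
negate_inplace(work)        # the constant itself stays untouched

control = [1, 2, 3]         # writable data, generated fresh each run
negate_inplace(control)     # modified in place - no extra buffer
```

The extra buffer Uwe observed plays the role of `work` here: it exists only because the source of the data could not legally be overwritten.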
RE: Array Wish list
Dear wireworkers and data transposers, I made a short test on my LV7/MacOS-10.3.2 by creating a one-million-element DBL 2D array (1000 x 1000) from random numbers in two nested For loops. Transposing the array before displaying the values in a plain 2D-array indicator adds 8 MB and 1 block to the 16 MB in 7 blocks of memory consumption. Execution time rises from 758 ms to 972.7 ms when the transpose function runs before the values are displayed. However - and this is the point - only a marginal overhead is present if no front-panel display occurs and the created 2D array is instead just disassembled by autoindexing in two nested For loops. In this case the VI consumes 8 MB of memory in 3 blocks in both cases, with the transpose function adding only a marginal overhead of 0.3 kB. Execution time here rises from 672 ms to 686 ms. So the test shows that it is fair to state that "Transpose Array" is an in-place function, at least under MacOS X. Don't let yourself be fooled by front-panel buffering, which puts a much larger strain on LabVIEW's memory manager than plain block-diagram calculations. In fact, judging by the results, when front-panel display takes place it easily consumes multiples of what the block diagram consumes. Are the results duplicatable on the politically correct platform too?

Anyway, happy wireworks as usual... Urs

Urs Lauterburg
Physics demonstrator
LabVIEW wireworker
University of Bern
Switzerland

The conclusion: Well, I was too lazy to do the test you performed, and I didn't find anything on NI's web site. But I'm nearly certain that the LV developers have said transpose is an in-place function at several of the NIWeek sessions I've attended. Maybe that's no longer true, or maybe my memory has turned to mush (fairly probable given my advancing years). It's certainly possible to transpose an array while reusing the same buffer; you just have to move the elements in the right order, and you need a tiny scratch buffer. But I believe your test results.
Maybe someone with inside knowledge will chime in here...

Jason Dunham
SF Industrial Software, Inc.

-----Original Message-----
From: Simon Whitaker [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 20, 2004 2:03 AM
To: Info-LabVIEW List
Subject: Re: Array Wish list

On Mon, 19 Jan 2004 22:42:22 -0800 Jason Dunham wrote:

As far as I've ever heard, the transpose arrays don't use any extra memory. The transpose function is done in place. I'm sure a few extra bytes are needed for temporary storage, but supposedly the same array buffer is reused. I would guess that the graph transpose option is also not a memory hog.

Although a transposed array will consume the same amount of memory as the original array, the transpose function involves creating a new array, populating it with data from the original array, then deleting the original. This can have a significant effect on memory usage with a large array. LabVIEW stores a multi-dimensional array as a flat list of data items plus 4 bytes per dimension to store that dimension's size. You can therefore work out how much memory an array will use:

size of array item in bytes x no. of items + 4 x no. of dimensions

The no.-of-dimensions term becomes insignificant for large arrays.

So, an array of a million 16-bit integers will consume around an extra 2 million bytes (about 1.9 MB) during transposition. You can confirm this by profiling some sample VIs, one with transposition and one without. I created two VIs: the first creates a 2D array containing 10,000,000 32-bit integers (10 x 1,000,000) and writes it to an indicator. The second is a copy of the first, with a transpose node just before the indicator. The first VI consumes almost exactly 40,000K less than the second (84,000.94K vs 124,000.98K). According to the above formula, the array consumes c. 10,000,000 x 4 bytes = c. 39,000K.
You may not need to worry about this memory hit if you're transposing small arrays, but if you're transposing arrays with many millions of elements, your memory usage will increase significantly during transposition - especially if you're using a large data type like an extended-precision float.

All the best, Simon

Simon Whitaker [EMAIL PROTECTED]
Software developer, Tiab Ltd
tel: +44 (0)1295 714046
fax: +44 (0)1295 712334
web: http://www.tiab.co.uk/
RE: Array Wish list
As far as I've ever heard, the transpose arrays don't use any extra memory. The transpose function is done in place. I'm sure a few extra bytes are needed for temporary storage, but supposedly the same array buffer is reused. I would guess that the graph transpose option is also not a memory hog.

The way the data comes from the DAQ VIs is probably the most sensible way. It's actually good in that if you want to combine the output of two or more successive calls to AI Read, you can just concatenate the arrays. If you think of how a LabVIEW 2D array is stored in memory (row data lies together), and how a DAQ scan card has to store its data (data from a single scan lies together), then the current order makes a lot of sense. lvdaq.dll would have to transpose the data in order to give it to you in the other format. It may not take extra memory, but it's probably a waste of time. I'd rather keep control of when the transposes happen than have the computer assume I always need them done. That's such a Microsoft approach (why would you want to see any DLLs in the Windows Explorer? I'll just hide them. Or everyone's favorite: It looks like you're writing a letter...)

Jason Dunham, President
SF Industrial Software, Inc.
415 743 9350 x142
[EMAIL PROTECTED]

-----Original Message-----
From: Jack Hamilton [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 19, 2004 9:23 PM
To: LabVIEW -Info; [EMAIL PROTECTED]
Subject: Array Wish list

I wish they would fix the orientation of the array that comes from the analog DAQ functions. All the arrays are transposed wrong, and you have to transpose them even to plot them (yes, yes, you can set the plot to 'transpose'). Transposing is memory-intensive no matter how you do it. Hopefully the guy at NI who programmed this initially still winces when he goes to sleep at night. He didn't fix it then - so now the thousands of the rest of us have to, every time. Hey, don't fix it now!!!
Jack Hamilton
Hamilton Design
[EMAIL PROTECTED]
www.Labuseful.com
714-839-6375 Office
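Jason's layout argument above can be sketched with plain Python lists (an illustration of the row-major logic, not of lvdaq.dll itself): when each row is one scan, combining two successive reads is a cheap concatenation, while channel-major delivery would require transposing every buffer first:

```python
# Two successive "AI Read" results: each row is one scan,
# each column is one channel (row-major: scan data lies together)
read1 = [[1.0, 10.0], [2.0, 20.0]]   # 2 scans x 2 channels
read2 = [[3.0, 30.0], [4.0, 40.0]]

# Successive scans are contiguous, so combining reads is a plain
# concatenation along the scan axis - no per-element shuffling:
combined = read1 + read2             # 4 scans x 2 channels

# Delivering the data channel-major instead would force a transpose
# of every buffer before handing it to the caller:
by_channel = [list(col) for col in zip(*combined)]   # 2 channels x 4 scans
```

This is the trade-off the thread circles around: the scan-major order makes acquisition cheap, and the caller decides when (and whether) to pay for a transpose.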