Hi,

I have lots of data acquired via analogue-to-digital conversion, so it is
represented as integers (often at 16-bit resolution). To obtain the correct
signal and plot it, these data must of course be multiplied by a
floating-point scale factor. That seems potentially wasteful of time and
memory (converting int16 samples to float64, for instance, quadruples the
footprint), especially as I would prefer to keep the original data untouched.
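
For concreteness, this is roughly what I do now; the array contents and the
scale factor below are just placeholders:

    import numpy as np

    # Raw ADC samples, e.g. 16-bit signed integers straight from the hardware.
    raw = np.array([0, 512, 1024, -1024], dtype=np.int16)

    # Hypothetical calibration factor: volts per ADC count.
    scale = 3.3 / 32768

    # This allocates a float64 copy four times the size of the original data.
    volts = raw * scale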

It occurs to me that a more efficient plotting method would be to plot the
original data but scale the axes by the appropriate factor. That way, a
simple numpy array view could be passed straight to plot(). Does a method for
doing this exist? I think I can do it in a rather convoluted way by plotting
the original data and then superimposing empty axes at the adjusted scale.
However, I haven't tested this yet, and I'm a bit skeptical about the
overhead of two plots. Another possibility might be the units mechanism, but
the documentation discourages that, and it might be awkward to implement.
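
Something like the following untested sketch is what I have in mind; I'm
using a tick formatter here only as a stand-in for whatever the proper
mechanism would be:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.ticker import FuncFormatter

    raw = np.array([0, 512, 1024, 2048], dtype=np.int16)
    scale = 3.3 / 32768  # hypothetical volts per ADC count

    fig = plt.figure()
    ax = fig.add_subplot(111)

    # Plot the untouched integer data; no scaled float copy is created.
    ax.plot(raw)

    # Relabel the y ticks in physical units instead of rescaling the data.
    ax.yaxis.set_major_formatter(
        FuncFormatter(lambda y, pos: '%.3f' % (y * scale)))
    ax.set_ylabel('signal (V)')

    plt.show()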

If the possibility doesn't exist, I wonder whether it might be feasible, and
not too difficult, to add it to the axis methods. One could add a scale
parameter with a default value of 1, so that existing code would be
unaffected.
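
Purely as a strawman, the behaviour I'm proposing could be mocked up in user
code like this (plot_scaled and its yscale keyword are hypothetical, nothing
that exists in matplotlib today):

    from matplotlib.ticker import FuncFormatter

    def plot_scaled(ax, data, yscale=1.0, **kwargs):
        # Plot integer data untouched, scaling only the y tick labels.
        # With the default yscale=1.0 the result is identical to a plain
        # ax.plot(data), so existing code would be unaffected.
        lines = ax.plot(data, **kwargs)
        if yscale != 1.0:
            ax.yaxis.set_major_formatter(
                FuncFormatter(lambda y, pos: '%g' % (y * yscale)))
        return lines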

Boris
