Re: [Numpy-discussion] Counting the Colors of RGB-Image
unique has an option to return indices, which you can use in combination with sort to get the actual counts out:

    tab0 = zeros(256*256*256, dtype=int)
    col = ravel(((im0[..., 0].astype('u4')*256 + im0[..., 1])*256) + im0[..., 2])
    col, idx = unique(sort(col), True)
    idx = hstack([idx, [2500*2500]])
    tab0[col] = idx[1:] - idx[:-1]
    tab0.shape = (256, 256, 256)

As Chris pointed out, if each pixel were 4 bytes you could probably just use im0.view('u4') for the histogram values.

//Torgil

On Wed, Jan 18, 2012 at 10:26 AM, a...@pdauf.de wrote:

> Sorry that I use this way to send an answer to Tony Yu, Nadav Horesh, and Chris Barker. When I answer directly to your e-mail I get an error 5. I think I made a mistake. Your ideas are very helpful and the code is very fast. Thank you. elodw

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
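[Editor's note: a minimal sketch of the counting idea above, using the return_counts option of np.unique (added in NumPy 1.9, after this thread), which replaces the manual sort/index-diff trick; the tiny random "image" is made up for illustration.]

```python
import numpy as np

# Hypothetical small "image": 4x4 pixels, 3 channels, dtype uint8.
rng = np.random.default_rng(0)
im0 = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Pack each RGB triple into a single 24-bit integer, as in Torgil's code.
col = (im0[..., 0].astype('u4') * 256 + im0[..., 1]) * 256 + im0[..., 2]

# np.unique can return the counts directly, so no hstack/diff is needed.
colors, counts = np.unique(col.ravel(), return_counts=True)

# counts[i] is the number of pixels whose packed value is colors[i];
# the counts sum to the total number of pixels.
assert counts.sum() == im0.shape[0] * im0.shape[1]
```

The full 256**3 table from the original code can still be built afterwards with `tab0[colors] = counts` if a dense histogram is needed.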
[Numpy-discussion] views and mask NA
Hi All,

I'd like some feedback on how mask NA should interact with views. The immediate problem is how to deal with the real and imaginary parts of complex numbers. If the original has a masked value, it should show up as masked in the real and imaginary parts. But what should happen on assignment to one of the masked views? This should probably clear the NA in the real/imag part, but not in the complex original. However, that does allow touching things under the mask, so to speak.

Things get more complicated if the complex original is viewed as reals. In this case the mask needs to be doubled up, and there is again the possibility of touching things beneath the mask in the original. Viewing the original as bytes leads to even greater duplication.

My thought is that touching the underlying data needs to be allowed in these cases, but the original mask can only be cleared by assignment to the original.

Thoughts?

Chuck
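[Editor's note: the mask-NA feature under discussion was experimental, but the view-doubling problem Chuck describes can be illustrated with plain dtype views; a minimal sketch with made-up values.]

```python
import numpy as np

# A complex array viewed as float64 doubles the last axis: each complex
# element becomes a (real, imag) pair, so a mask on the original would
# also have to be duplicated to cover both halves.
z = np.array([1 + 2j, 3 + 4j], dtype=np.complex128)
r = z.view(np.float64)
print(r.shape)    # (4,) -- twice the original length

# Viewing as bytes duplicates further: 16 bytes per complex128 element.
b = z.view(np.uint8)
print(b.shape)    # (32,)

# Writing through a view touches the underlying data of the original,
# which is exactly the "touching things beneath the mask" concern:
r[0] = 99.0
print(z[0])       # (99+2j)
```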
Re: [Numpy-discussion] Cross-covariance function
I ran into this a while ago and was confused why cov did not behave the way Pierre suggested.

On Jan 21, 2012 12:48 PM, Elliot Saba staticfl...@gmail.com wrote:

> Thank you, Sturla, that's exactly what I want. I'm sorry that I was not able to reply for so long, but Pierre's code is similar to what I have already implemented, and I am in support of changing the functionality of cov().
>
> I am unaware of any arguments for a covariance function that works in this way, except for the fact that the MATLAB cov() function behaves in the same way. [1] MATLAB, however, has an xcov() function, which is similar to what we have been discussing. [2]
>
> Unless you all wish to retain compatibility with MATLAB, I feel that the behaviour of cov() suggested by Pierre is the most straightforward method, and that if users wish to calculate the covariance of X concatenated with Y, they may simply concatenate the matrices explicitly before passing them into cov(); this way the default method does not use 75% more CPU time.
>
> Again, if there is an argument for this functionality, I would love to learn of it!
>
> -E
>
> [1] http://www.mathworks.com/help//techdoc/ref/cov.html
> [2] http://www.mathworks.com/help/toolbox/signal/ref/xcov.html
Re: [Numpy-discussion] Cross-covariance function
On Sat, Jan 21, 2012 at 6:26 PM, John Salvatier jsalv...@u.washington.edu wrote:

> I ran into this a while ago and was confused why cov did not behave the way Pierre suggested.

Same here. When I rewrote scipy.stats.spearmanr, I matched the numpy behavior for two arrays, while R only returns the cross-correlation part.

Josef
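[Editor's note: the behaviour discussed above, returning only the cross block instead of the full joint covariance matrix, can be sketched as a small helper; xcov here is a hypothetical name, not a NumPy function, and it follows np.cov's rows-are-variables convention.]

```python
import numpy as np

def xcov(x, y):
    """Cross-covariance between the rows of x and the rows of y.

    Equivalent to the upper-right block of np.cov(x, y), but without
    computing the full (m+n) x (m+n) joint covariance matrix.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm = x - x.mean(axis=1, keepdims=True)   # center each variable
    ym = y - y.mean(axis=1, keepdims=True)
    n = x.shape[1]                           # number of observations
    return xm @ ym.T / (n - 1)               # same ddof=1 as np.cov

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 100))
y = rng.standard_normal((3, 100))

# Agrees with the corresponding block of the full joint matrix.
full = np.cov(x, y)                          # shape (5, 5)
assert np.allclose(xcov(x, y), full[:2, 2:])
```

Computing only the cross block is where the CPU-time saving mentioned above comes from: the two diagonal blocks of the joint matrix are never formed.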
[Numpy-discussion] The NumPy Mandelbrot code 16x slower than Fortran
Hi,

I read the Mandelbrot code using NumPy at this page:

http://mentat.za.net/numpy/intro/intro.html

but when I run it, it gives me integer overflows. As such, I have fixed the code so that it doesn't overflow here:

https://gist.github.com/1655320

and I have also written an equivalent Fortran program. You can compare both source codes to see that it is pretty much a one-to-one translation. The main idea in the above gist is to take an algorithm written in NumPy and translate it directly to Fortran, without any special optimizations. So the above is my first try in Fortran. You can plot the result using this simple script (you can also just click on the gist to see the image there):

https://gist.github.com/1655377

Here are my timings:

                  Python    Fortran   Speedup
    Calculation   12.749    00.784    16.3x
    Saving        01.904    01.456     1.3x
    Total         14.653    02.240     6.5x

I save the matrices to disk in an ASCII format, so saving is quite slow in both cases. The pure computation is, however, 16x faster in Fortran (with gfortran; I didn't even try Intel Fortran, which will probably be even faster). As such, I wonder how the NumPy version could be sped up. I have compiled NumPy with Lapack+Blas from source.

Would anyone be willing to run the NumPy version? Just copy+paste should do it. If you want to run the Fortran version: the above gist uses some of my other modules that I use in my other programs (my goal was to see how much more complicated the Fortran code gets compared to NumPy), so I put a single file with all the dependencies here:

https://gist.github.com/1655350

Just compile it like this:

    gfortran -fPIC -O3 -march=native -ffast-math -funroll-loops mandelbrot.f90

and run:

    $ ./a.out
     Iteration 1
     Iteration 2
     ...
     Iteration 100
     Saving...
    Times:
    Calculation:   0.748045999
    Saving:   1.36408502
    Total:   2.11213102

Let me know if you figure out something.
I think the mask thing is quite slow, but the problem is that it needs to be there to catch overflows (it is there in the Fortran version as well; see the where statement, which does the same thing). Maybe there is some other way to write the same thing in NumPy?

Ondrej
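[Editor's note: a sketch of the masked escape-time iteration Ondrej describes, not the code from the gist; the viewing window, image size, and iteration count are made-up parameters. The boolean mask restricts updates to points that have not yet escaped, which both avoids overflow and shrinks the work per iteration.]

```python
import numpy as np

def mandelbrot(h, w, maxit=30):
    # Grid of starting points c over a hypothetical viewing window.
    y, x = np.ogrid[-1.4:1.4:h * 1j, -2.0:0.8:w * 1j]
    c = x + y * 1j
    z = np.zeros_like(c)
    escape = np.full(c.shape, maxit, dtype=int)  # maxit => never escaped
    mask = np.ones(c.shape, dtype=bool)          # points still iterating

    for it in range(maxit):
        # Update only the live points; escaped ones are left untouched,
        # so |z| stays bounded and nothing overflows.
        z[mask] = z[mask] ** 2 + c[mask]
        escaped = mask & (np.abs(z) > 2.0)       # |z| > 2 cannot return
        escape[escaped] = it
        mask &= ~escaped
    return escape

img = mandelbrot(200, 300)
```

A common further speedup is to compact the arrays (keep only the live points and their coordinates) once the mask becomes sparse, so later iterations touch less memory.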
[Numpy-discussion] Easy module installation with less human intervention.
Hi,

Perl has something like ppm, so that I can use one command to download and install Perl modules, but I don't find such a thing in Python. As shown on http://docs.python.org/install/index.html, it seems that I have to download a package first, unzip it, and then install it. I'm wondering if there is a better way to install Python packages that requires less human intervention. Thanks!

For reference, here is what ppm offers:

    NAME
        ppm - Perl Package Manager, version 4.14
    SYNOPSIS
        Invoke the graphical user interface:
            ppm
            ppm gui
        Install, upgrade and remove packages:
            ppm install [--area area] [--force] pkg
            ppm install [--area area] [--force] module
            ppm install [--area area] url
            ppm install [--area area] file.ppmx
            ppm install [--area area] file.ppd
            ppm install [--area area] num
            ppm upgrade [--install]
            ppm upgrade pkg
            ppm upgrade module
            ppm remove [--area area] [--force] pkg

--
Regards,
Peng
Re: [Numpy-discussion] Easy module installation with less human intervention.
You can try easy_install or pip.

-=- Olivier

2012/1/21 Peng Yu pengyu...@gmail.com

> Perl has something like ppm so that I can just use one command to download and install perl modules. But I don't find such thing in python. [...] I'm wondering if there is a better way to install python packages that require less human intervention.
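[Editor's note: a sketch of the pip commands that parallel the ppm ones quoted above; numpy stands in as an example package name.]

```shell
# Install a package (pip downloads it from PyPI and resolves dependencies):
pip install numpy

# Upgrade an already-installed package:
pip install --upgrade numpy

# Remove a package:
pip uninstall numpy

# easy_install (from setuptools, older than pip) is similar for installs,
# but it has no uninstall command:
easy_install numpy
```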