Re: [Numpy-discussion] Building on WinXP 64-bit, Intel Compilers
Michael Colonno wrote: Thanks for your response. I manually edited one of the python files (ccompiler.py, I think) to change icc.exe to icl.exe. (This is a trick I used to use to get F2PY to compile on Windows platforms.) Since icl is a drop-in replacement for the Visual Studio compiler / linker, I'd like to edit the python files configuring this (msvc), but I could not find anything(?) If you could point me towards the config file(s) for the Visual Studio compiler (which I assume are configured for the Windows file extensions already) I could likely make some headway. Unfortunately, the code for our build process is difficult to grasp - there is a lot of magic. Everything is in numpy/distutils. Basically, you need to create a new compiler class, a bit like intelccompiler.py, but for Windows. I unfortunately can't help you more ATM, since I don't know the Intel compiler on Windows. cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Building on WinXP 64-bit, Intel Compilers
I think this is doable; thankfully the Intel compilers on Windows and Linux are very similar in behavior. The exact same build scripts *should* work fine provided the file extensions (.o -> .obj) and flags (-L, etc.) are modified. In terms of syntax this should be an easy thing to do (it was with the free-standing F2PY), but I will need some help navigating through the magic you refer to. I will put some effort into this and write back if I hit a roadblock. As an aside: how were the Windows 32-bit installers created? Is it possible to recreate this process changing the target arch to x64? Thanks again, ~Mike C. On Wed, Jan 28, 2009 at 1:06 AM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: Basically, you need to create a new compiler class, a bit like intelccompiler.py, but for Windows.
[Numpy-discussion] optimise operation in array with datetime objects
Hello, I have an array of datetime objects. What is the most efficient way of creating a new array with only the hours or minutes out of it? Here is an example:

### imports
import numpy as np
import datetime as dt

### create some data
d = dt.datetime.now()
dates_li = []
count = 0
for i in range(0, 24):
    count = count + 1
    fact = count * 3600
    date_new = d + dt.timedelta(0, fact)
    print date_new
    dates_li.append(date_new)

### the array with datetime objects
dates_array = np.array(dates_li)

### this is the loop I would like to optimize:
### looping over arrays is considered inefficient.
### what could be a better way?
hours_array = dates_array.copy()
for i in range(0, dates_array.size):
    hours_array[i] = dates_array[i].hour
hours_array = hours_array.astype('int')

Thanks in advance for any hints! Kind regards, Timmie
Re: [Numpy-discussion] optimise operation in array with datetime objects
On Jan 28, 2009, at 3:56 PM, Timmie wrote: ### this is the loop I would like to optimize: ### looping over arrays is considered inefficient. ### what could be a better way? hours_array = dates_array.copy() for i in range(0, dates_array.size): hours_array[i] = dates_array[i].hour You could try: np.fromiter((_.hour for _ in dates_li), dtype=np.int) or np.array([_.hour for _ in dates_li], dtype=np.int)
Re: [Numpy-discussion] optimise operation in array with datetime objects
On Jan 28, 2009, at 5:43 PM, Timmie wrote: You could try: np.fromiter((_.hour for _ in dates_li), dtype=np.int) or np.array([_.hour for _ in dates_li], dtype=np.int) I used dates_li only for the preparation of example data. So let's suppose I have the array dates_array returned from a function. Just use dates_array instead of dates_li, then. hours_array = dates_array.copy() for i in range(0, dates_array.size): hours_array[i] = dates_array[i].hour * What's the point of making a copy of dates_array ? dates_array is an ndarray of objects, right ? And you want to take the hours, so you should have an ndarray of integers for hours_array. * The issue I have with this part is that you have several calls to __getitem__ at each iteration. It might be faster to create hours_array in one block: hours_array = np.array([_.hour for _ in dates_array], dtype=np.int)
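A minimal, self-contained version of the one-pass approach suggested above (the 24-hour example data mirrors Timmie's original loop; `np.fromiter` avoids the element-by-element assignment):

```python
import datetime as dt
import numpy as np

# Build example data: 24 hourly datetimes, as in the original post.
base = dt.datetime(2009, 1, 28, 12, 0)
dates_array = np.array([base + dt.timedelta(hours=i) for i in range(24)])

# Extract the hours in one pass instead of copying the object array
# and assigning element by element.
hours_array = np.fromiter((d.hour for d in dates_array), dtype=int)
```

`np.fromiter` consumes the generator directly into an integer array, so no intermediate object array or `.astype('int')` conversion is needed.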
Re: [Numpy-discussion] Building on WinXP 64-bit, Intel Compilers
On Thu, Jan 29, 2009 at 1:18 AM, Michael Colonno mcolo...@gmail.com wrote: I think this is doable; thankfully the Intel compilers on Windows and Linux are very similar in behavior. The problem is that distutils does not abstract this kind of thing: you have a CCompiler class, a UnixCCompiler subclass, and then the Intel compiler under that. OTOH, the MS compiler is its own class which inherits from CCompiler - all the Windows specifics are encoded in that class. So I am afraid you will need to recreate all of this class's implementation for the Intel C compiler, because contrary to the Linux case, nothing is abstracted for Windows. As an aside: how were the Windows 32-bit installers created? With the mingw compiler. Is it possible to recreate this process changing the target arch to x64? If you can build numpy with the Intel compiler, building the installer should be a no-brainer. cheers, David
[Numpy-discussion] A buildbot farm with shell access - for free ?
Hi, Just saw that on one ML: http://www.snakebite.org/ http://mail.python.org/pipermail/python-committers/2009-January/000331.html Bottom line: it looks like there is a set of machines which were donated to the PSF for buildbot *with shell access*, so that people can fix problems appearing on some platforms. If you look at the email, there are some 'exotic' machines that mere mortals cannot have access to (like Tru64 on Itanium; to quote the email: massive quad Itanium 2 RX-5670s, chock full of 73GB 15k disks and no less than 78GB of RAM between the two servers; 32GB in one and 46GB in the other). There are also Windows machines available. It is said in the email that this is reserved to the python project, and prominent python projects like Twisted and Django. Would it be ok to try to qualify as a prominent python project as well? cheers, David
Re: [Numpy-discussion] A buildbot farm with shell access - for free ?
On Wed, Jan 28, 2009 at 7:11 PM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: Would it be ok to try to be qualified as a prominent python project as well ? Ohhh... I love buildbots. Go for it. Chuck
Re: [Numpy-discussion] A buildbot farm with shell access - for free ?
Sounds like a great idea! On 1/28/09, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: Would it be ok to try to be qualified as a prominent python project as well ? -- Sent from my mobile device - Damian Eads, Ph.D. Student, Jack Baskin School of Engineering, Machine Learning Lab, UCSC, Santa Cruz, CA 95064, http://www.soe.ucsc.edu/~eads
Re: [Numpy-discussion] A buildbot farm with shell access - for free ?
On Wed, Jan 28, 2009 at 6:11 PM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: It is said in the email that this is reserved to the python project, and prominent python projects like Twisted and Django. Would it be ok to try to be qualified as a prominent python project as well ? That would be great.
[Numpy-discussion] convolution axis
Is there an easy way to perform convolutions along a particular axis of an array? -gideon
[Numpy-discussion] help on fast slicing on a grid
Hi, I have to build a grid with 256 points via the commands: a = arange(-15,16,2) L = len(a) cnstl = a.reshape(L,1)+1j*a My problem is that I have a big data array that contains data scattered around the points in cnstl. I want to slice each data point to the closest cnstl point and also compute the error. The decision boundary is the midpoint between two points along the x and y axes. I can do it in a for loop, but since Python and numpy have a lot of magic, I want to find an efficient way to do it. This problem arises from QAM 256 modulation. Thanks Frank
Re: [Numpy-discussion] help on fast slicing on a grid
On Wed, Jan 28, 2009 at 23:52, frank wang f...@hotmail.com wrote: I want to slice each data point to the closest cnstl point and also compute the error. Can you show us the for loop? I'm not really sure what you want to compute. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] help on fast slicing on a grid
Here is the for loop that I am thinking about. Also, I do not know whether the where command can handle the complicated logic. The where command basically finds the data in the square around the point cnstl[j]. Let the data array be qam with size N:

Out = X
error = X
for i in arange(N):
    for j in arange(L):
        aa = np.where((real(X) < real(cnstl[j])+1) & (real(X) > real(cnstl[j])-1) & (imag(X) < imag(cnstl[j])+1) & (imag(X) > imag(cnstl[j])-1))
        Out[aa] = cnstl[j]
        error[aa] = abs(X)**2 - abs(cnstl[j])**2

Thanks Frank
Re: [Numpy-discussion] help on fast slicing on a grid
On Thu, Jan 29, 2009 at 00:09, frank wang f...@hotmail.com wrote: Here is the for loop that I am thinking about. Also, I do not know whether the where command can handle the complicated logic. The where command basically finds the data in the square around the point cnstl[j]. cnstl is a 2D array from your previous description. Let the data array be qam with size N I don't see qam anywhere. Did you mean X? Out = X error = X Don't you want something like zeros_like(X) for these? for i in arange(N): for j in arange(L): aa = np.where((real(X) < real(cnstl[j])+1) & (real(X) > real(cnstl[j])-1) & (imag(X) < imag(cnstl[j])+1) & (imag(X) > imag(cnstl[j])-1)) Out[aa] = cnstl[j] error[aa] = abs(X)**2 - abs(cnstl[j])**2 I'm still confused. Can you show me a complete, working script with possibly fake data? -- Robert Kern
Re: [Numpy-discussion] convolution axis
There are at least two options: 1. use convolve1d from numpy.numarray.nd_image (or scipy.ndimage) 2. use scipy.signal.convolve and adjust the dimensions of the convolution kernel to align it along the desired axis. Nadav -----Original Message----- From: numpy-discussion-boun...@scipy.org on behalf of Gideon Simpson Sent: Thu 29-Jan-2009 06:59 To: Discussion of Numerical Python Subject: [Numpy-discussion] convolution axis Is there an easy way to perform convolutions along a particular axis of an array? -gideon
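A third, numpy-only option is `np.apply_along_axis`, which applies `np.convolve` to every 1-D slice along the chosen axis. A minimal sketch (convenient rather than fast, since it loops over slices in Python; the example data is mine):

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)
kernel = np.array([0.25, 0.5, 0.25])

# Convolve every row (axis=1) with the kernel; mode='same' keeps each
# row at its original length. Extra args are forwarded to np.convolve.
smoothed = np.apply_along_axis(np.convolve, 1, a, kernel, mode='same')
```

Passing axis=0 instead would convolve down the columns; scipy.ndimage.convolve1d does the same job in compiled code via its axis argument.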
Re: [Numpy-discussion] help on fast slicing on a grid
Hi, Bob, Thanks for your help. I am sorry for my typo. The qam array is the X array in my example. cnstl is a complex array containing the (x,y) grid points. I will try to make a workable example. Also I will look up the zeros_like function. I guess that zeros_like(X) will create an array the same size as X; if it does, then the two lines Out = X and error = X should be Out = zeros_like(X) and error = zeros_like(X). Also, can the where command handle the combined logic? aa = np.where((real(X) < real(cnstl[j])+1) & (real(X) > real(cnstl[j])-1) & (imag(X) < imag(cnstl[j])+1) & (imag(X) > imag(cnstl[j])-1)) For example, if cnstl[j] = 3+1j*5, then the where command is the same as: aa = np.where((real(X) < 4) & (real(X) > 2) & (imag(X) < 6) & (imag(X) > 4)) Thanks Frank
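For a square QAM-256 grid with points at the odd integers -15..15, the double loop over constellation points can be avoided entirely: each axis can be sliced independently by rounding to the nearest odd integer and clipping to the grid edges. A hedged sketch (the function name is mine, and the error here is the usual complex error vector X - Out rather than the power difference used in the thread):

```python
import numpy as np

def qam256_slice(X):
    """Map each complex sample to the nearest QAM-256 constellation
    point (odd integers -15..15 on each axis) and return the error."""
    def nearest_odd(v):
        # Round to the nearest odd integer, then clip to the grid edges.
        return np.clip(2 * np.round((v - 1) / 2) + 1, -15, 15)
    out = nearest_odd(X.real) + 1j * nearest_odd(X.imag)
    return out, X - out

# Sample points: one inside Frank's example square (2<re<4, 4<im<6),
# one near an edge, one outside the grid entirely.
X = np.array([2.3 + 4.9j, -14.7 - 3.2j, 20.0 + 0.1j])
out, err = qam256_slice(X)
```

This exploits the fact that for a square constellation the nearest-point decision separates into two independent 1-D decisions, so no per-point `np.where` masks are needed.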
Re: [Numpy-discussion] make latex in numpy/doc failed
2009/1/27 Nils Wagner nwag...@iam.uni-stuttgart.de: a make latex in numpy/doc failed with ... Intersphinx hit: PyObject http://docs.python.org/dev/c-api/structures.html writing... Sphinx error: too many nesting section levels for LaTeX, at heading: numpy.ma.MaskedArray.__lt__ make: *** [latex] Fehler 1 I am using sphinx v0.5.1. BTW, make html works fine here. I see this problem too. It used to work, and I don't think I've changed anything on my system. Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49) [GCC 4.3.2] on linux2 >>> import numpy >>> numpy.__version__ '1.3.0.dev6335' >>> import sphinx >>> sphinx.__version__ '0.5.1' Should I file a ticket, or just let whoever has to build the docs for the next release sort it out when the time comes? Cheers, Scott