Re: [Numpy-discussion] Quick Question about Optimization

2008-05-21 Thread Christopher Barker
James Snyder wrote:

    b = np.zeros((1,30)) # allocates new memory and disconnects the view

This is really about how Python works, not how numpy works: np.zeros() creates a new array with all zeros in it -- that's the whole point. b = Something binds the name b to the Something object.
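
A minimal sketch of the view-versus-rebinding distinction (array shapes here are made up for illustration):

    import numpy as np

    a = np.zeros((3, 30))
    b = a[0, :]            # b is a view: it shares memory with a
    b[:] = 1.0             # writes through the view; a[0] is now all ones

    b = np.zeros((1, 30))  # rebinds the name b to a brand-new array;
                           # a is untouched and the old view is dropped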

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-20 Thread Hoyt Koepke
The time has now been shaved down to ~9 seconds from the original 13-14 s with this suggestion, plus the inclusion of Eric Firing's suggestions. This is without scipy.weave, which at the moment I can't get to work for all lines, and when I just replace one of them successfully it seems to run

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-20 Thread James Snyder
Well, if you do f = a[n, :], you would get a view, another object that shares the data in memory with a but is a separate object.

OK, so it is a new object, with the properties of the slice it references, but if I write anything to it, it will consistently go back to the same spot in the
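
A small illustration of the behavior being asked about (the array contents are invented): writing through the view lands in the parent array every time.

    import numpy as np

    a = np.arange(12).reshape(3, 4)
    f = a[1, :]      # a view onto row 1 of a
    f[:] = -1        # an in-place write goes back to the same spot in a
    print(a[1])      # [-1 -1 -1 -1]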

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-20 Thread Eric Firing
James Snyder wrote: Well, if you do f = a[n, :], you would get a view, another object that shares the data in memory with a but is a separate object. OK, so it is a new object, with the properties of the slice it references, but if I write anything to it, it will consistently go back to the

[Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread James Snyder
Hi - First off, I know that optimization is evil, and I should make sure that everything works as expected prior to bothering with squeezing out extra performance, but the situation is that this particular block of code works, but it is about half as fast with numpy as in matlab, and I'm

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robin
On Mon, May 19, 2008 at 7:08 PM, James Snyder [EMAIL PROTECTED] wrote:

    for n in range(0,time_milliseconds):
        self.u = self.expfac_m * self.prev_u + (1-self.expfac_m) * self.aff_input[n,:]
        self.v = self.u + self.sigma *

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robin
Also you could use xrange instead of range... Again, not sure of the size of the effect but it seems to be recommended by the docstring. Robin

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Hoyt Koepke
    for n in range(0,time_milliseconds):
        self.u = self.expfac_m * self.prev_u + (1-self.expfac_m) * self.aff_input[n,:]
        self.v = self.u + self.sigma * np.random.standard_normal(size=(1,self.naff))
        self.theta = self.expfac_theta * self.prev_theta -
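
For readers following along, here is one way a loop like this can reduce temporary allocations per iteration, using preallocated buffers and in-place ufunc calls. The attribute names come from the quoted snippet; the sizes, constants, and surrounding scaffolding are invented for illustration.

    import numpy as np

    naff = 13857              # stand-in sizes and constants; the
    nsteps = 1000             # originals are not shown in the post
    expfac_m = 0.9
    sigma = 0.1
    aff_input = np.random.rand(nsteps, naff)

    u = np.zeros(naff)
    prev_u = np.zeros(naff)
    tmp = np.empty(naff)

    for n in range(nsteps):
        # u = expfac_m * prev_u + (1 - expfac_m) * aff_input[n, :]
        np.multiply(prev_u, expfac_m, out=u)
        np.multiply(aff_input[n, :], 1.0 - expfac_m, out=tmp)
        u += tmp
        # v = u + sigma * noise (this line still allocates v)
        v = u + sigma * np.random.standard_normal(naff)
        prev_u, u = u, prev_u  # swap references instead of copying data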

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robin
Hi, I think my understanding is somehow incomplete... It's not clear to me why (simplified case) a[curidx,:] = scalar * a[2-curidx,:] should be faster than a = scalar * b. In both cases I thought the scalar multiplication results in a new array (new memory allocated) and then the difference
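
One way to see the distinction Robin is asking about (a toy sketch, not the original code): the right-hand side allocates a temporary either way, but slice assignment copies the result into memory that already exists, while plain assignment merely rebinds a name to the temporary.

    import numpy as np

    a = np.zeros((2, 5))
    scalar = 3.0

    a[0, :] = scalar * a[1, :]   # a temporary is made, then its contents
                                 # are copied into a's existing row 0

    b = scalar * a[1, :]         # a temporary is made and b is simply
                                 # rebound to it; no copy into old storage

    # The temporary can be avoided entirely with an explicit out= argument:
    np.multiply(a[1, :], scalar, out=a[0, :])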

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Anne Archibald
2008/5/19 James Snyder [EMAIL PROTECTED]: First off, I know that optimization is evil, and I should make sure that everything works as expected prior to bothering with squeezing out extra performance, but the situation is that this particular block of code works, but it is about half as fast

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Eric Firing
Robin wrote: Also you could use xrange instead of range... Again, not sure of the size of the effect but it seems to be recommended by the docstring.

No, it is going away in Python 3.0, and its only real benefit is a memory saving in extreme cases. From the Python library docs: The

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Hoyt Koepke
On Mon, May 19, 2008 at 12:53 PM, Robin [EMAIL PROTECTED] wrote: Hi, I think my understanding is somehow incomplete... It's not clear to me why (simplified case) a[curidx,:] = scalar * a[2-curidx,:] should be faster than a = scalar * b. In both cases I thought the scalar multiplication

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Christopher Barker
Anne Archibald wrote: 2008/5/19 James Snyder [EMAIL PROTECTED]: I can provide the rest of the code if needed, but it's basically just filling some vectors with random and empty data and initializing a few things.

It would kind of help, since it would make it clearer what's a scalar and

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Charles R Harris
On Mon, May 19, 2008 at 2:53 PM, Christopher Barker [EMAIL PROTECTED] wrote: Anne Archibald wrote: 2008/5/19 James Snyder [EMAIL PROTECTED]: I can provide the rest of the code if needed, but it's basically just filling some vectors with random and empty data and initializing a few

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 5:27 PM, Charles R Harris [EMAIL PROTECTED] wrote: The latest versions of Matlab use the ziggurat method to generate random normals and it is faster than the method used in numpy. I have ziggurat code at hand, but IIRC, Robert doesn't trust the method ;)

Well, I

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread James Snyder
Separating the response into 2 emails, here's the aspect that comes from implementations of random: In short, that's part of the difference. I ran these a few times to check for consistency.

MATLAB (R2008a):

    tic
    for i = 1:2000
        a = randn(1,13857);
    end
    toc

Runtime: ~0.733489 s

NumPy
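
The NumPy half of the comparison is cut off above; an equivalent timing loop would look something like this (the timer choice is illustrative, not necessarily what James used):

    import time
    import numpy as np

    t0 = time.time()
    for i in range(2000):
        a = np.random.standard_normal(size=(1, 13857))
    print("Runtime: ~%.6f s" % (time.time() - t0))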

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Charles R Harris
On Mon, May 19, 2008 at 4:36 PM, Robert Kern [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 5:27 PM, Charles R Harris [EMAIL PROTECTED] wrote: The latest versions of Matlab use the ziggurat method to generate random normals and it is faster than the method used in numpy. I have ziggurat

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 6:39 PM, Charles R Harris [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 4:36 PM, Robert Kern [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 5:27 PM, Charles R Harris [EMAIL PROTECTED] wrote: The latest versions of Matlab use the ziggurat method to generate

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread James Snyder
On to the code, here's a current implementation, attached. I make no claims about it being great code; I've modified it so that there is a weave version and a sans-weave version. Many of the suggestions make things a bit faster. The weave version bombs out with a rather long log, which can be
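
For context, the general shape of a scipy.weave.inline call of the sort being attempted looks roughly like this (illustrative only; James's actual weave code is in his attachment, which this digest does not include, and weave requires Python 2):

    import numpy as np
    from scipy import weave

    u = np.zeros(5)
    code = """
    for (int i = 0; i < Nu[0]; ++i) {
        u[i] = 2.0 * u[i];   // double each element in place
    }
    """
    weave.inline(code, ['u'])  # Nu holds u's shape under the default
                               # type converters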

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Charles R Harris
On Mon, May 19, 2008 at 5:52 PM, Robert Kern [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 6:39 PM, Charles R Harris [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 4:36 PM, Robert Kern [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 5:27 PM, Charles R Harris [EMAIL PROTECTED]

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 6:55 PM, James Snyder [EMAIL PROTECTED] wrote: Also note, I'm not asking to match MATLAB performance. It'd be nice, but again I'm just trying to put together decent, fairly efficient numpy code.

I can cut the time by about a quarter by just using the boolean mask
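
A sketch of the boolean-mask idea being referenced (the variable names echo the loop quoted earlier in the thread, but the code here is invented for illustration):

    import numpy as np

    v = np.random.standard_normal(10000)
    theta = 0.5

    fired = v >= theta   # boolean mask of elements above threshold
    v[fired] = 0.0       # reset only the masked elements, in place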

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 7:30 PM, Charles R Harris [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 5:52 PM, Robert Kern [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 6:39 PM, Charles R Harris [EMAIL PROTECTED] wrote: On Mon, May 19, 2008 at 4:36 PM, Robert Kern [EMAIL PROTECTED]

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Eric Firing
Robert Kern wrote: On Mon, May 19, 2008 at 6:55 PM, James Snyder [EMAIL PROTECTED] wrote: Also note, I'm not asking to match MATLAB performance. It'd be nice, but again I'm just trying to put together decent, fairly efficient numpy code. I can cut the time by about a quarter by just using