Re: [Numpy-discussion] Where is Jaime?

2015-12-08 Thread Nathaniel Smith
On Mon, Dec 7, 2015 at 12:42 PM, Peter Creasey wrote: >>> > >>> > Is the interp fix in the google pipeline or do we need a workaround? >>> > >>> >>> Oooh, if someone is looking at changing interp, is there any chance >>> that fp could be extended to take complex128
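Until np.interp itself grows complex support, one common workaround (a sketch only, not the fix under discussion in the thread) is to interpolate the real and imaginary parts of fp separately:

    import numpy as np

    def interp_complex(x, xp, fp):
        # Hypothetical helper: np.interp only accepts real-valued fp,
        # so interpolate the real and imaginary parts independently.
        fp = np.asarray(fp)
        return np.interp(x, xp, fp.real) + 1j * np.interp(x, xp, fp.imag)

    xp = np.linspace(0.0, 1.0, 5)
    fp = np.exp(2j * np.pi * xp)     # complex sample values
    print(interp_complex(0.3, xp, fp))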

Re: [Numpy-discussion] array of random numbers fails to construct

2015-12-08 Thread Sebastian
On 12/08/2015 02:17 AM, Warren Weckesser wrote: > On Sun, Dec 6, 2015 at 6:55 PM, Allan Haldane > wrote: > > It has also crossed my mind that np.random.randint and > np.random.rand could use an extra 'dtype' keyword. > > +1. Not a
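For readers following the thread: the 'dtype' keyword being proposed would let callers request narrow integer types directly instead of drawing default 64-bit integers and casting down. An illustrative usage of the proposal (not in released NumPy at the time of this thread), alongside the cast-based workaround:

    import numpy as np

    # Proposed usage under discussion (dtype keyword on randint):
    a = np.random.randint(0, 256, size=10**6, dtype=np.uint8)

    # Workaround without the keyword: draw the default int64 values and
    # cast down, paying for the temporary wide array.
    b = np.random.randint(0, 256, size=10**6).astype(np.uint8)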

[Numpy-discussion] Q: Use of scipy.signal.bilinear

2015-12-08 Thread R Schumacher
We have a function which describes a frequency response correction to piezo devices we use. To flatten the FFT, it is similar to:
    Cdis_t = .5
    N = 8192
    for n in range(8192):
        B3 = n * 2560 / N
        Fc(n) = 1 / ((B3/((1/(Cdis_t*2*pi))**2+B3**2)**0.5)*(-0.01*log(B3) + 1.04145))
In practice it
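The loop above is pseudocode rather than runnable Python: Fc(n) = ... assigns to a call expression, and at n = 0 the log term is undefined. A minimal vectorized NumPy sketch of the same correction, assuming the constants from the post (2560 Hz span, Cdis_t = 0.5) and a natural logarithm, might be:

    import numpy as np

    Cdis_t = 0.5
    N = 8192
    n = np.arange(1, N)              # skip n = 0, where log(B3) is undefined
    B3 = n * 2560.0 / N              # assumed bin frequencies over a 2560 Hz span
    Fc = 1.0 / ((B3 / ((1.0 / (Cdis_t * 2 * np.pi))**2 + B3**2)**0.5)
                * (-0.01 * np.log(B3) + 1.04145))

Whether the original log is natural or base 10 is an assumption here; swap in np.log10 if the calibration was fit that way.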

Re: [Numpy-discussion] Q: Use of scipy.signal.bilinear

2015-12-08 Thread Charles R Harris
On Tue, Dec 8, 2015 at 9:30 AM, R Schumacher wrote: > We have a function which describes a frequency response correction to > piezo devices we use. To flatten the FFT, it is similar to: > Cdis_t = .5 > N = 8192 > for n in range(8192): > B3 = n * 2560 / N > Fc(n) = 1 /

Re: [Numpy-discussion] Q: Use of scipy.signal.bilinear

2015-12-08 Thread R Schumacher
Sorry - I'll join there. - Ray At 10:00 AM 12/8/2015, you wrote: On Tue, Dec 8, 2015 at 9:30 AM, R Schumacher <r...@blue-cove.com> wrote: We have a function which describes a frequency response correction to piezo devices we use. To flatten the FFT, it is similar

Re: [Numpy-discussion] array of random numbers fails to construct

2015-12-08 Thread Stephan Hoyer
On Sun, Dec 6, 2015 at 3:55 PM, Allan Haldane wrote: > > I've also often wanted to generate large datasets of random uint8 and > uint16. As a workaround, this is something I have used: > > np.ndarray(100, 'u1', np.random.bytes(100)) > > It has also crossed my mind that
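The quoted workaround reinterprets the raw byte stream from np.random.bytes as unsigned integers. A slightly more explicit spelling of the same idea (a sketch, covering uint16 as well) is:

    import numpy as np

    # Interpret random bytes as unsigned integers; frombuffer returns a
    # read-only view of the bytes object, so copy() if writability is needed.
    u8 = np.frombuffer(np.random.bytes(100), dtype=np.uint8)           # 100 uint8 values
    u16 = np.frombuffer(np.random.bytes(200), dtype=np.uint16).copy()  # 100 uint16 values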

Re: [Numpy-discussion] array of random numbers fails to construct

2015-12-08 Thread Allan Haldane
On 12/08/2015 07:40 PM, Stephan Hoyer wrote: > On Sun, Dec 6, 2015 at 3:55 PM, Allan Haldane > wrote: > > > I've also often wanted to generate large datasets of random uint8 > and uint16. As a workaround, this is something I have

Re: [Numpy-discussion] When to stop supporting Python 2.6?

2015-12-08 Thread Chris Barker
drop 2.6 I still don't understand why folks insist that they need to run a (very) old python on an old OS, but need the latest and greatest numpy. Chuck's list was pretty long and compelling. -CHB On Mon, Dec 7, 2015 at 1:38 AM, Sturla Molden wrote: > Charles R

Re: [Numpy-discussion] When to stop supporting Python 2.6?

2015-12-08 Thread Ralf Gommers
On Wed, Dec 9, 2015 at 12:01 AM, Chris Barker wrote: > drop 2.6 > > I still don't understand why folks insist that they need to run a (very) > old python on an old OS, but need the latest and greatest numpy. > > Chuck's list was pretty long and compelling. > > -CHB > > >

Re: [Numpy-discussion] array of random numbers fails to construct

2015-12-08 Thread Allan Haldane
On 12/08/2015 08:01 PM, Allan Haldane wrote: > On 12/08/2015 07:40 PM, Stephan Hoyer wrote: >> On Sun, Dec 6, 2015 at 3:55 PM, Allan Haldane > > wrote: >> >> >> I've also often wanted to generate large datasets of random uint8 >> and

Re: [Numpy-discussion] When to stop supporting Python 2.6?

2015-12-08 Thread Charles R Harris
On Tue, Dec 8, 2015 at 4:10 PM, Ralf Gommers wrote: > > > On Wed, Dec 9, 2015 at 12:01 AM, Chris Barker > wrote: > >> drop 2.6 >> >> I still don't understand why folks insist that they need to run a (very) >> old python on an old OS, but need the

Re: [Numpy-discussion] array of random numbers fails to construct

2015-12-08 Thread Matthew Brett
Hi, On Tue, Dec 8, 2015 at 4:40 PM, Stephan Hoyer wrote: > On Sun, Dec 6, 2015 at 3:55 PM, Allan Haldane > wrote: >> >> >> I've also often wanted to generate large datasets of random uint8 and >> uint16. As a workaround, this is something I have used: