On Jun 21, 2010, at 5:38 PM, Bevan Jenkins wrote:
> Pierre GM gmail.com> writes:
>
>>
>> I just pushed the weekend's changes, including a fix for your problem.
>> Now it should be possible to define an array of dates w/ a timestep
>> different from 1:
> date_array(start_date=Date("T", "20
On Mon, Jun 21, 2010 at 7:10 PM, Robert Kern wrote:
> On Mon, Jun 21, 2010 at 17:42, Neal Becker wrote:
>> Robert Kern wrote:
>>
>>> On Mon, Jun 21, 2010 at 14:01, Neal Becker wrote:
Can I find an efficient way to do this?
I have a 2d array, A, 80 rows by 880 columns.
I
On Mon, Jun 21, 2010 at 17:42, Neal Becker wrote:
> Robert Kern wrote:
>
>> On Mon, Jun 21, 2010 at 14:01, Neal Becker wrote:
>>> Can I find an efficient way to do this?
>>>
>>> I have a 2d array, A, 80 rows by 880 columns.
>>>
>>> I have a vector, B, of length 80, with scalar indexes.
>>
>> I as
Robert Kern wrote:
> On Mon, Jun 21, 2010 at 14:01, Neal Becker wrote:
>> Can I find an efficient way to do this?
>>
>> I have a 2d array, A, 80 rows by 880 columns.
>>
>> I have a vector, B, of length 80, with scalar indexes.
>
> I assume you mean 880.
>
>> I want a vector output C where
>> C[
Well, I just found that I can build shared libs in ATLAS 3.9.23, but
not 3.9.24 or 25.
This raises the question: has anyone actually successfully compiled
ATLAS v3.9.25 shared libs on RHEL/CentOS 5.x?
--
Warm regards,
Michael Green
NumPy-Discussion mailing list
Pierre GM gmail.com> writes:
>
> I just pushed the weekend's changes, including a fix for your problem.
> Now it should be possible to define an array of dates w/ a timestep
> different from 1:
> >>> date_array(start_date=Date("T", "2010-06-21 00:00"), length=4,
> ... timestep=15)
> DateArray([21-Jun
On Mon, Jun 21, 2010 at 14:01, Neal Becker wrote:
> Can I find an efficient way to do this?
>
> I have a 2d array, A, 80 rows by 880 columns.
>
> I have a vector, B, of length 80, with scalar indexes.
I assume you mean 880.
> I want a vector output C where
> C[i] = A[b[i],i] (i=0,879)
C = A[b, np.arange(880)]
This would certainly be useful in my case as well. I originally tried doing
something similar:
fun = lambda x: (x.min(), x.max())
apply_along_axis(fun, -1, val_pts)
It turned out to be much slower, which I guess isn't too surprising.
Brad
On Sat, Jun 19, 2010 at 4:45 PM, Warren Weckesser <
war
Can I find an efficient way to do this?
I have a 2d array, A, 80 rows by 880 columns.
I have a vector, B, of length 80, with scalar indexes.
I want a vector output C where
C[i] = A[b[i],i] (i=0,879)
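For later readers, the fancy-indexing answer given downthread can be sketched like this (a minimal example with made-up data; the shapes assume B really has length 880, one row index per column, as Robert inferred):

```python
import numpy as np

# Illustrative data: A is 80 rows by 880 columns, b holds one row index
# per column (length 880, matching i = 0..879 in the question).
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 880))
b = rng.integers(0, 80, size=880)

# C[i] = A[b[i], i]: pair each column index i with its row index b[i]
C = A[b, np.arange(880)]

# Equivalent explicit loop, for comparison
C_loop = np.array([A[b[i], i] for i in range(880)])
assert np.allclose(C, C_loop)
```

Integer-array indexing with two index arrays of the same length picks one element per position, so no Python-level loop is needed.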
On Mon, Jun 21, 2010 at 1:51 PM, Robert Kern wrote:
> On Mon, Jun 21, 2010 at 13:44, Benjamin Root wrote:
> > Hello,
> >
> > I ran into a somewhat counter-intuitive situation that probably should be
> > documented somewhere with respect to record (structured?) arrays. I wanted
> > to stack mu
Hmmm, if I force the reshaped array to be copied, it speeds up the min/max
and makes the overall operation a bit faster (times are below, generated
using line profiler with kernprof.py). I'd certainly like to get rid of
this copy() operation if possible. Is there any way to avoid it?
Brad
Line
On Mon, Jun 21, 2010 at 1:51 PM, Skipper Seabold wrote:
> On Mon, Jun 21, 2010 at 2:44 PM, Benjamin Root wrote:
> > Hello,
> >
> > I ran into a somewhat counter-intuitive situation that probably should be
> > documented somewhere with respect to record (structured?) arrays. I wanted
> > to sta
On Mon, Jun 21, 2010 at 13:44, Benjamin Root wrote:
> Hello,
>
> I ran into a somewhat counter-intuitive situation that probably should be
> documented somewhere with respect to record (structured?) arrays. I wanted
> to stack multiple arrays together that had the same names for the columns.
> Si
On Mon, Jun 21, 2010 at 2:44 PM, Benjamin Root wrote:
> Hello,
>
> I ran into a somewhat counter-intuitive situation that probably should be
> documented somewhere with respect to record (structured?) arrays. I wanted
> to stack multiple arrays together that had the same names for the columns.
>
Thanks Robert
On Mon, Jun 21, 2010 at 11:47 AM, Robert Kern wrote:
> On Mon, Jun 21, 2010 at 13:40, Vishal Rana wrote:
> > I have tried:
> > x1=np.array([1,2,3,4])
> > x2=np.array(['a','dd','xyz','12'])
> > x3=np.array([1.1,2,3,4])
> > r = np.core.records.fromarrays([x1,x2,x3],names='a,b,c'
>
On Mon, Jun 21, 2010 at 13:40, Vishal Rana wrote:
> I have tried:
> x1=np.array([1,2,3,4])
> x2=np.array(['a','dd','xyz','12'])
> x3=np.array([1.1,2,3,4])
> r = np.core.records.fromarrays([x1,x2,x3],names='a,b,c'
Note, please do not reach down into numpy.core like this; numpy.core.records is
Hello,
I ran into a somewhat counter-intuitive situation that probably should be
documented somewhere with respect to record (structured?) arrays. I wanted
to stack multiple arrays together that had the same names for the columns.
Since I was imagining the columns as having the names (as opposed
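For readers hitting the same thing, here is a minimal sketch of stacking structured arrays that share field names (the field names and values are made up for illustration; stack_arrays lives in numpy.lib.recfunctions):

```python
import numpy as np
from numpy.lib import recfunctions as rfn

# Two structured arrays with identical field names and dtypes
a = np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
b = np.array([(5, 6.0)], dtype=[('x', int), ('y', float)])

# np.concatenate stacks them end-to-end when the dtypes match exactly
stacked = np.concatenate([a, b])      # 1-D, length 3, fields preserved
assert stacked['x'].tolist() == [1, 3, 5]

# recfunctions.stack_arrays also copes with dtypes that only partly
# overlap, filling in missing fields
merged = rfn.stack_arrays([a, b], usemask=False)
assert merged.shape == (3,)
```

Note that a structured array is 1-D with record-valued elements, so the stacking routines see records, not named columns, which is where the counter-intuitive behaviour tends to come from.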
I have tried:
x1=np.array([1,2,3,4])
x2=np.array(['a','dd','xyz','12'])
x3=np.array([1.1,2,3,4])
r = np.core.records.fromarrays([x1,x2,x3],names='a,b,c'
type(r) gives
np.save('np.npy', r)
r = np.load('np.npy')
type(r) gives
So my record is lost and converted to ndarray, any idea?
Thanks
Vi
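A sketch of one workaround (assuming a current NumPy): the .npy format keeps the structured dtype but not the ndarray subclass, so the loaded array comes back as a plain ndarray and can simply be re-viewed as np.recarray. This uses the public np.rec.fromarrays rather than reaching into numpy.core:

```python
import os
import tempfile
import numpy as np

x1 = np.array([1, 2, 3, 4])
x2 = np.array(['a', 'dd', 'xyz', '12'])
x3 = np.array([1.1, 2, 3, 4])
r = np.rec.fromarrays([x1, x2, x3], names='a,b,c')

# Round-trip through a .npy file (a temp path here, for illustration)
fname = os.path.join(tempfile.mkdtemp(), 'np.npy')
np.save(fname, r)
loaded = np.load(fname)              # plain ndarray, but fields intact
r2 = loaded.view(np.recarray)        # restore attribute-style access
assert type(r2) is np.recarray and r2.b[2] == 'xyz'
```

The data and field names are never lost, only the subclass, so the view is cheap and copies nothing.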
Hello David,
thanks for your reply.
On Sat, Jun 19, 2010 at 18:37, David Cournapeau wrote:
> On Sat, Jun 19, 2010 at 11:13 PM, Sandro Tosi wrote:
>> Hello,
>> I'm upgrading the Debian package for Numpy to 1.4.1.
>>
>> Compilation goes well, but when I install the package for test and
>> import n
Scott Ransom nrao.edu> writes:
>
> I have an updated version of ppgplot.c as part of a larger astronomical
> package. You can get the ppgplot code here:
>
> http://github.com/scottransom/presto/tree/master/python/ppgplot_src/
>
> Scott
>
Thank you very much. It really works!
I have an updated version of ppgplot.c as part of a larger astronomical
package. You can get the ppgplot code here:
http://github.com/scottransom/presto/tree/master/python/ppgplot_src/
Scott
On Monday, June 21, 2010 12:32:37 pm Jun Liu wrote:
> follow http://www.dur.ac.uk/physics.astrolab/ppgp
Following http://www.dur.ac.uk/physics.astrolab/ppgplot.html, I installed pgplot
with gfortran and ppgplot (ported to numpy), and everything seemed OK. But when I
tried a simple test, it raised an unpleasant warning:
DeprecationWarning: PyArray_As1D: use PyArray_AsCArray
Here is my test code:
import ppgplot
On Thu, Jun 17, 2010 at 4:50 PM, Brad Buran wrote:
> I have a 1D array with >100k samples that I would like to reduce by
> computing the min/max of each "chunk" of n samples. Right now, my
> code is as follows:
>
> n = 100
> offset = array.size % n
> array_min = array[offset:].reshape((-
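For reference, a self-contained sketch of the chunked min/max pattern in the quoted code (variable names follow the email, with n as the chunk size; the leading samples that don't fill a whole chunk are dropped, as in the original):

```python
import numpy as np

def chunk_min_max(a, n):
    # Drop the leading remainder so the rest divides evenly into chunks
    offset = a.size % n
    chunks = a[offset:].reshape(-1, n)   # one row per chunk of n samples
    return chunks.min(axis=1), chunks.max(axis=1)

a = np.arange(1005)
mins, maxs = chunk_min_max(a, 100)       # 10 chunks; first 5 samples dropped
```

Reducing along axis 1 of the reshaped view does the whole job in two vectorized passes, which is the approach the thread is trying to speed up further.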
Robert Kern writes:
> np.intc
Great, Thanks
Berthold Höllmann
--
Germanischer Lloyd AG
Berthold Höllmann
Project Engineer, CAE Development
Brooktorkai 18
20457 Hamburg
Germany
Phone: +49(0)40 36149-7374
Fax: +49(0)40 36149-7320
e-mail: berthold.hoellm...@gl-group.com
Internet: http://www.gl-gr
On 06/20/2010 03:24 AM, Tom Durrant wrote:
> Hi All,
>
> I have a problem involving lat/lon data. Basically, I am evaluating
> numerical weather model data against satellite data, and trying to
> produce gridded plots of various statistics. There are various steps
> involved with this, but bas
On Sun, Jun 20, 2010 at 10:57 PM, Tom Durrant wrote:
>
>>
>> are you doing something like np.polyfit(model, obs, 1) ?
>>
>> If you are using polyfit with deg=1, i.e. fitting a straight line,
>> then this could be also calculated using the weights in histogram2d.
>>
>> histogram2d (histogramdd) use
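A sketch of the weights idea mentioned above (illustrative data; the bin edges are made up): per-cell statistics on a lon/lat grid fall out of two histogram2d calls, a weighted sum over the observations divided by a plain count:

```python
import numpy as np

# Fake scattered lon/lat observations, for illustration only
rng = np.random.default_rng(1)
lon = rng.uniform(0, 10, 1000)
lat = rng.uniform(-5, 5, 1000)
obs = np.sin(lon) + 0.1 * rng.standard_normal(1000)

bins = [np.linspace(0, 10, 11), np.linspace(-5, 5, 6)]
counts, xedges, yedges = np.histogram2d(lon, lat, bins=bins)
sums, _, _ = np.histogram2d(lon, lat, bins=bins, weights=obs)

with np.errstate(invalid='ignore', divide='ignore'):
    cell_mean = sums / counts            # NaN marks empty cells
```

Higher moments (and the sums needed for a per-cell straight-line fit) follow the same pattern with weights=obs**2, weights=x*obs, and so on.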
On Jun 20, 2010, at 7:43 PM, Pierre GM wrote:
> On Jun 20, 2010, at 7:28 PM, Bevan Jenkins wrote:
>>
>> I downloaded the git branch,
>> http://github.com/pierregm/scikits.timeseries-sandbox
>> I had to use the Download
>> pierregm/scikits.timeseries-sandbox at master via a zip file as I have not
On Mon, Jun 21, 2010 at 10:40 AM, Neil Crighton wrote:
> Warren Weckesser enthought.com> writes:
>
>>
>> Benjamin Root wrote:
>> > Brad, I think you are doing it the right way, but I think what is
>> > happening is that the reshape() call on the sliced array is forcing a
>> > copy to be made firs
Warren Weckesser enthought.com> writes:
>
> Benjamin Root wrote:
> > Brad, I think you are doing it the right way, but I think what is
> > happening is that the reshape() call on the sliced array is forcing a
> > copy to be made first. The fact that the copy has to be made twice
> > just wor
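The copy behaviour under discussion can be seen directly (a minimal sketch): reshape returns a view when the result is expressible with strides, and silently copies otherwise, which is the hidden cost the profiling above is chasing:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
t = a.T                                   # non-contiguous (transposed) view
flat = t.reshape(-1)                      # strides can't express this: copy
assert not np.shares_memory(a, flat)

s = np.arange(20)[::2]                    # uniformly strided 1-D slice
v = s.reshape(2, 5)                       # expressible with strides: view
assert np.shares_memory(s, v)
```

So the explicit copy() in Brad's timing just makes a copy that reshape would otherwise perform implicitly; checking np.shares_memory (or the array's flags) shows which case you are in.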