On Fri, Jun 29, 2012 at 2:20 PM, Jim Vickroy jim.vick...@noaa.gov wrote:
As a lurker and user, I too wish for a distinct numpy-users list. -- jv
This thread is a perfect example of why another list is needed. It's
currently 42 semi-philosophical posts about what kind of community numpy
should
On Thu, Jun 28, 2012 at 7:25 AM, Travis Oliphant tra...@continuum.io wrote:
Hey all,
I'd like to propose dropping support for Python 2.4 in NumPy 1.8 (not the 1.7
release). What does everyone think of that?
As a tangential point, MPL is dropping support for python2.4 in its
next major
Some examples would be nice. A lot of people did move already. And I haven't
seen reports of those that tried and got stuck. Also, Debian and
Python(x,y) have 1.6.2, EPD has 1.6.1.
In my company, the numpy for our production python install is well
behind 1.6. In the world of trading, the
On Tue, Jun 26, 2012 at 3:27 PM, Thouis (Ray) Jones tho...@gmail.com wrote:
+1 !
Speaking as someone trying to get started in contributing to numpy, I
find this discussion extremely off-putting. It's childish,
meaningless, and spiteful, and I think it's doing more harm than any
possible
On Mon, Mar 5, 2012 at 1:29 PM, Keith Goodman kwgood...@gmail.com wrote:
I[8] np.allclose(a, a[0])
O[8] False
I[9] a = np.ones(10)
I[10] np.allclose(a, a[0])
O[10] True
One disadvantage of using a[0] as a proxy is that the result depends on the
ordering of a
(a.max() - a.min())
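A hedged sketch of the two checks being contrasted in this thread, comparing every element against a[0] versus bounding the spread (the function names are mine, not from the post):

```python
import numpy as np

def all_close_to_first(a, tol=1e-8):
    # The proxy from the thread: compare everything against a[0].
    # With a relative tolerance, the outcome can depend on which
    # value happens to sit first.
    return np.allclose(a, a[0], atol=tol)

def all_within_spread(a, tol=1e-8):
    # Order-independent alternative hinted at by (a.max() - a.min()).
    return (a.max() - a.min()) <= tol

a = np.ones(10)
print(all_close_to_first(a), all_within_spread(a))  # True True
```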
On Wed, Feb 29, 2012 at 1:20 PM, Neal Becker ndbeck...@gmail.com wrote:
Much of Linus's complaints have to do with the use of c++ in the _kernel_.
These objections are quite different for an _application_. For example,
there
are issues with the need for support libraries for exception
On Sat, Feb 18, 2012 at 5:09 PM, David Cournapeau courn...@gmail.comwrote:
There are better languages than C++ that have most of the technical
benefits stated in this discussion (rust and D being the most
obvious ones), but whose usage is unrealistic today for various
reasons: knowledge,
On Thu, Feb 16, 2012 at 7:26 PM, Alan G Isaac alan.is...@gmail.com wrote:
On 2/16/2012 7:22 PM, Matthew Brett wrote:
This has not been an encouraging episode in striving for consensus.
I disagree.
Failure to reach consensus does not imply lack of striving.
Hey Alan, thanks for your
On Oct 13, 2011, at 4:21 PM, Zachary Pincus zachary.pin...@yale.edu wrote:
I keep meaning to use matplotlib as well, but every time I try I also get
really turned off by the matlabish interface in the examples. I get that it's
a selling point for matlab refugees, but I find it
On Wed, Apr 13, 2011 at 8:50 AM, John Hunter jdh2...@gmail.com wrote:
On Sat, Jan 15, 2011 at 7:28 AM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:
I've opened http://projects.scipy.org/numpy/ticket/1713 so this doesn't get
lost.
Just wanted to bump this -- bug still exists in numpy HEAD 2.0.0.dev-fe3852f
jo...@udesktop253:~ gcc --version
gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
2010/9/3 Guillaume Chérel guillaume.c.che...@gmail.com:
Great, Thank you. I also found out about csv2rec. I've been missing
these two a lot.
Some other handy rec functions in mlab
http://matplotlib.sourceforge.net/examples/misc/rec_groupby_demo.html
On Fri, Sep 3, 2010 at 8:50 AM, Benjamin Root ben.r...@ou.edu wrote:
Why is this function in matplotlib? Wouldn't it be more useful in numpy?
I tend to add stuff I write to matplotlib. mlab was initially a
repository of matlab-like functions that were not available in numpy
(load, save,
2010/8/16 Guillaume Chérel guillaume.c.che...@gmail.com:
Hello,
I'd like to know if there is an easy way to save a list of 1D arrays to a
csv file, with the first line of the file being the column names.
I found the following, but I can't get to save the column names:
data =
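With current NumPy, the request here can be met with np.savetxt's header keyword, which arrived in releases later than the one under discussion; the arrays and column names below are made up for illustration:

```python
import numpy as np

x = np.arange(3.0)
y = x ** 2
names = ["x", "y"]

# Stack the 1D arrays as columns and emit the names as the first line;
# comments="" stops savetxt from prefixing the header with '# '.
np.savetxt("out.csv", np.column_stack([x, y]),
           delimiter=",", header=",".join(names), comments="")

with open("out.csv") as f:
    print(f.readline().strip())  # x,y
```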
On Fri, Aug 13, 2010 at 11:47 AM, David Goldsmith
d.l.goldsm...@gmail.com wrote:
2010/7/30 Stéfan van der Walt ste...@sun.ac.za
Hi David
Best of luck with your new position! I hope they don't make you program
too much MATLAB!
After several years now of writing Python and now having written
On Fri, Aug 13, 2010 at 3:41 PM, Benjamin Root ben.r...@ou.edu wrote:
@Josh: Awesome name. Very fitting...
Another thing that I really love about matplotlib that drove me nuts in
Matlab was being unable to use multiple colormaps in the same figure.
Funny -- this was one of the *first* things
I am seeing a problem on Solaris since I upgraded to svn HEAD.
np.isinf does not handle np.inf. See ipython session below. I am not
seeing this problem w/ HEAD on an ubuntu linux box I tested on
In [1]: import numpy as np
In [2]: np.__version__
Out[2]: '2.0.0.dev8480'
In [3]: x = np.inf
On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu wrote:
Is it certain that the Solaris compiler lacks isinf? Is it possible
that it has it, but it is not being detected?
Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on solaris x86
On Thu, Jul 15, 2010 at 7:11 PM, John Hunter jdh2...@gmail.com wrote:
On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu wrote:
Is it certain that the Solaris compiler lacks isinf? Is it possible
that it has it, but it is not being detected?
Just to clarify, I'm not using
On Thu, Jul 15, 2010 at 7:27 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Thu, Jul 15, 2010 at 6:11 PM, John Hunter jdh2...@gmail.com wrote:
On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu wrote:
Is it certain that the Solaris compiler lacks isinf
I use record arrays extensively with python datetimes, which works if
you pass in a list of lists of data with the names. numpy can
accurately infer the dtypes and create a usable record array. Eg,
import datetime
import numpy as np
rows = [ [datetime.date(2001,1,1), 12, 23.],
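Completing the truncated snippet as a guess, with row values and field names beyond the first row invented for illustration:

```python
import datetime
import numpy as np

rows = [(datetime.date(2001, 1, 1), 12, 23.0),
        (datetime.date(2001, 1, 2), 14, 21.5)]

# fromrecords infers object/int/float dtypes from the row contents,
# as the post describes.
r = np.rec.fromrecords(rows, names=["date", "count", "temp"])
print(r.dtype.names)   # ('date', 'count', 'temp')
print(r.count)         # [12 14]
```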
On Fri, Jul 9, 2010 at 8:03 PM, Peter Isaac peter.is...@monash.edu wrote:
Note that EPD-6.2-2 works fine with this script on WinXP.
Any suggestions welcome
then just use epd-6.2.2 on winxp.
your-mpl-developer-channeling-steve-jobs,
JDH
wink
On Mon, Mar 8, 2010 at 11:39 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
- port matplotlib to Py3K
We'd be happy to mentor a project here. To my knowledge, nothing has
been done, other than upgrade to CXX6 (our C++ extension lib). Most,
but not all, of our extension code is exposed
On Tue, Feb 9, 2010 at 7:53 PM, Pierre GM pgmdevl...@gmail.com wrote:
On Feb 9, 2010, at 8:16 PM, John Hunter wrote:
and have totxt, tocsv, etc. from rec2txt, rec2csv, etc. I
think the functionality of mlab.rec_summarize and rec_groupby is very
useful, but the interface is a bit clunky
On Wed, Feb 10, 2010 at 8:54 AM, John Hunter jdh2...@gmail.com wrote:
On Tue, Feb 9, 2010 at 7:53 PM, Pierre GM pgmdevl...@gmail.com wrote:
On Feb 9, 2010, at 8:16 PM, John Hunter wrote:
and have totxt, tocsv, etc. from rec2txt, rec2csv, etc. I
think the functionality
On Tue, Feb 9, 2010 at 4:43 PM, Fernando Perez fperez@gmail.com wrote:
On Tue, Feb 9, 2010 at 5:02 PM, Robert Kern robert.k...@gmail.com wrote:
numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter')
And if that isn't sufficient, John has in matplotlib.mlab a few other
similar
On Tue, Feb 9, 2010 at 7:02 PM, Pierre GM pgmdevl...@gmail.com wrote:
On Feb 9, 2010, at 7:54 PM, Pauli Virtanen wrote:
But, should we make these functions available under some less
internal-ish namespace? There's numpy.rec at the least -- it could be
made a real module to pull in things from
We are looking to hire a quantitative researcher to help research and
develop trading ideas, and to develop and support infrastructure to
put these trading strategies into production. We are looking for
someone who is bright and curious with a quantitative background and a
strong interest in
On Wed, Nov 25, 2009 at 8:48 AM, Dan Yamins dyam...@gmail.com wrote:
Am I just not supposed to be working with length-0 string columns, period?
But why would you want to? array dtypes are immutable, so you are
saying: I want this field to be only empty strings now and forever.
So you can't
of circular references. Pyplot close does this
automatically, but this does not apply to embedding.
How are you running your app? From the shell or IPython?
Mathew
On Thu, Nov 19, 2009 at 10:30 AM, John Hunter jdh2...@gmail.com
wrote:
On Nov 19, 2009, at 11:57 AM, Robert Kern robert.k
using the mpl toolbar? It keeps a ref to the canvas. If you
can create a small freestanding example, that would help
-Mathew
On Thu, Nov 19, 2009 at 10:42 AM, John Hunter jdh2...@gmail.com
wrote:
On Nov 19, 2009, at 12:35 PM, Mathew Yeates mat.yea...@gmail.com
wrote:
Yeah, I
I just tracked down a subtle bug in my code, which is equivalent to
In [64]: x, y = np.random.rand(2, n)
In [65]: z = np.zeros_like(x)
In [66]: mask = x > 0.5
In [67]: z[mask] = x/y
I meant to write
z[mask] = x[mask]/y[mask]
so I can fix my code, but why is line 67 allowed
In [68]:
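A sketch of the intended fix; note that current NumPy refuses the original mismatched assignment with a ValueError, which would have caught this bug (the 0.5 threshold follows the excerpt, the RNG seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x, y = rng.random((2, n))
z = np.zeros_like(x)
mask = x > 0.5

# Only compute and store the masked elements; z stays 0 elsewhere.
z[mask] = x[mask] / y[mask]
print(np.all(z[~mask] == 0))  # True
```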
On Wed, Aug 12, 2009 at 6:28 AM, John Hunterjdh2...@gmail.com wrote:
We would like to add function plotting to mpl, but to do this right we
need to be able to adaptively sample a function evaluated over an
interval so that some tolerance condition is satisfied, perhaps with
both a relative and
We would like to add function plotting to mpl, but to do this right we
need to be able to adaptively sample a function evaluated over an
interval so that some tolerance condition is satisfied, perhaps with
both a relative and absolute error tolerance condition. I am a bit
out of my area of
yubnub is pretty cool -- it's a command line interface for the web.
You can enable it in firefox by typing about:config in the URL bar,
scrolling down to keyword.URL, right click on the line and choose
modify, and set the value to be
http://www.yubnub.org/parser/parse?default=g2command=
Then,
We are trying to build and test mpl installers for python2.4, 2.5 and
2.6. What we are finding is that if we build mpl against a more
recent numpy than the installed numpy on a test machine, the import of
mpl extension modules which depend on numpy trigger a segfault.
Eg, on python2.5 and
2009/5/20 Stéfan van der Walt ste...@sun.ac.za:
David Cournapeau also put a check in place so that the NumPy build
will break if we forget to update the API version again.
So, while we can't change the releases of NumPy out there already, we
can at least ensure that this won't happen again.
A colleague of mine has a bunch of numpy arrays saved with np.save and
he now wants to access them directly in C, with or w/o the numpy C API
doesn't matter. Does anyone have any sample code lying around which
he can borrow from? The array is a structured array with an otherwise
plain vanilla
On Wed, Jan 7, 2009 at 6:37 AM, Franck Pommereau
pommer...@univ-paris12.fr wrote:
def f4(x, y):
Jean-Baptiste Rudant boogalo...@yahoo.fr
test 1 CPU times: 111.21s
test 2 CPU times: 13.48s
As Jean-Baptiste noticed, this solution is not very efficient (but
works almost
On Thu, Dec 18, 2008 at 8:27 PM, Bradford Cross
bradford.n.cr...@gmail.com wrote:
This is a new project I just released.
I know it is C#, but some of the design and idioms would be nice in
numpy/scipy for working with discrete event simulators, time series, and
event stream processing.
On Fri, Dec 19, 2008 at 12:59 PM, Eric Firing efir...@hawaii.edu wrote:
Licensing is no problem; I have never bothered with it, but I can tack on a
BSD-type license if that would help.
Great -- if you are the copyright holder, would you commit a BSD
license file to the py4science trailstats
On Mon, Dec 1, 2008 at 12:21 PM, Pierre GM [EMAIL PROTECTED] wrote:
Well, looks like the attachment is too big, so here's the implementation.
The tests will come in another message.
It looks like I am doing something wrong -- trying to parse a CSV file
with dates formatted like '2008-10-14',
On Mon, Dec 1, 2008 at 1:14 PM, Pierre GM [EMAIL PROTECTED] wrote:
The problem you have is that the default dtype is 'float' (for
backwards compatibility w/ the original np.loadtxt). What you want
is to automatically change the dtype according to the content of
your file: you should use
On Tue, Nov 25, 2008 at 11:23 PM, Ryan May [EMAIL PROTECTED] wrote:
Updated patch attached. This includes:
* Updated docstring
* New tests
* Fixes for previous issues
* Fixes to make new tests actually work
I appreciate any and all feedback.
I'm having trouble applying your patch, so
On Tue, Nov 25, 2008 at 12:16 PM, Pierre GM [EMAIL PROTECTED] wrote:
A la mlab.csv2rec ? It could work with a bit more tweaking, basically
following John Hunter's et al. path. What happens when the column names are
unknown (read from the header) or wrong ?
Actually, I'd like John to comment
On Tue, Nov 25, 2008 at 2:01 PM, Pierre GM [EMAIL PROTECTED] wrote:
On Nov 25, 2008, at 2:26 PM, John Hunter wrote:
Yes, I've said on a number of occasions I'd like to see these
functions in numpy, since a number of them make more sense as numpy
methods than as stand alone functions.
Great
I frequently want to break a 1D array into regions above and below
some threshold, identifying all such subslices where the contiguous
elements are above the threshold. I have two related implementations
below to illustrate what I am after. The first crossings is rather
naive in that it doesn't
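The post's two implementations aren't shown, but a common vectorized sketch of what it describes looks like this (my function name; exclusive-stop slices assumed):

```python
import numpy as np

def crossings(x, threshold):
    """Return (start, stop) pairs of contiguous runs where x > threshold."""
    # Pad with False so runs touching either end still produce edges.
    above = np.concatenate(([False], x > threshold, [False]))
    # diff flags every False->True and True->False transition.
    edges = np.flatnonzero(np.diff(above.astype(np.int8)))
    return [(int(a), int(b)) for a, b in zip(edges[::2], edges[1::2])]

x = np.array([0.0, 1.0, 2.0, 0.0, 3.0, 3.0, 0.5])
print(crossings(x, 0.9))  # [(1, 3), (4, 6)]
```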
On Thu, Oct 16, 2008 at 2:28 PM, Rob Hetland [EMAIL PROTECTED] wrote:
I did not know that very useful thing. But now I do. This is solid
proof that lurking on the mailing lists makes you smarter.
and that our documentation effort still has a long way to go !
FAQ added at
On Thu, Oct 16, 2008 at 1:25 PM, Rob Hetland [EMAIL PROTECTED] wrote:
This question gets asked about once a month on the mailing list.
Perhaps pnpoly could find a permanent home in scipy? (or somewhere?)
Obviously, many would find it useful.
It is already in matplotlib
In [1]: import
On Mon, Oct 13, 2008 at 2:29 PM, Alan G Isaac [EMAIL PROTECTED] wrote:
The problem is, you did not just ask
for technical information. You also
accused people of being condescending
and demeaning. But nobody was
condescending or demeaning. As several
people **politely** explained to you,
I have an array of indices into a larger array where some condition
is satisfied. I want to create a larger set of indices which *mark*
all the indices following the condition over some Nmark length
window. In code:
import numpy as np
N = 1000
Nmark = 20
ind =
On Mon, Sep 22, 2008 at 10:13 AM, Robert Kern [EMAIL PROTECTED] wrote:
marked[ind + np.arange(Nmark)] = True
That triggers a broadcasting error:
Traceback (most recent call last):
File /home/titan/johnh/test.py, line 13, in ?
marked3[ind + np.arange(Nmark)] = True
ValueError: shape
On Mon, Sep 22, 2008 at 10:23 AM, Robert Kern [EMAIL PROTECTED] wrote:
On Mon, Sep 22, 2008 at 10:22, Robert Kern [EMAIL PROTECTED] wrote:
ind2mark = np.asarray((ind[:,np.newaxis] + np.arange(Nmark).flat).clip(0,
N-1)
marked[ind2mark] = True
Missing parenthesis:
ind2mark =
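My reconstruction of the corrected expression from this exchange, with the parenthesis placed so that clip applies before indexing; the condition indices are made up:

```python
import numpy as np

N = 1000
Nmark = 20
marked = np.zeros(N, dtype=bool)
ind = np.array([3, 500, 990])  # hypothetical condition indices

# Broadcast each index against the window offsets, clip at the array end.
ind2mark = (ind[:, np.newaxis] + np.arange(Nmark)).clip(0, N - 1)
marked[ind2mark.ravel()] = True
print(marked.sum())  # 50: 20 + 20 + 10 (last window clipped at N-1)
```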
In trying to track down a bug in matplotlib, I have come across some
very strange numpy behavior. Basically, whether or not I call
np.seterr('raise') in a matplotlib demo affects the behavior of
seterr in another (pure numpy) script, run in a separate process.
Something about the numpy
On Mon, Jul 28, 2008 at 2:02 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Mon, Jul 28, 2008 at 13:56, John Hunter [EMAIL PROTECTED] wrote:
In trying to track down a bug in matplotlib, I have come across some
very strange numpy behavior. Basically, whether or not I call
np.seterr('raise
On Mon, Jul 28, 2008 at 2:35 PM, Robert Kern [EMAIL PROTECTED] wrote:
Both, if the behavior exhibits itself without the npy file. If it only
exhibits itself with an npy involved, then we have some more
information about where the problem might be.
OK, I'll see what I can come up with. In the
On Fri, Jul 25, 2008 at 8:22 PM, Matt Knox [EMAIL PROTECTED] wrote:
The automatic string parsing has been mentioned before, but it is a feature
I am personally very fond of. I use it all the time, and I suspect a lot of
people would like it very much if they used it. It's not suited for high
On Mon, Jul 14, 2008 at 12:34 PM, Robert Kern [EMAIL PROTECTED] wrote:
We're not doing anything special, here. When I install using sudo
python install.py on OS X, all of the permissions are 644. I think
the problem may be in your pipeline.
With a little more testing, what I am finding is
I have a rather unconventional install pipeline at work and owner only
read permissions on a number of the tests are causing me minor
problems. It appears the permissions on the tests are set rather
inconsistently in numpy and python -- is there any reason not to make
these all 644?
[EMAIL
On Fri, Jul 11, 2008 at 1:14 PM, Francesc Alted [EMAIL PROTECTED] wrote:
So, it seems that setters/getters for matplotlib datetime could be
supported, maybe at the risk of losing precision. We should study
this more carefully, but I suppose that if there is interest enough
that could be
I would like to find the sample points where the running sum of some
vector exceeds some threshold -- at those points I want to collect all
the data in the vector since the last time the criteria was reached
and compute some stats on it. For example, in python
tot = 0.
xs = []
ys =
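A guess at completing the pure-Python loop the post starts (function name, data, and threshold are mine):

```python
import numpy as np

def chunk_on_cumsum(x, threshold):
    """Collect segments of x each time the running sum since the
    last reset reaches threshold, as the post describes."""
    tot, seg, out = 0.0, [], []
    for v in x:
        tot += v
        seg.append(v)
        if tot >= threshold:
            out.append(np.array(seg))
            tot, seg = 0.0, []   # reset the criterion
    return out

chunks = chunk_on_cumsum([1.0, 2.0, 0.5, 3.0, 1.0, 1.0], 3.0)
print([float(c.sum()) for c in chunks])  # [3.0, 3.5]
```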
On Thu, Jun 26, 2008 at 11:38 AM, Travis E. Oliphant
[EMAIL PROTECTED] wrote:
Stéfan van der Walt wrote:
Hi all,
I am documenting `recarray`, and have a question:
Is its use still recommended, or has it been superseded by fancy data-types?
I rarely recommend its use (but some people do
On Sat, May 31, 2008 at 4:05 AM, R. Bastian [EMAIL PROTECTED] wrote:
Neat! I really like the layout. The red format warnings are a nice touch:
http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.exp/
Hi, I was just reading through this example when I noticed this usage:
from
I just spent a while tracking down a bug in my code, and found out the
problem was numpy was letting me get away with using a logical mask of
smaller size than the array it was masking.
In [19]: x = np.random.rand(10)
In [20]: x
Out[20]:
array([ 0.72253623, 0.8412243 , 0.12835194,
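For reference, current NumPy no longer lets the undersized mask through silently; a short check (mask length chosen arbitrarily):

```python
import numpy as np

x = np.random.rand(10)
short_mask = np.zeros(5, dtype=bool)

# Modern NumPy rejects a boolean mask whose length doesn't match,
# which would have caught the bug described above.
try:
    x[short_mask]
except IndexError as e:
    print("refused:", e)
```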
I have a record array w/ dates (O4) and floats. If some of these
floats are NaN, np.save crashes (on my solaris platform but not on a
linux machine I tested on). Here is the code that produces the bug:
In [1]: pwd
Out[1]: '/home/titan/johnh/python/svn/matplotlib/matplotlib/examples/data'
In
On Tue, May 20, 2008 at 12:13 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
Looks like we need to add a test for this before release. But I'm off to
work.
Here's a simpler example in case you want to wrap it in a test harness:
import datetime
import numpy as np
r = np.rec.fromarrays([
We recently deprecated matplotlib.mlab.hist, and I am now hitting a
bug in numpy's histogram, which appears to be caused by the use of
'any' that does not exist in the namespace. Small patch attached.
The example below exposes the bug:
Python 2.4.2 (#1, Feb 23 2006, 12:48:31)
Type copyright,
Apologies for the off-topic post to the numpy list, but we have just
committed some potentially code-breaking changes to the matplotlib svn
repository, and we want to give as wide a notification to people as
possible. Please do not reply to the numpy list, but rather to a
matplotlib mailing list.
On Nov 6, 2007 8:22 AM, Lisandro Dalcin [EMAIL PROTECTED] wrote:
Mmm...
It looks as if 'mask' is being internally converted from
[True, False, False, False, True]
to
[1, 0, 0, 0, 1]
Yep, clearly. The question is: is this the desired behavior because
it leads to a silent failure for people
A colleague of mine just asked for help with a pesky bug that turned
out to be caused by his use of a list of booleans rather than an array
of booleans as his logical indexing mask. I assume this is a feature
and not a bug, but it certainly surprised him:
In [58]: mask = [True, False, False,
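A sketch of the surprise; note that current NumPy treats a plain list of bools as a boolean mask, so both forms now agree, whereas in the era of this post the list was coerced to integers (values below are mine):

```python
import numpy as np

a = np.arange(5) * 10
mask_list = [True, False, False, False, True]
mask_arr = np.array(mask_list)

# Both select elements 0 and 4 in current NumPy; back then the
# list became [1, 0, 0, 0, 1] and did integer fancy indexing,
# yielding [10  0  0  0 10] instead.
print(a[mask_arr])   # [ 0 40]
print(a[mask_list])  # [ 0 40]
```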
I am working on an example to illustrate convolution in the temporal
and spectral domains, using the property that a convolution in the
time domain is a multiplication in the fourier domain. I am using
numpy.fft and numpy.convolve to compute the solution two ways, and
comparing them. I am
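The property being relied on, as a runnable sanity check (signal lengths are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
h = rng.standard_normal(48)

# Linear convolution via FFT: zero-pad both signals to the full
# output length so the circular convolution matches np.convolve.
n = len(x) + len(h) - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)
direct = np.convolve(x, h)
print(np.allclose(via_fft, direct))  # True
```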
On 9/26/07, Robert Kern [EMAIL PROTECTED] wrote:
Here is the straightforward way:
In [15]: import numpy as np
In [16]: dt = np.dtype([('foo', int), ('bar', float)])
In [17]: r = np.zeros((3,3), dtype=dt)
Here is a (hopefully) simple question. If I create an array like
this, how can I
Is it desirable that numpy.corrcoef for two arrays returns a 2x2 array
rather than a scalar
In [10]: npy.corrcoef(npy.random.rand(10), npy.random.rand(10))
Out[10]:
array([[ 1., -0.16088728],
[-0.16088728, 1.]])
I always end up extracting the 0,1 element anyway. What is
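For reference, extracting the scalar from the 2x2 result as the post describes:

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.random(10), rng.random(10)

c = np.corrcoef(x, y)  # 2x2: ones on the diagonal, r off-diagonal
r = c[0, 1]            # the single number the post is after
print(c.shape, np.allclose(np.diag(c), 1.0))  # (2, 2) True
```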
On 8/24/07, Travis Oliphant [EMAIL PROTECTED] wrote:
I like the direction of this work. For me, the biggest issue is whether
or not matplotlib (and other code depending on numpy.ma) works with it.
I'm pretty sure this can be handled and so, I'd personally like to see it.
mpl already supports
On 7/6/07, Vincent Nijs [EMAIL PROTECTED] wrote:
I wrote the attached (small) program to read in a text/csv file with
different data types and convert it into a recarray without having to
pre-specify the dtypes or variables names. I am just too lazy to type-in
stuff like that :) The supported
On 6/13/07, Pierre GM [EMAIL PROTECTED] wrote:
Have you tried mrecords, in the alternative maskedarray package available on
the scipy SVN ? It should support masked fields (by opposition to masked
records in numpy.core.ma). If not, would you mind giving a test and letting
me know your
I find myself using record arrays more and more, and one feature missing
is the ability to do tab completion on attribute names in ipython,
presumably because you are using a dict under the hood and __getattr__
to resolve
o.key
where o is a record array and key is a field name.
How hard would it be
A colleague of mine is trying to update our production environment
with the latest releases of numpy, scipy, mpl and ipython, and is
worried about the lag time when there is a new numpy and old scipy,
etc... as the build progresses. This is the scheme he is considering,
which looks fine to me,
On 5/31/07, Matthew Brett [EMAIL PROTECTED] wrote:
Hi,
That would get them all built as a cohesive set. Then I'd repeat the
installs without PYTHONPATH:
Is that any different from:
cd ~/src
cd numpy
python setup.py build
cd ../scipy
python setup.py build
Well, the scipy
I have a numpy record array and I want to pretty print a single
element. I was trying to loop over the names in the element dtype and
use getattr to access the field value, but I got fouled up because
getattr is trying to access the dtype attribute of one of the python
objects (datetime.date)
I have a numpy array of floats, and would like an easy way of
specifying the format string when printing the array, eg
print x.pprint('%1.3f')
would do the normal repr of the array but using my format string for
the individual elements. Is there and easy way to get something like
this
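There is no x.pprint('%1.3f') method, but np.array2string's formatter argument (or np.set_printoptions for a global setting) gives the effect asked for; the sample values echo the array shown earlier in this digest:

```python
import numpy as np

x = np.array([0.72253623, 0.8412243, 0.12835194])

# Per-call formatting without touching global print options.
s = np.array2string(x, formatter={'float_kind': lambda v: '%1.3f' % v})
print(s)  # [0.723 0.841 0.128]
```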
David == David Cournapeau [EMAIL PROTECTED] writes:
David Of this 300 ms spent in Colormap functor, 200 ms are taken
David by the take function: this is the function which I think
David can be speed up considerably.
Sorry I had missed this in the previous conversations. It is
David == David Cournapeau [EMAIL PROTECTED] writes:
David At the end, in the original context (speeding the drawing
David of spectrogram), this is the problem. Even if multiple
David backend/toolkits have obviously an impact in performances,
David I really don't see why a numpy