On Mon, Jan 14, 2013 at 9:57 AM, Benjamin Root ben.r...@ou.edu wrote:
On Mon, Jan 14, 2013 at 7:38 AM, Pierre Haessig pierre.haes...@crans.org
wrote:
Hi,
On 14/01/2013 00:39, Nathaniel Smith wrote:
(The nice thing about np.filled() is that it makes np.zeros() and
np.ones() feel like
On Mon, Jan 14, 2013 at 1:12 PM, Pierre Haessig
pierre.haes...@crans.org wrote:
In [8]: tile(nan, (3,3)) # (it's a verb ! )
tile, in my opinion, is useful in some cases (for people who think in
terms of repmat()) but not very NumPy-ish. What I'd like is a function
that takes
- an initial
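As a historical footnote, the function being sketched in this thread landed in later NumPy releases as np.full, which makes the tile(nan, ...) idiom explicit. A quick sketch using the modern name:

```python
import numpy as np

a = np.full((3, 3), np.nan)   # shape and fill value spelled out
b = np.tile(np.nan, (3, 3))   # the repmat-style spelling under discussion

print(a.shape, np.isnan(a).all())  # (3, 3) True
```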
A bit off-topic, but could someone have a look at
https://github.com/numpy/numpy/pull/2699 and provide some feedback?
If 1.7 is meant to be an LTS release, this would be a nice wart to
have out of the way. The Travis failure was a spurious one that has
since been fixed.
On Sat, Dec 15, 2012 at
On Thu, Dec 13, 2012 at 11:34 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
Time to raise this topic again. Opinions welcome.
As you know from the pull request discussion, big +1 from me too. I'm
also of the opinion, along with David C. and Brad, that dropping 2.5
support would be a good thing
On Wed, Dec 12, 2012 at 3:20 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:
For numpy indexing this may not be appropriate though; checking every index
value used could slow things down and/or be quite disruptive.
For array fancy indices, a dtype check on the entire array would
suffice. For
On Sun, Nov 25, 2012 at 9:47 PM, Tom Bennett tom.benn...@mail.zyzhu.net wrote:
Thanks for the quick response.
Ah, I see. There is a difference between A[:,:1] and A[:,0]. The former
returns an Mx1 2D array whereas the latter returns an M element 1D array. I
was using A[:,0] in the code but
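The distinction quoted above is easy to demonstrate; a small sketch with an arbitrary 4x3 array:

```python
import numpy as np

A = np.arange(12).reshape(4, 3)  # any M x N array

col_2d = A[:, :1]  # slicing keeps the dimension: shape (4, 1)
col_1d = A[:, 0]   # integer indexing drops it: shape (4,)

print(col_2d.shape)  # (4, 1)
print(col_1d.shape)  # (4,)
```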
M = A[..., np.newaxis] == B
will give you a 40x60x20 boolean 3d-array where M[..., i] gives you a
boolean mask for all the occurrences of B[i] in A.
If you wanted all the (i, j) pairs for each value in B, you could do
something like
import numpy as np
from itertools import izip, groupby
from
:
idx.append((i,j))
Of course, it is slow if the arrays are large, but it is very
readable, and probably very fast if cythonised.
David.
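A loop-free variant of the same idea, using the broadcast mask from earlier in the thread; the arrays here are small made-up stand-ins, and np.argwhere does the index collection:

```python
import numpy as np

A = np.array([[0, 1, 2, 1, 0, 2],
              [2, 2, 0, 1, 1, 0],
              [1, 0, 2, 2, 0, 1],
              [0, 0, 1, 2, 2, 1]])
B = np.array([0, 1, 2])

# M[i, j, k] is True where A[i, j] == B[k]
M = A[..., np.newaxis] == B

# All (i, j) pairs for each value in B, read off the boolean mask
pairs = {B[k]: np.argwhere(M[..., k]) for k in range(len(B))}
```

Each row of pairs[v] is one (i, j) location of v in A, with no Python-level loop over elements.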
On Sat, Nov 24, 2012 at 10:19 PM, David Warde-Farley
d.warde.far...@gmail.com wrote:
M = A[..., np.newaxis] == B
will give you
On Sat, Nov 24, 2012 at 7:08 PM, David Warde-Farley
d.warde.far...@gmail.com wrote:
I think that would lose information as to which value in B was at each
position. I think you want:
(premature send, stupid Gmail...)
idx = {}
for i, x in enumerate(a):
for j, y in enumerate(x
On Fri, Nov 2, 2012 at 11:16 AM, Joe Kington jking...@wisc.edu wrote:
On Fri, Nov 2, 2012 at 9:18 AM, Neal Becker ndbeck...@gmail.com wrote:
I'm trying to convert some matlab code. I see this:
b(1)=[];
AFAICT, this removes the first element of the array, shifting the others.
What is the
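That reading of b(1)=[] is right; in NumPy the usual equivalents are a slice or np.delete, both of which produce a new array rather than shrinking b in place. A sketch:

```python
import numpy as np

b = np.array([10, 20, 30, 40])

b = b[1:]  # view that skips the first element
# or, allocating a fresh array:
c = np.delete(np.array([10, 20, 30, 40]), 0)

print(b)  # [20 30 40]
```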
On Wed, Oct 31, 2012 at 7:23 PM, Moroney, Catherine M (388D)
catherine.m.moro...@jpl.nasa.gov wrote:
Hello Everybody,
I have the following problem that I would be interested in finding an
easy/elegant solution to.
I've got it working, but my solution is exceedingly clunky and I'm sure that
On Mon, Oct 29, 2012 at 6:29 AM, Radek Machulka
radek.machu...@gmail.com wrote:
Hi,
is there a way to save multiple arrays into a single npy (npz if possible) file
in a loop? Something like:
fw = open('foo.bar', 'wb')
while foo:
arr = np.array(bar)
np.savez_compressed(fw, arr)
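Each np.savez_compressed call writes a complete archive, so calling it repeatedly on one handle won't append. One workaround is to collect the arrays and make a single call; a sketch, using an in-memory buffer in place of 'foo.bar':

```python
import io
import numpy as np

arrays = {}
for i in range(3):                    # stand-in for the 'while foo' loop
    arrays['arr_%d' % i] = np.arange(i + 1)

buf = io.BytesIO()
np.savez_compressed(buf, **arrays)    # one archive, many named arrays

buf.seek(0)
data = np.load(buf)
print(sorted(data.files))             # ['arr_0', 'arr_1', 'arr_2']
```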
I submitted a pull request and one of the Travis builds is failing:
https://travis-ci.org/#!/numpy/numpy/jobs/2933551
Given my changes,
https://github.com/dwf/numpy/commit/4c88fdafc003397d6879f81bf59f68adeeb59f2b
I don't see how the masked array module (responsible for the failing
On Wed, Oct 24, 2012 at 7:18 AM, George Nurser gnur...@gmail.com wrote:
Hi,
I was just looking at the einsum function.
To me, it's a really elegant and clear way of doing array operations, which
is the core of what numpy is about.
It removes the need to remember a range of functions, some of
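A few of the separate functions einsum can replace, as a quick sketch:

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
v = np.array([1.0, 2.0, 3.0])

mat = np.einsum('ij,jk->ik', a, b)   # matrix product, like np.dot(a, b)
inner = np.einsum('i,i->', v, v)     # inner product, like np.inner(v, v)
trans = np.einsum('ij->ji', a)       # transpose, like a.T
rows = np.einsum('ij->i', a)         # row sums, like a.sum(axis=1)
```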
On Thu, Oct 25, 2012 at 6:15 PM, Sebastian Berg
sebast...@sipsolutions.net wrote:
On Thu, 2012-10-25 at 17:48 -0400, David Warde-Farley wrote:
Don't worry about that failure on Travis... It happens randomly at
the moment and it's unrelated to anything you are doing.
Ah, okay. I figured
On Thu, Oct 25, 2012 at 8:39 PM, josef.p...@gmail.com wrote:
On Thu, Oct 25, 2012 at 6:58 PM, David Warde-Farley
warde...@iro.umontreal.ca wrote:
On Thu, Oct 25, 2012 at 6:15 PM, Sebastian Berg
sebast...@sipsolutions.net wrote:
On Thu, 2012-10-25 at 17:48 -0400, David Warde-Farley wrote
On Fri, Oct 26, 2012 at 1:04 AM, josef.p...@gmail.com wrote:
Fine, I didn't understand that part correctly.
I have no opinion in that case.
(In statsmodels we only copy arrays via the copy method and through np.array().)
Do you implement __copy__ or __deepcopy__ on your objects? If not,
client
On Wed, Oct 3, 2012 at 1:58 PM, Will Lee lee.w...@gmail.com wrote:
This seems to be a old problem but I've recently hit with this in a very
random way (I'm using numpy 1.6.1). There seems to be a ticket (1239) but
it seems the issue is unscheduled. Can somebody tell me if this is fixed?
In
On 2012-04-03, at 4:10 PM, Frédéric Bastien wrote:
I would like to add this parameter to Theano. So my question is, will
the interface change or is it stable?
To elaborate on what Fred said, in Theano we try to offer the same
functions/methods as NumPy does with the same arguments and same
On Mon, Mar 12, 2012 at 04:15:04AM +, Gias wrote:
I am using Ubuntu 11.04 (natty) in my laptop and Python 2.7. I installed nltk
(2.09b), numpy (1.5.1), and matplotlib(1.1.0). The installation is global and
I
am not using virtualenv. When I try (text4.dispersion_plot([citizens,
On 2012-02-19, at 12:47 AM, Benjamin Root wrote:
Dude, have you seen the .c files in numpy/core? They are already read-only
for pretty much everybody but Mark.
I've managed to patch several of them without incident, and I do not do a lot
of programming in C. It could be simpler, but it's not
On 2012-02-18, at 2:47 AM, Matthew Brett wrote:
Of course it might be that so-far undiscovered C++ developers are
drawn to a C++ rewrite of Numpy. But is that really likely?
If we can trick them into thinking the GIL doesn't exist, then maybe...
David
On 2012-02-16, at 1:28 PM, Charles R Harris wrote:
I think this is a good point, which is why the idea of a long term release is
appealing. That release should be stodgy and safe, while the ongoing
development can be much more radical in making changes.
I sort of thought this *was* the
On 2012-02-14, at 10:14 PM, Bruce Southey wrote:
I will miss you as a numpy release manager!
You have not only done an incredible job but also taken the role to a
higher level.
Your attitude and attention to details has been amazing.
+1, hear hear! Thank you for all the time you've
On Tue, Jan 24, 2012 at 09:15:01AM +, Robert Kern wrote:
On Tue, Jan 24, 2012 at 08:37, Sturla Molden stu...@molden.no wrote:
On 24.01.2012 09:21, Sturla Molden wrote:
randomkit.c handles C long correctly, I think. There are different codes
for 32 and 64 bit C long, and buffer sizes
On Tue, Jan 24, 2012 at 06:00:05AM +0100, Sturla Molden wrote:
On 23.01.2012 22:08, Christoph Gohlke wrote:
Maybe this explains the win-amd64 behavior: There are a couple of places
in mtrand where array indices and sizes are C long instead of npy_intp,
for example in the randint
On Tue, Jan 24, 2012 at 06:37:12PM +0100, Robin wrote:
Yes - I get exactly the same numbers in 64 bit windows with 1.6.1.
Alright, so that rules out platform specific effects.
I'll try and hunt the bug down when I have some time, if someone more
familiar with the indexing code doesn't beat me
On Tue, Jan 24, 2012 at 01:02:44PM -0500, David Warde-Farley wrote:
On Tue, Jan 24, 2012 at 06:37:12PM +0100, Robin wrote:
Yes - I get exactly the same numbers in 64 bit windows with 1.6.1.
Alright, so that rules out platform specific effects.
I'll try and hunt the bug down when I have
A colleague has run into this weird behaviour with NumPy 1.6.1, EPD 7.1-2, on
Linux (Fedora Core 14) 64-bit:
a = numpy.array(numpy.random.randint(256,size=(500,972)),dtype='uint8')
b = numpy.random.randint(500,size=(4993210,))
c = a[b]
It seems c is not getting filled in full,
, 2012 at 05:23:28AM -0500, David Warde-Farley wrote:
A colleague has run into this weird behaviour with NumPy 1.6.1, EPD 7.1-2, on
Linux (Fedora Core 14) 64-bit:
a = numpy.array(numpy.random.randint(256,size=(500,972)),dtype='uint8')
b = numpy.random.randint(500,size=(4993210,))
c
Hi Travis,
Thanks for your reply.
On Mon, Jan 23, 2012 at 01:33:42PM -0600, Travis Oliphant wrote:
Can you determine where the problem is, precisely? In other words, can you
verify that c is not getting filled in correctly?
You are no doubt going to get overflow in the summation as you
On Mon, Jan 23, 2012 at 08:38:44PM +0100, Robin wrote:
On Mon, Jan 23, 2012 at 7:55 PM, David Warde-Farley
warde...@iro.umontreal.ca wrote:
I've reproduced this (rather serious) bug myself and confirmed that it
exists
in master, and as far back as 1.4.1.
I'd really appreciate
On 2011-08-25, at 2:42 PM, Chris.Barker wrote:
On 8/24/11 9:22 AM, Anthony Scopatz wrote:
You can use Python pickling, if you do *not* have a requirement for:
I can't recall why, but it seems pickling of numpy arrays has been
fragile and not very performant.
I like the npy / npz
On 2011-08-20, at 4:01 AM, He Shiming wrote:
Hi,
I'm wondering how to do RGB to HSV conversion in numpy. I found a
couple solutions through stackoverflow, but somehow they can't be used
in my array format. I understand the concept of conversion, but I'm
not that familiar with numpy.
My
On 2011-08-18, at 10:24 PM, Robert Love wrote:
In 1.6.1 I get this error:
ValueError: setting an array element with a sequence. Is this a known
problem?
You'll have to post a traceback if we're to figure out what the problem is. A
few lines of zdum.txt would also be nice.
Suffice it to
On 2011-08-15, at 4:11 PM, Daniel Wheeler wrote:
One thing that I know I'm doing wrong is
reassigning every sub-matrix to a new array. This may not be that
costly, but it seems fairly ugly. I wasn't sure how to pass the
address of the submatrix to the lapack routines so I'm assigning to a
On 2011-03-12, at 12:43 PM, Dmitrey wrote:
hi all,
currently I use
a = array(m,n)
...
a = delete(a, indices, 0) # delete some rows
Can I somehow perform the operation in-place, without creating auxiliary
array?
If I'll use
numpy.compress(condition, a, axis=0, out=a),
or
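For what it's worth, ndarrays cannot shrink in place, and the out=a trick in the quote won't work because the compressed result has a different shape from a. The common pattern allocates exactly one new array via a boolean mask; a sketch with made-up data:

```python
import numpy as np

a = np.arange(20).reshape(5, 4)
indices = [1, 3]                 # rows to drop

keep = np.ones(len(a), dtype=bool)
keep[indices] = False
a = a[keep]                      # one new (5 - 2) x 4 array

print(a.shape)  # (3, 4)
```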
On 2011-03-12, at 9:32 PM, Charles R Harris wrote:
I'd like to change the polynomial package to only import the Classes, leaving
the large number of implementation functions to be imported directly from the
different modules if needed. I always regarded those functions as
implementation
On 2010-12-01, at 2:18 PM, Ken Basye wrote:
On a somewhat related note, is there a table someplace which shows
which versions of Python are supported in each release of Numpy? I
found an FAQ that mentioned 2.4 and 2.5, but since it didn't mention 2.6
or 2.7 (much less 3.1), I assume
On 2010-11-22, at 2:51 AM, Hagen Fürstenau wrote:
but this is bound to be inefficient as soon as the vector of
probabilities gets large, especially if you want to draw multiple samples.
Have I overlooked something or should this be added?
I think you misunderstand the point of multinomial
Hi,
I'm trying to use numpy.testing.Tester to run tests for another, numpy-based
project. It works beautifully, except for the fact that I can't seem to
silence output (i.e. NOSE_NOCAPTURE/--nocapture/-s). I've tried to call test
with extra_argv=['-s'] and also tried subclassing to muck with
On 2010-11-08, at 8:52 PM, David wrote:
Please tell us what error you got - saying that something did not
working is really not useful to help you. You need to say exactly what
fails, and which steps you followed before that failure.
I think what he means is that it's very slow, there's no
On 2010-11-06, at 7:46 PM, qihua wu wrote:
day 1,2,3 have the non-promoted sales, day 4 have the promoted sales, day
5,6,7 have the non-promoted sales, the output for day 1~7 are all non-promoted
sales. During the process, we might need to sum all the data for day 1~7, is
this what you
I'm not sure who registered/owns numpy.org, but it looks like a frame sitting
on top of numpy.scipy.org.
On 2010-10-12, at 3:50 PM, Vincent Davis wrote:
When visiting links starting at www.numpy.org the URL in the address
bar does not change. For example clicking the getting Numpy takes
you
On 2010-10-01, at 7:22 PM, Robert Kern wrote:
Also some design, documentation, format version bump, and (not least)
code away. ;-)
Would it require a format version number bump? I thought that was a .NPY thing,
and NPZs were just zipfiles containing several separate NPY containers.
David
On 2010-08-30, at 10:36 PM, Charles R Harris wrote:
I don't see what the connection with the determinant is. The log determinant
will be calculated using the ordinary LU decomposition as that works for more
general matrices.
I think he means that if he needs both the determinant and to
On 2010-08-30, at 10:19 PM, Dan Elliott wrote:
You don't think this will choke on a large (e.g. 10K x 10K) covariance
matrix?
That depends. Is it very close to being rank deficient? That would be my main
concern. NumPy/LAPACK will have no trouble Cholesky-decomposing a matrix this
big,
On 2010-08-30, at 11:28 AM, Daniel Elliott wrote:
Hello,
I am new to Python (coming from R and Matlab/Octave). I was preparing
to write my usual compute pdf of a really high dimensional (e.g. 1
dimensions) Gaussian code in Python but I noticed that numpy had a
function for computing
On 2010-08-30, at 5:42 PM, Melissa Mendonça wrote:
I'm just curious as to why you say scipy.linalg.solve(), NOT
numpy.linalg.solve(). Can you explain the reason for this?
Oh, the performance will be similar, provided you've linked against a good
BLAS.
It's just that the NumPy version
I've been using bincount() in situations where I always want the count vector
to be a certain fixed length, even if the upper bins are 0. e.g. I want the
counts of 0 through 9, and if there are no 9's in the vector I'm counting, I'd
still like it to be at least length 10. It's kind of a pain to
On 2010-08-27, at 5:15 PM, Robert Kern wrote:
How would people feel about an optional argument to support this behaviour?
I'd be happy with either a 'minlength' argument or an 'exactly this length, with
values above this range being ignored' one, but it seems like the latter might be
useful in more cases.
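This thread's suggestion did make it into later NumPy releases as bincount's minlength argument; a sketch:

```python
import numpy as np

x = np.array([0, 1, 1, 3])

short = np.bincount(x)                 # [1 2 0 1]: length set by the max value
padded = np.bincount(x, minlength=10)  # zero-padded out to length 10
```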
On 2010-08-18, at 12:36 PM, Charles R Harris wrote:
In general it is a good idea to keep the specific bits out of classes since
designing *the* universal class is hard and anyone who wants to just borrow a
bit of code will end up cursing the SOB who buried the good stuff in a class,
On 2010-08-12, at 3:59 PM, gerardob wrote:
Hello, this is a very basic question but i don't know the answer.
I would like to construct a two dimensional array A, such that A[i][j]
contains a set of numbers associated to the pair (i,j). For example, A[i][j]
can be all the numbers that are
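If the per-cell collections really need to be Python sets, an object-dtype array is the direct route; a sketch with arbitrary fill values:

```python
import numpy as np

n, m = 3, 4
A = np.empty((n, m), dtype=object)  # array of arbitrary Python objects
for i in range(n):
    for j in range(m):
        A[i, j] = {i, j, i + j}     # any set associated with the pair (i, j)

print(A[1, 2])  # {1, 2, 3}
```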
Thanks Gokhan, I'll push that tonight (along with lots of other changes). I
just got the necessary permissions the other day and haven't had much time this
week.
David
On 2010-08-06, at 3:30 PM, Gökhan Sever wrote:
Hi,
@ http://new.scipy.org/download.html numpy and scipy links for Fedora
On 2010-08-12, at 5:54 PM, gerardob wrote:
As i wrote, the elements of each A[i][j] are sets and not numbers.
Ah, missed that part.
Assuming you don't need/want these sets to be mutable or differently sized, you
could do something like this.
Here we make 'n' an array of the numbers 0 to 49,
On 2010-08-04, at 2:18 AM, Matthieu Brucher wrote:
2010/8/4 Søren Gammelmark gammelm...@phys.au.dk:
I wouldn't know for sure, but could this be related to changes to the
gcc compiler in Fedora 13 (with respect to implicit DSO linking) or
would that only be an issue at build-time?
On 2010-08-03, at 4:09 PM, Pauli Virtanen wrote:
Tue, 03 Aug 2010 15:52:55 -0400, David Warde-Farley wrote:
[clip]
in PyErr_WarnEx (category=0x11eb6c54,
text=0x5f90c0 "PyOS_ascii_strtod and PyOS_ascii_atof are deprecated.
Use PyOS_string_to_double instead.", stack_level=0) at
Python
On 2010-08-01, at 12:38 PM, Ralf Gommers wrote:
I am pleased to announce the availability of the first beta of NumPy 1.5.0.
This will be the first NumPy release to include support for Python 3, as well
as for Python 2.7. Please try this beta and report any problems on the NumPy
mailing
On 2010-08-05, at 4:53 PM, Søren Gammelmark wrote:
It seems to me that you are using a libiomp5 for Intel Itanium
(lib/intel64) or some such, but an MKL for EM64T processors (lib/em64t). In
my case I used EM64T in all cases (I'm running AMD Opteron). I don't
think the two types of libraries
This was using the Intel MKL/icc to compile NumPy and also icc to compile
Python on a shared cluster, but I don't think that's relevant given where the
segmentation fault occurs...
gpc-f104n084-$ gdb python
GNU gdb Fedora (6.8-27.el5)
Copyright (C) 2008 Free Software Foundation, Inc.
License
On 2010-07-20, at 10:16 PM, Skipper Seabold wrote:
Out of curiosity, is there an explicit way to check if these share memory?
You could do the exact calculations (I think) but this isn't actually
implemented in NumPy, though np.may_share_memory is a conservative test for it
that will err on
On 2010-07-15, at 12:38 PM, Emmanuel Bengio beng...@gmail.com wrote:
Hello,
I have a list of 4x4 transformation matrices, that I want to dot with
another list of the same size (elementwise).
Making a for loop that calculates the dot product of each is extremely slow,
I thought that maybe
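One loop-free way to batch those products is einsum (on newer NumPy, a @ b does the same thing):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((100, 4, 4))  # a "list" of 4x4 matrices
b = rng.standard_normal((100, 4, 4))

c = np.einsum('nij,njk->nik', a, b)   # per-matrix dot products, no Python loop
```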
On 2010-07-15, at 4:31 PM, David Warde-Farley wrote:
If you need/want more speed than the solution Chuck proposed, you should
check out Cython and Tokyo. Cython lets you write loops that execute at C
speed, whereas Tokyo provides a Cython level wrapper for BLAS (no need to go
through
(CCing NumPy-discussion where this really belongs)
On 2010-07-08, at 1:34 PM, cfra...@uci.edu wrote:
Need Complex numbers in the saved file.
Ack, this has come up several times according to list archives and no one's
been able to provide a real answer.
It seems that there is nearly no
Pauli Virtanen wrote:
Wed, 28 Apr 2010 14:12:07 -0400, Alan G Isaac wrote:
[clip]
Here is a related ticket that proposes a more explicit alternative:
adding a ``dot`` method to ndarray.
http://projects.scipy.org/numpy/ticket/1456
I kind of like this idea. Simple, obvious, and
Trying to debug code written by an undergrad working for a colleague of
mine who ported code over from MATLAB, I am seeing an ugly melange of
matrix objects and ndarrays that are interacting poorly with each other
and various functions in SciPy/other libraries. In particular there was
a custom
On 2010-04-28, at 12:05 PM, Travis Oliphant wrote:
a(b) is equivalent to dot(a,b)
a(b)(c) would be equivalent to dot(dot(a,b),c)
This seems rather reasonable.
Indeed, and it leads to a rather pleasant way of permuting syntax to change the
order of operations, i.e. a(b(c)) vs. a(b)(c).
On 2010-04-28, at 2:30 PM, Alan G Isaac wrote:
Please let us not have this discussion all over again.
Agreed. See my preface to this discussion.
My main objection is that it's not easy to explain to a newcomer what the
difference precisely is, how they interact, why two of them exist, how
Adrien Guillon wrote:
Thank you for your questions... I'll answer them now.
The motivation behind using Python and NumPy is to be able to double
check that the numerical algorithms work okay in an
engineer/scientist friendly language. We're basically prototyping a
bunch of algorithms in
On 31-Mar-10, at 6:15 PM, T J wrote:
In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
Out[1]: -1.5849625007211561
In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
Out[2]: nan
In [3]: np.log2(np.exp2(-0.5849625007211563) +
np.exp2(-53.584962500721154))
Hey Dag,
On 30-Mar-10, at 5:02 AM, Dag Sverre Seljebotn wrote:
Well, you can pass -fdefault-real-8 and then write .pyf headers where
real(8) is always given explicitly.
Actually I've gotten it to work this way, with real(8) in the wrappers.
BUT... for some reason it requires me to set the
On 30-Mar-10, at 2:14 PM, David Warde-Farley wrote:
Hey Dag,
On 30-Mar-10, at 5:02 AM, Dag Sverre Seljebotn wrote:
Well, you can pass -fdefault-real-8 and then write .pyf headers where
real(8) is always given explicitly.
Actually I've gotten it to work this way, with real(8
On 30-Mar-10, at 5:02 AM, Dag Sverre Seljebotn wrote:
Well, you can pass -fdefault-real-8 and then write .pyf headers where
real(8) is always given explicitly.
Okay, the answer (without setting the F77 environment variable) is
basically to expect real-8's in the .pyf file and compile the
Hi,
In my setup.py, I have
from numpy.distutils.misc_util import Configuration
fflags= '-fdefault-real-8 -ffixed-form'
config = Configuration(
'foo',
parent_package=None,
top_path=None,
f2py_options='--f77flags=\'%s\' --f90flags=\'%s\'' % (fflags,
fflags)
)
However I am
On 26-Mar-10, at 8:08 AM, Kevin Jacobs wrote:
On Thu, Mar 25, 2010 at 6:25 PM, David Warde-Farley d...@cs.toronto.edu
wrote:
I decided to give wrapping this code a try:
http://morrislab.med.utoronto.ca/~dwf/GLMnet.f90
I have a working f2py wrapper located at:
http
On 26-Mar-10, at 4:25 PM, David Warde-Farley wrote:
That said, I gave that wrapper a whirl and it crashed on me...
I noticed you added an 'njd' argument to the wrapper for elnet, did
you modify the elnet Fortran function at all? Is it fine to have
arguments in the wrapped version that don't
I decided to give wrapping this code a try:
http://morrislab.med.utoronto.ca/~dwf/GLMnet.f90
I'm afraid my Fortran skills are fairly limited, but I do know that
gfortran compiles it fine. f2py run on this file produces lots of
errors of the form,
Reading fortran codes...
On 23-Mar-10, at 5:04 PM, Reckoner wrote:
I don't know
what the '|' in this notation means. I can't find it in the
documentation.
This should be easy. Little help?
A '<' or '>' in this position means big or little endianness. Strings
don't have endianness, hence '|'.
David
On 19-Mar-10, at 1:13 PM, Anne Archibald wrote:
I'm not knocking numpy; it does (almost) the best it can. (I'm not
sure of the optimality of the order in which ufuncs are executed; I
think some optimizations there are possible.) But a language designed
from scratch for vector calculations
On 3-Mar-10, at 4:56 PM, Robert Kern wrote:
Other types have a sensible default determined by the platform.
Yes, and the 'S0' type isn't terribly sensible, if only because of
this issue:
http://projects.scipy.org/numpy/ticket/1239
David
On 2-Mar-10, at 7:23 PM, James Bergstra wrote:
Sorry... again... how do I make such a scalar... *in C* ? What would
be the recommended C equivalent of this python code? Are there C
type-checking functions for instances of these objects? Are there C
functions for converting to and from C
On 26-Feb-10, at 8:12 AM, Ernest Adrogué wrote:
Thanks for the tip. I didn't know that...
Also, frompyfunc appears to crash python when the last argument is 0:
In [9]: func=np.frompyfunc(lambda x: x, 1, 0)
In [10]: func(np.arange(5))
Violació de segment ("segmentation fault")
This with Python 2.5.5, Numpy
On Thu, Feb 25, 2010 at 11:56:34PM -0800, Nathaniel Smith wrote:
So there's this patch I submitted:
http://projects.scipy.org/numpy/ticket/1402
Obviously not that high a priority in the grand scheme of things (it
adds a function to compute the log-determinant directly), but I don't
want to
On Fri, Feb 26, 2010 at 11:26:28AM +0100, Gael Varoquaux wrote:
I was more thinking of a 'return_sign=False' keyword argument.
My thoughts exactly.
David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
On 26-Feb-10, at 7:43 PM, Charles سمير Doutriaux wrote:
Any idea on how to build a pure 32bit numpy on snow leopard?
If I'm not mistaken you'll probably want to build against the
Python.org Python rather than the wacky version that comes installed
on the system. The Python.org installer is
Hey James,
On 25-Feb-10, at 5:59 PM, James Bergstra wrote:
In case this hasn't been solved in more recent numpy...
I've tried the following lines on two installations of numpy 1.3
with python 2.6
numpy.random.binomial(n=numpy.asarray([2,3,4], dtype='int64'),
p=numpy.asarray([.1, .2,
On 25-Feb-10, at 5:59 PM, James Bergstra wrote:
In case this hasn't been solved in more recent numpy...
I've tried the following lines on two installations of numpy 1.3
with python 2.6
numpy.random.binomial(n=numpy.asarray([2,3,4], dtype='int64'),
p=numpy.asarray([.1, .2, .3],
On Sun, Feb 21, 2010 at 04:06:09PM -0600, Robert Kern wrote:
I spent some time on Friday getting Plurk's Solace tweaked for our use
(for various reasons, it's much better code to deal with than the
CNPROG software currently running advice.mechanicalkern.com).
On 20-Feb-10, at 2:48 PM, Matthew Brett wrote:
Hi,
I just noticed this:
In [2]: np.isfinite(np.inf)
Warning: invalid value encountered in isfinite
Out[2]: False
Maybe it would be worth not raising the warning, in the interests of
tidiness?
I think these warnings somehow got turned on
Hi,
I'm pretty sure this is unintentional but I tried easy_install numpy
the other day and it pulled down a 1.4 tarball from PyPI.
David
Hi everyone,
Does anyone know if there is an implementation of rank 1 updates (and
downdates) to a Cholesky factorization in NumPy or SciPy? It looks
there are a bunch of routines for it in LINPACK, but not LAPACK.
Thanks,
David
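As far as I know neither package exposed one at the time; the LINPACK-style rank-1 update is short enough to sketch in pure NumPy. This chol_update is a hypothetical helper for illustration, not a library API:

```python
import numpy as np

def chol_update(L, x):
    # Rank-1 Cholesky update: given lower-triangular L with A = L @ L.T,
    # return L' such that L' @ L'.T = A + x x^T, in O(n^2) using the
    # classic LINPACK-style rotation sweep (cf. cholupdate).
    L = L.copy()
    x = x.astype(float).copy()
    n = len(x)
    for k in range(n):
        r = np.hypot(L[k, k], x[k])       # rotated diagonal entry
        c = r / L[k, k]
        s = x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```

The downdate (A - x x^T) follows the same sweep with the signs flipped, but needs care to stay positive definite.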
On 9-Feb-10, at 5:02 PM, Robert Kern wrote:
Examples? Pointers? Shoves toward the correct sections of the docs?
numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter')
Huh. All these years, how have I missed this?
Yet another demonstration of why I never skip over a Kern posting
On 4-Feb-10, at 2:18 PM, Gerardo Gutierrez wrote:
I'm working with audio signals with wavelet analisys, and I want to
know if
someone has work with some audio capture (with the mic and through a
file)
library so that I can get the time-series...
Also I need to play the transformed
on logsumexp, you might be
interested in the version I posted here in October:
http://mail.scipy.org/pipermail/scipy-user/2009-October/022931.html
and the attached tests.
Warren
David Warde-Farley wrote:
I decided to take a crack at adding a generalized ufunc for
logsumexp, i.e
I decided to take a crack at adding a generalized ufunc for logsumexp,
i.e. collapsed an array along the last dimension by subtracting the
maximum element E along that dimension, taking the exponential,
adding, and then adding back E. Functionally the same
logaddexp.reduce() but presumably
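The reduction described above, as a plain-NumPy sketch; the actual post proposed a generalized ufunc in C, but the math is the same max-shift trick:

```python
import numpy as np

def logsumexp(x, axis=-1):
    # log(sum(exp(x))) computed stably: subtract the max E along the
    # axis, exponentiate, sum, take the log, then add E back.
    e = np.max(x, axis=axis, keepdims=True)
    out = np.log(np.sum(np.exp(x - e), axis=axis, keepdims=True)) + e
    return np.squeeze(out, axis=axis)

x = np.array([1000.0, 1000.0])
print(logsumexp(x))  # ~1000.69, where a naive log(sum(exp(x))) overflows
```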
On 7-Jan-10, at 6:58 PM, Xue (Sue) Yang wrote:
Do I need any specifications when I run numpy with intel MKL (MKL9.1)?
numpy developers would be able to answer this question?
Are you sure you've compiled against MKL properly? What is printed by
numpy.show_config()?
David
On 7-Jan-10, at 8:13 PM, Xue (Sue) Yang wrote:
This is what I had (when I built numpy, I chose gnu compilers
instead of
intel compilers),
numpy.show_config()
lapack_opt_info:
libraries = ['mkl_lapack', 'mkl', 'vml', 'guide', 'pthread']
library_dirs =
On 5-Jan-10, at 7:18 PM, Christopher Barker wrote:
If distutils/setuptools could identify the python version properly,
then
binary eggs and easy-install could be a solution -- but that's a
mess,
too.
Long live toydist! :)
David
On 5-Jan-10, at 7:02 PM, Christopher Barker wrote:
Pretty sure the python.org binaries are 32-bit only. I still think
it's sensible to prefer the
(waiting for the rest of this sentence... ;-)
I had meant to say 'sensible to prefer the Python.org version' though
in reality I'm a little miffed
On 5-Jan-10, at 6:01 PM, Christopher Barker wrote:
The python.org python is the best one to support -- Apple has never
upgraded a python, has often shipped a broken version, and has
provided
different versions with each OS-X version. If we support the
python.org
python for OS-X 10.4,