Maybe they should have written their code with **kwargs that consumes all
keyword arguments rather than assuming that no keyword arguments would be
added? The problem with this approach in general is that it makes writing
code unnecessarily convoluted.
On Tue, May 5, 2015 at 1:55 PM, Nathaniel
is for that conversion to be automated. I'm
still evaluating how to best achieve that.
On Tue, Apr 28, 2015 at 6:08 AM, Francesc Alted fal...@gmail.com wrote:
2015-04-28 4:59 GMT+02:00 Neil Girdhar mistersh...@gmail.com:
I don't think I'm asking for so much. Somewhere inside numexpr it builds
I've always wondered why numexpr accepts strings rather than looking at a
function's source code, using ast to parse it, and then transforming the
AST. I just looked at another project, pyautodiff, which does that. And I
think numba does that for llvm code generation. Wouldn't it be nicer to
just
Also, FYI: http://numba.pydata.org/numba-doc/0.6/doc/modules/transforms.html
It appears that numba does get the AST, similar to pyautodiff, and only gets
the AST from source code as a fallback?
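For concreteness, the AST approach starts from something like this (a minimal sketch; a real decorator would use inspect.getsource(func) to recover the source string, which is inlined here to keep it self-contained):

```python
import ast

# A decorator would call inspect.getsource(func) to obtain this string;
# it is inlined here so the sketch runs on its own.
src = """
def f(x):
    return x * 2 + 1
"""

tree = ast.parse(src)
fn = tree.body[0]
print(fn.name)                    # f
print(type(fn.body[0]).__name__)  # Return
```

From here a tool like pyautodiff can walk or transform the tree before generating code.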
On Mon, Apr 27, 2015 at 7:23 PM, Neil Girdhar mistersh...@gmail.com wrote:
I was told that numba did
On Mon, Apr 27, 2015 at 7:42 PM, Nathaniel Smith n...@pobox.com wrote:
On Mon, Apr 27, 2015 at 4:23 PM, Neil Girdhar mistersh...@gmail.com
wrote:
I was told that numba did similar ast parsing, but maybe that's not true.
Regarding the ast, I don't know about reliability, but take a look
a
usability standpoint, I do think that's better than feeding in strings,
which:
* are not syntax highlighted, and
* require porting code from regular numpy expressions to numexpr strings
(applying a decorator is so much easier).
Best,
Neil
On Mon, Apr 27, 2015 at 7:14 PM, Nathaniel Smith n
Wow, cool! Are there any users of this package?
On Mon, Apr 27, 2015 at 9:07 PM, Alexander Belopolsky ndar...@mac.com
wrote:
On Mon, Apr 27, 2015 at 7:14 PM, Nathaniel Smith n...@pobox.com wrote:
There's no way to access the ast reliably at runtime in python -- it gets
thrown away during
that
I like with my code. For my purpose, this would have been the more ideal
design.
On Mon, Apr 27, 2015 at 10:47 PM, Nathaniel Smith n...@pobox.com wrote:
On Apr 27, 2015 5:30 PM, Neil Girdhar mistersh...@gmail.com wrote:
On Mon, Apr 27, 2015 at 7:42 PM, Nathaniel Smith n...@pobox.com
On Fri, Apr 17, 2015 at 10:47 AM, josef.p...@gmail.com wrote:
On Fri, Apr 17, 2015 at 10:07 AM, Sebastian Berg
sebast...@sipsolutions.net wrote:
On Do, 2015-04-16 at 15:28 -0700, Matthew Brett wrote:
Hi,
snip
So, how about a slight modification of your proposal?
1) Raise
On Fri, Apr 17, 2015 at 12:09 PM, josef.p...@gmail.com wrote:
On Fri, Apr 17, 2015 at 11:22 AM, Neil Girdhar mistersh...@gmail.com
wrote:
On Fri, Apr 17, 2015 at 10:47 AM, josef.p...@gmail.com wrote:
On Fri, Apr 17, 2015 at 10:07 AM, Sebastian Berg
sebast...@sipsolutions.net wrote
On Fri, Apr 17, 2015 at 12:09 PM, josef.p...@gmail.com wrote:
On Fri, Apr 17, 2015 at 11:22 AM, Neil Girdhar mistersh...@gmail.com
wrote:
On Fri, Apr 17, 2015 at 10:47 AM, josef.p...@gmail.com wrote:
On Fri, Apr 17, 2015 at 10:07 AM, Sebastian Berg
sebast...@sipsolutions.net wrote
This relationship between outer and dot only holds for vectors. For
tensors, and other kinds of vector spaces, I'm not sure if outer products
and dot products have anything to do with each other.
On Fri, Apr 17, 2015 at 11:11 AM, josef.p...@gmail.com wrote:
On Fri, Apr 17, 2015 at 10:59 AM,
...@pobox.com wrote:
On Wed, Apr 15, 2015 at 6:08 PM, josef.p...@gmail.com wrote:
On Wed, Apr 15, 2015 at 5:31 PM, Neil Girdhar mistersh...@gmail.com
wrote:
Does it work for you to set
outer = np.multiply.outer
?
It's actually faster on my machine.
I assume it does because np.corrcoef
Right.
On Thu, Apr 16, 2015 at 6:44 PM, Nathaniel Smith n...@pobox.com wrote:
On Thu, Apr 16, 2015 at 6:37 PM, Neil Girdhar mistersh...@gmail.com
wrote:
I can always put np.outer = np.multiply.outer at the start of my code to
get
what I want. Or could that break things?
Please don't do
That sounds good to me.
I can always put np.outer = np.multiply.outer at the start of my code to
get what I want. Or could that break things?
On Thu, Apr 16, 2015 at 6:28 PM, Matthew Brett matthew.br...@gmail.com
wrote:
Hi,
On Thu, Apr 16, 2015 at 3:19 PM, Neil Girdhar mistersh...@gmail.com
On Thu, Apr 16, 2015 at 6:32 PM, Nathaniel Smith n...@pobox.com wrote:
On Thu, Apr 16, 2015 at 6:19 PM, Neil Girdhar mistersh...@gmail.com
wrote:
Actually, looking at the docs, numpy.outer is *only* defined for 1-d
vectors. Should anyone who used it with multi-dimensional arrays have
Actually, looking at the docs, numpy.outer is *only* defined for 1-d
vectors. Should anyone who used it with multi-dimensional arrays have an
expectation that it will keep working in the same way?
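The docs do say that: np.outer flattens ("ravels") anything that isn't 1-d, whereas np.multiply.outer keeps the full input shapes. A quick illustration:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(4).reshape(2, 2)

# np.outer ravels its inputs, so the result is always 2-D:
print(np.outer(a, b).shape)           # (6, 4)

# np.multiply.outer keeps the input shapes, giving a 4-D result here:
print(np.multiply.outer(a, b).shape)  # (2, 3, 2, 2)

# For 1-D inputs the two agree:
x, y = np.arange(3), np.arange(4)
assert np.array_equal(np.outer(x, y), np.multiply.outer(x, y))
```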
On Thu, Apr 16, 2015 at 10:53 AM, Neil Girdhar mistersh...@gmail.com
wrote:
Would it be possible
=100 bins. I don't think it does O(n) computations per point. I
think it's more like O(log(n)).
Best,
Neil
On Wed, Apr 15, 2015 at 10:02 AM, Jaime Fernández del Río
jaime.f...@gmail.com wrote:
On Wed, Apr 15, 2015 at 4:36 AM, Neil Girdhar mistersh...@gmail.com
wrote:
Yeah, I'm not arguing
I don't understand. Are you at pycon by any chance?
On Wed, Apr 15, 2015 at 6:12 PM, josef.p...@gmail.com wrote:
On Wed, Apr 15, 2015 at 6:08 PM, josef.p...@gmail.com wrote:
On Wed, Apr 15, 2015 at 5:31 PM, Neil Girdhar mistersh...@gmail.com
wrote:
Does it work for you to set
outer
a
cache miss standpoint, I think p2 is better? Anyway, it might be worth
writing some code to verify any performance advantages? Not sure if it should
be in numpy or not since it really should accept an iterable rather than a
numpy vector, right?
Best,
Neil
On Wed, Apr 15, 2015 at 12:40 PM, Jaime
Does it work for you to set
outer = np.multiply.outer
?
It's actually faster on my machine.
On Wed, Apr 15, 2015 at 5:29 PM, josef.p...@gmail.com wrote:
On Wed, Apr 15, 2015 at 7:35 AM, Neil Girdhar mistersh...@gmail.com
wrote:
Yes, I totally agree. If I get started on the PR
, yes.
On Apr 14, 2015 9:17 PM, Neil Girdhar mistersh...@gmail.com wrote:
Ok, I didn't know that. Are you at pycon by any chance?
On Tue, Apr 14, 2015 at 7:16 PM, Nathaniel Smith
n...@pobox.com wrote:
On Tue, Apr 14, 2015 at 3:48 PM, Neil
Yeah, I'm not arguing, I'm just curious about your reasoning. That
explains why not C++. Why would you want to do this in C and not Python?
On Wed, Apr 15, 2015 at 1:48 AM, Jaime Fernández del Río
jaime.f...@gmail.com wrote:
On Tue, Apr 14, 2015 at 6:16 PM, Neil Girdhar mistersh...@gmail.com
run took 25.59 times longer than the fastest. This could mean
that an intermediate result is being cached
100 loops, best of 3: 834 ns per loop
On Tue, Apr 14, 2015 at 7:42 AM, Neil Girdhar mistersh...@gmail.com wrote:
Okay, but by the same token, why do we have cumsum? Isn't it identical
Ok, I didn't know that. Are you at pycon by any chance?
On Tue, Apr 14, 2015 at 7:16 PM, Nathaniel Smith n...@pobox.com wrote:
On Tue, Apr 14, 2015 at 3:48 PM, Neil Girdhar mistersh...@gmail.com
wrote:
Yes, I totally agree with you regarding np.sum and np.product, which is
why
I didn't
:
On Mon, Apr 13, 2015 at 8:02 AM, Neil Girdhar mistersh...@gmail.com
wrote:
Can I suggest that we instead add the P-square algorithm for the dynamic
calculation of histograms?
(
http://pierrechainais.ec-lille.fr/Centrale/Option_DAD/IMPACT_files/Dynamic%20quantiles%20calcultation%20-%20P2
, Jaime Fernández del Río
jaime.f...@gmail.com wrote:
On Tue, Apr 14, 2015 at 4:12 PM, Nathaniel Smith n...@pobox.com wrote:
On Mon, Apr 13, 2015 at 8:02 AM, Neil Girdhar mistersh...@gmail.com
wrote:
Can I suggest that we instead add the P-square algorithm for the
dynamic
calculation
.
Similarly, cumprod is just np.multiply.accumulate.
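That equivalence is easy to check:

```python
import numpy as np

x = np.array([1, 2, 3, 4])

# cumsum is add.accumulate, cumprod is multiply.accumulate:
assert np.array_equal(np.cumsum(x), np.add.accumulate(x))        # [1 3 6 10]
assert np.array_equal(np.cumprod(x), np.multiply.accumulate(x))  # [1 2 6 24]
```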
Best,
Neil
On Sat, Apr 11, 2015 at 12:49 PM, Nathaniel Smith n...@pobox.com wrote:
Documentation and a call to warnings.warn(DeprecationWarning(...)), I
guess.
On Sat, Apr 11, 2015 at 12:39 PM, Neil Girdhar mistersh...@gmail.com
wrote:
I
Yes, I totally agree with you regarding np.sum and np.product, which is why
I didn't suggest np.add.reduce, np.multiply.reduce. I wasn't sure whether
cumsum and cumprod might be on the line in your judgment.
Best,
Neil
On Tue, Apr 14, 2015 at 3:37 PM, Nathaniel Smith n...@pobox.com wrote
is the
resolution of the bins throughout the domain.
Best,
Neil
On Sun, Apr 12, 2015 at 4:02 AM, Ralf Gommers ralf.gomm...@gmail.com
wrote:
On Sun, Apr 12, 2015 at 9:45 AM, Jaime Fernández del Río
jaime.f...@gmail.com wrote:
On Sun, Apr 12, 2015 at 12:19 AM, Varun nayy...@gmail.com wrote
Yes, you're right. Although in practice, people almost always want
adaptive bins.
On Tue, Apr 14, 2015 at 5:08 PM, Chris Barker chris.bar...@noaa.gov wrote:
On Mon, Apr 13, 2015 at 5:02 AM, Neil Girdhar mistersh...@gmail.com
wrote:
Can I suggest that we instead add the P-square algorithm
Hello,
Is this desired behaviour or a regression or a bug?
http://stackoverflow.com/questions/26497656/how-do-i-align-a-numpy-record-array-recarray
Thanks,
Neil
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org
Hi,
We came across this bug while using np.cross on 3D arrays of 2D vectors.
What version of numpy are you using? This should already be solved in numpy
master, and be part of the 1.9 release. Here's the relevant commit,
although the code has been cleaned up a bit in later ones:
Hi,
We came across this bug while using np.cross on 3D arrays of 2D vectors.
The first example shows the problem and we looked at the source for np.cross
and believe we found the bug - an unnecessary swapaxes when returning the
output (comment inserted in the code).
Thanks
Neil
# Example
, numpy.printoptions,
etc. could expose the dictionary directly. This would make the get methods
redundant.
Best,
Neil
How do I test a patch that I've made locally? I can't seem to import numpy
locally:
Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and
relaunch
your python interpreter from there.
Ah, sorry, didn't see that I can do that from runtests!! Thanks!!
On Sun, Oct 27, 2013 at 7:13 PM, Neil Girdhar mistersh...@gmail.com wrote:
Since I am trying to add a printoptions context manager, I would like to
test it. Should I add tests, or can I somehow use it from an ipython shell
Since I am trying to add a printoptions context manager, I would like to
test it. Should I add tests, or can I somehow use it from an ipython shell?
On Sun, Oct 27, 2013 at 7:12 PM, Charles R Harris charlesr.har...@gmail.com
wrote:
On Sun, Oct 27, 2013 at 4:59 PM, Neil Girdhar mistersh
This is my first code review request, so I may have done some things wrong.
I think the following URL should work?
https://github.com/MisterSheik/numpy/compare
Best,
Neil
, Charles R Harris charlesr.har...@gmail.com
wrote:
On Sun, Oct 27, 2013 at 7:23 PM, Neil Girdhar mistersh...@gmail.com wrote:
This is my first code review request, so I may have done some things
wrong. I think the following URL should work?
https://github.com/MisterSheik/numpy/compare
Is this what I want? https://github.com/numpy/numpy/pull/3987
On Sun, Oct 27, 2013 at 9:42 PM, Neil Girdhar mistersh...@gmail.com wrote:
Yeah, I realized that I missed that and figured it wouldn't matter since
it was my own master and I don't plan on making other changes to numpy. If
you
between() - e.g.
https://bitbucket.org/nhmc/pyserpens/src/4e2cc9b656ae/utilities.py#cl-88
Then you can use
between(a, 4, 8)
instead of
(4 < a) & (a < 8),
which I find less readable and more difficult to type.
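A minimal version of such a helper might look like this (a sketch, not the linked implementation; whether the endpoints are inclusive is a design choice):

```python
import numpy as np

def between(a, vmin, vmax):
    """Boolean mask selecting vmin < a < vmax (exclusive at both ends)."""
    a = np.asarray(a)
    return (vmin < a) & (a < vmax)

a = np.array([2, 5, 7, 9])
print(between(a, 4, 8))  # [False  True  True False]
```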
Neil
/reference/generated/numpy.where.html)
I ask because people often post to the list needing in1d() after not being
able to find it via the docs, so it would be nice to add references in
the places people go looking for it.
Neil
Hi,
If someone with commit access has the chance, could they take a
look at ticket 1603:
http://projects.scipy.org/numpy/ticket/1603
and apply it if it looks ok? It speeds up in1d(a, b) a lot for
the common use case where len(b) << len(a).
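For reference, the call being optimized looks like this (np.in1d in the NumPy of this era; later releases renamed it np.isin, which is used here so the sketch runs on current versions):

```python
import numpy as np

a = np.array([0, 1, 2, 5, 0])
b = np.array([0, 2])   # the small "needle" array

# True wherever an element of a occurs in b:
mask = np.isin(a, b)
print(mask)      # [ True False  True False  True]
print(a[mask])   # [0 2 0]
```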
Thanks,
Neil
/lib/python3.2/gzip.py, line 101, in
__getattr__
return getattr(name, self.file)
TypeError: getattr(): attribute name must be string
This was filed and fixed during the python bug weekend
(http://bugs.python.org/issue10465), so it shouldn't be a problem with
a current 3.2 checkout.
--
Neil
Hi,
I've been looking around and couldn't spot anything on this. Quite often I want to
read a homogeneous block of data from within a file. The skiprows option is
great for missing out the section before the data starts, but if there is
anything below then loadtxt will choke. I wondered if there
oops, I meant to save my post but I sent it instead - doh!
In the end, the question was: is it worth adding start= and stop= markers into
loadtxt to allow grabbing sections of a file between two known headers? I
imagine it's something that people come up against regularly.
Thanks,
Neil
with commit access could take a look and apply it if ok, that
would be great.
Thanks,
Neil
always faster than kern_in().
Neil
to write:
data.ax_day.mean(axis=0)
data.ax_hour.mean(axis=0)
Thanks, that's a really nice description. Instead of
data.ax_day.mean(axis=0)
I think it would be clearer to do something like
data.mean(axis='day')
but I see the motivation.
Neil
Rob Speer rspeer at MIT.EDU writes:
It's not just about the rows: a 2-D datarray can also index by
columns, an operation that has no equivalent in a 1-D array of records
like your example.
rec['305'] effectively indexes by column. This is one of the main attractions of
structured/record arrays.
axes and indices are especially useful, for the peanut gallery's
benefit?
Cheers, Neil
tuple?
+1. More than once I've wanted exactly such a function.
I also think this would be useful. For what it's worth, IDL also has a function
called minmax() that does this (e.g.
http://astro.uni-tuebingen.de/software/idl/astrolib/misc/minmax.html)
Neil
to change something in pyfits to avoid this?
Neil
to get around it :) But thanks for the
suggestion, I'll use that in future when I need to switch between chararrays
and
ndarrays.
Neil
: I haven't looked at the kdtree code yet, that might be a better
approach.
Neil
here:
http://projects.scipy.org/numpy/browser/branches/datetime/numpy/lib/arraysetops.
py
to use in your own modules.
Cheers, Neil
the first
paragraph of:
http://www.sagemath.org/doc/numerical_sage/ctypes.html
You may have to convert the .a library to a .so library.
Neil
? A warning in the next
release, then change it in the following release?
Neil
conference that takes us up through adding datetime, Python 3 and a
possible major rewrite (that will add the indirection necessary to make
future ABI breaks unneccessary).
-Neil
On 2010-02-02 20:31 , Robert Kern wrote:
On Tue, Feb 2, 2010 at 20:23, Neil Martinsen-Burrell n...@wartburg.edu
wrote:
I don't understand Travis's comment that datetime is just a
place-holder for data.
That's not a direct quote and is a misinterpretation of what he said.
In the course
,
12236.06517635, 10221.89370909, 2414.9534157 , 13039.6113439 ,
22967.67537214, 15140.04385727, 2639.67251757, 26461.80402013,
3218.73142713, 15963.71209963, 11755.35677893, 11551.31295568,
29142.37675619])
-Neil
, and then went back to dissertation writing
for a few days. When I looked up, there were 18 answers.
I'll try getting python from python.org and/or building it all from scratch.
Thanks again,
Neil
this?
Some info:
wei...@neil-weisenfeld-macbook-pro:~
[507]$ which python
/usr/bin/python
wei...@neil-weisenfeld-macbook-pro:~
[508]$ python
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more
up as array([ 1.2, 2.1, 3.1, 4. ])
Neil
((5000,), 4)
it fails with an axis out of bounds error. I presume there's a reason
why a 0-D array gets special treatment?
In [16]: import numpy as np
In [17]: np.__version__
Out[17]: '1.4.0.dev7746'
In [18]: np.min(5000, 4)
...
ValueError: axis(=4) out of bounds
Neil
Hi,
I've written some release notes (below) describing the changes to
arraysetops.py. If someone with commit access could check that these sound ok
and add them to the release notes file, that would be great.
Cheers,
Neil
New features
Improved set operations
me know if it does or doesn't help.
[As an aside, fortranfile.py is code that I've written that isn't part
of Numpy and perhaps the right place for any discussions of it is off-list.]
-Neil
# Copyright 2008, 2009 Neil Martinsen-Burrell
#
# Permission is hereby granted, free of charge, to any
and np.max more difficult.)
I think it would be better to fix this issue. np.min(3,2) should also give
ValueError: axis(=2) out of bounds. Fixing this also removes any possibility
of generating hard-to-find errors by overwriting the builtin min/max. (Unless
there's some corner case I'm missing).
Neil
gladly accept any contributions.
-Neil
always sorts, even if it uses set. So I'm pretty sure
all(unique(A) == unique(B)) is guaranteed.
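That is, unique returns a sorted array of the distinct values, so two arrays containing the same set of elements compare equal after it:

```python
import numpy as np

A = np.array([3, 1, 2, 3, 1])
B = np.array([2, 2, 3, 1])

# unique sorts its output, so input order and repetition don't matter:
print(np.unique(A))  # [1 2 3]
assert np.array_equal(np.unique(A), np.unique(B))
```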
Neil
something:
(Pdb) np.reshape(trace, newdims)
*** ValueError: total size of new array must be unchanged
Clearly the total size of the new array *is* unchanged.
I think you meant prod(dims[1:]). A 4 x 3 sub-array has 12 elements,
not 7. (Whence the curse of dimensionality...)
-Neil
.
The one from http://r.research.att.com/tools/ is much better and is
the recommended one for SciPy.
-Neil
]]))
-Neil
the documentation in the online documentation editor
at http://docs.scipy.org/numpy/docs/numpy-docs/user/index.rst
-Neil
):
for z in range(3):
Cprime[x,y,z] = A[x,y] + B[x,z]
:
In [13]: (C == Cprime).all()
Out[13]: True
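For anyone following along, here is the whole comparison in one self-contained piece (small made-up shapes, since the original message's arrays are not shown in this excerpt):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(6.0).reshape(2, 3)

# Broadcasting version: inserting size-1 axes lines the arrays up so
# that C[x, y, z] = A[x, y] + B[x, z] with no Python loops.
C = A[:, :, None] + B[:, None, :]

# Explicit-loop version for comparison:
Cprime = np.empty((2, 3, 3))
for x in range(2):
    for y in range(3):
        for z in range(3):
            Cprime[x, y, z] = A[x, y] + B[x, z]

assert (C == Cprime).all()
```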
-Neil
should be aware of when
introducing changes. It makes sense that we will all see this balance
differently, but I think that we need to acknowledge that this is the
essential tension in removing cruft incompatibly.
-Neil
sugar for (N+1)-n where N
is the length of the list and that should work for n=0 as well:
>>> b = [1,2,3,4,5]
>>> b[:0]
[]
>>> b[:len(b)+1-0]
[1, 2, 3, 4, 5]
-Neil
-in python constant called Ellipsis. The colon
is a slice object, again a python built-in, called with None as an
argument. So, z[...,2,:] == z[Ellipsis,2,slice(None)].
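Which is easy to verify:

```python
import numpy as np

z = np.arange(24).reshape(2, 3, 4)

# '...' is the built-in Ellipsis and ':' is slice(None),
# so the two spellings index identically:
assert np.array_equal(z[..., 2, :], z[Ellipsis, 2, slice(None)])
print(z[..., 2, :].shape)  # (2, 4)
```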
-Neil
, the work of
porting Numpy and Scipy to Python 3.x hasn't been undertaken, although
it will be in time. If you have a particular situation in which you
need to upgrade, please let us know more about it, so that the NumPy
developers can target their porting efforts appropriately.
-Neil
'and' and
'or' methods (http://www.python.org/dev/peps/pep-0335/), but I don't think it
ever got enough support to be accepted.
Also, if you don't need the indices, you can just use the conditional
expression as a boolean mask:
condition = (t1 < Y[:,0]) & (Y[:,0] < t2)
Y[:,0][condition]
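With some made-up data (Y, t1 and t2 are placeholders here), that looks like:

```python
import numpy as np

Y = np.array([[0.5, 10.0], [1.5, 20.0], [2.5, 30.0], [3.5, 40.0]])
t1, t2 = 1.0, 3.0

# & combines the comparisons elementwise (Python's 'and' won't work on
# arrays); the resulting boolean mask selects the matching entries:
condition = (t1 < Y[:, 0]) & (Y[:, 0] < t2)
print(Y[:, 0][condition])  # [1.5 2.5]
```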
Neil
= np.genfromtxt(filename, names=listofname, dtype=None)
Then you just need to specify the column names, and not the dtypes (they are
inferred from the data). There are probably backwards compatibility issues, but
it would be great if dtype=None was the default for genfromtxt.
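For example (made-up data; note that recent NumPy also wants encoding= to get str rather than bytes for text columns):

```python
import numpy as np
from io import StringIO

data = StringIO("Alice 1.5 3\nBob 2.0 4\n")

# dtype=None makes genfromtxt infer each column's type from the data;
# names= supplies the field names for the resulting structured array:
arr = np.genfromtxt(data, names=['name', 'height', 'count'],
                    dtype=None, encoding='utf-8')
print(arr['name'])    # ['Alice' 'Bob']
print(arr['height'])  # [1.5 2. ]
```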
Neil
would be a natural choice, since it
can process on-disk datasets as if they were NumPy arrays (which might be
nice if you don't have all 50GB of memory).
-Neil
Robert Cimrman cimrman3 at ntc.zcu.cz writes:
Hi Neil,
This sounds good. If you don't have time to do it, I don't mind having
a go at writing
a patch to implement these changes (deprecate the existing unique1d, rename
unique1d to unique and add the set approach from the old unique
for 1.4.0 ?
I'd like to get the patch in ticket 1113
(http://projects.scipy.org/numpy/ticket/1133), or some version of it, into 1.4.
It would also be great to get all the docstrings David Goldsmith and others are
working on into the next release.
Neil
because numpy hasn't been
ported to python 3 yet ;)
Neil
np.deg2rad
np.radians
np.rad2deg
np.degrees
And maybe more I've missed.
Can we deprecate alltrue and sometrue, and either deg2rad/rad2deg, or
radians/degrees? They would be deprecated in 1.4 and presumably removed in 1.5.
Neil
, r7059; IPython 0.10.bzr.r1163 ).
Neil
.
-Neil
On 2009-06-16 16:05 , Robert wrote:
Neil Martinsen-Burrell wrote:
On 06/16/2009 02:18 PM, Robert wrote:
>>> n = 10
>>> xx = np.ones(n)
>>> yy = np.arange(n)
>>> aa = np.column_stack((xx,yy))
>>> bb = np.column_stack((xx+1,yy))
>>> aa
array([[ 1., 0.],
[ 1., 1
and unique1d? They're essentially identical for an
array input, but unique uses the builtin set() for non-array inputs and so is
around 2x faster in this case - see below. Is it worth accepting a speed
regression for unique to get rid of the function duplication? (Or can they be
combined?)
Neil
it into numpy (whatever it ends up being called).
Neil
for other people to give their opinion on any changes. I can do
this if no one else has time.
Neil
Thanks for the summary! I'm +1 on points 1, 2 and 3.
+0 for points 4 and 5 (assume_unique keyword and renaming arraysetops).
Neil
PS. I think you mean deprecate, not depreciate :)
:
http://projects.scipy.org/numpy/ticket/1036
Is there anything I can do to help get it applied?
Neil
Robert Cimrman cimrman3 at ntc.zcu.cz writes:
Re-hi!
Robert Cimrman wrote:
Hi all,
I have added to the ticket [1] a script that compares the proposed
setmember1d_nu() implementations of Neil and Kim. Comments are welcome!
[1] http://projects.scipy.org/numpy/ticket/1036
I
capabilities, so you might be able to use the fortran itself as
the mini-language. Something like
spec = fortranfile.OutputSpecification("""real(4),dimension(2,5):: ux,uy
write(11) ux,uy""")
ux, uy = fortranfile.FortranFile('uxuyp.bin').readSpec(spec)
Best of luck. Peace,
-Neil
# Copyright 2008 Neil
On 2009-05-28 12:11 , David Froger wrote:
Thank you very much :-)
Things should be cleared up now on the wiki as well. Peace,
-Neil
of weeks
ago for a program I was working on. It must come up a lot.
I ended up with a similar solution to Josef's, but it took me more than an hour
to work it out - I should have asked here first!
Neil