look before you leap.
> I had explicit check for
> myarray.base==None, which it is not when I get the ndarray from a pickle.
That is not the way to check whether an ndarray owns its data. Instead,
check a.flags['OWNDATA'].
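A minimal sketch of the difference (a view never owns its data, while a freshly allocated array does; the array names are illustrative):

```python
import numpy as np

# A freshly allocated array owns its buffer; a slice is a view into it.
a = np.arange(10)
v = a[2:5]
assert a.flags['OWNDATA']
assert not v.flags['OWNDATA']
# v.base is a, but base alone is not a reliable ownership test.
assert v.base is a
```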
--
Robert Kern
s would impact projects like ipython that does
> tab-completion support, but I know that that would drive me nuts in my basic
> tab-completion setup I have for my regular python terminal. Of course, in
> the grand scheme of things, that really isn't all that imp
4?
You need special handling for NaTs to be consistent with how we deal
with NaNs in floats.
--
Robert Kern
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
8, 9, 10])
>
> In [26]: b[slice(1)]
> Out[26]: array([1])
>
> In [27]: b[slice(4)]
> Out[27]: array([1, 2, 3, 4])
>
> In [28]: b[slice(None,4)]
> Out[28]: array([1, 2, 3, 4])
>
> so slice(4) is actually slice(None,4), so how can I retrieve exactly a[4]
'1:3,:,4') for a[1:3,:,4] etc.
> I am very close now.
[~]
|1> from numpy import index_exp
[~]
|2> index_exp[1:3,:,2:4]
(slice(1, 3, None), slice(None, None, None), slice(2, 4, None))
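For reference, np.index_exp (and the closely related np.s_) lets you build such index tuples once and apply them later; a small sketch:

```python
import numpy as np

# np.index_exp captures slicing syntax as a tuple of slice objects
# that can be stored, inspected, or applied later.
ix = np.index_exp[1:3, :, 2:4]
assert ix == (slice(1, 3, None), slice(None, None, None), slice(2, 4, None))

a = np.arange(30).reshape(2, 3, 5)
# Applying the stored tuple is the same as writing a[1:3, :, 2:4].
assert (a[ix] == a[1:3, :, 2:4]).all()
```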
--
Robert Kern
m?
"python setup.py bdist_egg" should never work, but "python setupegg.py
bdist_egg" should.
--
Robert Kern
>
> I would prefer not to use: from xxx import *,
>
> because of the name pollution.
>
> The name convention that I copied above facilitates avoiding the pollution.
>
> In the same spirit, I've used:
> import pylab as plb
But in that same spirit, using np and plt separately is preferred.
--
Robert Kern
;> Values from which to choose. `x` and `y` need to have the same
>> shape as `condition`
>>
>> In the example you gave, x was a scalar.
>
> net.max() returns an array:
>
> >>> print type(net.max())
>
format standard was that
it would accept what numpy spits out for the descr, not that it would
accept absolutely anything that numpy.dtype() can consume, even
deprecated aliases (though I will admit that that is almost what the
NEP says). In particular, endianness really should be included or else
y
On Thu, Aug 2, 2012 at 11:41 PM, Geoffrey Irving wrote:
> On Thu, Aug 2, 2012 at 1:26 PM, Robert Kern wrote:
>> On Thu, Aug 2, 2012 at 8:46 PM, Geoffrey Irving wrote:
>>> Hello,
>>>
>>> The attached .npy file was written from custom C++ code. It loads
>
matics Institute
> >> University of Warwick
> >> Coventry
> >> West Midlands
> >> CV4 7AL
> >> United Kingdom
On Wed, Aug 8, 2012 at 10:34 AM, David Cournapeau wrote:
> On Wed, Aug 8, 2012 at 12:55 AM, Nathaniel Smith wrote:
>> On Mon, Aug 6, 2012 at 8:31 PM, Robert Kern wrote:
>>> Those are not the original Fortran sources. The original Fortran sources are
>>> in the public
How do I do that?
C, dummy = numpy.broadcast_arrays(A[:,newaxis,:], numpy.empty([1,state,1]))
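Filled out into a runnable sketch (the shapes and the `state` value here are illustrative, not from the original question):

```python
import numpy as np

# broadcast_arrays against a dummy array expands A along a new middle axis.
state = 4
A = np.arange(6).reshape(2, 3)
C, dummy = np.broadcast_arrays(A[:, np.newaxis, :], np.empty([1, state, 1]))
assert C.shape == (2, 4, 3)
```

Modern NumPy also offers np.broadcast_to(A[:, np.newaxis, :], (2, state, 3)) for the same effect without the dummy array.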
--
Robert Kern
ls, so you may want to do some digging of your own.
> 2) Is there a better way to build Cython files than this weird
> monkey-patching thing they propose? (It's still better than the horror
> that setuptools/distribute require, but I guess I have higher
> expectations...)
Sadly,
link flags for
the language, often because they are more variable with respect to
specific compiler versions.
--
Robert Kern
it, but it
>> did just bite me and a student a few times.
>
> The trail leads to here:
> http://projects.scipy.org/numpy/attachment/ticket/36/numpy-6-norm-change-
> default.diff
>
> Seems like the chances of learning the reason why this change was done
> are pretty sli
cumulate(reset_idx)
cumsum = np.cumsum(x)
cumsum = cumsum - cumsum[reset_idx]
return cumsum
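The snippet above is cut off at the start; a self-contained sketch of the same idea, a cumulative sum that restarts at marked positions (the array values and reset positions are illustrative), might look like:

```python
import numpy as np

# Cumulative sum that starts a new segment at each True in `reset`.
x = np.array([1, 2, 3, 4, 5, 6])
reset = np.array([False, False, True, False, False, True])

cs = np.cumsum(x)
# Start index of the segment each position belongs to.
start = np.maximum.accumulate(np.where(reset, np.arange(len(x)), 0))
# Subtract the total accumulated before each segment's start.
offset = np.where(start > 0, cs[start - 1], 0)
out = cs - offset
assert out.tolist() == [1, 3, 3, 7, 12, 6]
```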
--
Robert Kern
it make sense if pylab.power were the frequently used power
> function rather than a means for sampling from the power distribution?
Matplotlib may be discussed over here:
https://lists.sourceforge.net/lists/listinfo/matplotlib-users
--
Robert Kern
on framework again. Just don't mv it from where it
gets installed. Then the numpy-1.6.2-py2.7-python.org-macosx10.3.dmg
will recognize it.
--
Robert Kern
ly
desirable since many of the masked values will trip these errors
spuriously even though they will be masked out in the result.
--
Robert Kern
a good to overwrite
> the error setting.
Precisely.
--
Robert Kern
alternative that does not expand the API with two-liners is to let
the ndarray.fill() method return self:
a = np.empty(...).fill(20.0)
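One caution about that spelling: as things stand, ndarray.fill() returns None, so a would be bound to None today; that is exactly what the proposed change would alter. The currently working spellings are:

```python
import numpy as np

# fill() works in place and returns None today.
a = np.empty(5)
a.fill(20.0)
# np.full allocates pre-filled in one call (added in numpy 1.8).
b = np.full(5, 20.0)
assert (a == b).all()
```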
--
Robert Kern
On Mon, Jan 14, 2013 at 1:04 AM, Nathaniel Smith wrote:
> On Sun, Jan 13, 2013 at 11:48 PM, Skipper Seabold wrote:
>> On Sun, Jan 13, 2013 at 6:39 PM, Nathaniel Smith wrote:
>>>
>>> On Sun, Jan 13, 2013 at 11:24 PM, Robert Kern
>>> wrote:
>>> >
fferent, so the result
of the multiplication will be different, so fill cannot be used.
--
Robert Kern
,1]]
http://docs.scipy.org/doc/numpy/user/basics.indexing.html#indexing-multi-dimensional-arrays
--
Robert Kern
idx0
array([[0],
[1],
[2],
[3],
[4]])
[~]
|7> v[idx0, idx]
array([[ 3, 4],
[ 1, 12],
[ 7, 23],
[ 6, 11],
[ 8, 9]])
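Reconstructed as a self-contained sketch (v and idx here are illustrative stand-ins for the arrays in the thread):

```python
import numpy as np

# A column of row indices broadcast against per-row column indices picks
# idx.shape[1] elements from each row of v.
v = np.arange(25).reshape(5, 5)
idx = np.array([[3, 4], [1, 2], [2, 3], [1, 4], [3, 4]])
idx0 = np.arange(5)[:, np.newaxis]   # shape (5, 1)
out = v[idx0, idx]                   # shape (5, 2)
assert out.shape == (5, 2)
assert out[0].tolist() == [v[0, 3], v[0, 4]]
```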
--
Robert Kern
e
> equal, so I can remove them. So for example, the 1st and last row.
all_equal_mask = np.logical_and.reduce(arr[:,1:] == arr[:,:-1], axis=1)
some_unequal = arr[~all_equal_mask]
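With a small illustrative array, the two lines above behave like this:

```python
import numpy as np

# Rows whose entries are all equal are masked out; the rest survive.
arr = np.array([[1, 1, 1],
                [1, 2, 3],
                [4, 4, 4]])
all_equal_mask = np.logical_and.reduce(arr[:, 1:] == arr[:, :-1], axis=1)
some_unequal = arr[~all_equal_mask]
assert some_unequal.tolist() == [[1, 2, 3]]
```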
--
Robert Kern
ion and avoid the
inv() by using solve().
viy0 = np.linalg.solve(v, y0)
for i, t in enumerate(tlist):
# And no need to dot() the first part. Broadcasting works just fine.
sol_t = (v * np.exp(-w*t)).dot(viy0)
...
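Fleshed out into a runnable sketch (A, y0, and tlist are illustrative; w and v come from an eigendecomposition as in the thread):

```python
import numpy as np

# Evaluate y(t) = v @ diag(exp(-w t)) @ inv(v) @ y0 without forming inv(v).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w, v = np.linalg.eigh(A)
y0 = np.array([1.0, 0.0])
viy0 = np.linalg.solve(v, y0)        # inv(v) @ y0 via solve()
for t in [0.0, 0.1, 1.0]:
    # Broadcasting scales each column of v; no extra dot() needed.
    sol_t = (v * np.exp(-w * t)).dot(viy0)

sol0 = (v * np.exp(-w * 0.0)).dot(viy0)
assert np.allclose(sol0, y0)         # at t = 0 the solution is y0
```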
--
Robert Kern
the community and we want
>> to share the work done.
>
> It certainly does look useful. My question is -- why do we need two
> complete copies of the linear algebra routine interfaces? Can we just
> replace the existing linalg functions with these new implementations?
> Or if not, wh
not public, I guess. I suppose anyone
> who uses the image would have to have their own licenses for the Intel
> stuff? Does anyone have experience of this?
You need to purchase one license per developer:
http://software.intel.com/en-us/articles/intel-math-kernel-libra
o the docstrings via sphinx, like in scipy?
Click on the "Edit Page" link on the left. Follow the instructions on
the front page of the numpy Docstring Editor site to sign up:
http://docs.scipy.org/numpy/Front%20Page/
--
Robert Kern
ne? (I didn't find them, only a similar package
> https://github.com/schmir/pypiserver, but that doesn't seem to be it.)
http://wiki.python.org/moin/CheeseShopDev
You can get help with PyPI on Catalog-SIG:
http://mail.python.org/mailman/listinfo/catalog-sig
--
Robert Kern
numpy-discussion is for both development discussions and support
questions. The set of people interested in the developer discussions
is mostly the same as the set of people giving support, so there has
never been too much impetus for breaking the list into two
6e-01+7.071067811865431318e-01j)
> ...
What were you expecting? A single row? savetxt() always writes out
len(arr) rows. Reshape your vector into a (1,N) array if you want a
single row.
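A quick demonstration of the reshape trick:

```python
import io
import numpy as np

# savetxt writes len(arr) rows; a (1, N) array yields a single row.
x = np.array([1.0, 2.0, 3.0])
buf = io.StringIO()
np.savetxt(buf, x.reshape(1, -1))
assert len(buf.getvalue().strip().splitlines()) == 1
```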
--
Robert Kern
m/numpy/numpy
> http://sourceforge.net/projects/numpy/files/
>
> Is there some misleading documentation still around that gave
> you a different impression?
Todd is responding to a message about PyDSTool, which is developed on
Sourceforge, not numpy.
--
Robert Kern
why
> np.abs(a) is so much harder than a.abs(), and why this function and
> not other unary functions?
Or even abs(a).
--
Robert Kern
On Tue, Feb 26, 2013 at 12:11 AM, Charles R Harris
wrote:
>
> On Sat, Feb 23, 2013 at 1:33 PM, Robert Kern wrote:
>>
>> On Sat, Feb 23, 2013 at 7:25 PM, Nathaniel Smith wrote:
>> > On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki
>> > wrote:
>> >&
1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: no commands supplied
Anyone who was expecting the interactive setup will probably complain here.
--
Robert Kern
n to heart, though. We shouldn't remove stuff
faster than 12 months or so. I just think that it should modify our
release process, not our "marking for deprecation" process.
--
Robert Kern
On Wed, Mar 6, 2013 at 10:45 PM, Nathaniel Smith wrote:
> On Wed, Mar 6, 2013 at 10:33 PM, Robert Kern wrote:
>> On Wed, Mar 6, 2013 at 8:09 PM, Nathaniel Smith wrote:
>>> A number of items on the 1.8 todo list are reminders to remove things
>>> that we deprecate
On Wed, Mar 6, 2013 at 10:56 PM, Nathaniel Smith wrote:
> On Wed, Mar 6, 2013 at 10:53 PM, Robert Kern wrote:
>> On Wed, Mar 6, 2013 at 10:45 PM, Nathaniel Smith wrote:
>>> On Wed, Mar 6, 2013 at 10:33 PM, Robert Kern wrote:
>>>> On Wed, Mar 6, 2013 at 8:09 PM, N
e is some low-level C work that needs to be done to allow the
non-uniform distributions to be shared between implementations of the
core uniform PRNG, but that's the same no matter how you organize the
upper layer.
--
Robert Kern
On Tue, Mar 12, 2013 at 10:38 PM, Neal Becker wrote:
> Nathaniel Smith wrote:
>
>> On Tue, Mar 12, 2013 at 9:25 PM, Nathaniel Smith wrote:
>>> On Mon, Mar 11, 2013 at 9:46 AM, Robert Kern wrote:
>>>> On Sun, Mar 10, 2013 at 6:12 PM, Siu Kwan Lam wrote:
>&g
omputations and
> making text output completely non-readable.
>
>>>> from numpy import __version__
>>>> __version__
> '2.0.0.dev-1fe8136'
We really should change the default to 'warn' for numpy 2.0. Maybe
even for numpy 1.6. We've talked about it
s on parallelizing expensive computation like
matrix-matrix multiplication, not things like finding the minimum
elements of an array.
--
Robert Kern
he just
wants bools (or even just 0s and 1s) and not a real string of bits
compacted into bytes.
--
Robert Kern
On Tue, Mar 29, 2011 at 13:33, wrote:
> Any suggestions on how to achieve stable sort based on multiple columns with
> numpy ?
http://docs.scipy.org/doc/numpy/reference/generated/numpy.lexsort.html#numpy.lexsort
It uses mergesort for stability.
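A small sketch of lexsort for a stable multi-key sort (note that the last key in the tuple is the primary key):

```python
import numpy as np

# Sort by `a` first, breaking ties with `b`; lexsort's last key is primary.
a = np.array([2, 1, 1, 2])
b = np.array([9, 8, 7, 6])
order = np.lexsort((b, a))
assert order.tolist() == [2, 1, 3, 0]
```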
--
Robert Kern
nted. In
particular, you can cause an exception to be raised so that you can
use a debugger to locate the source.
http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html
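A minimal sketch of using seterr to turn the warning into a catchable exception:

```python
import numpy as np

# Switch error handling to 'raise' so the offending operation raises
# FloatingPointError, which a debugger can then break on.
old = np.seterr(all='raise')
caught = False
try:
    np.array([1.0]) / np.array([0.0])
except FloatingPointError:
    caught = True
finally:
    np.seterr(**old)   # restore the previous settings
assert caught
```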
--
Robert Kern
On Wed, Mar 30, 2011 at 16:03, Ralf Gommers wrote:
> On Thu, Mar 24, 2011 at 5:25 PM, Ralf Gommers
> wrote:
>> On Thu, Mar 24, 2011 at 5:11 PM, Robert Kern wrote:
>>> We really should change the default to 'warn' for numpy 2.0. Maybe
>>> even for numpy
taller originally (the original ticket that led to this
change[1088]). I don't think it was through the generic setup.py. The
proper fix would probably be something specific to each binary
installer.
[1088] http://projects.scipy.org/numpy/ticket/1088
--
Robert Kern
On Thu, Mar 31, 2011 at 13:39, Ralf Gommers wrote:
> On Thu, Mar 31, 2011 at 8:32 PM, Robert Kern wrote:
>> On Thu, Mar 31, 2011 at 12:00, Ralf Gommers
>> wrote:
>>> On Thu, Mar 31, 2011 at 6:33 PM, Orion Poplawski
>>> wrote:
>>>> I'm looki
On Thu, Mar 31, 2011 at 14:08, Ralf Gommers wrote:
> On Thu, Mar 31, 2011 at 8:52 PM, Robert Kern wrote:
>> Linux distributions start to complain when data files (especially
>> documentation data files not used at runtime) are placed into the
>> Python packages. I woul
On Thu, Mar 31, 2011 at 14:16, Orion Poplawski wrote:
> On 03/31/2011 01:12 PM, Robert Kern wrote:
>> Well, they're meant to be copied into your own code, which is why they
>> end up under a doc/ directory. Lots of things like this tend to end up
>> in doc/ directories.
mended practice is for library users (who do or
should know those things) to run their test suites with the warnings
turned on.
--
Robert Kern
ase. Previously, the result was a float64
array, as expected.
Mark, I expect this is a result of one of your changes. Can you take a
look at this?
--
Robert Kern
for floats,
for which the limiting attribute is precision, not range. For floats,
the result of min_scalar_type should be the type of the object itself,
nothing else. E.g. min_scalar_type(x)==float64 if type(x) is float no
matter what value it has.
--
Robert Kern
On Tue, Apr 12, 2011 at 11:20, Mark Wiebe wrote:
> On Tue, Apr 12, 2011 at 8:24 AM, Robert Kern wrote:
>>
>> On Mon, Apr 11, 2011 at 23:43, Mark Wiebe wrote:
>> > On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant
>> >
>> > wrote:
>>
>
> have been deficient up to this point.
It's been documented for a long time now.
http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules
--
Robert Kern
On Tue, Apr 12, 2011 at 11:49, Mark Wiebe wrote:
> On Tue, Apr 12, 2011 at 9:30 AM, Robert Kern wrote:
>> You're missing the key part of the rule that numpy uses: for
>> array*scalar cases, when both array and scalar are the same kind (both
>> floating point or both
On Tue, Apr 12, 2011 at 13:17, Charles R Harris
wrote:
>
>
> On Tue, Apr 12, 2011 at 11:56 AM, Robert Kern wrote:
>>
>> On Tue, Apr 12, 2011 at 12:27, Charles R Harris
>> wrote:
>>
>> > IIRC, the behavior with respect to scalars sort of happened in t
stated, which is limited to numpy.random. It might even
be documented somewhere. Unfortunately, most of the individual methods
had their parameters documented before this capability was added.
--
Robert Kern
t that everyone do exactly the same thing for
consistency, both inside scikits.learn and in code that uses or
extends scikits.learn. The best way to ensure that is to provide a
utility function as the One, Obvious Way To Do It. Note that if you do
hide the details behind a utility function, I would
NG to control
> what is going on.
Honestly, they really shouldn't be, except as a workaround to
poorly-written functions that don't let you pass in your own PRNG.
Someone snuck in the module-level alias to the global PRNG's seed()
method when I wasn't paying attention. :-)
numpy fudges with them). We have had
regressions in the past that went unnoticed for years. Rather, you
should not be running the test suite from an environment with a bunch
of other packages imported (I know, I know, the existence of np.test()
kind of implicitly encourages this, but still...).
pen-source
> mantra of everything on-list:
>
> http://producingoss.com/en/setting-tone.html#avoid-private-discussions
Having project-relevant *discussions* on-list doesn't preclude getting
someone's *attention* off-list.
I can't speak for the rest of the group, but as for
return 10
> else:
> return sqrt( n )
> f = numpy.compile( lambda(x): 0 if ( x < 10 ) else capped_sqrt( x ) )
> numpy.map( f, a )
>
> or something like that, and it would all happen in a single pass within
> numpy, with no "python code"
ing approaches he's used.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
nice to use as array indexes. And in
> fact to use it in actual code I'd need to do one or more other passes to
> check unmapped_colors for any indexes < 0 or > 2.
Also, still not *quite* as general as you might like, but sufficient
for the problem as stated:
colors = color_array
express your computations in a natural way without having to worry so much
> about the number of temp arrays being created.
Theano does something along those lines.
http://deeplearning.net/software/theano/
--
Robert Kern
ng done already?
We didn't think of it. If you can write up a patch that works safely
and shows a performance improvement, it's probably worth putting in.
It's probably not *that* common of a bottleneck, though.
--
Robert Kern
On Wed, May 4, 2011 at 11:14, Matthew Brett wrote:
> Hi,
>
> On Tue, May 3, 2011 at 7:58 PM, Robert Kern wrote:
>> I can't speak for the rest of the group, but as for myself, if you
>> would like to draft such a letter, I'm sure I will agree with its
>> con
mask object?
No. These two are not semantically equivalent. Your second example
does not actually modify m. For integer and bool mask arrays, m[mask]
necessarily makes a copy, so when you modify t via inplace addition,
you have only modified t and not m. The assignm
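The distinction can be sketched directly:

```python
import numpy as np

# Fancy (boolean/integer) indexing on the right-hand side copies.
m = np.zeros(5)
mask = np.array([True, False, True, False, False])
t = m[mask]        # t is a copy of the selected elements
t += 1.0           # modifies only the copy
assert m.sum() == 0.0
m[mask] += 1.0     # indexing on the left-hand side does modify m
assert m.sum() == 2.0
```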
print '%s succeeded' % dt.__name__
...>
bool succeeded
uint8 succeeded
int8 succeeded
int succeeded
float succeeded
float32 succeeded
complex64 failed: TypeError: can't convert complex to float
complex128 failed: TypeError: can't convert complex to float
object succeeded
[0]
out[2] = a[2] + out[1]
...
It always reads from a[i] before it writes to out[i], so it's always consistent.
--
Robert Kern
On Fri, May 13, 2011 at 09:58, Bruce Southey wrote:
> Hi,
> How do you create a 'single' structured array using np.array()?
> Basically I am attempting to do something like this that does not work:
> a=np.array([1,2, 3,4, 5,6], dtype=np.dtype([('foo', int)]))
>
> I realize that this is essentially
On Sun, May 15, 2011 at 20:49, Bruce Southey wrote:
> On Fri, May 13, 2011 at 4:38 PM, Robert Kern wrote:
>> On Fri, May 13, 2011 at 09:58, Bruce Southey wrote:
>>> Hi,
>>> How do you create a 'single' structured array using np.array()?
>>> Basic
;t related to numpy at all.
There are a few places where we (improperly) directly call malloc()
instead of PyMem_Malloc(), so yes, you should rebuild numpy against
TCMalloc in addition to the Python interpreter.
--
Robert Kern
> given the same seed?
No general guarantee for all of the scipy distributions, no. I suspect
that all of the RandomState methods do work this way, though.
--
Robert Kern
, 23, 23]],
[[24, 24, 24, 24, 24, 24, 24, 24, 24, 24],
[25, 25, 25, 25, 25, 25, 25, 25, 25, 25],
[26, 26, 26, 26, 26, 26, 26, 26, 26, 26]],
[[27, 27, 27, 27, 27, 27, 27, 27, 27, 27],
[28, 28, 28, 28, 28, 28, 28, 28, 28, 28],
[29, 29, 29, 29, 29, 29, 29
this operation do anything different from
what lists normally do, which is check if the given object is equal to
one of the items in the list.
--
Robert Kern
ist.__contains__(x), it should treat all objects exactly
the same: check if it equals any item that it contains. There is no
way for it to say, "Oh, I don't know how to deal with this type, so
I'll pass it over to x.__contains__()".
A function call is the best place for this opera
y and compare by boolean equality. In
Numeric/numpy's case, this comparison is broadcasted. So that's why
[3,6,4] works, because there is one row where 3 is in the first
column. [4,2,345] doesn't work because the 4 and the 2 are not in
those columns.
Probably, this should be conside
k values.
> This is easy to do with heapsort and almost as easy with mergesort.
>
> 2) Ufunc fadd (nanadd?) Treats nan as zero in addition. Should make a faster
> version of nansum possible.
>
> 3) Fast medians.
+3
--
Robert Kern
+500)
>
> would not equal 511.493408?
Yes, I object. You can set the accumulator dtype explicitly if you
need it: np.mean(arr, dtype=np.float64)
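A sketch of why the explicit accumulator dtype matters for large float32 arrays:

```python
import numpy as np

# Accumulating a million float32 values in float32 can lose precision;
# an explicit float64 accumulator recovers it.
arr = np.full(10**6, 0.1, dtype=np.float32)
m32 = arr.mean()                   # float32 accumulation
m64 = arr.mean(dtype=np.float64)   # float64 accumulation
assert abs(m64 - 0.1) < 1e-6
```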
--
Robert Kern
On Wed, Jun 1, 2011 at 11:11, Bruce Southey wrote:
> On 06/01/2011 11:01 AM, Robert Kern wrote:
>> On Wed, Jun 1, 2011 at 10:44, Craig Yoshioka wrote:
>>> would anyone object to fixing the numpy mean and stdv functions, so that
>>> they always used a 64-bit value to
, 2])
>
> In [55]: a + b
> Out[55]: array([ 1., 3., nan])
>
> and nanadd(a,b) would yield:
>
> array([ 1., 3., 2.)
>
> I don't see how that is particularly useful, at least not any more
> useful that nanprod, nandiv, etc, etc...
>
> What am I missing?
It's
bly efficient
algorithm. Leap seconds are determined by committee every few years
based on astronomical observations. We would need to keep a table of
leap seconds up to date.
--
Robert Kern
s?
>>
>
> Just looks like it wasn't coded that way, but it's low-hanging fruit.
> Any objections to adding this behavior? This commit should take care
> of it. Tests pass. Comments welcome, as I'm just getting my feet wet
> here.
>
> https://github.com/jsea
th.join(numpy.__path__[0], 'fft')]
That said, there is no good cross-platform way to link against other
Python extension modules. Please do not try. You will have to include
a copy of the FFTPACK code in your own extension module.
--
Robert Kern
zed
events.
The machinery to handle both is basically the same inside their areas
of applicability; you just have to disallow certain ambiguous
conversions between them, as Pierre suggests.
--
Robert Kern
abled dtypes (*not* NaN-enabled dtypes) would have (x + NA) ==
NA, just like R. fadd() would be useful for other things.
--
Robert Kern
, 8.+0.j, 9.+0.j])
--
Robert Kern
t;seconds,
we assume that the day is representing the initial second at midnight
of that day. We then use offsets to allow the user to add more
information to specify it more precisely.
--
Robert Kern
ode dtype.
--
Robert Kern
always pass the test suite. Developers
need a clean, working master to branch from too, not just production
users.
[1] http://docs.scipy.org/doc/numpy/dev/gitwash/index.html
--
Robert Kern
, '2012', '2013', '2014', '2015', '2016', '2017', '2018',
> '2019'], dtype='datetime64[Y]')
>>>> np.arange('today', 10, 3, dtype='M8')
> array(['2011-06-09',
On Thu, Jun 9, 2011 at 16:27, Robert Kern wrote:
> On Thu, Jun 9, 2011 at 15:01, Mark Wiebe wrote:
>> I've replaced the previous two pull requests with a single pull request
>> rolling up all the changes so far. The newest changes include finishing the
>> generic u
mplex algorithms that can be correctly
implemented for real arrays simply by implicitly assuming that the
imaginary component is all 0s.
If you happen to have an algorithm where passing a float array is more
likely an indicator of an error, you can do the check yourself.
--
Robert Kern
rmulate. The unit test for the implicit case
fits a general 2D ellipse to a 2D cloud of points, as described in the
User's Guide.
--
Robert Kern
ve',
'itemsize',
'kind',
'metadata',
'name',
'names',
'newbyteorder',
'num',
'shape',
'str',
'subdtype',
'type']
The numpy *scalar* types do, because they are actual Py