Taking a quick glance at the pull request, it looks like the filename is
included in the message, so the warning will appear once for each file
that needs to be warned about. This seems entirely appropriate
behavior to me.
--
Robert Kern
"I have come to believe that the whole world is an eni
For what it's worth, I have found this paper by James Diebel to be the
most complete listing of all of the different conventions and
conversions amongst quaternions, Euler angles, and rotation vectors:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.5134
--
Robert Kern
"I have
particular values and that it was a boolean
array. Instead, it tested the precise bytes of the repr of the array.
The repr of ndarrays is not a stable API, and we don't make
guarantees about the precise details of its behavior from version to
version. doctests work better to test simpler types
include']
>>>
>>> These defaults won't work on the forthcoming Ubuntu 11.10, which
>>> installs X into /usr/lib/X11 and /usr/include/X11.
>
> Did you check that some compilation fails because of this?
Enthought's Enable will probably fail. It uses the syste
nto it where A
!= 0. This makes you do the comparisons twice.
Or you can allocate a B array the same size as A, run your loop to
assign into it when A != 0 and incrementing the index into B, then
slice out or memcpy out the portion that you assigned.
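For comparison, the one-step form of this extraction is boolean indexing, which sizes the output for you (a minimal sketch with invented data):

```python
import numpy as np

a = np.array([3.0, 0.0, 1.5, 0.0, 2.0])

# Boolean indexing copies just the entries where the condition
# holds into a fresh, right-sized array.
b = a[a != 0]
```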
--
Robert Kern
"I have come to believe th
> simply inform users that PEP 3118 exists as an
> alternative
>
> http://docs.scipy.org/doc/numpy/reference/arrays.interface.html
>
> Thus my confusion...
It was an error in the 1.4 documentation that was fixed.
--
Robert Kern
"I have come to believe that the whole world is an enigma,
es which
aren't relevant here). For array-array and scalar-scalar, the
"largest" dtype wins. So for the first case, array-scalar, bad_val
gets downcast to float32. For the latter case, bad_val remains
float64 and upcasts c to float64.
Try this:
bad_val = np.float32(10) * a.max()
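A minimal sketch of why that helps (the array `a` here is illustrative): because `a.max()` is itself a float32 scalar, the product stays float32 and no upcast to float64 occurs in the later arithmetic.

```python
import numpy as np

a = np.ones(3, dtype=np.float32)

# a.max() returns a float32 scalar, so the product is float32 too,
# and multiplying the array by it keeps everything float32.
bad_val = np.float32(10) * a.max()
c = a * bad_val
```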
--
Robert Kern
On Wed, Jul 13, 2011 at 00:36, Long Duong wrote:
>
> Does anybody know if there are videos of the conference this year?
Yes. Announcements will be made when they start going online.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma t
could be deferred to inside
load_library() in ctypeslib.py, yes.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
On Mon, Jun 27, 2011 at 11:17, Derek Homeier
wrote:
> On 21.06.2011, at 8:35PM, Christopher Barker wrote:
>
>> Robert Kern wrote:
>>> https://raw.github.com/numpy/numpy/master/doc/neps/npy-format.txt
>>
>> Just a note. From that doc:
>>
>> ""
On Tue, Jun 21, 2011 at 13:35, Christopher Barker wrote:
> Robert Kern wrote:
>> https://raw.github.com/numpy/numpy/master/doc/neps/npy-format.txt
>
> Just a note. From that doc:
>
> """
> HDF5 is a complicated format that more or less implements
you can also be clever
a = array.array('c')
map(a.extend, your_generator_of_4strings)
b = np.frombuffer(a, dtype=np.int32)
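The snippet above is Python 2 (`array.array('c')` and eager `map` no longer exist in Python 3). A rough Python 3 equivalent of the same trick, using a bytearray as the growable buffer (the function name and example data are mine):

```python
import numpy as np

def chunks_to_int32(chunks):
    # Accumulate the 4-byte chunks, then reinterpret the whole
    # buffer as little-endian 32-bit integers without copying per-item.
    buf = bytearray()
    for chunk in chunks:
        buf.extend(chunk)
    return np.frombuffer(bytes(buf), dtype='<i4')
```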
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad att
Where you have gridded data, unmasking and remasking can be
quite useful. They are complementary tools.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to
On Fri, Jun 24, 2011 at 11:05, Nathaniel Smith wrote:
> On Fri, Jun 24, 2011 at 8:14 AM, Robert Kern wrote:
>> On Fri, Jun 24, 2011 at 10:07, Laurent Gautier wrote:
>>> May be there is not so much need for reservation over the string NA, when
>>> making the dist
On Fri, Jun 24, 2011 at 10:02, Pierre GM wrote:
>
> On Jun 24, 2011, at 4:44 PM, Robert Kern wrote:
>
>> On Fri, Jun 24, 2011 at 09:35, Robert Kern wrote:
>>> On Fri, Jun 24, 2011 at 09:24, Keith Goodman wrote:
>>>> On Fri, Jun 24, 2011 at 7:06 AM, Robert Ker
On Fri, Jun 24, 2011 at 10:07, Laurent Gautier wrote:
> On 2011-06-24 16:43, Robert Kern wrote:
>>
>> On Fri, Jun 24, 2011 at 09:33, Charles R Harris
>> wrote:
>>>
>>> >
>>> > On Fri, Jun 24, 2011 at 8:06 AM, Robert Kern
>>> >
On Fri, Jun 24, 2011 at 09:35, Robert Kern wrote:
> On Fri, Jun 24, 2011 at 09:24, Keith Goodman wrote:
>> On Fri, Jun 24, 2011 at 7:06 AM, Robert Kern wrote:
>>
>>> The alternative proposal would be to add a few new dtypes that are
>>> NA-aware. E.g. an nafloat
On Fri, Jun 24, 2011 at 09:33, Charles R Harris
wrote:
>
> On Fri, Jun 24, 2011 at 8:06 AM, Robert Kern wrote:
>> The alternative proposal would be to add a few new dtypes that are
>> NA-aware. E.g. an nafloat64 would reserve a particular NaN value
>> (there are l
On Fri, Jun 24, 2011 at 09:24, Keith Goodman wrote:
> On Fri, Jun 24, 2011 at 7:06 AM, Robert Kern wrote:
>
>> The alternative proposal would be to add a few new dtypes that are
>> NA-aware. E.g. an nafloat64 would reserve a particular NaN value
>> (there are lots of di
for 1D arrays.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
___
NumPy-Discussion maili
alternative proposal would be to add a few new dtypes that are
NA-aware. E.g. an nafloat64 would reserve a particular NaN value
(there are lots of different NaN bit patterns, we'd just reserve one)
that would represent NA. An naint32 would probably reserve the most
negative int32 value (like R
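The "lots of different NaN bit patterns" point can be demonstrated directly: the two 64-bit patterns below are both quiet NaNs, differing only in payload bits, so one such pattern could in principle be reserved for NA (a sketch, not the actual proposal's implementation):

```python
import numpy as np

# Two distinct bit patterns, both valid IEEE 754 quiet NaNs;
# only the low payload bit differs between them.
bits = np.array([0x7FF8000000000000, 0x7FF8000000000001], dtype=np.uint64)
vals = bits.view(np.float64)
```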
On Fri, Jun 24, 2011 at 06:47, Matthew Brett wrote:
> Hi,
>
> On Thu, Jun 23, 2011 at 10:44 PM, Robert Kern wrote:
>> On Thu, Jun 23, 2011 at 15:53, Mark Wiebe wrote:
>>> Enthought has asked me to look into the "missing data" problem and how NumPy
>>>
On Thu, Jun 23, 2011 at 17:05, Charles R Harris
wrote:
>
> On Thu, Jun 23, 2011 at 4:04 PM, Robert Kern wrote:
>>
>> On Thu, Jun 23, 2011 at 17:02, Charles R Harris
>> wrote:
>> >
>> > On Thu, Jun 23, 2011 at 3:48 PM, Gael Varoquaux
>> > wro
he simple numpy array model.
>
> I wonder how much of the complication could be located in the dtype.
What dtype? There are no new dtypes in this proposal.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad at
's
approach seems to be very well-liked by a lot of users. In essence,
*that's* the "missing data problem" that you were charged with: making
happy the users who are currently dissatisfied with masked arrays. It
doesn't seem to me that moving the functionality from numpy.ma to
mpy/reference/generated/numpy.save.html
http://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html
(Note the mmap_mode argument)
https://raw.github.com/numpy/numpy/master/doc/neps/npy-format.txt
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
eni
On Fri, Jun 17, 2011 at 15:18, Benjamin Root wrote:
> On Fri, Jun 17, 2011 at 3:07 PM, Robert Kern wrote:
>>
>> On Fri, Jun 17, 2011 at 15:03, Benjamin Root wrote:
>> > Using the master branch, I was running the scipy tests when a crash
>> > occurred. I be
also accept anything that
can be converted to a dtype via np.dtype(x). The following are all
equivalent:
dtype=float
dtype='float'
dtype='float64'
dtype=np.float64
dtype='f8'
dtype='d'
dtype=np.dtype(float)
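A quick check that those spellings really do normalize to the same dtype object:

```python
import numpy as np

specs = [float, 'float', 'float64', np.float64, 'f8', 'd', np.dtype(float)]

# np.dtype() accepts every one of these and returns an equal dtype.
dtypes = [np.dtype(s) for s in specs]
```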
--
Robert Kern
"I have come
Please run "scipy.test(verbose=2)" in order to determine what test is
triggering the crash.
Even better, if you can install faulthandler and run the tests under
that, you can give us a full Python traceback where the segfault
occurs.
http://pypi.python.org/pypi/faulthandler
--
Robert Kern
ve',
'itemsize',
'kind',
'metadata',
'name',
'names',
'newbyteorder',
'num',
'shape',
'str',
'subdtype',
'type']
The numpy *scalar* types do, because they are actual Py
rmulate. The unit test for the implicit case
fits a general 2D ellipse to a 2D cloud of points, as described in the
User's Guide.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
t
complex algorithms that can be correctly
implemented for real arrays simply by implicitly assuming that the
imaginary component is all 0s.
If you happen to have an algorithm where passing a float array is more
likely an indicator of an error, you can do the check yourself.
--
Robert Kern
"I ha
On Thu, Jun 9, 2011 at 16:27, Robert Kern wrote:
> On Thu, Jun 9, 2011 at 15:01, Mark Wiebe wrote:
>> I've replaced the previous two pull requests with a single pull request
>> rolling up all the changes so far. The newest changes include finishing the
>> generic u
, '2012', '2013', '2014', '2015', '2016', '2017', '2018',
> '2019'], dtype='datetime64[Y]')
>>>> np.arange('today', 10, 3, dtype='M8')
> array(['2011-06-09',
always pass the test suite. Developers
need a clean, working master to branch from too, not just production
users.
[1] http://docs.scipy.org/doc/numpy/dev/gitwash/index.html
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terribl
ode dtype.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
___
NumPy-Discussion
seconds,
we assume that the day is representing the initial second at midnight
of that day. We then use offsets to allow the user to add more
information to specify it more precisely.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made
, 8.+0.j, 9.+0.j])
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
___
NumPy-Discussion m
NA-enabled dtypes (*not* NaN-enabled dtypes) would have (x + NA) ==
NA, just like R. fadd() would be useful for other things.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it
zed
events.
The machinery to handle both is basically the same inside their areas
of applicability; you just have to disallow certain ambiguous
conversions between them, as Pierre suggests.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is
th.join(numpy.__path__[0], 'fft')]
That said, there is no good cross-platform way to link against other
Python extension modules. Please do not try. You will have to include
a copy of the FFTPACK code in your own extension module.
--
Robert Kern
"I have come to believe that the whole
s?
>>
>
> Just looks like it wasn't coded that way, but it's low-hanging fruit.
> Any objections to adding this behavior? This commit should take care
> of it. Tests pass. Comments welcome, as I'm just getting my feet wet
> here.
>
> https://github.com/jsea
bly efficient
algorithm. Leap seconds are determined by committee every few years
based on astronomical observations. We would need to keep a table of
leap seconds up to date.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by ou
, 2])
>
> In [55]: a + b
> Out[55]: array([ 1., 3., nan])
>
> and nanadd(a,b) would yield:
>
> array([ 1., 3., 2.])
>
> I don't see how that is particularly useful, at least not any more
> useful than nanprod, nandiv, etc, etc...
>
> What am I missing?
It's
On Wed, Jun 1, 2011 at 11:11, Bruce Southey wrote:
> On 06/01/2011 11:01 AM, Robert Kern wrote:
>> On Wed, Jun 1, 2011 at 10:44, Craig Yoshioka wrote:
>>> would anyone object to fixing the numpy mean and stdv functions, so that
>>> they always used a 64-bit value to
+500)
>
> would not equal 511.493408?
Yes, I object. You can set the accumulator dtype explicitly if you
need it: np.mean(arr, dtype=np.float64)
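A sketch of that explicit-accumulator form (the input array here is invented): the data stay float32, but both the accumulation and the result use float64.

```python
import numpy as np

arr = np.linspace(0.0, 1.0, 101, dtype=np.float32)

# Keep the data in float32 but accumulate the mean in float64.
m = np.mean(arr, dtype=np.float64)
```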
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt
k values.
> This is easy to do with heapsort and almost as easy with mergesort.
>
> 2) Ufunc fadd (nanadd?) Treats nan as zero in addition. Should make a faster
> version of nansum possible.
>
> 3) Fast medians.
+3
--
Robert Kern
"I have come to believe that the whole
y and compare by boolean equality. In
Numeric/numpy's case, this comparison is broadcasted. So that's why
[3,6,4] works, because there is one row where 3 is in the first
column. [4,2,345] doesn't work because the 4 and the 2 are not in
those columns.
Probably, this should be conside
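A small demonstration of that broadcasted containment check (array and probe values are mine; `x in a` is evaluated as `(a == x).any()`):

```python
import numpy as np

a = np.array([[3, 5, 7],
              [6, 8, 4]])

# The comparison broadcasts the 3-element probe against every row,
# so a match in any single position makes `in` return True.
hit = [3, 9, 9] in a      # 3 matches a[0, 0]
miss = [9, 9, 9] in a     # 9 appears nowhere in a
```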
ist.__contains__(x), it should treat all objects exactly
the same: check if it equals any item that it contains. There is no
way for it to say, "Oh, I don't know how to deal with this type, so
I'll pass it over to x.__contains__()".
A function call is the best place for this opera
this operation do anything different from
what lists normally do, which is check if the given object is equal to
one of the items in the list.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by ou
, 23, 23]],
[[24, 24, 24, 24, 24, 24, 24, 24, 24, 24],
[25, 25, 25, 25, 25, 25, 25, 25, 25, 25],
[26, 26, 26, 26, 26, 26, 26, 26, 26, 26]],
[[27, 27, 27, 27, 27, 27, 27, 27, 27, 27],
[28, 28, 28, 28, 28, 28, 28, 28, 28, 28],
[29, 29, 29, 29, 29, 29, 29
> given the same seed?
No general guarantee for all of the scipy distributions, no. I suspect
that all of the RandomState methods do work this way, though.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our o
't related to numpy at all.
There are a few places where we (improperly) directly call malloc()
instead of PyMem_Malloc(), so yes, you should rebuild numpy against
TCMalloc in addition to the Python interpreter.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a har
On Sun, May 15, 2011 at 20:49, Bruce Southey wrote:
> On Fri, May 13, 2011 at 4:38 PM, Robert Kern wrote:
>> On Fri, May 13, 2011 at 09:58, Bruce Southey wrote:
>>> Hi,
>>> How do you create a 'single' structured array using np.array()?
>>> Basic
On Fri, May 13, 2011 at 09:58, Bruce Southey wrote:
> Hi,
> How do you create a 'single' structured array using np.array()?
> Basically I am attempting to do something like this that does not work:
> a=np.array([1,2, 3,4, 5,6], dtype=np.dtype([('foo', int)]))
>
> I realize that this is essentially
[0]
out[2] = a[2] + out[1]
...
It always reads from a[i] before it writes to out[i], so it's always consistent.
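That read-before-write ordering is what makes in-place accumulation safe, for example:

```python
import numpy as np

a = np.array([1, 2, 3, 4])

# Writing the cumulative sum back over the input is consistent
# because a[i] is read before out[i] is written at each step.
np.cumsum(a, out=a)
```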
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had a
print '%s succeeded' % dt.__name__
...>
bool succeeded
uint8 succeeded
int8 succeeded
int succeeded
float succeeded
float32 succeeded
complex64 failed: TypeError: can't convert complex to float
complex128 failed: TypeError: can't convert complex to float
object succeeded
mask object?
No. These two are not semantically equivalent. Your second example
does not actually modify m. For integer and bool mask arrays, m[mask]
necessarily makes a copy, so when you modify t via inplace addition,
you have only modified t and not m. The assignm
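A minimal sketch of that copy-versus-write-back distinction (arrays invented for illustration):

```python
import numpy as np

m = np.zeros(4, dtype=int)
mask = np.array([True, False, True, False])

# Fancy indexing with a bool array returns a copy ...
t = m[mask]
t += 1          # modifies only the copy; m is untouched

# ... while an indexed in-place assignment writes back into m.
m[mask] += 1
```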
On Wed, May 4, 2011 at 11:14, Matthew Brett wrote:
> Hi,
>
> On Tue, May 3, 2011 at 7:58 PM, Robert Kern wrote:
>> I can't speak for the rest of the group, but as for myself, if you
>> would like to draft such a letter, I'm sure I will agree with its
>> con
ng done already?
We didn't think of it. If you can write up a patch that works safely
and shows a performance improvement, it's probably worth putting in.
It's probably not *that* common of a bottleneck, though.
--
Robert Kern
"I have come to believe that the whole world is
express your computations in a natural way without having to worry so much
> about the number of temp arrays being created.
Theano does something along those lines.
http://deeplearning.net/software/theano/
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harml
nice to use as array indexes. And in
> fact to use it in actual code I'd need to do one or more other passes to
> check unmapped_colors for any indexes < 0 or > 2.
Also, still not *quite* as general as you might like, but sufficient
for the problem as stated:
colors = color_array
ing approaches he's used.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
___
N
return 10
> else:
> return sqrt( n )
> f = numpy.compile( lambda(x): 0 if ( x < 10 ) else capped_sqrt( x ) )
> numpy.map( f, a )
>
> or something like that, and it would all happen in a single pass within
> numpy, with no "python code"
pen-source
> mantra of everything on-list:
>
> http://producingoss.com/en/setting-tone.html#avoid-private-discussions
Having project-relevant *discussions* on-list doesn't preclude getting
someone's *attention* off-list.
I can't speak for the rest of the group, but as for
numpy fudges with them). We have had
regressions in the past that went unnoticed for years. Rather, you
should not be running the test suite from an environment with a bunch
of other packages imported (I know, I know, the existence of np.test()
kind of implicitly encourages this, but still...).
NG to control
> what is going on.
Honestly, they really shouldn't be, except as a workaround to
poorly-written functions that don't let you pass in your own PRNG.
Someone snuck in the module-level alias to the global PRNG's seed()
method when I wasn't paying attention. :-)
t that everyone do exactly the same thing for
consistency, both inside scikits.learn and in code that uses or
extends scikits.learn. The best way to ensure that is to provide a
utility function as the One, Obvious Way To Do It. Note that if you do
hide the details behind a utility function, I would
stated, which is limited to numpy.random. It might even
be documented somewhere. Unfortunately, most of the individual methods
had their parameters documented before this capability was added.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
On Tue, Apr 12, 2011 at 13:17, Charles R Harris
wrote:
>
>
> On Tue, Apr 12, 2011 at 11:56 AM, Robert Kern wrote:
>>
>> On Tue, Apr 12, 2011 at 12:27, Charles R Harris
>> wrote:
>>
>> > IIRC, the behavior with respect to scalars sort of happened in t
On Tue, Apr 12, 2011 at 11:49, Mark Wiebe wrote:
> On Tue, Apr 12, 2011 at 9:30 AM, Robert Kern wrote:
>> You're missing the key part of the rule that numpy uses: for
>> array*scalar cases, when both array and scalar are the same kind (both
>> floating point or both
> have been deficient up to this point.
It's been documented for a long time now.
http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad atte
On Tue, Apr 12, 2011 at 11:20, Mark Wiebe wrote:
> On Tue, Apr 12, 2011 at 8:24 AM, Robert Kern wrote:
>>
>> On Mon, Apr 11, 2011 at 23:43, Mark Wiebe wrote:
>> > On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant
>> >
>> > wrote:
>>
>
for floats,
for which the limiting attribute is precision, not range. For floats,
the result of min_scalar_type should be the type of the object itself,
nothing else. E.g. min_scalar_type(x)==float64 if type(x) is float no
matter what value it has.
--
Robert Kern
"I have come to believe that t
ase. Previously, the result was a float64
array, as expected.
Mark, I expect this is a result of one of your changes. Can you take a
look at this?
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by
The recommended practice is for library users (who do or
should know those things) to run their test suites with the warnings
turned on.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
On Thu, Mar 31, 2011 at 14:16, Orion Poplawski wrote:
> On 03/31/2011 01:12 PM, Robert Kern wrote:
>> Well, they're meant to be copied into your own code, which is why they
>> end up under a doc/ directory. Lots of things like this tend to end up
>> in doc/ directories.
On Thu, Mar 31, 2011 at 14:08, Ralf Gommers wrote:
> On Thu, Mar 31, 2011 at 8:52 PM, Robert Kern wrote:
>> Linux distributions start to complain when data files (especially
>> documentation data files not used at runtime) are placed into the
>> Python packages. I woul
On Thu, Mar 31, 2011 at 13:39, Ralf Gommers wrote:
> On Thu, Mar 31, 2011 at 8:32 PM, Robert Kern wrote:
>> On Thu, Mar 31, 2011 at 12:00, Ralf Gommers
>> wrote:
>>> On Thu, Mar 31, 2011 at 6:33 PM, Orion Poplawski
>>> wrote:
>>>> I'm looki
installer originally (the original ticket that led to this
change[1088]). I don't think it was through the generic setup.py. The
proper fix would probably be something specific to each binary
installer.
[1088] http://projects.scipy.org/numpy/ticket/1088
--
Robert Kern
"I have come to
On Wed, Mar 30, 2011 at 16:03, Ralf Gommers wrote:
> On Thu, Mar 24, 2011 at 5:25 PM, Ralf Gommers
> wrote:
>> On Thu, Mar 24, 2011 at 5:11 PM, Robert Kern wrote:
>>> We really should change the default to 'warn' for numpy 2.0. Maybe
>>> even for numpy
nted. In
particular, you can cause an exception to be raised so that you can
use a debugger to locate the source.
http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made t
On Tue, Mar 29, 2011 at 13:33, wrote:
> Any suggestions on how to achieve stable sort based on multiple columns with
> numpy ?
http://docs.scipy.org/doc/numpy/reference/generated/numpy.lexsort.html#numpy.lexsort
It uses mergesort for stability.
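A short sketch of lexsort on two made-up key columns; note that the *last* key in the tuple is the primary sort key:

```python
import numpy as np

a = np.array([2, 1, 2, 1])   # primary key
b = np.array([9, 4, 3, 8])   # secondary key, breaks ties stably

# Sort by a first, then by b within equal values of a.
order = np.lexsort((b, a))
```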
--
Robert Kern
"I have come to b
he just
wants bools (or even just 0s and 1s) and not a real string of bits
compacted into bytes.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
thou
s on parallelizing expensive computation like
matrix-matrix multiplication, not things like finding the minimum
elements of an array.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
omputations and
> making text output completely non-readable.
>
>>>> from numpy import __version__
>>>> __version__
> '2.0.0.dev-1fe8136'
We really should change the default to 'warn' for numpy 2.0. Maybe
even for numpy 1.6. We've talked about it
2(2**63)
> Traceback (most recent call last):
> File "", line 1, in
> log2(2**63)
> AttributeError: log2
>
> integer conversion problem
Right. numpy cannot safely convert a long object of that size to a
dtype it knows about, so it leaves it as an object array.
manner, but I think
> it's more important for __eq__ to follow its usual semantics of returning a
> boolean. I'd way prefer it if the element-wise equality array generation was
> exposed as a different method.
I'm afraid that it is far too late to make such a change.
calls on the same file
object:
fp = open('TempFile', 'rb')
for i in range(3):
print np.load(fp)
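A self-contained Python 3 version of the same pattern, using an in-memory buffer instead of a file on disk (the data are invented): each np.save() appends one .npy record, and each np.load() consumes exactly one.

```python
import io
import numpy as np

fp = io.BytesIO()
for i in range(3):
    np.save(fp, np.arange(i + 1))   # three .npy records, back to back
fp.seek(0)

# Successive np.load() calls each read one record from the stream.
arrays = [np.load(fp) for _ in range(3)]
```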
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had
umeric.py", line 754, in
> argmin
> return argmin(axis)
> TypeError: unsupported operand type(s) for -: 'int' and 'datetime.datetime'
>
> Is this a bug, or am I just doing datetimes wrong?
Heh. x.argmin() is implemented as (0-x).argmax(). It should prob
speed for
> lots of cases.
x.__pow__(2) is indeed strength-reduced down to multiplication by
default. This occurs in the C implementation of ndarray.__pow__().
Feel free to override __pow__() in your class to directly call
np.power() which will just do the power calculation directly.
--
Robert Kern
for all true
dtype objects, if they compare equal then they hash equal. So you
could make a well-behaved dict with true dtypes as keys, but not
dtypes and "dtype-coercible" objects.
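A short sketch of that guarantee (keys and values are illustrative): equal true dtype objects hash equal, so a dict keyed on real np.dtype instances behaves well, however the key dtype is spelled.

```python
import numpy as np

# Keys are true np.dtype objects, not merely dtype-coercible values.
d = {np.dtype(np.float64): 'double', np.dtype(np.int32): 'int32'}

# An equal dtype object, built from a different spelling, finds the entry.
name = d[np.dtype('float64')]
```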
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made
; In [12]: x.flat[0]['cl']
> Out[12]: array(array(7.399500875785845e-10), dtype=object)
>
> In [13]: x[0]
> ---
> IndexError Traceback (most recent call last)
>
> /src/ in ()
>
> IndexError: 0-d arrays can't be indexed
It's not tha
On Wed, Mar 16, 2011 at 12:15, Mark Wiebe wrote:
> On Wed, Mar 16, 2011 at 10:00 AM, Robert Kern wrote:
>>
>> On Wed, Mar 16, 2011 at 11:55, Robert Kern wrote:
>> > On Wed, Mar 16, 2011 at 11:43, Matthew Brett
>> > wrote:
>> >
>> >> I can
On Wed, Mar 16, 2011 at 11:55, Robert Kern wrote:
> On Wed, Mar 16, 2011 at 11:43, Matthew Brett wrote:
>
>> I can git-bisect it later in the day, will do so unless it's become
>> clear in the meantime.
>
> I'm almost done bisecting.
6c6dc487ca15818d1f4cc76
On Wed, Mar 16, 2011 at 11:43, Matthew Brett wrote:
> I can git-bisect it later in the day, will do so unless it's become
> clear in the meantime.
I'm almost done bisecting.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma tha
On Wed, Mar 16, 2011 at 10:27, Charles R Harris
wrote:
>
> On Wed, Mar 16, 2011 at 8:56 AM, Charles R Harris
> wrote:
>>
>>
>> On Wed, Mar 16, 2011 at 8:46 AM, Robert Kern
>> wrote:
>>>
>>> On Wed, Mar 16, 2011 at 01:18, Matthew Brett
>>
hashdescr.c", nothing has changed in the
implementation of the hash function since Oct 31, before numpy 1.5.1
which also passes the second test. I'm not sure what would be causing
the difference in HEAD.
--
Robert Kern
"I have come to believe that the whole world is an enigma
On Tue, Mar 15, 2011 at 13:10, Mark Sienkiewicz wrote:
> Robert Kern wrote:
>> On Tue, Mar 15, 2011 at 12:39, Charles R Harris
>> wrote:
>>
>>
>>> Yes, I think it is a bug. IIRC, it also shows up for object arrays.
>>>
>>
>> It's ex
9863163822 ] )
[~]
|3> print repr(a)
array([ 16.50698632])
You can disagree with the feature, but it's not a bug.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though
501 - 600 of 2838 matches