Please run "scipy.test(verbose=2)" to determine which test is
triggering the crash.
Even better, if you can install faulthandler and run the tests under
that, you can give us a full Python traceback where the segfault
occurs.
http://pypi.python.org/pypi/faulthandler
also accept anything that
can be converted to a dtype via np.dtype(x). The following are all
equivalent:
dtype=float
dtype='float'
dtype='float64'
dtype=np.float64
dtype='f8'
dtype='d'
dtype=np.dtype(float)
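These equivalences can be verified directly; a minimal check (not part of the original message):

```python
import numpy as np

# Every spelling below resolves to the same dtype object via np.dtype(x).
specs = [float, 'float', 'float64', np.float64, 'f8', 'd', np.dtype(float)]
dtypes = {np.dtype(s) for s in specs}
print(dtypes)  # a single-element set: {dtype('float64')}
```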
On Fri, Jun 17, 2011 at 15:18, Benjamin Root wrote:
> On Fri, Jun 17, 2011 at 3:07 PM, Robert Kern wrote:
>>
>> On Fri, Jun 17, 2011 at 15:03, Benjamin Root wrote:
>> > Using the master branch, I was running the scipy tests when a crash
>> > occurred. I be
mpy/reference/generated/numpy.save.html
http://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html
(Note the mmap_mode argument)
https://raw.github.com/numpy/numpy/master/doc/neps/npy-format.txt
's
approach seems to be very well-liked by a lot of users. In essence,
*that's* the "missing data problem" that you were charged with: making
happy the users who are currently dissatisfied with masked arrays. It
doesn't seem to me that moving the functionality from numpy.ma to
he simple numpy array model.
>
> I wonder how much of the complication could be located in the dtype.
What dtype? There are no new dtypes in this proposal.
On Thu, Jun 23, 2011 at 17:05, Charles R Harris
wrote:
>
> On Thu, Jun 23, 2011 at 4:04 PM, Robert Kern wrote:
>>
>> On Thu, Jun 23, 2011 at 17:02, Charles R Harris
>> wrote:
>> >
>> > On Thu, Jun 23, 2011 at 3:48 PM, Gael Varoquaux
>> > wro
On Fri, Jun 24, 2011 at 06:47, Matthew Brett wrote:
> Hi,
>
> On Thu, Jun 23, 2011 at 10:44 PM, Robert Kern wrote:
>> On Thu, Jun 23, 2011 at 15:53, Mark Wiebe wrote:
>>> Enthought has asked me to look into the "missing data" problem and how NumPy
>>>
alternative proposal would be to add a few new dtypes that are
NA-aware. E.g. an nafloat64 would reserve a particular NaN value
(there are lots of different NaN bit patterns, we'd just reserve one)
that would represent NA. An naint32 would probably reserve the most
negative int32 value (like R
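The reserved-bit-pattern idea was never implemented in NumPy itself; a hand-rolled sketch of what it would mean (the names NA_BITS and isna, and the particular payload, are invented for illustration):

```python
import numpy as np

# Invented for illustration: claim one specific quiet-NaN bit pattern as "NA".
NA_BITS = np.uint64(0x7FF80000000000A5)
NA = np.array([NA_BITS], dtype=np.uint64).view(np.float64)[0]

def isna(x):
    # Compare raw bit patterns, since NA != NA under IEEE NaN comparison rules.
    return np.asarray(x).view(np.uint64) == NA_BITS

a = np.empty(3)
a[0], a[1], a[2] = 1.0, NA, 3.0
print(isna(a))      # the NA is distinguishable from ordinary values...
print(np.isnan(a))  # ...while still behaving as a NaN in arithmetic
```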
for 1D arrays.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
On Fri, Jun 24, 2011 at 09:24, Keith Goodman wrote:
> On Fri, Jun 24, 2011 at 7:06 AM, Robert Kern wrote:
>
>> The alternative proposal would be to add a few new dtypes that are
>> NA-aware. E.g. an nafloat64 would reserve a particular NaN value
>> (there are lots of di
On Fri, Jun 24, 2011 at 09:33, Charles R Harris
wrote:
>
> On Fri, Jun 24, 2011 at 8:06 AM, Robert Kern wrote:
>> The alternative proposal would be to add a few new dtypes that are
>> NA-aware. E.g. an nafloat64 would reserve a particular NaN value
>> (there are l
On Fri, Jun 24, 2011 at 09:35, Robert Kern wrote:
> On Fri, Jun 24, 2011 at 09:24, Keith Goodman wrote:
>> On Fri, Jun 24, 2011 at 7:06 AM, Robert Kern wrote:
>>
>>> The alternative proposal would be to add a few new dtypes that are
>>> NA-aware. E.g. an nafloat
On Fri, Jun 24, 2011 at 10:07, Laurent Gautier wrote:
> On 2011-06-24 16:43, Robert Kern wrote:
>>
>> On Fri, Jun 24, 2011 at 09:33, Charles R Harris
>> wrote:
>>>
>>> >
>>> > On Fri, Jun 24, 2011 at 8:06 AM, Robert Kern
>>> >
On Fri, Jun 24, 2011 at 10:02, Pierre GM wrote:
>
> On Jun 24, 2011, at 4:44 PM, Robert Kern wrote:
>
>> On Fri, Jun 24, 2011 at 09:35, Robert Kern wrote:
>>> On Fri, Jun 24, 2011 at 09:24, Keith Goodman wrote:
>>>> On Fri, Jun 24, 2011 at 7:06 AM, Robert Ker
On Fri, Jun 24, 2011 at 11:05, Nathaniel Smith wrote:
> On Fri, Jun 24, 2011 at 8:14 AM, Robert Kern wrote:
>> On Fri, Jun 24, 2011 at 10:07, Laurent Gautier wrote:
>>> May be there is not so much need for reservation over the string NA, when
>>> making the dist
here you have gridded data, unmasking and remasking can be
quite useful. They are complementary tools.
you can also be clever (this is a Python 2 idiom: the 'c' typecode and
eagerly-evaluated map() are gone in Python 3):
import array
import numpy as np
a = array.array('c')
map(a.extend, your_generator_of_4strings)
b = np.frombuffer(a, dtype=np.int32)
On Tue, Jun 21, 2011 at 13:35, Christopher Barker wrote:
> Robert Kern wrote:
>> https://raw.github.com/numpy/numpy/master/doc/neps/npy-format.txt
>
> Just a note. From that doc:
>
> """
> HDF5 is a complicated format that more or less implements
On Mon, Jun 27, 2011 at 11:17, Derek Homeier
wrote:
> On 21.06.2011, at 8:35PM, Christopher Barker wrote:
>
>> Robert Kern wrote:
>>> https://raw.github.com/numpy/numpy/master/doc/neps/npy-format.txt
>>
>> Just a note. From that doc:
>>
>> ""
could be deferred to inside
load_library() in ctypeslib.py, yes.
On Wed, Jul 13, 2011 at 00:36, Long Duong wrote:
>
> Does anybody know if there are there videos of the conference this year?
Yes. Announcements will be made when they start going online.
es which
aren't relevant here). For array-array and scalar-scalar, the
"largest" dtype wins. So for the first case, array-scalar, bad_val
gets downcasted to float32. For the latter case, bad_val remains
float64 and upcasts c to float64.
Try this:
bad_val = np.float32(10) * a.max()
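The distinction can be checked directly. (The sketch below sticks to the cases whose behavior is the same before and after NEP 50 changed scalar promotion in NumPy 2.0; `a` is assumed to be a float32 array as in the discussion.)

```python
import numpy as np

a = np.ones(3, dtype=np.float32)

# array op Python scalar: the array dtype wins, so the result stays float32
print((a * 10.0).dtype)

# forcing the scalar to float32 up front removes any upcasting question
bad_val = np.float32(10) * a.max()
print((a * bad_val).dtype)
```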
ply inform users that PEP 3118 exists as an
> alternative
>
> http://docs.scipy.org/doc/numpy/reference/arrays.interface.html
>
> Thus my confusion...
It was an error in the 1.4 documentation that was fixed.
nto it where A
!= 0. This makes you do the comparisons twice.
Or you can allocate a B array the same size as A, run your loop to
assign into it when A != 0 and incrementing the index into B, then
slice out or memcpy out the portion that you assigned.
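In pure NumPy, both strategies usually reduce to computing the comparison once and indexing with the result; a sketch with made-up data:

```python
import numpy as np

A = np.array([0.0, 3.0, 0.0, 5.0, 7.0, 0.0])
idx = np.flatnonzero(A != 0)  # one pass over the comparisons
B = A[idx]                    # compacted values, allocated at the right size
print(B)  # [3. 5. 7.]
```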
include']
>>>
>>> These defaults won't work on the forthcoming Ubuntu 11.10, which
>>> installs X into /usr/lib/X11 and /usr/include/X11.
>
> Did you check that some compilation fails because of this?
Enthought's Enable will probably fail. It uses the syste
particular values and that it was a boolean
array. Instead, it tested the precise bytes of the repr of the array.
The repr of ndarrays are not a stable API, and we don't make
guarantees about the precise details of its behavior from version to
version. doctests work better to test simpler types
r what it's worth, I have found this paper by James Diebel to be the
most complete listing of all of the different conventions and
conversions amongst quaternions, Euler angles, and rotation vectors:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.5134
ing a
quick glance at the pull request, it looks like the filename is
included in the message, so the warning will appear once for each file
that needs to be warned about. This seems entirely appropriate
behavior to me.
On Sat, Aug 20, 2011 at 17:37, Chris Withers wrote:
> Hi All,
>
> What's the best type of array to use for decimal values?
> (ie: where I care about precision and want to avoid any possible
> rounding errors)
dtype=object
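With dtype=object the array holds Python objects, so decimal.Decimal keeps its exact arithmetic; a minimal sketch:

```python
import numpy as np
from decimal import Decimal

a = np.array([Decimal('0.1'), Decimal('0.2'), Decimal('0.3')], dtype=object)
print(a.sum())          # Decimal('0.6'), exact
print(0.1 + 0.2 + 0.3)  # not 0.6 under binary float64
```

The tradeoff is speed: object arrays dispatch every operation through Python, so they are far slower than native float64.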
On Sat, Aug 20, 2011 at 17:49, Chris Withers wrote:
> On 20/08/2011 15:38, Robert Kern wrote:
>> On Sat, Aug 20, 2011 at 17:37, Chris Withers wrote:
>>> Hi All,
>>>
>>> What's the best type of array to use for decimal values?
>>> (ie: where I c
On Mon, Aug 22, 2011 at 10:07, Chris Withers wrote:
> On 22/08/2011 00:18, Mark Dickinson wrote:
>> On Sun, Aug 21, 2011 at 1:08 AM, Robert Kern wrote:
>>> You may want to try the cdecimal package:
>>>
>>> http://pypi.python.org/pypi/cdecimal/
>>
>&
to know when to lock the region by mprotect again.
Well, if you're willing to go *that* far, you might as well make a
userspace file system with fuse and mmap a file within that.
http://fuse.sourceforge.net/
You can even implement it in Python!
http://pypi.python.org/pypi/fuse-python
ht
nderlying
the str object. This is a hack, but it's the only way to avoid copying
potentially large amounts of data. This is the cause of the unaligned
memory.
under': 'ignore'}
[~]
|4> np.log([-1])
---
FloatingPointError Traceback (most recent call last)
/Users/rkern/ in ()
> 1 np.log([-1])
FloatingPointError: invalid value encount
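The session above corresponds to raising on invalid floating-point operations; as a self-contained snippet using np.errstate:

```python
import numpy as np

with np.errstate(invalid='raise'):
    try:
        np.log([-1.0])
        caught = None
    except FloatingPointError as e:
        caught = e
print(caught)  # invalid value encountered in log
```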
t */
ret = PyArray_Type.tp_as_number->nb_multiply(m1, m2);
}
return ret;
}
The PyInt_AsLong() calls should be changed to check for
__index__ability, instead. Not sure about the other operators. Some
people *may* be relying on the coerce-sequences-to-ndarray behavior
with numpy scal
aise an exception.
What is an unambiguous bug is the behavior of * with a *float* scalar.
It should never have the "repeat" semantics, no matter what.
a matrix
> that has linear dependent columns and other times I get the LinAlgError()…
> this suggests that there is some kind of random component to the INV
> method. Is this normal? Thanks much ahead of time,
With exactly the same input in the same process? Can you provide that input?
On Tue, Aug 30, 2011 at 18:34, Mark Janikas wrote:
> When I export to ascii I am losing precision and it getting consistency... I
> will try a flat dump. More to come. TY
Might as well np.save() it to an .npy binary file and attach it.
involved with inversion. But if you
use an optimized LAPACK from some vendor, I don't know what they may
be doing. Some optimized LAPACK/BLAS libraries may be threaded and may
dynamically determine how to break up the problem based on load (I
don't know of any that specifically do this, bu
Ns in.
[~]
|10> a = np.array([4.5, 6.7, 8.0, 9.0, 0.1])
[~]
|11> b = np.array([0.0001, 6.7, 8.0, 9.0, 0.1])
[~]
|12> mask = (a < 0.01) | (b < 0.01)
[~]
|13> ma = np.ma.masked_array(a, mask=mask)
[~]
|14> mb = np.ma.masked_array(b, mask=mask)
[~]
|15> ma / mb
masked_
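The truncated session above, reconstructed as a runnable snippet:

```python
import numpy as np

a = np.array([4.5, 6.7, 8.0, 9.0, 0.1])
b = np.array([0.0001, 6.7, 8.0, 9.0, 0.1])
mask = (a < 0.01) | (b < 0.01)   # flag near-zero entries in either array
ma = np.ma.masked_array(a, mask=mask)
mb = np.ma.masked_array(b, mask=mask)
q = ma / mb
print(q)  # the first element is masked instead of blowing up
```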
ule here.
> I can't think of many other instances of aliased functions like that
> in numpy, though--but perhaps I'm not thinking hard enough. It
> certainly seemed strange to have 4 names for the same function.
numpy.random was actually replacing multiple libraries at once. The
ali
choice() makes sense, in analogy to random.choice() from the
standard library.
On Thu, Sep 1, 2011 at 22:31, Christopher Jordan-Squire wrote:
> On Thu, Sep 1, 2011 at 11:14 PM, Robert Kern wrote:
>> On Thu, Sep 1, 2011 at 22:07, Christopher Jordan-Squire
>> wrote:
>>
>>> So in the mean time, are there any suggestions for what this R sample
ursed
upon; i.e. which sequences create another dimension and which are the
atomic elements. Otherwise, it has to make some guesses and do some
intelligent backtracking. It's not that intelligent.
On Sun, Sep 11, 2011 at 14:44, Travis Vaught wrote:
>
> On Sep 11, 2011, at 2:58 AM, Robert Kern wrote:
>
>> On Sun, Sep 11, 2011 at 00:30, Travis Vaught wrote:
>>> Greetings,
>>>
>>> Is there a particular reason why a list of lists can't be passed
but it's harder to understand what is going on than just using four
separate lines, and no easier to maintain.
Don't use eval() or locals().
worse, not better. It's also
unreliable. The locals() dictionary is meant to be read-only (and even
then for debugger tooling and the like, not regular code), and this is
sometimes enforced. If you want to use variable names instead of
dictionaries, use them, but write o
t.
No. Even lists are "array-like" in the terminology of the docstring
standard. Anything that np.asarray() or np.asanyarray() can accept is
"array-like". Please stop making things up and being sanctimonious
about it.
>>> here IF you think a pandas object is either array-like or a numpy
>>> object.
>>
>> No, the reason it is failing is because np.std takes the
>> EAFP/duck-typing approach:
>>
>> try:
>> std = a.std
>> except AttributeError:
>> re
both work. There's also no attempt at broadcasting indexes.
If the array is short in a dimension, it gets implicitly continued
with Falses. You can see this in one dimension:
[~]
|1> x = np.arange(12)
[~]
|2> x[np.array([True, False, True])]
array([0, 2])
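That implicit continuation with Falses was later removed; current NumPy requires a boolean index to match the axis length, and the old semantics can be recovered explicitly via nonzero():

```python
import numpy as np

x = np.arange(12)
mask = np.array([True, False, True])
try:
    short_ok = x[mask]        # modern NumPy raises IndexError here
except IndexError:
    short_ok = None
print(x[mask.nonzero()])      # [0 2] -- the old implicit behavior, made explicit
```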
I honestly don't know
ay be doing something different to load files
that have been installed into site-packages rather than other
locations. Unfortunately, I don't know a workaround other than to
modify all of our __init__.py file to do more explicit imports rather
than relying on the implicit behavior of the full Py
ink
nosetests does either (well, only for its plugin mechanism, but that's
neither here nor there). In any case, I've tried importing
pkg_resources while running from the build/lib.*/ directory and still
get the same errors as reported by Sandro.
'numpy'
Still, it shouldn't segfault, and it's worth figuring out why it does.
gdb has been mostly unenlightening for me since gdb won't let me
navigate the traceback.
the NumPy Example List. Since the
> website does not offer any wiki-type
>
> functionality for reverting changes or referring to a history of changes,
> there doesn't seem to be a way for me to fix the problem.
http://www.scipy.org/Numpy_Example_List?action=info
ted could comment on it,
>> Benjamin Root did so, for instance. The fact things didn't go the way you
>> wanted doesn't indicate insufficient discussion. And you are certainly
>> welcome to put together an alternative and put up a pull request.
>
> I was also guessing
qrt, variances)
[~]
|23> cor = 0.7
[~]
|24> cov = np.array([[Sx, cor*sx*sy], [cor*sy*sx, Sy]])
[~]
|26> samples = np.random.multivariate_normal(means, cov, 1)
[~]
|27> cov
array([[ 100., 221.35943621],
[ 221.35943621,
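The covariance construction in the truncated session can be reproduced as follows (the variances 100 and 1000, and the zero means, are inferred from the printed matrix, so treat them as assumptions):

```python
import numpy as np

variances = np.array([100.0, 1000.0])   # inferred from the printed matrix
sx, sy = np.sqrt(variances)
cor = 0.7
cov = np.array([[variances[0], cor * sx * sy],
                [cor * sy * sx, variances[1]]])
means = np.zeros(2)                     # not shown in the excerpt; assumed
samples = np.random.multivariate_normal(means, cov, size=10000)
print(np.corrcoef(samples.T)[0, 1])     # should come out near 0.7
```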
array([ 9.09090909, 5.64593301, 5.26315789, 13., 6.6667])
This may be what you are doing already. I'm not sure what is in your
getx() and gety() methods. If so, then I think you are on the right
track. If you still have problems, then we might need to see some of
the pro
ually,
> but it may be more appropriate.
No, you do want to compute the interpolated values at the boundaries
of the new bins. Then differencing the values at the boundaries will
give you the correct values for the "mass" between the bounds.
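That is: build the cumulative "mass", interpolate it at the new bin boundaries, then difference. A sketch with invented bins:

```python
import numpy as np

old_edges = np.array([0.0, 1.0, 2.0, 3.0])
counts = np.array([4.0, 6.0, 2.0])          # mass in each old bin

# cumulative mass at the old boundaries
cum = np.concatenate([[0.0], np.cumsum(counts)])

# interpolate the cumulative curve at the new boundaries, then difference
new_edges = np.array([0.0, 1.5, 3.0])
new_counts = np.diff(np.interp(new_edges, old_edges, cum))
print(new_counts)  # [7. 5.] -- the total mass (12) is conserved
```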
a platform where
Python's int type is 64-bits, numpy.int64 will include int in its
inheritance tree. On platforms where the Python int type is 32-bit,
numpy.int32 will include it instead.
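That inheritance was a Python-2-era detail; under Python 3, where int is unbounded, numpy's fixed-width integer scalars no longer subclass it:

```python
import numpy as np

# Python 3: np.int64 is not a subclass of the builtin int
print(issubclass(np.int64, int))        # False
# conversion is still exact, though
print(int(np.int64(2**40)) == 2**40)    # True
```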
pon a pull request that
> implements a better algorithm. It's just work, as you know given your
> contributions to other project.
Actually, last time I suggested it, it was brought up that the online
algorithms can be worse numerically. I'll try to find the thread.
think
we have backed away from that since the cost of maintaining the build
configuration for all of those different backends was so high. It's
worth noting that numpy.fft is already using a C translation of
FFTPACK. I'm not sure what the differences are between this
translation and Mart
ines with functions signature-compatible with those in
numpy.linalg.
chical object with all of the parameters as attributes.
ning
> to dtype directly is not a dangerous thing to do.
It's no worse than .view(dt). The same kind of checking goes on in both places.
do with multiple interpreters, and will not let you use
multiple CPython interpreters in your application. The problem is that
Python does not have good isolation between multiple interpreters for
extension modules. Many extension modules happen to work in this
environment, but numpy is not o
On Fri, Dec 9, 2011 at 11:00, Yang Zhang wrote:
> Thanks for the clarification. Alas. So is there no simple workaround
> to making numpy work in environments such as Jepp?
I don't think so, no.
On Fri, Dec 9, 2011 at 13:18, Pierre Haessig wrote:
> Le 09/12/2011 09:31, Robert Kern a écrit :
>> We have some global state
>> that we need to keep, and this gets interfered with in a multiple
>> interpreter environment.
> I recently got interested in multiprocessing co
]])
[~]
|4> i = np.argmin(y, axis=0)
[~]
|5> y[i, np.arange(y.shape[1])]
array([0, 0, 0, 0, 0])
[~]
|6> y[np.argmin(y, axis=0), np.arange(y.shape[1])]
array([0, 0, 0, 0, 0])
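The pattern pairs argmin's row indices with a column index per column; reconstructed with an invented y, since the excerpt cut off the array:

```python
import numpy as np

y = np.array([[3, 0, 2, 5, 1],
              [0, 4, 0, 0, 0]])
i = np.argmin(y, axis=0)                # row index of each column's minimum
picked = y[i, np.arange(y.shape[1])]    # fancy indexing: one element per column
print(picked)                            # the per-column minima
```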
ed. I can certainly understand if it
causes bad interactions with the garbage collector, say (though hiding
information from the GC seems like a suboptimal approach).
l side-effects or if it is
> the right way to use numpy with cython.
Yes, it is the right way to do it. Cython can *mostly* tell from
context whether to resolve NPY. from either the C side or
the Python side, but sometimes it's ambiguous. It's more often
ambiguous to the human reader, to
> finished.
They are different files. In order to determine your configuration,
the setup.py will try to compile many different small C programs. All
of them are named _configtest. You can ignore all of these unless
you think the discovered configuration is incorrect somehow. A
"failure&qu
f arbitrary mixtures of objects), then the stricture is obeyed. This
is a useful domain that is used internally in numpy.
Is this the problem that you found?
one all of the macro-optimizations that you can.
ndom.getstate() and
np.random.RandomState.get_state() and their associated setter
functions. You really just need to reformat the information to be
acceptable to the other.
On Fri, Dec 30, 2011 at 18:57, Andreas Kloeckner
wrote:
> Hi Robert,
>
> On Tue, 27 Dec 2011 10:17:41 +0000, Robert Kern wrote:
>> On Tue, Dec 27, 2011 at 01:22, Andreas Kloeckner
>> wrote:
>> > Hi all,
>> >
>> > Two questions:
>> >
> Segmentation fault (core dumped)
Can you provide an example that replicates the crash? Since it looks
like you have a core dump handy, can you get a gdb backtrace to show
us where the crash is? Platform details would also be handy.
en() calls (ENOENT) as the interpreter tries to
>> find the module files.
>
> It sounds like there is a scalability problem with imp.find_module. I'd
> report
> this on python-dev or python-ideas.
It's well-known.
a ramdisk capability?
On Fri, Jan 13, 2012 at 21:42, Sturla Molden wrote:
> Den 13.01.2012 22:24, skrev Robert Kern:
>> Do these systems have a ramdisk capability?
>
> I assume you have seen this as well :)
>
> http://www.cs.uoregon.edu/Research/paracomp/papers/iccs11/iccs_paper_final.pdf
I h
wever I had no joy compiling this on Mac OS X
> with QuickTime support. Is this the best bet?
I've had luck with pyffmpeg, though I haven't tried QuickTime .mov files:
http://code.google.com/p/pyffmpeg/
On Tue, Jan 17, 2012 at 05:11, Andreas Kloeckner
wrote:
> Hi Robert,
>
> On Fri, 30 Dec 2011 20:05:14 +0000, Robert Kern wrote:
>> On Fri, Dec 30, 2011 at 18:57, Andreas Kloeckner
>> wrote:
>> > Hi Robert,
>> >
>> > On Tue, 27 Dec 2011 10:17:41 +0
On Wed, Jan 18, 2012 at 10:19, Peter
wrote:
> Sending this again (sorry Robert, this will be the second time
> for you) since I sent from a non-subscribed email address the
> first time.
>
> On Sun, Jan 15, 2012 at 7:12 PM, Robert Kern wrote:
>> On Sun, Jan 15, 2012
tiple Py_DECREFs). This is probably being hidden from you by the
boost.python interface and/or the boost::detail::sp_counted_impl_p<>
smart(ish) pointer. Check the backtrace where your code starts to
verify if this looks to be the case.
can pin down which object has an
> unbalanced amount - to do this I want to know the address of the
> array, rather than the associated datatype descriptor - I assume I
> want to pay attention to the (self=0x117e0e850) in this line, and that
> is the address of the array I am mishandling?
>
machine that wasn't a DEC Alpha at the
time, so I knew very little about the issues.
So yes, please, fix whatever you can.
I'm sorry, what are you demonstrating there?
On Tue, Jan 24, 2012 at 09:19, Sturla Molden wrote:
> On 24.01.2012 10:16, Robert Kern wrote:
>
>> I'm sorry, what are you demonstrating there?
>
> Both npy_intp and C long are used for sizes and indexing.
Ah, yes. I think Travis added the multiiter code to cont1_array(),
on(len(b))
[~/scratch]
|31> ps = p.argsort()
[~/scratch]
|41> p
array([2, 3, 5, 4, 6, 1, 0])
[~/scratch]
|42> ps
array([6, 5, 0, 1, 3, 2, 4])
[~/scratch]
|43> ps[loi]
array([5, 2, 4])
ng = np.random.RandomState(1234567890)
>>> blah = prng.binomial(5, 0.5, size=551)
If you run python under gdb, you can then set a breakpoint in
rk_binomial_btpe() in distributions.c to step through the next call to
prng.binomial(). Sometimes you can fix these issues in a rejection
al
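The reproduction is deterministic: RandomState guarantees the same stream for the same seed, so any misbehaving draw can be replayed under the debugger. A runnable form of the setup quoted above:

```python
import numpy as np

prng = np.random.RandomState(1234567890)
blah = prng.binomial(5, 0.5, size=551)
print(blah.shape, blah.min(), blah.max())  # 551 draws, each in [0, 5]
```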
e I managed to explain the problem well. Is there a recommended way
> to test for empty arrays?
[~]
|5> x = np.zeros([0])
[~]
|6> x
array([], dtype=float64)
[~]
|7> x.size == 0
True
Note that checking for len(x) will fail for some empty arrays:
[~]
|8> x = np.zeros([10, 0]
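Completing that truncated example (the continuation is assumed from the surrounding text):

```python
import numpy as np

x = np.zeros([10, 0])   # ten rows, zero columns: no elements at all
print(x.size == 0)      # True -- size catches it
print(len(x))           # 10 -- len() only looks at the first axis
```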
On Fri, Jan 27, 2012 at 21:17, Emmanuel Mayssat wrote:
> In [20]: dt_knobs =
> [('pvName',(str,40)),('start','float'),('stop','float'),('mode',(str,10))]
>
> In [21]: r_knobs = np.recarray([],dtype=dt_knobs)
>
> In [22]: r_knobs
> Out[22]:
> rec.array(('\xa0\x8c\xc9\x02\x00\x00\x00\x00(\xc8v\x02\x
== 0
None should rarely be treated the same as an empty list or a 0-size
array, so that should be left to application-specific code.
to
explicitly use numpy.fromiter() to convert them to ndarrays. asarray()
and array() can't do it in general because they need to autodiscover
the shape and dtype all at the same time.
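For example (with an invented generator), fromiter only needs the dtype up front and consumes the iterator in one pass:

```python
import numpy as np

squares = (i * i for i in range(6))
a = np.fromiter(squares, dtype=np.int64)   # dtype must be given explicitly
print(a)  # [ 0  1  4  9 16 25]
```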
4a08>, dtype=object)
On Tue, Jan 31, 2012 at 15:35, Benjamin Root wrote:
>
>
> On Tue, Jan 31, 2012 at 9:18 AM, Robert Kern wrote:
>>
>> On Tue, Jan 31, 2012 at 15:13, Benjamin Root wrote:
>>
>> > Is np.all() using np.array() or np.asanyarray()? If the latter, I would
>>
ge is also in np.asarray(). The only
additional feature of np.asanyarray() is that it does not convert
ndarray subclasses like matrix to ndarray objects. np.asanyarray()
does not accept more types of objects than np.asarray().
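The matrix case shows the difference (np.matrix is deprecated these days, but it still illustrates the point):

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])
print(type(np.asarray(m)))     # numpy.ndarray -- the subclass is stripped
print(type(np.asanyarray(m)))  # numpy.matrix  -- the subclass is preserved
```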
least.
I would rather we deprecate the all() and any() functions in favor of
the alltrue() and sometrue() aliases that date back to Numeric.
Renaming them to match the builtin names was a mistake.
people kvetch about it. I
don't like removing long-standing, documented features based on
suspicions that their user base is small. Our suspicions and
intuitions about such things aren't worth much.
ine
segments and circular arcs. Parts of these curves will be too close to
the reference curve. You will have to go through these curves to find
the locations of self-intersection and remove the parts of the
segments and arcs that are too close to the reference curve. This is
tricky to do, but the fo