On Jan 24, 2012, at 1:33 PM, K.-Michael Aye wrote:
I know I know, that's pretty outrageous to even suggest, but please
bear with me, I am stumped as you may be:
2-D data file here:
http://dl.dropbox.com/u/139035/data.npy
Then:
In [3]: data.mean()
Out[3]: 3067.024383998
In [4]:
You have a million 32-bit floating point numbers that are in the
thousands. Thus you are exceeding the 32-bit float precision and, if you
can, you need to increase the precision of the accumulator in np.mean() or
change the input dtype:
a.mean(dtype=np.float32) # default and lacks precision
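A hedged sketch of the accumulator effect (the thread's data.npy is gone, so the array below is invented; on 2012-era numpy the float32 accumulator drifted visibly, while modern pairwise summation narrows the gap, but the float64 accumulator remains the robust fix):

```python
import numpy as np

# Invented stand-in for the thread's data.npy: a million float32
# values in the low thousands.
data = np.full(1_000_000, 3067.0243, dtype=np.float32)

m32 = data.mean(dtype=np.float32)  # float32 accumulator: prone to rounding drift
m64 = data.mean(dtype=np.float64)  # float64 accumulator: accurate result
```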
a[x,y,:]
Read the slicing part of the tutorial:
http://www.scipy.org/Tentative_NumPy_Tutorial
(section 1.6)
And the documentation:
http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
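A minimal sketch of the `a[x,y,:]` idiom from the reply (array contents invented):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
row = a[0, 1, :]   # all elements along the last axis at x=0, y=1
# A trailing ':' can be omitted: a[0, 1] is the same view.
```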
On Jan 30, 2012, at 10:25 AM, Ted To wrote:
Hi,
Is there some straightforward way to access
value but not element index.
2012/1/30 Zachary Pincus zachary.pin...@yale.edu
a[x,y,:]
Read the slicing part of the tutorial:
http://www.scipy.org/Tentative_NumPy_Tutorial
(section 1.6)
And the documentation:
http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
On Jan 30
Thanks! That works great if I only want to search over one index but I
can't quite figure out what to do with more than a single index. So
suppose I have a labeled, multidimensional array with labels 'month',
'year' and 'quantity'. a[['month','year']] gives me an array of indices
but
How about the following?
exact: numpy.all(a == a[0])
inexact: numpy.allclose(a, a[0])
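A quick sketch of the exact-vs-inexact distinction (arrays invented):

```python
import numpy as np

exact = np.array([3.0, 3.0, 3.0])
noisy = np.array([3.0, 3.0, 3.0 + 1e-12])  # equal only up to float noise

all_equal_exact = np.all(exact == exact[0])     # True: bitwise comparison
all_equal_noisy = np.all(noisy == noisy[0])     # False: bitwise comparison
all_close_noisy = np.allclose(noisy, noisy[0])  # True: tolerance-based
```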
On Mar 5, 2012, at 2:19 PM, Keith Goodman wrote:
On Mon, Mar 5, 2012 at 11:14 AM, Neal Becker ndbeck...@gmail.com wrote:
What is a simple, efficient way to determine if all elements in an array (in
my
Hi,
Is it possible to have a view of a float64 array that is itself float32?
So that:
A = np.arange(5, dtype='d')
A.view(dtype='f')
would return a size 5 float32 array looking at A's data?
I think not. The memory layout of a 32-bit IEEE float is not a subset of that
of a 64-bit float
str(numpy.array(128, dtype=numpy.float64).data)
'\x00\x00\x00\x00\x00\x00`@'
str(numpy.array(128, dtype=numpy.float32).data)
'\x00\x00\x00C'
There's obviously no stride trick whereby one will look like the other.
Zach
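To see why no stride trick can work, note what `view` actually does: it reinterprets bytes rather than converting values, so a converted copy via `astype` is the only route (sketch, array invented):

```python
import numpy as np

A = np.arange(5, dtype='d')
V = A.view(dtype='f')     # same 40 bytes reinterpreted as ten float32s -- garbage values
B = A.astype(np.float32)  # a converted *copy*: correct values, but not a view
```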
On Wed, Mar 21, 2012, at 11:19, Zachary Pincus wrote:
Hi,
Is it possible to have a view of a float64 array that is itself float32?
So
That all sounds like no option -- sad.
Cython is no solution because all I want is to leave Python syntax in
favor of strong OOP design patterns.
What about ctypes?
For straight numerical work where sometimes all one needs to hand across the
python-to-C/C++/Fortran boundary is a pointer to
Here's one way you could do it:
In [43]: indices = [0,1,2,3,5,7,8,9,10,12,13,14]
In [44]: jumps = where(diff(indices) != 1)[0] + 1
In [45]: starts = hstack((0, jumps))
In [46]: ends = hstack((jumps, len(indices)))
In [47]: slices = [slice(start, end) for start, end in zip(starts, ends)]
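Put together and applied to the example indices, the recipe groups consecutive runs (the truncated `zip(...)` line is completed here on the assumption that it pairs starts with ends):

```python
import numpy as np

indices = [0, 1, 2, 3, 5, 7, 8, 9, 10, 12, 13, 14]
jumps = np.where(np.diff(indices) != 1)[0] + 1   # positions where a run breaks
starts = np.hstack((0, jumps))
ends = np.hstack((jumps, len(indices)))
slices = [slice(int(s), int(e)) for s, e in zip(starts, ends)]
runs = [indices[sl] for sl in slices]            # the consecutive runs
```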
Hello all,
The below seems to be a bug, but perhaps it's unavoidably part of the indexing
mechanism?
It's easiest to show via example... note that using [0,1] to pull two columns
out of the array gives the same shape as using :2 in the simple case, but
when there's additional slicing
On Mon, May 14, 2012 at 4:33 PM, Zachary Pincus zachary.pin...@yale.edu
wrote:
The below seems to be a bug, but perhaps it's unavoidably part of the
indexing mechanism?
It's easiest to show via example... note that using [0,1] to pull two
columns out of the array gives the same shape
There is a fine line here. We do need to make people clean up lax code in
order to improve numpy, but hopefully we can keep the cleanups reasonable.
Oh agreed. Somehow, though, I was surprised by this, even though I keep tabs on
the numpy lists -- at no point did it become clear that big
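For readers landing here from the archive, the shape surprise under discussion looks roughly like this (array invented):

```python
import numpy as np

a = np.zeros((3, 4, 5))

# With a single list index, fancy indexing matches the equivalent slice:
s_slice = a[:, :2].shape         # (3, 2, 5)
s_fancy = a[:, [0, 1]].shape     # (3, 2, 5) -- same
# Combined with a scalar index, the fancy-indexed axis moves to the front:
t_slice = a[0, :, :2].shape      # (4, 2)
t_fancy = a[0, :, [0, 1]].shape  # (2, 4) -- the surprising case
```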
On Tue, Jun 5, 2012 at 8:41 PM, Zachary Pincus zachary.pin...@yale.edu
wrote:
There is a fine line here. We do need to make people clean up lax code
in order to improve numpy, but hopefully we can keep the cleanups
reasonable.
Oh agreed. Somehow, though, I was surprised by this, even
On 10.10.2012 15:42, Nathaniel Smith wrote:
This PR submitted a few months ago adds a substantial new API to numpy,
so it'd be great to get more review. No-one's replied yet, though...
Any thoughts, anyone? Is it useful, could it be better...?
Fast neighbor search is what
14, 2012 at 8:24 PM, Zachary Pincus zachary.pin...@yale.edu
wrote:
It would be useful for the author of the PR to post a detailed comparison of
this functionality with scipy.ndimage.generic_filter, which appears to have
very similar functionality.
I'll be durned. I created neighbor
I have 2D array, let's say: `np.random.random((100,100))` and I want to do
simple manipulation on each point neighbors, like divide their values by 3.
So for each array value, x, and its neighbors n:
n n n      n/3 n/3 n/3
n x n  ->  n/3  x  n/3
n n n      n/3 n/3 n/3
I searched a bit, and
You are right. I needed generic filter - to update current point, and not the
neighbors as I wrote.
Initial code is a slow loop over 2D python lists, which I'm trying to convert
to numpy and make useful. In that loop there is an inner loop for calculating
neighbors' properties, which confused
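The inner loop over neighbors can usually be replaced by whole-array shifts. A pure-numpy sketch (interior points only; scipy.ndimage.generic_filter is the full-featured route, with boundary handling):

```python
import numpy as np

a = np.arange(25, dtype=float).reshape(5, 5)

# 3x3 neighborhood sum over the interior: nine shifted whole-array adds
# instead of a Python loop over every pixel.
nbr_sum = sum(a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
```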
Thank you,
No, if the location (space, time, or depth) of choice is not
available, then the function I was looking for should give an interpolated
value at that choice.
with best regards,
Sudheer
scipy.ndimage.map_coordinates may be exactly what you want.
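For intuition: with order=1, map_coordinates does plain multilinear interpolation between grid points. A small pure-numpy sketch of that computation (grid invented, helper name hypothetical):

```python
import numpy as np

grid = np.arange(16, dtype=float).reshape(4, 4)

def bilinear(grid, r, c):
    # What scipy.ndimage.map_coordinates(grid, [[r], [c]], order=1) computes.
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    fr, fc = r - r0, c - c0
    return ((1 - fr) * (1 - fc) * grid[r0, c0]
            + (1 - fr) * fc * grid[r0, c0 + 1]
            + fr * (1 - fc) * grid[r0 + 1, c0]
            + fr * fc * grid[r0 + 1, c0 + 1])

val = bilinear(grid, 1.5, 2.5)  # halfway between four grid points
```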
- Original
And as you pointed out,
most of the time for non-trivial datasets the numpy operations will be
faster. (I'm daunted by the notion of trying to do linear algebra on
lists of tuples, assuming that's the relevant set of operations given
the comparison to the matrix class.)
Note the
As str objects are supposed to be immutable, I think anything
official that makes a string from a numpy array is supposed to copy
the data. But I think you can use ctypes to wrap a pointer and a
length as a python string.
Zach
On Sep 27, 2010, at 8:28 AM, Francesc Alted wrote:
Hi,
Here's a good list of basic geometry algorithms:
http://www.softsurfer.com/algorithms.htm
Zach
On Oct 6, 2010, at 5:08 PM, Renato Fabbri wrote:
suppose you have a line defined by two points, and a point. you want
the distance
what are the easiest possibilities? i am doing it, but it's nasty'n
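One standard possibility, sketched here (2-D, infinite line; the function name is invented): the distance is the cross-product magnitude divided by the segment length.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Distance from point p to the infinite line through a and b (2-D)."""
    p, a, b = map(np.asarray, (p, a, b))
    d = b - a
    # |cross(d, p - a)| / |d| is the perpendicular distance.
    return abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / np.hypot(*d)
```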
I'm trying to write an implementation of the amoeba function from
numerical recipes and need to be able to pass a function name and
parameter list to be called from within the amoeba function. Simply
passing the name as a string doesn't work since python doesn't know it
is a function and
Hi Robert,
so in a big data analysis framework, that is essentially written in C++,
exposed to python with SWIG, plus dedicated python modules, the user
performs an analysis choosing some given modules by name, as in:
myOpt=foo
my_analyse.perform(use_optimizer=myOpt)
The attribute
This is silly: the structure of the python language prevents
meaningful short-circuiting in the case of
np.any(a!=b)
While it's true that np.any itself may short-circuit, the 'a!=b'
statement itself will be evaluated in its entirety before the result
(a boolean array) is passed to np.any.
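A sketch of the evaluation order being described (arrays invented):

```python
import numpy as np

a = np.zeros(1000)
b = np.zeros(1000)
b[0] = 1.0            # mismatch at the very first element

neq = a != b          # the full boolean array is materialized first...
result = np.any(neq)  # ...and only then scanned, so no elementwise short-circuit
```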
Help! I'm having a problem in searching through the *elements* of a
2d array. I have a loop over a numpy array:
n,m = G.shape
print n,m
for i in xrange(n):
for j in xrange(m):
print type(G), type(G[i,j]), type(float(G[i,j]))
g =
But wouldn't the performance hit only come when I use it in this
way?
__getattr__ is only called if the named attribute is *not* found (I
guess it falls off the end of the case statement, or is the result
of
the attribute hash table miss).
That's why I said that __getattr__ would
On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:
On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
At first glance it looks as if a relaxation is simply not possible:
either there are additional rows or not.
But with some technical transformations it is possible to
mask = numpy.zeros(medical_image.shape, dtype=numpy.uint16)
mask[numpy.logical_and(medical_image >= lower, medical_image <=
upper)] = 255
Where lower and upper are the threshold bounds. Here I'm marking the
array positions where medical_image is between the threshold bounds
with 255, where
def repeat(arr, num):
arr = numpy.asarray(arr)
return numpy.ndarray(arr.shape+(num,), dtype=arr.dtype,
buffer=arr, strides=arr.strides+(0,))
There are limits to what these sort of stride tricks can accomplish,
but repeating as above, or similar, is feasible.
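Usage sketch for the zero-stride repeat above (input invented; note that every "copy" aliases the same memory, so the result should be treated as read-only):

```python
import numpy as np

def repeat(arr, num):
    arr = np.asarray(arr)
    return np.ndarray(arr.shape + (num,), dtype=arr.dtype,
                      buffer=arr, strides=arr.strides + (0,))

base = np.array([1, 2, 3])
r = repeat(base, 4)  # shape (3, 4) without copying any data
```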
On Jan 1, 2011, at 8:42
Is it possible to use a variable in an array name? I am looping
through a
bunch of calculations, and need to have each array as a separate
entity.
I'm pretty new to python and numpy, so forgive my ignorance. I'm
sure there
is a simple answer, but I can't seem to find it.
let's say
Thank you very much for the prompt response. I have already done
what you
have suggested, but there are a few cases where I do need to have an
array
named with a variable (looping through large numbers of unrelated
files and
calculations that need to be dumped into different
textlist = ["test1.txt", "test2.txt", "test3.txt"]
for i in textlist:
    text_file = open(textlist, "a")
    text_file.write("\nI suck at Python and need help")
    text_file.close()
But, this doesn't work. It gives me the error:
coercing to Unicode: need string or buffer, list found
I have been using python for a while now and I have a requirement of
creating a
numpy array of microscopic tiff images ( this data is 3d, meaning
there are
100 z slices of 512 X 512 pixels.) How can I create an array of
images?
It's quite straightforward to create a 3-d array to hold this
In a 1-d array, find the first point where all subsequent points
have values
less than a threshold, T.
Maybe something like:
last_greater = numpy.arange(arr.shape[0])[arr >= T][-1]
first_lower = last_greater + 1
There's probably a better way to do it, without the arange, though...
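Runnable version of that suggestion, with the comparison operator the archive ate restored as >= and a 1-d arr assumed (example data invented):

```python
import numpy as np

arr = np.array([5.0, 1.0, 6.0, 2.0, 1.0, 0.5])
T = 3.0
last_greater = np.arange(arr.shape[0])[arr >= T][-1]
first_lower = last_greater + 1  # all points from here on are below T
```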
On Feb 9, 2011, at 10:58 AM, Neal Becker wrote:
Zachary Pincus wrote:
In a 1-d array, find the first point where all subsequent points
have values
less than a threshold, T.
Maybe something like:
last_greater = numpy.arange(arr.shape[0])[arr >= T][-1]
first_lower = last_greater + 1
As before, the line below does what you said you need, though not
maximally efficiently. (Try it in an interpreter...) There may be
another way in numpy that doesn't rely on constructing the index
array, but this is the first thing that came to mind.
last_greater =
This assumes monotonicity. Is that allowed?
The twice-stated problem was:
[Note to avert email-miscommunications] BTW, I wasn't trying to snipe
at you with that comment, Josef...
I just meant to say that this solution solves the problem as Neal
posed it, though that might not be the exact
In a 1-d array, find the first point where all subsequent points
have values
less than a threshold.
This doesn't imply monotonicity.
Suppose with have a sin curve, and I want to find the last trough. Or
a business cycle and I want to find the last recession.
Unless my english
Here's a ctypes interface to FreeImage that I wrote a while back and
was since cleaned up (and maintained) by the scikits.image folk:
https://github.com/stefanv/scikits.image/blob/master/scikits/image/io/_plugins/freeimage_plugin.py
If it doesn't work out of the box on python 3, then it should
a, b, c = np.array([10]), np.array([2]), np.array([7])
min_val = np.minimum(a, b, c)
min_val
array([2])
max_val = np.maximum(a, b, c)
max_val
array([10])
min_val
array([10])
(I'm using numpy 1.4, and I observed the same behavior with numpy
2.0.0.dev8600 on another machine). I'm quite
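The explanation, per the replies in that thread: the third positional argument of a binary ufunc is the output array, so c gets overwritten. A sketch of the gotcha, with `reduce` as the safe three-way form:

```python
import numpy as np

a, b, c = np.array([10]), np.array([2]), np.array([7])
min_val = np.minimum(a, b, c)  # third positional arg is 'out', not a third input!
# min_val *is* c now; a later np.maximum(a, b, c) call overwrites it again.

# Safe elementwise minimum of three arrays:
three_way = np.minimum.reduce([np.array([10]), np.array([2]), np.array([7])])
```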
On Apr 26, 2011, at 2:31 PM, Daniel Lepage wrote:
You need PIL no matter what; scipy.misc.imread, scipy.ndimage.imread,
and scikits.image.io.imread all call PIL.
scikits.image.io also has a ctypes wrapper for the freeimage library.
I prefer these (well, I wrote them), though apparently
You could try:
src_mono = src_rgb.astype(float).sum(axis=-1) / 3.
But that speed does seem slow. Here are the relevant timings on my machine (a
recent MacBook Pro) for a 3.1-megapixel-size array:
In [16]: a = numpy.empty((2048, 1536, 3), dtype=numpy.uint8)
In [17]: timeit
Hello all,
As a result of the fast greyscale conversion thread, I noticed an anomaly
with numpy.ndarray.sum(): summing along certain axes is much slower with
sum() than doing it explicitly, but only with integer dtypes and when
the size of the dtype is less than the machine word. I
On Jun 21, 2011, at 1:16 PM, Charles R Harris wrote:
It's because of the type conversion sum uses by default for greater precision.
Aah, makes sense. Thanks for the detailed explanations and timings!
___
NumPy-Discussion mailing list
Hello Keith,
While I also echo Johann's points about the arbitrariness and non-utility of
benchmarking, I'll briefly comment on just a few tests to help out with
getting things into idiomatic python/numpy:
Tests 1 and 2 (an empty for loop and an empty procedure) are fairly
pointless and won't
I think the remaining delta between the integer and float boxcar smoothing is
that the integer version (test 21) still uses median_filter(), while the float
one (test 22) is using uniform_filter(), which is a boxcar.
Other than that and the slow roll() implementation in numpy, things look
I keep meaning to use matplotlib as well, but every time I try I also get
really turned off by the matlabish interface in the examples. I get that it's a
selling point for matlab refugees, but I find it counterintuitive in the same
way Christoph seems to.
I'm glad to hear the OO interface
As an example, it'd be nice to have scipy.ndimage available without the GIL:
http://docs.scipy.org/doc/scipy/reference/ndimage.html
Now, this *can* easily be done as the core is written in C++. I'm just
pointing out that some people may wish more for calling scipy.ndimage
inside their
Hello folks,
I recently was trying to write code to modify an array in-place (so
as not to invalidate any references to that array) via the standard
python idiom for lists, e.g.:
a[:] = numpy.flipud(a)
Now, flipud returns a view on 'a', so assigning that to 'a[:]'
provides pretty strange
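The safe version of the idiom, sketched: copy the view before assigning, so the in-place write cannot read memory it is simultaneously overwriting, while every existing reference still sees the change.

```python
import numpy as np

a = np.arange(6.0)
alias = a                    # a second reference to the same array object
a[:] = np.flipud(a).copy()   # flipud returns a view; copy it, then assign in place
```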
Zachary Pincus wrote:
Hello folks,
I recently was trying to write code to modify an array in-place (so
as not to invalidate any references to that array)
I'm not sure what this means exactly.
Say one wants to keep two different variables referencing a single in-
memory list, as so
A question, then: Does this represent a bug? Or perhaps there is a
better idiom for modifying an array in-place than 'a[:] = ...'? Or is
it incumbent on the user to ensure that any time an array is directly
modified, that the modifying array is not a view of the original
array?
Yes, it is
Hello all,
It seems that the 'eigh' routine from numpy.linalg does not follow
the same convention as numpy.linalg.eig in terms of the order of the
returned eigenvalues. (And thus eigenvectors as well...)
Specifically, eig returns eigenvalues in order from largest to
smallest, while eigh
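A sketch of the convention difference (in current numpy, eigh documents ascending order, while eig promises no order at all):

```python
import numpy as np

m = np.diag([3.0, 1.0, 2.0])        # symmetric, eigenvalues 1, 2, 3
w_eigh, v_eigh = np.linalg.eigh(m)  # ascending order, guaranteed
w_eig, v_eig = np.linalg.eig(m)     # arbitrary order
order = np.argsort(w_eig)           # sort eig's output before comparing
```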
Hello folks,
I've developed some command-line tools for biologists using python/
numpy and some custom C and Fortran extensions, and I'm trying to
figure out how to easily distribute them...
For people using linux, I figure a source distribution is no problem
at all. (Right?)
On the other
I have found that the python 'unicode name' escape sequence, combined
with the canonical list of unicode names ( http://unicode.org/Public/
UNIDATA/NamesList.txt ), is a good way of getting the symbols you
want and still keeping the python code legible.
From the above list, we see that the
Your results are indeed around zero.
numpy.allclose(0, 1.22460635382e-016)
True
It's not exactly zero because floating point math is in general not
exact. You'll need to check out a reference about doing floating
point operations numerically for more details, but in general you
should
Scipy's ndimage module has a function that takes a generic callback
and calls it with the values of each neighborhood (of a given size,
and optionally with a particular mask footprint) centered on each
array element. That function handles boundary conditions, etc nicely.
Unfortunately, I'm
Hi folks,
I've been doing a lot of web-reading on the subject, but have not
been completely able to synthesize all of the disparate bits of
advice about building python extensions as Mac-PPC and Mac-Intel fat
binaries, so I'm turning to the wisdom of this list for a few questions.
My
If I recall correctly, there's a bug in numpy 1.0.1 on Linux-x86-64
that causes this segfault. This is fixed in the latest SVN version of
numpy, so if you can grab that, it should work.
I can't find the trac ticket, but I ran into this some weeks ago.
Zach
On Mar 14, 2007, at 1:36 PM,
Hello all,
By the way, ringing at sharp edges is an intrinsic feature of higher-
order spline interpolation, right? I believe this kind of interpolant
is really intended for smooth (band-limited) data. I'm not sure why
the pre-filtering makes a difference though; I don't yet understand
well
Hello all,
On Mar 23, 2007, at 3:04 AM, Stefan van der Walt wrote:
On Thu, Mar 22, 2007 at 11:20:37PM -0700, Zachary Pincus wrote:
The actual transform operators then use these coefficients to
(properly) compute pixel values at different locations. I just
assumed that the pre-filtering
Hello folks,
Hmm, this is worrisome. There really shouldn't be ringing on
continuous-tone images like Lena -- right? (And at no step in an
image like that should gaussian filtering be necessary if you're
doing spline interpolation -- also right?)
That's hard to say. Just because it's
Thanks for the information and the paper link, James. I certainly
appreciate the perspective, and now see why the anti-aliasing and
reconstruction filtering might best be left to clients of a
resampling procedure.
Hopefully at least some of the kinks in the spline interpolation (to
date:
Hi folks,
Sorry to rain on this parade, but unicode variable names and/or other
syntactic elements have already been rejected for Python 3:
http://www.python.org/dev/peps/pep-3099/
Python 3000 source code won't use non-ASCII Unicode characters for
anything except string literals or
Since matrices are an iterable Python object,
we *expect* to iterate over the contained objects.
(Arrays.) I am not sure why this is not evident to all,
but it is surely the sticking point in this discussion.
A matrix is not a container of matrices.
That it acts like one is surprising.
Exactly: that was one other thing I found artificial.
Surely the points will then be wanted as arrays.
So my view is that we still do not have a use case
for wanting matrices yielded when iterating across
rows of a matrix.
It's pretty clear from my perspective: 1-D slices of matrices *must*
Hello all,
I suspect my previous email did not contain the full chain of my
reasoning, because I thought that some parts were basically obvious.
My point was merely that given some pretty basic fundamental tenets
of numpy, Alan's suggestions quickly lead to *serious issues* far
worse
Hi,
I have a specific question and then a general question, and some
minor issues for clarification.
Specifically, regarding the arguments to getbufferproc:
166  format
167      address of a format-string (following extended struct
168      syntax) indicating what is in each element of
169
Hello,
There's unique and unique1d, but these don't output the number of
occurences.
There's also bincount, which outputs the number of each element, but
includes zeros for non-present elements and so could be problematic
for certain data.
Zach
On Mar 27, 2007, at 2:28 PM, Pierre GM
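Worth noting for archive readers: later numpy (1.9+) grew exactly this feature as a keyword on unique, which sidesteps the bincount zero-padding issue:

```python
import numpy as np

a = np.array([3, 1, 3, 3, 7])
vals, counts = np.unique(a, return_counts=True)  # unique values and their counts
```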
Is this saying that either NULL or a pointer to B can be supplied
by getbufferproc to indicate to the caller that the array is unsigned
bytes? If so, is there a specific reason to put the (minor)
complexity of handling this case in the caller's hands, instead of
dealing with it internally to
Looks promising!
On 3/29/07, Bill Baxter [EMAIL PROTECTED] wrote: On 3/30/07,
Timothy Hochberg [EMAIL PROTECTED] wrote:
Note, however that you can't (for instance) multiply column
vector with
a row vector:
(c)(r)
Traceback (most recent call last):
...
TypeError: Cannot
Hello all,
I've got a few questions that came up as I tried to calculate various
statistics about an image time-series. For example, I have an array
of shape (t,x,y) representing t frames of a time-lapse of resolution
(x,y).
Now, say I want to both argsort and sort this time-series, pixel-
Hello Dave,
I don't know if this will be useful to your research, but it may be
worth pointing out in general. As you know PCA (and perhaps some
other spectral algorithms?) use eigenvalues of matrices that can be
factored out as A'A (where ' means transpose). For example, in the
PCA case,
Hello all,
I just recently updated to the SVN version of numpy to test my code
against it, and found that a small change made to
numpy.get_printoptions (it now returns a dictionary instead of a
list) breaks my code.
Here's the changeset:
http://projects.scipy.org/scipy/numpy/changeset/3877
On Jul 27, 2007, at 2:42 AM, Nils Wagner wrote:
I cannot reproduce the problem concerning #401. It is a Mac-specific
problem. Am I missing something ?
I can't reproduce this problem either. I just yesterday built scipy
from SVN on two different OS X 10.4.10 boxes, one using the fortran
Hello,
'len' is a (pretty basic) python builtin function for getting the
length of anything with a list-like interface. (Or more generally,
getting the size of anything that is sized, e.g. a set or dictionary.)
Numpy arrays offer a list-like interface allowing you to iterate
along their
Hello all,
On several occasions, I've had the need to find only the first
occurrence of a value in an unsorted numpy array. I usually use
numpy.where(arr==val)[0] or similar, and don't worry about the fact
that I'm iterating across the entire array.
However, sometimes the arrays are pretty
Thanks for the suggestions, everyone! All very informative and most
helpful.
For what it's worth, here's my application: I'm building a tool for
image processing which needs some manual input in a few places (e.g.
user draws a few lines). The images are greyscale images with 12-14
bits of
how to draw lines based on user input using matplotlib. It is
not suited for a big application, but useful for demonstrations.
Try it on
http://mentat.za.net/results/window.jpg
Regards
Stéfan
On Thu, Nov 29, 2007 at 11:59:05PM -0500, Zachary Pincus wrote:
Thanks for the suggestions
Hi all,
I use numpy's own ndindex() for tasks like these:
In: numpy.ndindex?
Type: type
Base Class: <type 'type'>
String Form: <class 'numpy.lib.index_tricks.ndindex'>
Namespace: Interactive
File: /Library/Frameworks/Python.framework/Versions/2.4/
Hello all,
That's well and good. But NumPy should *never* automatically -- and
silently -- chop the imaginary part off your complex array elements,
particularly if you are just doing an innocent assignment!
Doing something drastic like silently throwing half your data away can
lead to all
There are two related hierarchies of datatypes: different kinds
(integer,
floating point, complex floating point) and different precisions
within a given
kind (int8, int16, int32, int64). The term downcasting should
probably be
reserved for the latter only.
It seems to me that Zach
For large arrays, it makes sense to do automatic
conversions, as is also the case in functions taking output arrays,
because the typecast can be pushed down into C where it is time and
space efficient, whereas explicitly converting the array uses up
temporary space. However, I can imagine an
Hello all,
In order to help make things regarding this casting issue more
explicit, let me present the following table of potential down-casts.
(Also, for the record, nobody is proposing automatic up-casting of
any kind. The proposals on the table focus on preventing some or all
implicit
Hello all,
On my (older) version of numpy (1.0.4.dev3896), I found several
oddities in the handling of assignment of long-integer values to
integer arrays:
In : numpy.array([2**31], dtype=numpy.int8)
---
ValueError
attached.
Zach Pincus
Postdoctoral Fellow, Lab of Dr. Frank Slack
Molecular, Cellular and Developmental Biology
Yale University
# Copyright 2007 Zachary Pincus
#
# This is free software; you can redistribute it and/or modify
# it under the terms of the Python License version 2.4 as published
Check out this thread:
http://www.mail-archive.com/numpy-discuss...@lists.sourceforge.net/msg01154.html
In short, it can be done, but it can be tricky to make sure you don't
leak memory. A better option if possible is to pre-allocate the array
with numpy and pass that buffer into the C code --
Hi,
Having just written some cython code to iterate a neighborhood across
an array, I have some ideas about features that would be useful for a
general frame. Specifically, being able to pass in a footprint
boolean array to define the neighborhood is really useful in many
contexts. Also
Hi all,
I'm curious as to what the most straightforward way is to convert an
offset into a memory buffer representing an arbitrarily strided array
into the nd index into that array. (Let's assume for simplicity that
each element is one byte...)
Does sorting the strides from largest to
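For the C-contiguous case numpy already has the inverse; for general positive strides, the sort-and-divide idea looks like this sketch (helper name invented, offsets and strides in elements, not bytes):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
idx = np.unravel_index(13, a.shape)  # flat C-order offset -> nd index

def offset_to_index(offset, strides):
    # Divide by each stride, largest first (assumes positive, C-ordered
    # element strides, e.g. (12, 4, 1) for shape (2, 3, 4)).
    index = []
    for s in strides:
        index.append(offset // s)
        offset %= s
    return tuple(index)
```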
Not to be a downer, but this problem is technically NP-complete. The
so-called knapsack problem is to find a subset of a collection of
numbers that adds up to the specified number, and it is NP-complete.
Unfortunately, it is exactly what you need to do to find the indices
to a particular
I'm having some trouble here. I have a list of numpy arrays. I
want to
know if an array 'u' is in the list.
Try:
any(numpy.all(u == l) for l in array_list)
standard caveats about float comparisons apply; perhaps
any(numpy.allclose(u, l) for l in array_list)
is more appropriate in certain
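Both one-liners in action (lists invented):

```python
import numpy as np

array_list = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
u = np.array([3.0, 4.0])
v = np.array([3.0, 4.0 + 1e-12])  # equal only up to float noise

exact_hit = any(np.all(u == l) for l in array_list)
exact_miss = any(np.all(v == l) for l in array_list)    # bitwise: no match
close_hit = any(np.allclose(v, l) for l in array_list)  # tolerant: match
```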
Is there a good way in NumPy to convert from a bit string to a boolean
array?
For example, if I have a 2-byte string s='\xfd\x32', I want to get a
16-length boolean array out of it.
numpy.unpackbits(numpy.fromstring('\xfd\x32', dtype=numpy.uint8))
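The same one-liner on current numpy (frombuffer replaces the long-deprecated fromstring), with a cast to bool at the end:

```python
import numpy as np

s = b'\xfd\x32'
bits = np.unpackbits(np.frombuffer(s, dtype=np.uint8))  # MSB-first per byte
bools = bits.astype(bool)  # the 16-length boolean array
```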
Hello,
I assume it is a bug that calling numpy.array() on a flatiter of a
fortran-strided array that owns its own data causes that array to be
rearranged somehow?
Not sure what happens with a fancier-strided array that also owns its
own data (because I'm not sure how to create one of those
You should open a ticket for this.
http://projects.scipy.org/numpy/ticket/1439
On Mar 26, 2010, at 11:26 AM, Charles R Harris wrote:
On Wed, Mar 24, 2010 at 1:13 PM, Zachary Pincus zachary.pin...@yale.edu
wrote:
Hello,
I assume it is a bug that calling numpy.array() on a flatiter
In an array I want to replace all NANs with some number say 100, I
found a method nan_to_num but it only replaces with zero.
Any solution for this?
Indexing with a mask is one approach here:
a[numpy.isnan(a)] = 100
also cf. numpy.isfinite as well in case you want the same with infs.
Zach
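Both masks in one sketch (array invented):

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0, np.inf])
a[np.isnan(a)] = 100.0      # replace NaNs only
a[~np.isfinite(a)] = 100.0  # also catch +/-inf, per the isfinite note
```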
On Wed, Apr 14, 2010 at 10:25, Peter Shinners p...@shinners.org
wrote:
Is there a way to combine two 1D arrays with the same size into a 2D
array? It seems like the internal pointers and strides could be
combined. My primary goal is to not make any copies of the data.
There is absolutely
Hi
Can anyone think of a clever (non-looping) solution to the following?
A have a list of latitudes, a list of longitudes, and list of data
values. All lists are the same length.
I want to compute an average of data values for each lat/lon pair.
e.g. if lat[1001] lon[1001] = lat[2001]
be pretty
trivial (just keep a table of the lat/long for each hash value, which
you'll need anyway, and check that different lat/long pairs don't get
assigned the same bin).
Zach
-Mathew
On Tue, Jun 1, 2010 at 1:49 PM, Zachary Pincus zachary.pin...@yale.edu
wrote:
Hi
Can anyone think