Hello,
I have an array of data for a global grid at 1 degree resolution. It's
filled with 1s and 0s, and it is just a land sea mask (not only, but
as an example). I want to be able to regrid the data to higher or
lower resolutions (i.e. 0.5 or 2 degrees). But when I try to use any
standard interpolation routine, I haven't managed to get this to work...
Could someone please explain what is going on here?
Thanks,
john
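(A sketch of one way to do this without interpolation, since interpolating a
0/1 mask is ill-defined: upsample by repeating cells, downsample by
block-averaging and thresholding. The 180x360 shape and the factor-of-2
resolutions are assumptions.)

```python
import numpy as np

# Toy 1-degree land/sea mask (shape assumed: 180 lats x 360 lons)
mask = np.zeros((180, 360), dtype=np.int8)
mask[60:120, 100:200] = 1  # a toy "continent"

# Upsample to 0.5 degree: every cell becomes a 2x2 block
half_deg = np.repeat(np.repeat(mask, 2, axis=0), 2, axis=1)  # (360, 720)

# Downsample to 2 degrees: average each 2x2 block, then threshold
two_deg = mask.reshape(90, 2, 180, 2).mean(axis=(1, 3)) > 0.5  # (90, 180)
```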
## Create recarray so we can easily sort
dtype = np.dtype([('indices','int32'),('time','f8'),('zen','f4'),
                  ('az','f4'),('sza','f4'),('saz','f4'),('muslope','f4')])
prefer not to create an external
module (f2py, cython) unless there is really no way to make this more
efficient... it's the looping through the grid I guess that takes so
long.
Thanks,
john
def grid_emissions(lon, lat, emissions, dx, dy,
                   outlat0, outlon0, nxmax
I know we're not supposed to 'broadcast' thanks, but thanks! This
works much more efficiently!
On Mon, Jan 24, 2011 at 3:50 PM, David Huard david.hu...@gmail.com wrote:
Hi John,
Since you have a regular grid, you should be able to find the x and y
indices without np.where, ie something like
I will try this as well and report back with a timing...
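(For reference, David's suggestion amounts to computing the cell indices
directly from the grid origin and spacing; a minimal sketch with made-up
grid parameters:)

```python
import numpy as np

# hypothetical regular grid: origin (outlon0, outlat0), spacing (dx, dy)
outlon0, outlat0, dx, dy = -180.0, -90.0, 1.0, 1.0

lon = np.array([-179.5, 0.5, 10.2])
lat = np.array([-89.5, 0.5, 45.7])

# integer cell indices, no np.where needed
ix = ((lon - outlon0) / dx).astype(int)
iy = ((lat - outlat0) / dy).astype(int)
```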
On Mon, Jan 24, 2011 at 3:56 PM, Vincent Schut sc...@sarvision.nl wrote:
On 01/24/2011 02:53 PM, John wrote:
Hello,
I'm trying to cycle over some vectors (lat,lon,emissions) of
irregularly spaced lat/lon spots, and values. I need
methods provide identical results in terms of the sums.
Original method:
~ 13.3 seconds
Pure Python per David:
~ 0.017 seconds
Numpy histogramdd per Vincent:
~ 0.007 seconds
Thanks,
john
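(For reference, the histogramdd-style approach that produced the fastest
timing above can be sketched like this; the grid extents and the 2-d case
are assumptions:)

```python
import numpy as np

# scattered emission points (made up for the sketch)
rng = np.random.default_rng(1)
lon = rng.uniform(-180, 180, 1000)
lat = rng.uniform(-90, 90, 1000)
emissions = rng.uniform(0, 1, 1000)

# bin the values onto a regular 1-degree grid in one call,
# using the emissions as weights
grid, _, _ = np.histogram2d(lon, lat, bins=[360, 180],
                            range=[[-180, 180], [-90, 90]],
                            weights=emissions)
```

Total mass is conserved, which makes this easy to sanity-check against a loop.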
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Is there something I can do that would be more efficient than looping
through and using np.concatenate (vstack)?
--john
Yes, stacking is fine, and looping per John's suggestion is what I've
done, I was just wondering if there was possibly a more 'pythonic' or
more importantly efficient way than the loop.
Thanks,
john
On Wed, Mar 16, 2011 at 3:38 PM, John Salvatier
jsalv...@u.washington.edu wrote:
I think he
Hey all, just wanted to let you know that this works nicely:
stacked = np.vstack(MyDict.values())
So long as the dictionary only contains the recarrays.
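(One caveat, as a sketch with a hypothetical MyDict: np.vstack needs the
recarrays to have equal length, since each one becomes a 2-d row block;
np.concatenate handles unequal-length 1-d record arrays.)

```python
import numpy as np

dt = np.dtype([('time', 'f8'), ('value', 'f4')])
# hypothetical dictionary of record arrays sharing a common dtype
MyDict = {'a': np.zeros(2, dtype=dt), 'b': np.zeros(3, dtype=dt)}

stacked = np.concatenate(list(MyDict.values()))  # 1-d, length 5
```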
hi,
why does the ValueError appear below, and how can i make that 2 < a < 5
expression work when a is an array?
thanks.
from numpy import reshape,arange
a=reshape(arange(9),(3,3))
a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
2 < a
array([[False, False, False],
       [ True,  True,  True],
       [ True,  True,  True]])
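(The fix, for the record: a chained comparison calls bool() on the
intermediate array, which is what raises the ValueError; combine the two
comparisons elementwise instead.)

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
# "2 < a < 5" means "(2 < a) and (a < 5)"; "and" calls bool() on an
# array of size > 1, which raises ValueError. Use elementwise & instead:
mask = (2 < a) & (a < 5)
```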
I am in the process of trying to build numpy with OpenMP support but
have had several issues.
Has anyone else built it successfully who could offer some guidance
on what needs to be passed at build time?
For reference I am using ATLAS 3.10.2 built with OpenMP as well (-Fa
alg -fopenmp)
Thanks,
on this issue.
Thank you in advance,
John
Hello, while using the nested_iters function, I've noticed that it does
not accept length zero nestings. For example, the following fails:
nested_iters([ones(3),ones(3)], [[], [0]])
with ValueError: If 'op_axes' or 'itershape' is not NULL in the iterator
constructor, 'oa_ndim' must be greater
362, in build_msvcr_library
generate_def(dll_file, def_file)
File
"C:\Users\jsalvatier\workspace\numpy\numpy\distutils\mingw32ccompiler.py",
line 282, in generate_def
raise ValueError("Symbol table not found")
ValueError: Symbol table not found
Thank you,
John
I would just use a lookup dict:
names = ['uc_berkeley', 'stanford', 'uiuc', 'google', 'intel',
         'texas_instruments', 'bool']
lookup = dict(zip(names, range(len(names))))  # name -> column index
Now, given you have n entries:
S = numpy.zeros((n, len(names)), dtype=numpy.int32)
for k in ['uc_berkeley', 'google', 'bool']:
[sorry for duplicate - I used the wrong mail address]
I am afraid, I didn't quite get the question.
What is the scenario? What is the benefit that would outweigh the performance
hit of checking whether there is a callback or not. This has to be evaluated
quite a lot.
Oh well ... and 1.3.0 is
I ran into this a while ago and was confused why cov did not behave the way
pierre suggested.
On Jan 21, 2012 12:48 PM, Elliot Saba staticfl...@gmail.com wrote:
Thank you Sturla, that's exactly what I want.
I'm sorry that I was not able to reply for so long, but Pierre's code is
similar to
I'd like to add
http://git.tiker.net/pyopencl.git/blob/HEAD:/examples/demo_mandelbrot.py to the
discussion, since I use pyopencl (http://mathema.tician.de/software/pyopencl)
with great success in my daily scientific computing. Install with pip.
PyOpenCL does understand numpy arrays. You write
On 23.01.2012, at 11:23, David Warde-Farley wrote:
a = numpy.array(numpy.random.randint(256,size=(500,972)),dtype='uint8')
b = numpy.random.randint(500,size=(4993210,))
c = a[b]
In [14]: c[100:].sum()
Out[14]: 0
Same here.
Python 2.7.2, 64bit, Mac OS X (Lion), 8GB RAM,
I get the same results as you, Kathy.
*surprised*
(On OS X (Lion), 64 bit, numpy 2.0.0.dev-55472ca, Python 2.7.2.
On 24.01.2012, at 16:29, Kathleen M Tacina wrote:
I was experimenting with np.min_scalar_type to make sure it worked as
expected, and found some unexpected results for integers
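(For reference, the documented behaviour is to return the smallest dtype
that can hold the given scalar, preferring unsigned types for non-negative
values:)

```python
import numpy as np

np.min_scalar_type(255)  # dtype('uint8')
np.min_scalar_type(256)  # dtype('uint16')
np.min_scalar_type(-1)   # dtype('int8')
```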
I know you wrote that you want TEXT files, but nevertheless, I'd like to
point to http://code.google.com/p/h5py/ .
There are viewers for hdf5 and it is stable and widely used.
Samuel
On 24.01.2012, at 00:26, Emmanuel Mayssat wrote:
After having saved data, I need to know/remember the data
Sorry for the late answer. But at least for the record:
If you are using eclipse, I assume you have also installed the eclipse plugin
[pydev](http://pydev.org/). I use it myself; it's good.
Then you have to go to the preferences-pydev-PythonInterpreter and select the
python version you want
Yes, I agree 100%.
On 26.01.2012, at 10:19, Sturla Molden wrote:
When we have nice libraries like OpenCL, OpenGL and OpenMP, I am so glad
we have Microsoft to screw it up.
Congratulations to Redmond: Another C++ API I cannot read, and a
scientific compute library I hopefully never have to
Hi Hans-Martin!
You could try my instructions recently posted to this list
http://thread.gmane.org/gmane.comp.python.scientific.devel/15956/
Basically, using llvm-gcc, scipy segfaults when running scipy.test() (on my
system at least).
Therefore, I created the homebrew install formula.
They work for
Hi Ruby,
I still do not fully understand your question but what I do in such cases is to
construct a very simple array and test the functions.
The help of numpy.histogram2d or numpy.histogramdd (for more than two dims)
might help here.
So I guess, basically you want to ignore the x,y positions
Hello,
Is there a way to specify a format for the datetime64 constructor? The
constructor doesn't have a doc. I have dates in a file with the format
MM/dd/YY. datetime64 used to be able to parse these in 1.6.1 but the dev
version throws an error.
Cheers,
John
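(A workaround that doesn't depend on the NumPy version: parse the non-ISO
format with the standard library first and convert. The sample date here is
made up.)

```python
from datetime import datetime
import numpy as np

# datetime64 only parses ISO 8601 strings; for MM/dd/YY, go through
# datetime.strptime and convert the result
d = np.datetime64(datetime.strptime('02/06/12', '%m/%d/%y'))
```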
That makes sense.
I figured that ambiguity was the reason it was removed.
Thank you for the explanation. I'm a big fan of your work.
John
On Mon, Feb 6, 2012 at 1:18 PM, Mark Wiebe mwwi...@gmail.com wrote:
Hey John,
NumPy doesn't provide this, because it's already provided
Hello, is there a good way to get just the date part of a datetime64?
Frequently datetime datatypes have month(), date(), hour(), etc functions
that pull out part of the datetime, but I didn't see those mentioned in the
datetime64 docs. Casting to a 'D' dtype didn't work as I would have hoped:
In
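(For reference: in current NumPy, where datetime64 is timezone-naive, the
'D' cast does work as a plain truncation; the UTC-based semantics of the
1.7-era versions discussed below are what made it surprising.)

```python
import numpy as np

t = np.datetime64('2012-02-08T18:30')
t.astype('datetime64[D]')  # numpy.datetime64('2012-02-08')
```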
Thanks Mark!
John
On Wed, Feb 8, 2012 at 6:48 PM, Mark Wiebe mwwi...@gmail.com wrote:
Converting between date and datetime requires caution, because it depends
on your time zone. Because all datetime64's are internally stored in UTC,
simply casting as in your example treats it in UTC
Wow, I wasn't aware of that even though I've been working with numpy for years now.
NumPy is amazing.
Samuel
On Thu, Feb 16, 2012 at 7:26 PM, Alan G Isaac alan.is...@gmail.com wrote:
On 2/16/2012 7:22 PM, Matthew Brett wrote:
This has not been an encouraging episode in striving for consensus.
I disagree.
Failure to reach consensus does not imply lack of striving.
Hey Alan, thanks for your
On 17.02.2012, at 21:46, Ralf Gommers wrote:
[...]
So far no one has managed to build the numpy/scipy combo with the LLVM-based
compilers, so if you were willing to have a go at fixing that it would be
hugely appreciated. See http://projects.scipy.org/scipy/ticket/1500 for
details.
Hi
The plain gcc (non-llvm) is no longer there, if you install Lion and directly
Xcode 4.3.
Only, if you have the old Xcode 4.2 or lower, then you may have a non-llvm gcc.
For Xcode 4.3, I recommend installing the Command Line Tools for Xcode from
the preferences of Xcode. Then you'll have the
On Sat, Feb 18, 2012 at 5:09 PM, David Cournapeau courn...@gmail.com wrote:
There are better languages than C++ that have most of the technical
benefits stated in this discussion (rust and D being the most
obvious ones), but whose usage is unrealistic today for various
reasons: knowledge,
On Wed, Feb 29, 2012 at 1:20 PM, Neal Becker ndbeck...@gmail.com wrote:
Much of Linus's complaints have to do with the use of c++ in the _kernel_.
These objections are quite different for an _application_. For example,
there
are issues with the need for support libraries for exception
On Mon, Mar 5, 2012 at 1:29 PM, Keith Goodman kwgood...@gmail.com wrote:
I[8] np.allclose(a, a[0])
O[8] False
I[9] a = np.ones(10)
I[10] np.allclose(a, a[0])
O[10] True
One disadvantage of using a[0] as a proxy is that the result depends on the
ordering of a
(a.max() - a.min())
0000010
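(A concrete illustration of the ordering dependence: the tolerances in
allclose(a, b) are relative to |b|, so using a[0] as the reference makes the
answer depend on which element comes first. The values below are contrived
to straddle the default tolerances.)

```python
import numpy as np

a = np.array([0.0, 1.000005e-8])
np.allclose(a, a[0])              # False: tolerance is relative to 0.0
np.allclose(a[::-1], a[::-1][0])  # True: tolerance is relative to ~1e-8
# an order-independent exact test:
(a.max() - a.min()) == 0          # False
```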
Any Ideas?
thanks,
John
Thanks Sameer. I confirmed on my side as well. I will try to understand
the why part now. Much appreciated.
On Mon, Apr 16, 2012 at 11:58 PM, Sameer Grover
sameer.grove...@gmail.com wrote:
On Tuesday 17 April 2012 11:02 AM, John Mitchell wrote:
Hi,
I am using f2py to pass a numpy array
really like to
know why and your answer alludes to that.
Please excuse my ignorance on this topic. Can you perhaps educate me a
little on 'literal kind values'? I take you to mean that
'int8' is not a literal kind value while 1 and 8 are examples of literal
kind values.
Thanks,
John
On Tue, Apr
On 2012-05-11, at 4:01 PM, Norman Shelley wrote:
Running on Linux RHEL4
numpy.random.multivariate_normal seems to work well for 25 mean
values but when I go to 26 mean values (and their corresponding
covariance values) it produces garbage.
Any ideas?
The implementation of
Hello,
I've noticed that if you try to increment elements of an array with
advanced indexing, repeated indexes don't get repeatedly incremented. For
example:
In [30]: x = zeros(5)
In [31]: idx = array([1,1,1,3,4])
In [32]: x[idx] += [2,4,8,10,30]
In [33]: x
Out[33]: array([  0.,   8.,   0.,  10.,  30.])
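(For the record, NumPy later grew ufunc.at (1.8+), which performs exactly
this unbuffered, accumulating update:)

```python
import numpy as np

x = np.zeros(5)
idx = np.array([1, 1, 1, 3, 4])
np.add.at(x, idx, [2, 4, 8, 10, 30])
# x is now [ 0., 14., 0., 10., 30.]: 2+4+8 accumulated at index 1
```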
at 4:48 AM, John Salvatier
jsalv...@u.washington.edu wrote:
Hello,
I've noticed that if you try to increment elements of an array with
advanced
indexing, repeated indexes don't get repeatedly incremented. For example:
In [30]: x = zeros(5)
In [31]: idx = array([1,1,1,3,4])
In [32
? Is there an easy way to gain access to these
functions? Should I give up on this approach?
(The function I was trying to build is index_inc in here (
https://github.com/jsalvatier/advinc/blob/master/advinc/advinc.c))
Cheers,
John
Cheers,
John
Can you clarify why it would be super hard? I just reused the code for
advanced indexing (a modification of PyArray_SetMap). Am I missing
something crucial?
On Tue, Jun 26, 2012 at 9:57 AM, Travis Oliphant tra...@continuum.io wrote:
On Jun 26, 2012, at 11:46 AM, John Salvatier wrote:
Hello
said.
Fred
On Tue, Jun 26, 2012 at 1:27 PM, John Salvatier
jsalv...@u.washington.edu wrote:
Can you clarify why it would be super hard? I just reused the code for
advanced indexing (a modification of PyArray_SetMap). Am I missing
something
crucial?
On Tue, Jun 26, 2012 at 9:57 AM
On Tue, Jun 26, 2012 at 3:27 PM, Thouis (Ray) Jones tho...@gmail.com wrote:
+1 !
Speaking as someone trying to get started in contributing to numpy, I
find this discussion extremely off-putting. It's childish,
meaningless, and spiteful, and I think it's doing more harm than any
possible
the addition operation for that
type. It looks like some of the numpy code uses .c.src files to do
templating. Is that what I want to do here? Is the syntax described
somewhere?
John
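(For the record: yes, the .c.src files use numpy's own conv_template
preprocessor from numpy.distutils. Everything between begin/end repeat is
emitted once per listed value, with @name@ substituted. A rough sketch, with
made-up names, for an add loop over two types:)

```c
/**begin repeat
 * #type = float, double#
 * #TYPE = FLOAT, DOUBLE#
 */
static void
@TYPE@_add(@type@ *a, @type@ *b, @type@ *out, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        out[i] = a[i] + b[i];
    }
}
/**end repeat**/
```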
Some examples would be nice. A lot of people did move already. And I haven't
seen reports of those that tried and got stuck. Also, Debian and Python(x,
y) have 1.6.2, EPD has 1.6.1.
In my company, the numpy for our production python install is well
behind 1.6. In the world of trading, the
Thanks Nathaniel, that does the trick...
On Wed, Jun 27, 2012 at 9:25 AM, Nathaniel Smith n...@pobox.com wrote:
On Tue, Jun 26, 2012 at 10:53 PM, John Salvatier
jsalv...@u.washington.edu wrote:
I want to support multiple types in the index_increment function that
I've
written here:
https
at 1:27 PM, John Salvatier
jsalv...@u.washington.edu wrote:
Can you clarify why it would be super hard? I just reused the code for
advanced indexing (a modification of PyArray_SetMap). Am I missing
something
crucial?
On Tue, Jun 26, 2012 at 9:57 AM, Travis Oliphant tra...@continuum.io
On Thu, Jun 28, 2012 at 7:25 AM, Travis Oliphant tra...@continuum.io wrote:
Hey all,
I'd like to propose dropping support for Python 2.4 in NumPy 1.8 (not the 1.7
release). What does everyone think of that?
As a tangential point, MPL is dropping support for python2.4 in its
next major
On Fri, Jun 29, 2012 at 2:20 PM, Jim Vickroy jim.vick...@noaa.gov wrote:
As a lurker and user, I too wish for a distinct numpy-users list. -- jv
This thread is a perfect example of why another list is needed. It's
currently 42 semi-philosophical posts about what kind community numpy
should
Hi Fred,
That's an excellent idea, but I am not too familiar with this use case.
What do you mean by list in 'matrix[list]'? Is the use case, just
incrementing in place a sub matrix of a numpy matrix?
John
On Fri, Jun 29, 2012 at 11:43 AM, Frédéric Bastien no...@nouiz.org wrote:
Hi,
I
]]+=numpy.random.rand(3,5)
print x
This won't work if in the list [0,2,4], there is index duplication,
but with your new code, it will. I think it is the most used case of
advanced indexing. At least, for our lab:)
Fred
On Mon, Jul 2, 2012 at 7:48 PM, John Salvatier
jsalv...@u.washington.edu wrote
http://docs.picloud.com/environment.html
[3] http://docs.picloud.com/howto/pyscientifictools.html
[4] http://docs.picloud.com/howto/primer.html
Best Regards,
John
--
John Riley
PiCloud, Inc.
I have the following C code which is an extension to my python code.
The python and C code use
#include "Numeric/arrayobject.h"
what is the equivalent I can use in numpy that causes the minimum code change?
I did look through the old messages but didn't really find the answer-any help
to a
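(For reference, the NumPy C-API header has much the same layout as
Numeric's, so the minimal change is usually just the include path, plus the
usual import_array() call:)

```c
/* Old Numeric code:   #include "Numeric/arrayobject.h"  */
/* Minimal NumPy port:                                   */
#include "numpy/arrayobject.h"
/* ...and make sure import_array() is called in the module's init
 * function before any array API functions are used. */
```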
On 08/22/2014 11:14 PM, Cleo Drakos wrote:
How can I convert this numpy array so that its first element belongs
to (49.875N,179.625W), i.e., upper left latitude and longitude
respectively; and the last element belong to (49.625S,179.875E), i.e.,
lower right latitude and longitude
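(If the array's first row currently holds the southernmost latitude,
reversing the latitude axis is enough to put the first element at the
upper-left corner. A toy stand-in for the real 0.25-degree grid:)

```python
import numpy as np

# toy stand-in: 3 lats x 4 lons, with row 0 the southernmost latitude
data = np.arange(12).reshape(3, 4)

# reverse the latitude axis so element [0, 0] becomes the upper-left
# (northernmost, westernmost) cell
data_ud = data[::-1, :]  # equivalent to np.flipud(data)
```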
OK, that makes sense. Thanks guys!
On Tue, Sep 28, 2010 at 5:43 PM, Fernando Perez fperez@gmail.com wrote:
On Tue, Sep 28, 2010 at 11:19 AM, John Salvatier
jsalv...@u.washington.edu wrote:
My other question is whether datarray will be able to handle multiple
data
types in the same
one is trying to be
sure that the installation is as good as possible.
Not sure if it matters, but my compiler is gcc-4.4.1 and I'm using
gfortran. Both were wrapped with mpich2.
Regards,
John
FAIL: test_doctests (test_polynomial.TestDocs
()
{'infstr': 'inf', 'threshold': 1000, 'suppress': False, 'linewidth': 75,
'edgeitems': 3, 'precision': 8, 'nanstr': 'nan'}
On Thu, Oct 7, 2010 at 3:35 AM, Ralf Gommers ralf.gomm...@googlemail.com wrote:
On Sat, Oct 2, 2010 at 12:51 PM, John Mitchell worka...@gmail.com wrote:
After spending
Maybe put it up on youtube? Or write down your pitch? As someone who
dislikes R but thinks data.frames are good for something, I'd be interested
in hearing the pitch, what the developers think the strengths and weaknesses
of DatArray are.
On Mon, Oct 18, 2010 at 2:18 PM, Fernando Perez
The difference is that dis[k,:] eliminates the first dimension since
you are using a single number as an index, but dis[k:k+1,:] does not
eliminate that dimension.
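(A quick sketch of the difference:)

```python
import numpy as np

dis = np.arange(12).reshape(4, 3)
dis[1, :].shape    # (3,): an integer index removes that dimension
dis[1:2, :].shape  # (1, 3): a length-1 slice keeps it
```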
On Sat, Nov 6, 2010 at 1:24 PM, josef.p...@gmail.com wrote:
On Sat, Nov 6, 2010 at 4:14 PM, K. Sun sunk...@gmail.com wrote:
Thanks
read about basic slicing :
http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
On Sun, Nov 21, 2010 at 11:28 AM, John Salvatier
jsalv...@u.washington.edu wrote:
yes use the symbol ':'
so you want
t[:,x,y]
2010/11/21 Ernest Adrogué eadro...@gmx.net:
Hi,
Suppose an array
yes use the symbol ':'
so you want
t[:,x,y]
2010/11/21 Ernest Adrogué eadro...@gmx.net:
Hi,
Suppose an array of shape (N,2,2), that is N arrays of
shape (2,2). I want to select an element (x,y) from each one
of the subarrays, so I get a 1-dimensional array of length
N. For instance:
In
I didn't realize the x's and y's were varying the first time around.
There's probably a way to omit it, but I think the conceptually
simplest way is probably what you had to begin with. Build an index by
saying i = numpy.arange(0, t.shape[0])
then you can do t[i, x,y]
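(Putting the whole recipe together as a sketch:)

```python
import numpy as np

t = np.arange(12).reshape(3, 2, 2)  # N = 3 subarrays of shape (2, 2)
x = np.array([0, 1, 1])             # row to pick in each subarray
y = np.array([1, 0, 1])             # column to pick in each subarray

i = np.arange(t.shape[0])
result = t[i, x, y]                 # array([ 1,  6, 11])
```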
On Mon, Nov 22, 2010 at
I think that the only speedup you will get is defining an index only
once and reusing it.
2010/11/22 Ernest Adrogué eadro...@gmx.net:
22/11/10 @ 14:04 (-0600), thus spake Robert Kern:
This way, I get the elements (0,1) and (1,1) which is what
I wanted. The question is: is it possible to omit
I am very interested in this result. I have wanted to know how to do an
apply_along_axis function for a while now.
On Tue, Nov 30, 2010 at 11:21 AM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Sep 1, 2009 at 2:37 PM, Sturla Molden stu...@molden.no wrote:
Dag Sverre Seljebotn skrev:
it.
On Tue, Nov 30, 2010 at 12:06 PM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Nov 30, 2010 at 11:58 AM, Matthew Brett matthew.br...@gmail.com
wrote:
Hi,
On Tue, Nov 30, 2010 at 11:35 AM, Keith Goodman kwgood...@gmail.com
wrote:
On Tue, Nov 30, 2010 at 11:25 AM, John Salvatier
Does NumPy 1.5 work with Python 2.7 or Python 3.x?
@Keith Goodman
I think I figured it out. I believe something like the following will do
what you want, iterating across one axis specially, so it can apply a median
function along an axis. This code in particular is for calculating a moving
average and seems to work (though I haven't checked my
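(For the plain unweighted moving average, a convolution is a simpler sketch;
the case mentioned later in the thread, where the weights vary according to
another array, still needs the axis-iteration approach.)

```python
import numpy as np

def moving_average(a, n):
    # unweighted moving average over a sliding window of length n
    return np.convolve(a, np.ones(n) / n, mode='valid')

moving_average(np.arange(5.0), 2)  # array([0.5, 1.5, 2.5, 3.5])
```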
On Wed, Dec 1, 2010 at 6:07 PM, Keith Goodman kwgood...@gmail.com wrote:
On Wed, Dec 1, 2010 at 5:53 PM, David da...@silveregg.co.jp wrote:
On 12/02/2010 04:47 AM, Keith Goodman wrote:
It's hard to write Cython code that can handle all dtypes and
arbitrary number of dimensions. The former
to be able to create
a moving average with a weighting that changes according to another array.
Best Regards,
John
On Wed, Dec 1, 2010 at 7:56 PM, David da...@silveregg.co.jp wrote:
On 12/02/2010 12:35 PM, John Salvatier wrote:
Hello,
I am writing a UFunc creation utility, and I would like to know: is
there a way to mimic the behavior of PyArray_IterAllButAxis for multiple
arrays at a time
I think this is not possible to do efficiently with just numpy. If you want
to do this efficiently, I wrote a no-replacement sampler in Cython some time
ago (below). I hearby release it to the public domain.
'''
Created on Oct 24, 2009
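(For the record, modern NumPy can do this without the Cython detour:
Generator.choice supports sampling without replacement directly.)

```python
import numpy as np

# draw 10 distinct values from range(100), no replacement
rng = np.random.default_rng(0)
sample = rng.choice(100, size=10, replace=False)
```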
trying to make a numpy patch with this functionality,
and I have some questions: 1) How difficult would this kind of task be for
someone with non-expert C knowledge and good numpy knowledge? 2) Does anyone
have advice on how to do this kind of thing?
Best Regards,
John
That is an amazing Christmas present.
On Tue, Dec 21, 2010 at 4:53 PM, Mark Wiebe mwwi...@gmail.com wrote:
Hello NumPy-ers,
After some performance analysis, I've designed and implemented a new
iterator designed to speed up ufuncs and allow for easier multi-dimensional
iteration. The new
I applaud you on your vision. I only have one small suggestion: I suggest
you put a table of contents at the beginning of your NEP so people may skip
to the part that most interests them.
On Tue, Dec 21, 2010 at 4:59 PM, John Salvatier
jsalv...@u.washington.edu wrote:
That is an amazing
offsetting works. What are the offsets
measured from? It seems like they are measured from another iterator, but
I'm not sure and I don't see how it gets that information.
John
On Tue, Dec 21, 2010 at 5:12 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Mon, Dec 20, 2010 at 1:42 PM, John Salvatier
I'm curious whether this kind of thing is expected to be relatively easy
after the numpy refactor.
On Thu, Dec 23, 2010 at 2:24 PM, Thunemann, Paul Z
paul.z.thunem...@boeing.com wrote:
I'd be very interested in hearing more about a numpy port to Java and
Jython. If anyone has more info about
Wouldn't that be a cast? You do casts in Cython with <double>(expression)
and that should be the equivalent of float64 I think.
On Tue, Dec 28, 2010 at 3:32 PM, Keith Goodman kwgood...@gmail.com wrote:
I'm looking for the C-API equivalent of the np.float64 function,
something that I could use
This thread is a bit old, but since it's not possible to use the C-API is
possible to accomplish this same thing with the Python API?
On Tue, Dec 21, 2010 at 5:12 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Mon, Dec 20, 2010 at 1:42 PM, John Salvatier jsalv...@u.washington.edu
wrote:
A while
, Jan 1, 2011 at 11:23 AM, John Salvatier jsalv...@u.washington.edu
wrote:
This thread is a bit old, but since it's not possible to use the C-API is
possible to accomplish this same thing with the Python API?
I've committed Python exposure for nested iteration to the new_iterator
branch
the iterator values creates new array
objects.
-Mark
On Tue, Jan 4, 2011 at 12:59 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Tue, Jan 4, 2011 at 12:15 PM, John Salvatier
jsalv...@u.washington.edu wrote:
Wow, great! I'm excited to try this. I think your patch significantly
increases
jo...@udesktop253:~ gcc --version
gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Did you try larger arrays/tuples? I would guess that makes a significant
difference.
On Fri, Jan 7, 2011 at 7:58 AM, EMMEL Thomas thomas.em...@3ds.com wrote:
Hi,
There are some discussions on the speed of numpy compared to Numeric in
this list, however I have a topic
I don't understand in
Is evaluate_iter basically numpexpr but using your numpy branch or are there
other changes?
On Sun, Jan 9, 2011 at 2:45 PM, Mark Wiebe mwwi...@gmail.com wrote:
As a benchmark of C-based iterator usage and to make it work properly in a
multi-threaded context, I've updated numexpr to use the new
of how developed it is.
Best Regards,
John Salvatier
Hello,
I have discovered a strange bug with numexpr. numexpr.evaluate gives
randomized results on arrays larger than 2047 elements. The following
program demonstrates this:
from numpy import *
from numexpr import evaluate
def func(x):
    return evaluate('sum(x, axis=0)')
x = zeros(2048)+.01
Forgot to mention that I am using numexpr 1.4.1 and numpy 1.5.1
On Mon, Jan 24, 2011 at 9:47 AM, John Salvatier
jsalv...@u.washington.edu wrote:
Hello,
I have discovered a strange bug with numexpr. numexpr.evaluate gives
randomized results on arrays larger than 2047 elements. The following
evaluate('sum(x, axis=0)')
71.74
In [33]: print evaluate('sum(x, axis=0)')
81.93
In [34]: x = zeros(8191)+0.01
In [35]: print evaluate('sum(x, axis=0)')
81.91
In [36]: print evaluate('sum(x, axis=0)')
81.91
Warren
On Mon, Jan 24, 2011 at 12:19 PM, John Salvatier
jsalv
Looks like this is related to issue 41 (
http://code.google.com/p/numexpr/issues/detail?id=41&can=1).
On Mon, Jan 24, 2011 at 10:29 AM, John Salvatier
jsalv...@u.washington.edu wrote:
I also get the same issue with prod()
On Mon, Jan 24, 2011 at 10:23 AM, Warren Weckesser
warren.weckes
.
And the fftw3 is no longer supported, I guess (even if it is still
mentioned in the site.cfg.example)
Bests,
Samuel
--
Dipl.-Inform. Samuel John
- - - - - - - - - - - - - - - - - - - - - - - - -
PhD student, CoR-Lab(.de) and
Neuroinformatics Group, Faculty
of Technology, D33594 Bielefeld
in cooperation
Hi Paul,
thanks for your answer! I was not aware of numpy.show_config().
However, it does not say anything about libamd.a and libumfpack.a, right?
How do I know if they were successfully linked (statically)?
Does anybody have a clue?
greetings
Samuel
Have you thought about using cython to work with the numpy C-API (
http://wiki.cython.org/tutorials/numpy#UsingtheNumpyCAPI)? This will be
fast, simple (you can mix and match Python and Cython).
As for your specific issue: you can simply cast to all the inputs to numpy
arrays (using asarray
to using the
C-API? Have I misunderstood something?
John
On Tue, Feb 1, 2011 at 11:41 AM, Sturla Molden stu...@molden.no wrote:
Den 01.02.2011 20:30, skrev John Salvatier:
Have you thought about using cython to work with the numpy C-API
(http://wiki.cython.org/tutorials/numpy#UsingtheNumpyCAPI
Good things to know.
On Tue, Feb 1, 2011 at 1:10 PM, Sturla Molden stu...@molden.no wrote:
Den 01.02.2011 20:50, skrev John Salvatier:
I am curious: why you recommend against this? Using the C-API through
cython seems more attractive than using the Cython-specific numpy
features since
Ping.
How to tell, if numpy successfully build against libamd.a and libumfpack.a?
How do I know if they were successfully linked (statically)?
Is it possible from within numpy, like show_config() ?
I think show_config() has no information about these in it :-(
Anybody?
Thanks,
Samuel
Thanks Robin,
that makes sense and explains why I could not find any reference.
Perhaps the scipy.org wiki and install instructions should be updated.
I mean how many people try to compile amd and umfpack, because they
think it's good for numpy to have them, because the site.cfg contains
those
Does anyone have or know of an example of how to
use PyArray_RemoveSmallest(multiiterator) to make the inner loop fast (how
to get the right strides etc.)? I am finding it a bit confusing.
Best Regards,
John