er with 1.10.4 no matter
how I tried to define such variables (even by inserting them in the
site.cfg file), there was no way to make it work.
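For reference, the library locations can be written into site.cfg as well as environment variables; a hypothetical fragment (the paths are placeholders, and the section/key names follow numpy's site.cfg.example):

```ini
[atlas]
library_dirs = /usr/local/atlas/lib
include_dirs = /usr/local/atlas/include
```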
Davide
On Thu, 2016-01-28 at 14:23 -0800, Nathaniel Smith wrote:
> What does
> ldd
> /usr/local/python2/2.7.8/x86_64/gcc46/New_build/lib/pyth
Hi all,
I recently upgraded NumPy from 1.9.1 to 1.10.4 on Python 2.7.8 by using
pip. As always I specified the paths to Blas, Lapack and Atlas in the
respective environment variables. I used the same compiler I used to
compile both Python and the libraries (GCC 4.6.1). The problem is that
it
knows why the number of tests run differs among
different runs of the same binary/library on different nodes?
https://github.com/numpy/numpy/blob/master/doc/TESTS.rst.txt implies
they shouldn't...
Regards,
Davide Del Vento,
On 02/11/2013 08:54 PM, Davide Del Vento wrote:
I compiled numpy
this code?
Regards,
Davide Lasagna
--
PhD Student
Dipartimento di Ingegneria Aeronautica e Spaziale
Politecnico di Torino, Italy
tel: 011/0906871
e-mail: davide.lasa...@polito.it; lasagnadav...@gmail.com
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Thanks for your good work!
Cheers,
Davide
On 09/19/2011 02:15 AM, Hoyt Koepke wrote:
Hello,
I'm pleased to announce the first release of a wrapper I wrote for
IBM's CPlex Optimizer Suite. It focuses on ease of use and seamless
integration with numpy, and allows one to naturally specify
to use for the CWT? I
may be interested in providing some help.
Cheers,
Davide Lasagna
) )
returns 10, where I would expect it to return 1, to be consistent with
the behaviour when there are multiple columns.
Is there a reason why it is not like that?
Cheers
Davide Lasagna
publish such codes somewhere, e.g. GitHub, so anyone
interested can pick it up.
Cheers,
Davide
,
Davide
in with code and
documentation. If you can provide some, and if you are interested in
becoming a core developer, please drop us an email at openpiv-develop at
lists dot sourceforge dot net. A draft website can be found at
www.openpiv.net/python.
Thanks for your attention,
Davide Lasagna
Here is some *working* code I wrote once. It uses strides; look at the
docs for what they are.

from numpy.lib import stride_tricks

def overlap_array(y, len_blocks, overlap=0):
    """Use strides to return a two-dimensional view whose rows
    are (possibly overlapping) blocks of the one-dimensional
    array y."""
    step = len_blocks - overlap
    n_rows = (y.size - len_blocks) // step + 1
    stride = y.strides[0]
    return stride_tricks.as_strided(
        y, shape=(n_rows, len_blocks),
        strides=(step * stride, stride))

Use the strides of the nd.array to get function evaluation
at the point/s specified.
Maybe this can help,
Davide
I have an n-dimensional grid. The grid's axes are linear but not
integers. Let's say I want the value in grid cell [3.2, -5.6, 0.01]. Is
there an easy way to transform the index? Do I have to write my own
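If the axes are uniformly spaced (linear, as described), the coordinate-to-index transform is just affine, so no special machinery is needed. A sketch, with made-up axes:

```python
import numpy as np

# Hypothetical linear (but non-integer) axes:
x_axis = np.linspace(-10.0, 10.0, 101)   # spacing 0.2
y_axis = np.linspace(-8.0, 8.0, 81)      # spacing 0.2

def coord_to_index(value, axis):
    """Map a physical coordinate to the nearest index on a
    uniformly spaced axis."""
    step = axis[1] - axis[0]
    return int(round((value - axis[0]) / step))

i = coord_to_index(3.2, x_axis)    # 66
j = coord_to_index(-5.6, y_axis)   # 12
```

For non-uniform (but still monotonic) axes, np.searchsorted on the axis array does the same job.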
On 16/feb/2011, at 00:04, numpy-discussion-requ...@scipy.org wrote:
I'm sorry that I don't have some example code for you, but you
probably need to break down the problem if you can't fit it into
memory: http://en.wikipedia.org/wiki/Overlap-add_method
Jonathan
Thanks! You saved my day,
Hi all,
I have to work with huge numpy.arrays (up to 250 M elements) and I have to
perform either np.correlate or np.convolve between those.
The process can only run on big-memory machines and it takes ages. I'm writing
to get some hints on how to speed things up (at the cost of precision,
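Since np.convolve is a direct O(n*m) computation, an FFT-based convolution usually helps enormously at these sizes; scipy.signal.fftconvolve (and the overlap-add method mentioned elsewhere in the thread) are the ready-made options. A minimal numpy-only sketch of the idea:

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution via the FFT: O(n log n) instead of the
    O(n*m) of direct np.convolve, at a small cost in precision."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()   # round up to a power of two
    fa = np.fft.rfft(a, nfft)
    fb = np.fft.rfft(b, nfft)
    return np.fft.irfft(fa * fb, nfft)[:n]
```

For inputs that do not fit in memory at once, the same transform is applied block by block (that is exactly what overlap-add does).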
Hi,
I want to compute the following dot product:
P = np.array( [[ p11, p12 ], [p21, p22]] )
C = np.array( [c1, c2] )
where c1 and c2 are m*m matrices, so that
C.shape = (2,m,m)
I want to compute:
A = np.array([a1, a2])
where a1 and a2 are two m*m matrices, obtained from the dot product of P and C
using numpy's array facilities.
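If I read the above right, the target is a_i = sum_j P[i, j] * c_j, which np.einsum (or np.tensordot) expresses directly; plain np.dot does not contract the right axes here. A sketch with made-up values for m and the p's:

```python
import numpy as np

m = 3
p11, p12, p21, p22 = 1.0, 2.0, 3.0, 4.0
c1 = np.arange(m * m, dtype=float).reshape(m, m)
c2 = np.ones((m, m))

P = np.array([[p11, p12], [p21, p22]])   # shape (2, 2)
C = np.array([c1, c2])                   # shape (2, m, m)

# a_i = sum_j P[i, j] * C[j] -- contract P's second axis with C's first
A = np.einsum('ij,jkl->ikl', P, C)       # shape (2, m, m)
# equivalently: A = np.tensordot(P, C, axes=(1, 0))
```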
Ciao
Davide
You may have a look at the nice python-h5py module, which gives an OO
interface to the underlying HDF5 file format. I'm using it for storing
large amounts (~10 GB) of experimental data. Very fast, very convenient.
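A minimal sketch of that kind of use (the file name, dataset name, and shape are made up):

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'experiment.h5')
data = np.arange(12.0).reshape(3, 4)

# Write a dataset (gzip compression is optional but cheap):
with h5py.File(path, 'w') as f:
    f.create_dataset('velocity', data=data, compression='gzip')

# Read it back; h5py datasets slice like numpy arrays:
with h5py.File(path, 'r') as f:
    loaded = f['velocity'][:]
```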
Ciao
Davide
On Thu, 2010-06-17 at 08:33 -0400, greg whittier wrote:
On Thu, Jun
Hi all,
What is the fastest and lowest-memory way to compute this?
y = np.arange(2**24)
bases = y[1:] + y[:-1]
Actually it is already quite fast, but I'm not sure whether it is occupying
some temporary memory during the summation. Any help is appreciated.
Cheers
Davide
Well, actually np.arange(2**24) was just to test the following line ;). I'm
particularly concerned about memory consumption rather than speed.
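For what it's worth, y[1:] + y[:-1] allocates exactly one new array, for the result (the slices themselves are views). To control where that allocation goes, you can hand a preallocated output buffer to the ufunc:

```python
import numpy as np

y = np.arange(2**24)

# Preallocate the result and let np.add write into it in place;
# no temporaries beyond the output buffer itself.
bases = np.empty(y.size - 1, dtype=y.dtype)
np.add(y[1:], y[:-1], out=bases)
```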
On 16 May 2010 22:53, Brent Pedersen bpede...@gmail.com wrote:
On Sun, May 16, 2010 at 12:14 PM, Davide Lasagna
lasagnadav...@gmail.com wrote:
Hi all
, 1e6)
func = lambda x: np.sin(x)
%timeit derive(func, x)
10 loops, best of 3: 177 ms per loop
I'm curious if someone comes up with something faster.
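For comparison, numpy ships a ready-made second-order finite-difference derivative, np.gradient; a quick check against the analytic derivative of sin (the grid size here is made up):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)
y = np.sin(x)

# Central differences in the interior, one-sided at the ends:
dy = np.gradient(y, x)

max_err = np.max(np.abs(dy - np.cos(x)))   # small discretization error
```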
Regards,
Davide
On 4 May 2010 22:17, josef.p...@gmail.com wrote:
On Tue, May 4, 2010 at 4:06 PM, gerardob gberbeg...@gmail.com wrote
look at the memory usage during the execution of
the function.
The functions I gave here can be easily modified to accept an axis option
and other stuff needed.
Is there any drawback to using them? Why are np.mean and np.std so slow?
I'm sure I'm missing something.
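One reason handwritten versions can win is that np.dot dispatches to BLAS, while a.std() makes several full passes with intermediate temporaries. A sketch of that trick (not the functions mentioned above, which aren't shown here):

```python
import numpy as np

def fast_std(a):
    """Two-pass population standard deviation, with the
    sum-of-squares reduction pushed into BLAS via np.dot."""
    d = a - a.mean()
    return np.sqrt(np.dot(d, d) / a.size)

x = np.linspace(0.0, 1.0, 10**5)
```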
Cheers
Davide
Hi all,
Is there a fast numpy way to find the peak boundaries in a (looong, millions of
points) smoothed signal? I've found some approaches, like this:
z = data[1:-1]   # interior points
l = data[:-2]    # left neighbours
r = data[2:]     # right neighbours
f = np.greater(z, l)
f &= np.greater(z, r)              # True where z beats both neighbours
boundaries = np.nonzero(f)[0] + 1  # +1 compensates for the slicing offset
but it is too sensitive... it
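One way to tame the sensitivity without leaving numpy is to smooth before the local-maximum test; a sketch (width=5 is a made-up default, to be tuned to the peak scale):

```python
import numpy as np

def find_peaks_smoothed(data, width=5):
    """Moving-average smoothing followed by a local-maximum test;
    the smoothing suppresses spurious peaks caused by noise."""
    kernel = np.ones(width) / width
    smooth = np.convolve(data, kernel, mode='same')
    z, l, r = smooth[1:-1], smooth[:-2], smooth[2:]
    # +1 compensates for the slicing offset
    return np.nonzero((z > l) & (z > r))[0] + 1

# A clean Gaussian bump peaking at index 100:
signal = np.exp(-((np.arange(200) - 100) / 10.0) ** 2)
peaks = find_peaks_smoothed(signal)
```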
), 1597-1611,
2006.
/* da */
Matthieu Brucher wrote:
Hi,
How does it compare to the learn scikit, especially for the SVM part?
How was it implemented?
Matthieu
2008/2/14, Davide Albanese [EMAIL PROTECTED]:
*Machine Learning Py* (MLPY) is a *Python
: how does it compare to libsvm,
which is one of the best-known packages for SVMs?
Matthieu
2008/2/15, Davide Albanese [EMAIL PROTECTED]:
Dear Matthieu,
I don't know very well scikit.
The Svm is implemented by Sequential Minimal Optimization (SMO
With standard Python:
who()
Robin wrote:
On Thu, Feb 14, 2008 at 8:43 PM, Alexander Michael [EMAIL PROTECTED] wrote:
Is there a way to list all of the arrays that are referencing a given
array? Similarly, is there a way to get a list of all arrays that are
currently in memory?
the two classes.
Matthieu
2008/2/15, Davide Albanese [EMAIL PROTECTED]:
Yes: https://mlpy.fbk.eu/wiki/MlpyExamplesWithDoc
* svm()
Initialize the svm class.
Inputs:
...
cost - for cost-sensitive classification [-1.0
*Machine Learning Py* (MLPY) is a *Python/NumPy* based package for
machine learning.
The package now includes:
* *Support Vector Machines* (linear, gaussian, polynomial,
terminated ramps) for 2-class problems
* *Fisher Discriminant Analysis* for 2-class problems
* *Iterative
learning open source software)
Regards, D.
Davide Albanese wrote:
*Machine Learning Py* (MLPY) is a *Python/NumPy* based package for
machine learning.
The package now includes:
* *Support Vector Machines* (linear, gaussian, polynomial,
terminated ramps) for 2-class problems