On 12/03/2014 04:32 AM, Ryan Nelson wrote:
Emanuele,
This doesn't address your question directly. However, I wonder if you
could approach this problem in a different way to get what you want:
create an index array and then just vstack all of your
arrays at once, as sketched below.
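Here is a minimal sketch of what I mean, using the example arrays from the
original question below; the names stacked and idx are just for illustration:
---
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])             # 2 x 3
b = np.array([[9, 8, 7]])                        # 1 x 3
c = np.array([[1, 3, 5], [7, 9, 8], [6, 4, 2]])  # 3 x 3
d = np.array([[5, 5, 4], [4, 3, 3]])             # 2 x 3

arrays = [a, b, c, d]
stacked = np.vstack(arrays)                      # (2+1+3+2) x 3
# Index array: row offsets marking where each original array starts.
idx = np.cumsum([0] + [arr.shape[0] for arr in arrays])

# Recover, e.g., the third array (c) by slicing with the index array.
c_again = stacked[idx[2]:idx[3]]
assert np.array_equal(c_again, c)
---
Slicing one big array this way is usually faster, and easier to save and
load, than keeping a list (or object array) of many small arrays.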
Ryan,
On 12/03/2014 12:17 PM, Jaime Fernández del Río wrote:
The safe way to create 1D object arrays from a list is by preallocating them,
something like this:
import numpy as np
a = [np.random.rand(2, 3), np.random.rand(2, 3)]
b = np.empty(len(a), dtype=object)
b[:] = a
b
array([ array([[ 0.124382 ,
Hi,
I am using 2D arrays where only one dimension remains constant, e.g.:
---
import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6]]) # 2 x 3
b = np.array([[9, 8, 7]]) # 1 x 3
c = np.array([[1, 3, 5], [7, 9, 8], [6, 4, 2]]) # 3 x 3
d = np.array([[5, 5, 4], [4, 3, 3]]) # 2 x 3
---
I have a large
Hi,
I just came across this unexpected behaviour when creating
an np.array() from two other np.arrays of different shapes.
Have a look at this example:
import numpy as np
a = np.zeros(3)
b = np.zeros((2,3))
c = np.zeros((3,2))
ab = np.array([a, b])
print ab.shape, ab.dtype
ac = np.array([a,
Hi,
I'm using NumPy v1.6.1 shipped with Ubuntu 12.04 (Python 2.7.3). I observed an
odd behavior of the multivariate_normal function, which does not like int64 for
the 'size' argument.
Short example:
import numpy as np
print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1)
np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=np.int64(1))
[[ 0.40274243 -0.33922682]]
Nicolas
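For what it's worth, a possible workaround on the affected version (just a
sketch, assuming the check only rejects non-builtin integer types) is to cast
the size argument to a plain Python int before the call:
---
import numpy as np

n = np.int64(1)
# Workaround sketch: hand multivariate_normal a built-in int as 'size'.
samples = np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=int(n))
print(samples.shape)  # (1, 2)
---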
On May 24, 2013, at 2:02 PM, Emanuele Olivetti emanu...@relativita.com
wrote:
import numpy as np
print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1
Maybe of interest.
E.
Original Message
-- Forwarded message --
From: mikiobraun [EMAIL PROTECTED]
Date: 2008/9/8
Subject: [ML-news] Call for Submissions: Workshop on Machine Learning
Open Source Software (MLOSS), NIPS*08
To: Machine Learning News [EMAIL
Damian Eads wrote:
Emanuele Olivetti wrote:
...
[*] : ||x - x'||_w = (\sum_{i=1...N} (w_i*|x_i - x'_i|)**p)**(1/p)
This feature could be implemented easily. However, I must admit I'm not
very familiar with weighted p-norms. What is the reason for raising w
to the p instead of w_i*(|x_i-x
='mahalanobis')
computes an m_A by m_B distance matrix M. The ij'th entry is the distance
between XA[i,:] and XB[j,:]. The core computation is implemented in C
for efficiency. I've committed the new function along with documentation
and about two dozen tests.
Cheers,
Damian
Emanuele
David Cournapeau wrote:
Emanuele Olivetti wrote:
Hi,
I'm trying to compute the distance matrix (weighted p-norm [*])
between two sets of vectors (data1 and data2). Example:
You may want to look at scipy.cluster.distance, which has a bunch of
distance matrix implementations. I believe
David Cournapeau wrote:
FWIW, distance is slated to move to a separate package, because distance
computation is useful in contexts other than clustering.
Excellent. I was thinking about something similar. I'll have a look
at the separate package. Please drop an email to this list when
Hi,
I'm trying to compute the distance matrix (weighted p-norm [*])
between two sets of vectors (data1 and data2). Example:
import numpy as N
p = 3.0
data1 = N.random.randn(100,20)
data2 = N.random.randn(80,20)
weight = N.random.rand(20)
distance_matrix = N.zeros((data1.shape[0],data2.shape[0]))
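One way to get the full weighted p-norm matrix without an explicit double loop
is broadcasting; a minimal sketch following the definition [*] (not tuned for
memory, since it builds a 100 x 80 x 20 temporary):
---
import numpy as np

p = 3.0
data1 = np.random.randn(100, 20)
data2 = np.random.randn(80, 20)
weight = np.random.rand(20)

# d(x, x') = (sum_i (w_i * |x_i - x'_i|)**p)**(1/p), per [*] above.
diff = np.abs(data1[:, np.newaxis, :] - data2[np.newaxis, :, :])  # 100 x 80 x 20
distance_matrix = ((weight * diff) ** p).sum(axis=-1) ** (1.0 / p)
print(distance_matrix.shape)  # (100, 80)
---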
Dear all,
I need to speed up this function (a little example follows):
--
import numpy as N

def distance_matrix(data1, data2, weights):
    rows = data1.shape[0]
    columns = data2.shape[0]
    dm = N.zeros((rows, columns))
    for i in range(rows):
        for j in range(columns):
Matthieu Brucher wrote:
Hi,
Bill Baxter proposed a solution to this problem some months ago on this
ML. I use it regularly and it is fast enough for me.
Excellent. Exactly what I was looking for.
Thanks,
Emanuele
Rob Hetland wrote:
I think you want something like this:
x1 = x1 * weights[np.newaxis,:]
x2 = x2 * weights[np.newaxis,:]
x1 = x1[np.newaxis, :, :]
x2 = x2[:, np.newaxis, :]
distance = np.sqrt( ((x1 - x2)**2).sum(axis=-1) )
x1 and x2 are arrays with size of (npoints, ndimensions), and
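To sanity-check the broadcasting trick against the direct definition, a small
sketch (shapes as described above; the variable names are just for illustration):
---
import numpy as np

npoints1, npoints2, ndim = 5, 4, 3
x1 = np.random.randn(npoints1, ndim)
x2 = np.random.randn(npoints2, ndim)
weights = np.random.rand(ndim)

w1 = (x1 * weights)[np.newaxis, :, :]   # 1 x npoints1 x ndim
w2 = (x2 * weights)[:, np.newaxis, :]   # npoints2 x 1 x ndim
distance = np.sqrt(((w1 - w2) ** 2).sum(axis=-1))  # npoints2 x npoints1

# Compare one entry with the explicit weighted Euclidean distance.
i, j = 2, 3
expected = np.sqrt(((weights * (x2[i] - x1[j])) ** 2).sum())
assert np.allclose(distance[i, j], expected)
---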
James Philbin wrote:
OK, I've written a simple benchmark which implements an elementwise
multiply (A=B*C) in three different ways (standard C, intrinsics, hand-coded
assembly). On the face of things the results seem to indicate
that the vectorization works best on medium-sized inputs. If
Dear all,
Look at this little example:
import numpy
a = numpy.array([1])
b = numpy.array([1,2,a])
c = numpy.array([a,1,2])
Which has the following output:
Traceback (most recent call last):
  File "b.py", line 4, in <module>
    c = numpy.array([a,1,2])
ValueError: setting an array element with a sequence.
  File "/usr/lib/python2.5/struct.py", line 63, in pack
    return o.pack(*args)
SystemError: ../Objects/longobject.c:322: bad argument to internal function
No error with Python 2.4, so I believe it is a 32-bit issue.
HTH,
Emanuele
Emanuele Olivetti wrote:
Hi,
this snippet is causing trouble
Simone Marras wrote:
Hello everyone,
I am trying to install numpy on my Suse 10.2 system using Python 2.5.
Python is correctly installed, and when I launch python setup.py
install, I get the following error:
numpy/core/src/multiarraymodule.c:7604: fatal error: error writing
to /tmp/ccNImg9Q.s:
Hi,
I'm working with 4D integer arrays and need to compute std() along a
given axis, but I run into excessive memory consumption.
Example:
---
import numpy
a = numpy.random.randint(100,size=(50,50,50,200)) # 4D randint matrix
b = a.std(3)
---
It seems that this code requires 100-200
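One possible way to keep the peak memory down (just a sketch, assuming the
blow-up comes from the float64 temporaries std() builds for the whole array at
once) is to process one slice along the first axis at a time:
---
import numpy as np

a = np.random.randint(100, size=(50, 50, 50, 200))

# Compute std() over the last axis slice by slice to limit temporaries.
b = np.empty(a.shape[:3])
for i in range(a.shape[0]):
    b[i] = a[i].std(axis=-1)
print(b.shape)  # (50, 50, 50)
---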
An even simpler example generating the same error:
import numpy
x = numpy.array([0,0])
numpy.histogram2d(x,x)
HTH,
Emanuele
Emanuele Olivetti wrote:
While using histogram2d on simple examples I got these errors:
import numpy
x = numpy.array([0,0])
y = numpy.array([0,1
David Huard wrote:
Hi Emanuele,
The bug is due to a part of the code that shifts the last bin's
position to make sure the array's maximum value is counted in the last
bin, and not as an outlier. To do so, the code computes an approximate
precision used to shift the bin edge by an amount small
Look at this:
--bug.py---
import numpy
a=numpy.array([1,2])
b=a.sum()
print type(b)
c=numpy.random.permutation(b)
---
If I run it (Python 2.5, numpy 1.0.1 on a Linux box) I get:
---
# python /tmp/bug.py
<type 'numpy.int32'>
Traceback (most recent call last):
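If the failure is just permutation() not accepting a NumPy integer scalar,
casting it to a plain Python int should sidestep it (a sketch, not verified
on numpy 1.0.1):
---
import numpy

a = numpy.array([1, 2])
b = a.sum()                        # a numpy.int32 scalar
# Workaround sketch: hand permutation() a built-in int instead.
c = numpy.random.permutation(int(b))
print(c)
---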