I am trying out the eigenvector-related functions in numpy.linalg. I
came across some portions where I have doubts.
1) I have an array X.
If I calculate L = dot(X, X.transpose()),
can L be called the covariance matrix of X? I read so in a paper by
Turk & Pentland (equation 3, I think).
Can someone clarify?
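A quick check of this (with made-up data; the array shapes are purely illustrative): dot(X, X.T) only matches the covariance matrix up to mean-centering and a 1/(N-1) scale, which is why it works in the eigenfaces setting, where the images are assumed to be mean-subtracted already.

```python
import numpy as np

# Hypothetical data: 3 variables (rows) observed 5 times (columns).
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))

# dot(X, X.T) is only proportional to the covariance matrix if the
# rows of X have been mean-centered first.
Xc = X - X.mean(axis=1, keepdims=True)
L = np.dot(Xc, Xc.T) / (X.shape[1] - 1)

# np.cov does the centering and the 1/(N-1) scaling for you.
print(np.allclose(L, np.cov(X)))  # True
```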
On Fri, 2 May 2008 23:34:19 -0700 (PDT)
wilson [EMAIL PROTECTED] wrote:
I am trying out the eigenvector-related functions in
numpy.linalg. I
came across some portions where I have doubts.
1) I have an array X.
If I calculate L = dot(X, X.transpose()),
can L be called the covariance matrix of
You know, for linkage clustering and BHC, I've found it a lot easier
to work with an intermediate 1-D map of indices and never resize the
distance matrix. I then just remove one element from this map at each
iteration, which is a LOT faster than removing a column and a row from
a matrix.
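The index-map idea above can be sketched roughly like this (a toy single-linkage loop; the data and variable names are invented for illustration): the distance matrix is allocated once, and a plain Python list tracks which rows/columns are still alive.

```python
import numpy as np

# Toy data: 6 points in the plane, full pairwise distance matrix.
rng = np.random.default_rng(1)
pts = rng.standard_normal((6, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(D, np.inf)  # ignore self-distances

alive = list(range(len(pts)))           # the 1-D map of active indices
while len(alive) > 1:
    sub = D[np.ix_(alive, alive)]       # still-active block of D
    i, j = divmod(np.argmin(sub), len(alive))
    a, b = alive[i], alive[j]
    # single linkage: merged cluster keeps the minimum distance
    D[a, :] = D[:, a] = np.minimum(D[a, :], D[b, :])
    D[a, a] = np.inf
    alive.remove(b)                     # cheap list removal, no matrix resize
```

The matrix D never changes shape; only the `alive` list shrinks, which is the speedup being described.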
On Wed, 2008-04-30 at 21:09 +0200, Gael Varoquaux wrote:
On Wed, Apr 30, 2008 at 11:57:44AM -0700, Christopher Barker wrote:
I think I still like the idea of an iterator (or maybe making rollaxis a
method?), but this works pretty well.
Generally, in object oriented programming, you
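For what it's worth, the iterator idea being discussed can already be approximated with rollaxis itself, since iterating an array walks its first axis (a toy example; the array contents are arbitrary):

```python
import numpy as np

# Roll the axis of interest to the front, then iterate normally.
a = np.arange(24).reshape(2, 3, 4)
for sub in np.rollaxis(a, 2):   # walk axis 2; each sub has shape (2, 3)
    print(sub.shape)
```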
Hi,
I am starting to push the limits of the available memory and I'd like
to understand a bit better how Python handles memory...
If I try to allocate something too big for the available memory I
often get a MemoryError exception. However, in other situations,
Python memory use continues to grow
Thanks for the links.
But why the different signs for entries in the eigenvectors? Is it a
library-specific thing? Shouldn't they be identical?
W
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
Hi,
The opposite of an eigenvector is an eigenvector as well, with the same
eigenvalue. Depending on the algorithm, both can be returned.
Matthieu
2008/5/3 wilson [EMAIL PROTECTED]:
Thanks for the links.
But why the different signs for entries in the eigenvectors? Is it a
library-specific
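A quick illustration of this point (the matrix A here is just a convenient symmetric example): if v is an eigenvector, so is -v, with the same eigenvalue, so two routines can both be right while disagreeing on sign.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eig(A)
v = V[:, 0]

# v and -v satisfy the eigenvalue equation equally well:
print(np.allclose(A @ v, w[0] * v))        # True
print(np.allclose(A @ (-v), w[0] * (-v)))  # True
```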
Robin wrote:
If I try to allocate something too big for the available memory I
often get a MemoryError exception. However, in other situations,
Python memory use continues to grow until the machine falls over. I
was hoping to understand the difference between those cases. From what
I've
On Fri, May 2, 2008 at 11:51 PM, Hoyt Koepke [EMAIL PROTECTED] wrote:
You know, for linkage clustering and BHC, I've found it a lot easier
to work with an intermediate 1d map of indices and never resize the
distance matrix. I then just remove one element from this map at each
iteration,
Robin wrote:
Hi,
I am starting to push the limits of the available memory and I'd like
to understand a bit better how Python handles memory...
This is why I switched to 64-bit Linux and never looked back.
If I try to allocate something too big for the available memory I
often get a
Robert Kern wrote:
I can get a ~20% improvement with the following:
from numpy import hstack, vstack

def mycut(x, i):
    # drop row i and column i by reassembling the four remaining blocks
    A = x[:i, :i]
    B = x[:i, i+1:]
    C = x[i+1:, :i]
    D = x[i+1:, i+1:]
    return hstack([vstack([A, C]), vstack([B, D])])
Might it be a touch faster to
On Sat, May 3, 2008 at 5:05 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
Robert Kern wrote:
I can get a ~20% improvement with the following:
def mycut(x, i):
    A = x[:i, :i]
    B = x[:i, i+1:]
    C = x[i+1:, :i]
    D = x[i+1:, i+1:]
You could also try complete linkage, where you merge two clusters
based on the farthest distance between points in the two clusters
instead of the nearest. This will tend to give clusters of equal size
(which isn't always ideal, either). However, it also uses sufficient
statistics, so it will be
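The single/complete distinction can be shown in one step (two invented clusters on a line, chosen so the two criteria clearly disagree):

```python
import numpy as np

# Two small clusters; complete linkage scores them by their *farthest*
# cross-cluster pair, single linkage by their *nearest* pair.
c1 = np.array([[0.0, 0.0], [1.0, 0.0]])
c2 = np.array([[5.0, 0.0], [9.0, 0.0]])
pair = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=-1)

print(pair.max())  # complete-linkage distance: 9.0
print(pair.min())  # single-linkage distance:   4.0
```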