Hi,

X is an n x p matrix, V is a linear transformation (p x q with q <= p), and
Y = XV (n x q).
If V is formed by the full set of eigenvectors of the covariance matrix of X
(so q = p), then:

sum(var(y_i)) = sum(var(x_j))     i=1..q, j=1..p

I understand that this theorem is what allows us, in PCA, to interpret
var(y_1)/sum(var(y_i)) as the percentage of variation explained by factor 1.
If sum(var(y_i)) does not equal sum(var(x_j)) (the total variance in the
original x-space), the ratio can no longer be interpreted as the % of
explained variation. Am I right?
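A minimal numerical sketch of the theorem above, using numpy (the variable
names total_x and total_y are mine, not standard notation): when V holds all
p eigenvectors of the covariance matrix, the summed column variances of
Y = XV match those of X, so the per-component ratio sums to 100%.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 4
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))  # correlated columns
Xc = X - X.mean(axis=0)                                # center the data

# Columns of V are the eigenvectors of the covariance matrix of X (q = p)
cov = np.cov(Xc, rowvar=False)
eigvals, V = np.linalg.eigh(cov)

Y = Xc @ V  # scores: projections onto all p eigenvectors

total_x = X.var(axis=0, ddof=1).sum()
total_y = Y.var(axis=0, ddof=1).sum()
assert np.isclose(total_x, total_y)  # total variance is preserved

# % explained by the largest-variance factor (eigh sorts eigenvalues
# in ascending order, so that component is the last column)
print(Y[:, -1].var(ddof=1) / total_y)
```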

So my question is: if V is a general linear transformation with unit-length
columns (v_i' v_i = 1, where ' denotes the transpose and i = 1..q) but is not
the eigenvector matrix, and not even orthogonal (so it does not qualify as a
rigid rotation of the axes), is there a way to express the percentage of
variation explained by "factor" 1?
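To illustrate why the naive ratio breaks down in this case, here is a small
sketch (again numpy; the random V below is just an arbitrary example of a
non-orthogonal matrix with unit-length columns, not a recommended choice):
the column variances of Y no longer sum to the total variance of X, partly
because correlated columns of V can count the same directions of variation
more than once.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 200, 4, 2
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))
Xc = X - X.mean(axis=0)

# A general V with unit-length columns that are NOT orthogonal
V = rng.normal(size=(p, q))
V /= np.linalg.norm(V, axis=0)   # v_i' v_i = 1 for each column
print(V[:, 0] @ V[:, 1])          # generally nonzero: columns not orthogonal

Y = Xc @ V
total_x = X.var(axis=0, ddof=1).sum()
total_y = Y.var(axis=0, ddof=1).sum()
print(total_y, total_x)  # generally unequal, so var(y_1)/total_y is not
                         # a share of the total variance in x-space
```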

Any hint is appreciated,
Patrick


=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
                   http://jse.stat.ncsu.edu/
=================================================================
