-------- Original Message --------
Subject:        Re: PCA with VERY large number of landmarks?
Date:   Thu, 6 Oct 2011 06:32:08 -0400
From:   Peter Claes <peter.claes1...@gmail.com>
To:     morphmet@morphometrics.org



Dear all,
From my experience with spatially-dense landmarking of 3D facial surfaces, I hope the following can help as well:

Problem statement: PCA on a mean-centered matrix X with dimensions VxN, where N is the number of observations, V is the number of variables, and V >> N.

Solution: The eigen-decomposition of the covariance XX^t, a VxV matrix, is given by:

XX^t e_i = L_i e_i

where e_i and L_i are an eigenvector and an eigenvalue of the covariance matrix, respectively.

Now consider the eigen-decomposition of the smaller NxN matrix X^t X:

X^t X E_i = L_i E_i

Multiplying both sides by X and grouping together in brackets:

X X^t (X E_i) = L_i (X E_i)

one can see that the N vectors e_i = X E_i are all eigenvectors of XX^t with corresponding eigenvalues L_i (after rescaling to unit length, since ||X E_i|| = sqrt(L_i)), and that all remaining eigenvectors of XX^t have zero eigenvalues, because the rank of X is at most N. Hence, when V is large and N is much smaller, the eigenvectors and eigenvalues are best computed using the smaller X^t X matrix.
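
In Matlab this trick looks as follows (a minimal sketch, assuming X is the mean-centered VxN data matrix as above; the tolerance for dropping zero eigenvalues is a heuristic of my own):

    % X: mean-centered V-by-N data matrix, with V >> N
    [E, L] = eig(X' * X);                      % eigen-decomposition of the small N-by-N matrix
    [lambda, idx] = sort(diag(L), 'descend');  % sort eigenvalues in decreasing order
    E = E(:, idx);
    keep = lambda > N * eps(lambda(1));        % drop (numerically) zero eigenvalues
    E = E(:, keep); lambda = lambda(keep);
    e = X * E;                                 % eigenvectors of the large V-by-V matrix X*X'
    e = bsxfun(@rdivide, e, sqrt(lambda'));    % normalize: ||X*E_i|| = sqrt(L_i)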

Matlab's functions 'princomp' and 'svd' (singular value decomposition) accept an 'econ' flag that performs the more memory-efficient economy-size decomposition, effectively working with X^t X instead of XX^t.
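
For example (a sketch; note that princomp expects the observations in rows, hence the transpose):

    [U, S, V] = svd(X, 'econ');                     % U is V-by-N instead of V-by-V
    [coeff, score, latent] = princomp(X', 'econ');  % PCs of the N-by-V data matrix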

Note that in the case of spatially-dense landmarks or image pixels, V can be on the order of 10,000 or more. However, due to the spatially-dense nature of landmarks and pixels, strong correlations are expected, and hence a technique such as PCA should be able to eliminate all (or most of) the redundancy in the data, resulting in huge dimensionality reductions. For example, 691 faces mapped with ~10,000 spatially-dense landmarks, as in P. Claes, M. Walters, D. Vandermeulen, J.G. Clement, Spatially-dense 3D facial asymmetry assessment in both typical and disordered growth, J. Anat. 219 (2011) 444-455, needed only 23 PCs (from a maximum of 690 components) after PCA while retaining 98% of the total variance. (Also note that the more consistent the spatially-dense landmark indications, the better the reduction obtained.)
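
As an illustration of how such a reduction is read off (a sketch; the 0.98 threshold matches the example above):

    [~, S, ~] = svd(X, 'econ');                % economy-size SVD of the mean-centered data
    latent = diag(S).^2 / (N - 1);             % PC variances (covariance eigenvalues)
    explained = cumsum(latent) / sum(latent);  % cumulative fraction of total variance
    k = find(explained >= 0.98, 1, 'first')    % number of PCs retaining 98% of the variance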

If the number of observations also grows to impractical numbers, a good test of whether enough observations have been obtained is to examine the generalization of the covariance as a function of the number of observations, using point distribution models. Given a number of observations from which to learn the covariance structure, how well does this knowledge explain unseen observations? (P. Claes, A robust statistical surface registration framework using implicit function representations: application in craniofacial reconstruction. PhD thesis, Faculteit Ingenieurswetenschappen, Departement Elektrotechniek, Afdeling PSI, K.U.Leuven, Belgium, 2007, chapter 6, section 6.3.2.) The generalization error should decrease with an increasing number of observations; once a plateau is reached, enough observations have been obtained (the estimate of the covariance structure will not improve any further).
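
A minimal sketch of such a test (the split sizes and the residual-based error measure are my own simplified choices; a more careful test would resample the training sets):

    % X: V-by-N data matrix, columns are observations
    nRange = 10:10:(N - 10);
    genErr = zeros(size(nRange));
    for j = 1:numel(nRange)
        n = nRange(j);
        Xtrain = X(:, 1:n);                        % observations used to learn the model
        Xtest  = X(:, n+1:end);                    % unseen observations
        mu = mean(Xtrain, 2);                      % re-center with the training mean
        [U, ~, ~] = svd(bsxfun(@minus, Xtrain, mu), 'econ');
        R = bsxfun(@minus, Xtest, mu);
        R = R - U * (U' * R);                      % part the model cannot explain
        genErr(j) = mean(sqrt(sum(R.^2, 1)));      % mean residual per unseen observation
    end
    plot(nRange, genErr);                          % look for the plateau
    xlabel('number of observations'); ylabel('generalization error');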

Alternatively, one could opt to perform incremental SVD decompositions: a limited number of observations is used to initialize a (memory-efficient, using X^t X) SVD decomposition, and further observations are then fed in sequentially to update the decomposition. With some additional coding, e.g., the maximum number of eigenvectors can be fixed to a set value, speeding up the computations for a vast number of observations, since typically not all eigenvectors are needed or meaningful. (Brand, M.E., "Incremental Singular Value Decomposition of Uncertain Data with Missing Values", European Conference on Computer Vision (ECCV), vol. 2350, pp. 707-720, May 2002.) Matlab code (written by Nathan Faggian) can be found in the attachment or can be requested from me by email.
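
The core of such an update step can be sketched as follows (my hedged reading of Brand's append-one-column update; the function name is mine, and the attached svdUpdate.m is the reference implementation):

    function [U, S, V] = svdAppendColumn(U, S, V, c)
    % Update the thin SVD X = U*S*V' after appending one observation c (V-by-1).
    m  = U' * c;                              % component of c inside the current subspace
    p  = c - U * m;                           % residual orthogonal to the subspace
    Ra = norm(p);
    r  = size(S, 1);
    K  = [S, m; zeros(1, r), Ra];             % small (r+1)-by-(r+1) core matrix
    [Uk, Sk, Vk] = svd(K);
    if Ra > 1e-10
        U = [U, p / Ra] * Uk;                 % rotate the enlarged basis
    else
        U = [U, zeros(size(U, 1), 1)] * Uk;   % c already lies in the subspace;
    end                                       % one singular value of Sk is then zero
    S = Sk;
    V = [V, zeros(size(V, 1), 1); zeros(1, r), 1] * Vk;
    % Truncating U, S and V to a fixed number of components here bounds the cost.
    end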

Of course, all of this only enables the extraction of principal components from spatially-dense data; the critiques regarding statistical testing mentioned previously still stand.

Warm regards to all.

Peter Claes

peter.cl...@esat.kuleuven.be



On Wed, Oct 5, 2011 at 10:52 PM, morphmet <morphmet_modera...@morphometrics.org> wrote:



   -------- Original Message --------
   Subject:     Re: PCA with VERY large number of landmarks?
   Date:        Wed, 5 Oct 2011 16:49:22 -0400
   From:        Adam Douglas Yock <adam.y...@gmail.com>
   To:          morphmet@morphometrics.org



   Great responses!

   After some further thought, I will be severely decreasing the number
   of landmarks for several reasons.

   Thank you all for your help!


   Adam




   On Tue, Oct 4, 2011 at 2:27 PM, morphmet
   <morphmet_modera...@morphometrics.org> wrote:



       -------- Original Message --------
       Subject:         PCA with VERY large number of landmarks?
       Date:    Mon, 3 Oct 2011 21:48:03 -0400
        From:    Adam Douglas Yock <adam.y...@gmail.com>
        To:      morphmet@morphometrics.org



       Hello,

       I am new to the field of morphometrics and have a (potentially
       very ignorant) question.

        I have images that contain a deformable body and a rigid body.
        The images are rigidly registered to align the rigid bodies. The
        deformable bodies are described by ~5,000 points which are
        matched across the images. I believe my data then consists of
        the 3D coordinates of the ~5,000 points of the deformable body
        depicted in each image.

       Can I treat these points as landmarks and perform a very
       high-dimensional (~15,000-D) PCA? Is there any "curse of
       dimensionality" with this method?

       I appreciate your help.
       Adam
        adam.y...@gmail.com





Attachment: svdUpdate.m
