Lakshmi Rhone wrote:
> Jim, does the mention of factor analysis mean that there is an
> inference to an underlying invisible cause
> which accounts for manifest correlations? Like the existence of g is
> (falsely) inferred from correlations among a battery of mental tests?

That's what most factor analysis is about. According to the Wikipedia,

> Factor analysis is a statistical method used to describe variability among 
> observed variables in terms of fewer unobserved variables called factors. The 
> observed variables are modeled as linear combinations of the factors, plus 
> "error" terms. The information gained about the interdependencies can be used 
> later to reduce the set of variables in a dataset. Factor analysis originated 
> in psychometrics, and is used in behavioral sciences, social sciences, 
> marketing, product management, operations research, and other applied 
> sciences that deal with large quantities of data.

Spearman's g is a classic example of a "factor."
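To make the Wikipedia definition concrete, here is a minimal sketch (in Python, with made-up numbers) of exactly the situation Lakshmi describes: scores on several hypothetical mental tests are generated as linear combinations of a single latent factor plus independent "error" terms, and that one unobserved cause is what produces the manifest correlations among all the tests. The loadings and sample size are illustrative assumptions, not real test data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 5000

# One latent factor score per person -- the hypothetical "g".
g = rng.standard_normal(n_people)

# Each observed test = (loading * g) + unique error term.
# Hypothetical loadings; errors scaled so each test has unit variance.
loadings = np.array([0.8, 0.7, 0.6, 0.5])
errors = rng.standard_normal((n_people, 4)) * np.sqrt(1 - loadings**2)
tests = g[:, None] * loadings + errors  # the observed variables

# The single latent cause induces positive correlations among all tests.
# Off-diagonal entries approximate loadings[i] * loadings[j], e.g. tests
# 0 and 1 correlate near 0.8 * 0.7 = 0.56.
corr = np.corrcoef(tests, rowvar=False)
print(np.round(corr, 2))
```

Factor analysis runs this logic in reverse: it starts from the correlation matrix and infers the loadings and the factor, which is where the "(falsely)?" question bites, since many different causal stories can produce the same correlations.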

Further, the Wikipedia says:

> Exploratory factor analysis (EFA) is used to uncover the underlying structure 
> of a relatively large set of variables. The researcher's a priori assumption 
> is that any indicator may be associated with any factor [i.e., pure 
> empiricism]. This is the most common form of factor analysis. There is no 
> prior theory and one uses factor loadings to intuit the factor structure of 
> the data.

> Confirmatory factor analysis (CFA) seeks to determine if the number of 
> factors and the loadings of measured (indicator) variables on them conform to 
> what is expected on the basis of pre-established theory. Indicator variables 
> are selected on the basis of prior theory and factor analysis is used to see 
> if they load as predicted on the expected number of factors. The researcher's 
> a priori assumption is that each factor (the number and labels of which may 
> be specified a priori) is associated with a specified subset of indicator 
> variables. A minimum requirement of confirmatory factor analysis is that one 
> hypothesizes beforehand the number of factors in the model, but usually also 
> the researcher will posit expectations about which variables will load on 
> which factors. The researcher seeks to determine, for instance, if measures 
> created to represent a latent variable really belong together.

This second (less common) form of factor analysis imposes some
theoretical constraints on the analysis.

It's quite possible that all factor analysis involves some theoretical
priors and is thus really CFA. Utter and total empiricism seems
impossible, since the investigators are people and live in human
society and thus have ethical values, preconceptions, biases, etc.

My impression is that the Stanford folks (Terman, etc.) were looking
for something like g (the IQ score) when they started. They _knew_
that there was something called "intelligence" (which they saw
themselves as having in abundance) and went looking for it. Also,
being reductionist, they wanted there to be only _one kind_ of
intelligence. (Of course, they were also familiar with Binet's work,
but he likely had similar priors.)

BTW, Lakshmi, are you referring to S.J. Gould's critique of IQ? It
would be interesting if his critique of IQ factor analysis parallels
Koopmans' critique of Mitchell's empiricism.
-- 
Jim Devine / "Segui il tuo corso, e lascia dir le genti." (Go your own
way and let people talk.) -- Karl, paraphrasing Dante.
_______________________________________________
pen-l mailing list
[email protected]
https://lists.csuchico.edu/mailman/listinfo/pen-l