help on factor analysis/non-normality

2002-03-01 Thread Mobile Survey

What do I do if I need to run a factor analysis and have a non-normal
distribution for some of the items (indicators)? Does principal
component analysis require the normality assumption? Can I use GLS to
extract the factors and get around the problem of non-normality? Please
give references if you reply.
Thanks


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: help on factor analysis/non-normality

2002-03-01 Thread Rich Ulrich

On 1 Mar 2002 04:51:42 -0800, [EMAIL PROTECTED] (Mobile Survey)
wrote:

 What do i do if I need to run a factor analysis and have non-normal
 distribution for some of the items (indicators)? Does Principal
 component analysis require the normality assumption. 

There is no problem with non-normality as such, except that it *implies*
that the decomposition  *might*  not give simple structure.
Complications are more likely when covariances are high.
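
For instance, a quick R check of that point (toy, made-up data; nothing here
is from the poster's study):

set.seed(42)
f <- rexp(200)                         # strongly skewed latent variable
X <- cbind(x1 = f + rexp(200), x2 = f + rexp(200),
           x3 = f + rexp(200), x4 = rexp(200))
eigen(cor(X))$values                   # the decomposition itself needs no normality
factanal(X, factors = 1)               # ML extraction runs fine; x1-x3 load together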

What did you read, that you are trying to respond to?

  Can I use GLS to
 extract the factors and get over the problem of non-normality. Please
 do give references if you are replying.
 Thanks.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: help on factor analysis/non-normality

2002-03-01 Thread Robert Ehrlich

To amplify a bit: the interpretability of regression tends to go down as
the assumptions of normality and homogeneous variance become markedly
different from reality.  You can still go through the calculations, but the
interpretation of the results gets tricky.  Factor analysis is a sort of
regression analysis and so suffers in the same way from breakdowns of the
assumptions.

Rich Ulrich wrote:

 On 1 Mar 2002 04:51:42 -0800, [EMAIL PROTECTED] (Mobile Survey)
 wrote:

  What do i do if I need to run a factor analysis and have non-normal
  distribution for some of the items (indicators)? Does Principal
  component analysis require the normality assumption.

 There is no problem of non-normality, except that it *implies*
 that decomposition  *might*  not give simple structures.
 Complications are more likely when covariances are high.

 What did you read, that you are trying to respond to?

   Can I use GLS to
  extract the factors and get over the problem of non-normality. Please
  do give references if you are replying.
  Thanks.

 --
 Rich Ulrich, [EMAIL PROTECTED]
 http://www.pitt.edu/~wpilib/index.html



=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: factor analysis

2002-02-05 Thread Herman Rubin

In article [EMAIL PROTECTED],
Roland Pesch  [EMAIL PROTECTED] wrote:
Hi,

I'm trying to perform factor analysis on mosses from 1028 moss
monitoring sites, each of which was chemically analysed for 20 heavy
metal elements. None of these samples follows a normal distribution;
they are all positively skewed.

It is my understanding that, to calculate the correlation coefficient
matrix, one should be very careful when the data samples are other than
normally distributed. So I log-transformed each sample, but most
of the results still do not follow a normal distribution (I
checked this with a Kolmogorov-Smirnov goodness-of-fit test).

Linear methods, such as correlations or factor analysis, 
depend very heavily on LINEARITY assumptions.  They behave
reasonably under non-normality, assuming the
moments exist, although the usual tests will not have the
same sampling distribution.

Attempting to make things normal is likely to destroy the
linearity of the relations.  Nothing in nature is normal,
although some MAY be close.  The tests, etc., for regression
analysis are not precise without normality of the errors,
but the reasonableness of the procedures holds if the true
relation is linear.
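
A toy R illustration of that caution (all numbers invented): indicators that
are linear in a skewed factor stay strongly correlated, while a "normalizing"
transform changes the form of the relation.

set.seed(1)
f  <- rlnorm(500)                        # skewed common factor
x1 <- 2 * f + rlnorm(500, sdlog = 0.5)   # positive, skewed, linear in f
x2 <- 3 * f + rlnorm(500, sdlog = 0.5)
cor(x1, x2)                              # strong, because the relation is linear
cor(log(x1), log(x2))                    # the log-scale relation is curved; the correlation changes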
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: factor Analysis

2002-01-29 Thread Huxley

Thank you for the explanation. But my question was unclear, so let me ask
again with an invented example.

I have 10 questions in a questionnaire. These questions are my 10 variables.
Consumers fill in this questionnaire for each of 15 products, e.g. cars. Because
the 10 variables (X1, X2, ..., X10) are correlated with each other, I use factor
analysis and (for convenience I ordered it) I get
Factor 1: X1, X2, X3, X4, X5, X6, X7
Factor 2: X8, X9, X10

I can, e.g., put X1 into 2-D space, because I know that
X1 = -1*F1 + (-1*F2). It means that X1 has co-ordinates X1 = (-1,-1).
That is simple. But I am not interested in positioning X1. For me it is important
where the products (cars) are in the 2-D space, so my question is how
to do that. I heard (but I do not know whether it is true) that using, e.g., the
means of variables X1, ..., X10 and the factor loadings I can do it, i.e. for
car 1 I multiply the factor loadings by the corresponding variable means and
get its position.
Could you help me verify this?
I would very much appreciate it.

Regards
Huxley

John Uebersax [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 A program like SAS or SPSS will calculate factor scores for you.  A
 factor score is an estimated location of an object (not a variable)
 relative to a factor.  If your factors are orthogonal, then you can
 plot each case using that case's score on Factor 1 and the score on
 Factor 2 as the X- and Y- coordinates of in a 2-dimensional space.

 I believe the formula for estimating factor scores of a common-factor
 model is not trvial (unless all communalities are 1).  Therefore one
 might as well let the software calculate factor scores.  The topic is
 well explained in the SAS manual (PROC FACTOR)--perhaps also in the
 SPSS manual.

 --
--
 John Uebersax, PhD (805) 384-7688
 Thousand Oaks, California  (805) 383-1726 (fax)
 email: [EMAIL PROTECTED]

 Agreement Stats:
http://ourworld.compuserve.com/homepages/jsuebersax/agree.htm
 Latent Structure:  http://ourworld.compuserve.com/homepages/jsuebersax
 Existential Psych: http://members.aol.com/spiritualpsych
 Diet  Fitness:http://members.aol.com/WeightControl101
 --
--

 Huxley [EMAIL PROTECTED] wrote in message
news:a2u3sa$q3e$[EMAIL PROTECTED]...
  Hi,
  I've got a question. Does anyone know how to set object in 2-factor
  dimensional space ...
  I heard that factor score for a product is equal to product of the
suitable
  factor loadings and variables mean. i.e.
  f(m,p)=a(1,m)u(1,p) +a(2,m)u(2,p)+ ...+a(j,m)u(j,p)
  where: f(m,d) - factor score for m-factor,  p-th - consumer product ,
u(*) -
  mean for variable j and product p.
  Could you tell me is this true? How to proof this formally




=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: factor Analysis

2002-01-29 Thread Gottfried Helms

It's not so simple. You have to do matrix-inversion for
that. 

If your statistical program is able to spit out factor scores,
you just take these as your coordinates. For each of your objects
you get a value on each factor, which you can use as coordinates 
in the factor space. 

Regards -

Gottfried.


Huxley schrieb:
 
 Thank you for explanation. Bu my question was unclear therefore let me ask
 again. I invented an exapmle.
 
 I have 10 questions in a questionnaire. These questions are my 10 variables.
 A consumers fill this questionnaire for each 15 products e.g cars. Because
 10 variables (X1, X2, ...,X10) are correlated with each other I use factor
 analysis and (for convinence I ordered it) I get
 Factor1: X1,X2,X3,X4,X5,X6,X7
 Factor2: X8,X9,X10
 
 I can  e.g put X1 into 2-D space, because I know that
 X1= -1*F1+ (-1*F2). It means that X1 has co-ordinates X1=(-1,-1).
 It's simple. But I'm not interested in positioning X1. For me it's important
 where there are products (cars) in 2-D space. Therefore my question is how
 to do it. I heard (but I do not know) that using e.g variable X1,...X10
 mean and factor loadings I can do it i.e. for car1: I multiple  factor
 loadings and variables mean (suitable) and I get this position
 Could you help me verify this?
 I would be very appreciate
 
 Regards
 Huxley



=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: factor Analysis

2002-01-29 Thread Huxley


Gottfried Helms [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 It's not so simple. You have to do matrix-inversion for
 that.

Not simple? I heard that by taking suitable factor loadings and each variable's
mean I can obtain this space, e.g. (I do not know whether it is true):
let the means for car 1 on the 10 questions (variables) be
mean X1 = 1
mean X2 = 2
...
mean X10 = 10
I have 2 factor scores.
I have the factor loadings a(i,j), therefore the first factor score, the
co-ordinate for car 1, is
F1(for car1) = 1*a(1,1) + 2*a(2,1) + 3*a(3,1) + ... + 10*a(10,1)
Is that true?

Huxley




=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: factor Analysis

2002-01-29 Thread Pedro . Valero-Mora


What you need is a program that makes biplots for principal 
components. ViSta, a freeware program, will do it for you. In fact, 
it includes example data about cars, and the goal of the analysis 
is to visualize them in the space of the variables.

Pedro 
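
In base R (rather than ViSta), the same kind of picture can be sketched with a
biplot of a principal components fit; mtcars is just a convenient built-in car
data set, not the data referred to above:

pc <- princomp(mtcars, cor = TRUE)   # PCA on the correlation matrix
biplot(pc)                           # cases (cars) and variables in one 2-D display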

 It's not so simple. You have to do matrix-inversion for
 that. 
 
 If your statistical program is able to spit out factor scores,
 you just take these as your coordinates. For each of your objects
 you get values in each factor, which you can use as coordinates 
 in the factorspace. 
 
 Regards -
 
 Gottfried.
 
 
 Huxley schrieb:
  
  Thank you for explanation. Bu my question was unclear therefore let me ask
  again. I invented an exapmle.
  
  I have 10 questions in a questionnaire. These questions are my 10 variables.
  A consumers fill this questionnaire for each 15 products e.g cars. Because
  10 variables (X1, X2, ...,X10) are correlated with each other I use factor
  analysis and (for convinence I ordered it) I get
  Factor1: X1,X2,X3,X4,X5,X6,X7
  Factor2: X8,X9,X10
  
  I can  e.g put X1 into 2-D space, because I know that
  X1= -1*F1+ (-1*F2). It means that X1 has co-ordinates X1=(-1,-1).
  It's simple. But I'm not interested in positioning X1. For me it's important
  where there are products (cars) in 2-D space. Therefore my question is how
  to do it. I heard (but I do not know) that using e.g variable X1,...X10
  mean and factor loadings I can do it i.e. for car1: I multiple  factor
  loadings and variables mean (suitable) and I get this position
  Could you help me verify this?
  I would be very appreciate
  
  Regards
  Huxley
 
 
 
 =
 Instructions for joining and leaving this list, remarks about the
 problem of INAPPROPRIATE MESSAGES, and archives are available at
   http://jse.stat.ncsu.edu/
 =
 





=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: factor Analysis

2002-01-29 Thread Gottfried Helms

Huxley schrieb:
 
 Gottfried Helms [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]...
  It's not so simple. You have to do matrix-inversion for
  that.
 
 Not simple? I heard that taking suitable factor loadings and every variable
 mean I can obtain this space. e.g. (I do not know is it true)
 Let mean for car1 and questions 10 (variables):
 mean X1=1
 mean X2=2
 ..
 mean X10=10
 I have 2 factor score.
 factor loadins (aij) I have, therefore for first factor score, co-odrinate
 for car1 is
 F1(for car1)=1*a(1,1)+2*a(2,1)+3*a(3,1)+...+10*a(10,1)
 is it true?
 
 Huxley

Loadings of factors f1, f2 for items x1, x2, x3, x4 ... 
      f1    f2
 x1  0.4   0.6
 x2  0.3   0.9
 x3  0.2  -0.1
 x4 -0.8  -0.4
 ...
Call this loadings matrix A and your correlation matrix R.
That means that A*A' = R (approximately, under the common-factor model).
Call your empirical data matrix (x1, x2, x3, ...) X.
Call the unknown factor scores SC.
Then it is assumed that

A*SC = X 

Then you must find inv(A) to be able to find SC:

inv(A)*A*SC = inv(A)*X
SC = inv(A)*X

If A is not square and/or its rank is lower than its
dimension, then you have to find a workaround and
compute a generalized inverse of A. 

I don't find it so simple ;-) 

Gottfried.
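
A rough R sketch of that algebra (mtcars is only a stand-in data set; proper
scoring methods, e.g. factanal's scores = "regression", also use the
uniquenesses, so treat this as the bare least-squares idea only):

library(MASS)                        # ginv(): Moore-Penrose generalized inverse
X   <- scale(mtcars)                 # n x p standardized data matrix
fit <- factanal(X, factors = 2)      # ML factor analysis
A   <- unclass(fit$loadings)         # p x k loadings matrix
SC  <- X %*% t(ginv(A))              # least-squares scores: SC %*% t(A) approximates X
plot(SC, xlab = "Factor 1", ylab = "Factor 2")   # the objects in factor space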


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: factor Analysis

2002-01-29 Thread Rich Ulrich

On Tue, 29 Jan 2002 10:52:30 +0100, Huxley [EMAIL PROTECTED] wrote:

 
 Gottfried Helms [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]...
  It's not so simple. You have to do matrix-inversion for
  that.
 
 Not simple? I heard that taking suitable factor loadings and every variable
 mean I can obtain this space. e.g. (I do not know is it true)
 Let mean for car1 and questions 10 (variables):
 mean X1=1
 mean X2=2
 ..
 mean X10=10
 I have 2 factor score.
 factor loadins (aij) I have, therefore for first factor score, co-odrinate
 for car1 is
 F1(for car1)=1*a(1,1)+2*a(2,1)+3*a(3,1)+...+10*a(10,1)
 is it true?

No, that is not true.  
Please believe them.

Factor loadings are *correlations*  and serve as descriptors.  
They were neither scaled nor computed as regression coefficients -
which is what you are trying to use them as.


Now, in clinical research, we don't usually bother to construct the
actual, real, true factor for our practical purposes.   For those
purposes, it is important to have some face validity for what 
the factor means.  And it is handy for replication, as well as 
for understanding, if we construct a factor as the summed score
(or average score) of a set of the items.

So I look at the high loadings.  For a good set of items, it
can be realistic and appropriate to 'assign'  each item to the
factor where its loading is greatest, thus using each item just
once in the overall set of several derived factors.  (For a set 
of items where many items were new and untested, it can 
be appropriate to discard some of the items -- where the loadings
were split, or were always small.)  Each factor is then scored as 
the average score for a subset of items.
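
A small R illustration of that unit-weighting idea (mtcars stands in for the
items; reverse-keying of negatively loading items is ignored for brevity):

Z    <- scale(mtcars)                             # standardized "items"
fit  <- factanal(Z, factors = 2, rotation = "varimax")
L    <- unclass(fit$loadings)
home <- apply(abs(L), 1, which.max)               # item -> factor with largest |loading|
scores <- sapply(1:2, function(j) rowMeans(Z[, home == j, drop = FALSE]))
head(scores)                                      # one unit-weighted score per factor per case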

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: factor Analysis

2002-01-28 Thread John Uebersax

A program like SAS or SPSS will calculate factor scores for you.  A
factor score is an estimated location of an object (not a variable)
relative to a factor.  If your factors are orthogonal, then you can
plot each case using that case's score on Factor 1 and the score on
Factor 2 as the X- and Y-coordinates in a 2-dimensional space.

I believe the formula for estimating factor scores of a common-factor
model is not trivial (unless all communalities are 1).  Therefore one
might as well let the software calculate factor scores.  The topic is
well explained in the SAS manual (PROC FACTOR)--perhaps also in the
SPSS manual.
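
For example, in R (a minimal sketch; mtcars merely stands in for the real data):

fit <- factanal(mtcars, factors = 2, scores = "regression", rotation = "varimax")
plot(fit$scores, xlab = "Factor 1", ylab = "Factor 2")   # each point is one case/object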


John Uebersax, PhD (805) 384-7688 
Thousand Oaks, California  (805) 383-1726 (fax)
email: [EMAIL PROTECTED]

Agreement Stats:   http://ourworld.compuserve.com/homepages/jsuebersax/agree.htm
Latent Structure:  http://ourworld.compuserve.com/homepages/jsuebersax
Existential Psych: http://members.aol.com/spiritualpsych
Diet  Fitness:http://members.aol.com/WeightControl101


Huxley [EMAIL PROTECTED] wrote in message news:a2u3sa$q3e$[EMAIL PROTECTED]...
 Hi,
 I've got a question. Does anyone know how to set object in 2-factor
 dimensional space ...
 I heard that factor score for a product is equal to product of the suitable
 factor loadings and variables mean. i.e.
 f(m,p)=a(1,m)u(1,p) +a(2,m)u(2,p)+ ...+a(j,m)u(j,p)
 where: f(m,d) - factor score for m-factor,  p-th - consumer product , u(*) -
 mean for variable j and product p.
 Could you tell me is this true? How to proof this formally


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



factor Analysis

2002-01-26 Thread Huxley

Hi,
I've got a question. Does anyone know how to place objects in a 2-factor
dimensional space? I.e., I have 2 factor scores, so I can put the variables
into this space. But the variables describe objects (here, 12 consumer
products), and I do not care about the variables in the space, only about
the products.
I heard that the factor score for a product is equal to the product of the
suitable factor loadings and variable means, i.e.
f(m,p) = a(1,m)u(1,p) + a(2,m)u(2,p) + ... + a(j,m)u(j,p)
where: f(m,p) is the factor score for the m-th factor and p-th consumer product,
and u(j,p) is the mean for variable j and product p.
Could you tell me whether this is true? How can one prove it formally?


Huxley




=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-09-20 Thread Robert Ehrlich

You may wish to consider NCSS (they have a web site).  It provides essentially the same
output as SAS but is run from templates, not the SAS language.
Less expensive, good documentation, excellent support.  However, it does not
provide an audit trail -- a necessary feature for
some governmental / legal groups.

PeterOut wrote:

 [EMAIL PROTECTED] (Magill, Brett) wrote in message 
news:[EMAIL PROTECTED]...
  Also check out R, a GNU implementation of the S language, most prominently
  known through its use in S-Plus.  R is a fully featured statisitical
  programming environment.  In its MVA (Multivariate) package, it includes
  routines for factor analysis using maximum liklihood estimation with varimax
  and promax rotations.
 

 I have installed R1.3.0 on  my Windows system and have noted that MVA
 is an add-on.  The FAQ tells how to obtain these add-ons but only for
 UNIX.  Is this add-on actually available for Windows?  If so, how do I
 obtain it?

 Thanks,
 Peter



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-09-18 Thread jcd

The UNESCO IDAMS team would be very pleased to collect your comments about the WinIDAMS
factor analysis procedure and any other matters regarding the software.

[EMAIL PROTECTED] (Richard Wright) wrote in message 
news:[EMAIL PROTECTED]...
 I can't say whether it any good, let alone the best. But I have just
 seen the following on an archaeological post.
 
 UNESCO has released WinIDAMS 1.0 for 32-bit Windows operating system.
 WinIDAMS is a freeware software package for numerical information
 processing and statistical analysis. It provides a complete set of
 data manipulation and validation facilities and a wide range of
 classical and advanced statistical techniques, including interactive
 construction of multidimensional tables, graphical exploration of data
 and time series analysis.
 
 You can find more information at the following url:
 
 http://www.unesco.org/idams 
 
 I have checked the URL. It does offer factor analysis.
 
 Richard Wright


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



SAS is the best package for Factor analysis in Windows

2001-09-06 Thread Andreas Karlsson

In my opinion SAS is the best computer package for Factor analysis in
Windows. And for most other analyses too...


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: SAS is the best package for Factor analysis in Windows

2001-09-06 Thread Andreas Karlsson

On Thu, 06 Sep 2001 13:41:32 GMT, [EMAIL PROTECTED] (Andreas
Karlsson) wrote:

In my opinion SAS is the best computer package for Factor analysis in
Windows. And for most other analyses too...


testing...



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



RE: Factor analysis - which package is best for Windows?

2001-09-06 Thread Magill, Brett

MVA comes with R base.  However, it is a separate library.  Libraries that
are not shipped with base are available as Windows binaries on CRAN, but you do
not have to worry about that for MVA.

Type:

library()

and you will get a list of the available packages.  To make MVA available
(i.e. load it), type:

library(mva)

then you can ask for help on, for example, the factor analysis function:

help(factanal)
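
A minimal run might then look like this (a sketch, not from the original
message; in R releases after this thread, the mva code was folded into the
standard 'stats' package, so no library() call is needed):

fit <- factanal(mtcars, factors = 2, rotation = "promax")  # ML factor analysis
print(fit, cutoff = 0.3)                                   # show loadings above 0.3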



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 05, 2001 5:42 PM
To: [EMAIL PROTECTED]
Subject: Re: Factor analysis - which package is best for Windows?


[EMAIL PROTECTED] (Magill, Brett) wrote in message
news:[EMAIL PROTECTED]...
 Also check out R, a GNU implementation of the S language, most prominently
 known through its use in S-Plus.  R is a fully featured statisitical
 programming environment.  In its MVA (Multivariate) package, it includes
 routines for factor analysis using maximum liklihood estimation with
varimax
 and promax rotations.
 

I have installed R1.3.0 on  my Windows system and have noted that MVA
is an add-on.  The FAQ tells how to obtain these add-ons but only for
UNIX.  Is this add-on actually available for Windows?  If so, how do I
obtain it?

Thanks,
Peter


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-09-06 Thread Richard Wright

I can't say whether it is any good, let alone the best. But I have just
seen the following in an archaeological post.

UNESCO has released WinIDAMS 1.0 for 32-bit Windows operating system.
WinIDAMS is a freeware software package for numerical information
processing and statistical analysis. It provides a complete set of
data manipulation and validation facilities and a wide range of
classical and advanced statistical techniques, including interactive
construction of multidimensional tables, graphical exploration of data
and time series analysis.

You can find more information at the following url:

http://www.unesco.org/idams 

I have checked the URL. It does offer factor analysis.

Richard Wright


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-09-05 Thread PeterOut

[EMAIL PROTECTED] (Magill, Brett) wrote in message 
news:[EMAIL PROTECTED]...
 Also check out R, a GNU implementation of the S language, most prominently
 known through its use in S-Plus.  R is a fully featured statisitical
 programming environment.  In its MVA (Multivariate) package, it includes
 routines for factor analysis using maximum liklihood estimation with varimax
 and promax rotations.
 

I have installed R1.3.0 on  my Windows system and have noted that MVA
is an add-on.  The FAQ tells how to obtain these add-ons but only for
UNIX.  Is this add-on actually available for Windows?  If so, how do I
obtain it?

Thanks,
Peter


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-09-01 Thread Jerry Harder


Aron Landy [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 Problem is, SAS costs about $20,000 whereas CVF & IMSL come bundled for
 $800

 Aron

 John Uebersax [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]...
[snipped]
 --
 
 
  Aron Landy [EMAIL PROTECTED] wrote in message
 news:3b8b6418$0$8507$[EMAIL PROTECTED]...
   Any ideas, anyone? I am thinking of using IMSL (which comes free with
 Compaq
   Visual Fortran). Can I do better?
  
   Aron Landy


See R which is free and includes all the matrix manipulation functions that
you will probably require. http://lib.stat.cmu.edu/R/CRAN/

--
Good luck,

Jerry Harder
remove spamnein from address to reply



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-08-31 Thread John Uebersax

Thanks for the tip on KyPlot.  It does seem very nice.  

Two questions:

1.  As best I can tell, the Factor Analysis routines work off
a correlation or covariance matrix.  At least from a perusal
of the Help index, I can't see how to run Factor Analysis from
raw data, or to calculate a correlation/covariance matrix from 
raw data (short of applying matrix manipulations).  Is there
a way to produce a corr/cov matrix within KyPlot?

2.  Does anyone know the current homepage for KyPlot?

Thanks

John Uebersax, PhD (805) 384-7688 
Thousand Oaks, California  (805) 383-1726 (fax)
email: [EMAIL PROTECTED]

Agreement Stats:   http://ourworld.compuserve.com/homepages/jsuebersax/agree.htm
Latent Structure:  http://ourworld.compuserve.com/homepages/jsuebersax
Existential Psych: http://members.aol.com/spiritualpsych


 [EMAIL PROTECTED] (Richard Wright) wrote in message 
news:[EMAIL PROTECTED]...
 KyPlot runs under Windows, is freeware and gives you several factor
 analysis algorithms to choose from.
 
 http://www.rocketdownload.com/Details/Math/kyplot.htm


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-08-30 Thread Aron Landy

I have tried it and it is amazing. A bargain ;)


Richard Wright [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 KyPlot runs under Windows, is freeware and gives you several factor
 analysis algorithms to choose from.

 http://www.rocketdownload.com/Details/Math/kyplot.htm







=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



RE: Factor analysis - which package is best for Windows?

2001-08-30 Thread Magill, Brett

Also check out R, a GNU implementation of the S language, most prominently
known through its use in S-Plus.  R is a fully featured statistical
programming environment.  In its MVA (multivariate) package, it includes
routines for factor analysis using maximum likelihood estimation with varimax
and promax rotations.

R is open-source, which means that it is frequently updated and, most
importantly, it can be downloaded free of charge.  The only downside (to
some) is that at this stage of its development R is completely
command-prompt driven.  However, I find the R language intuitive and easy to
learn.

http://www.r-project.org


-Original Message-
From: Aron Landy [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 30, 2001 6:33 AM
To: [EMAIL PROTECTED]
Subject: Re: Factor analysis - which package is best for Windows?


I have tried it and it is amazing. A bargain ;)


Richard Wright [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 KyPlot runs under Windows, is freeware and gives you several factor
 analysis algorithms to choose from.

 http://www.rocketdownload.com/Details/Math/kyplot.htm







=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-08-29 Thread Richard Wright

KyPlot runs under Windows, is freeware and gives you several factor
analysis algorithms to choose from.

http://www.rocketdownload.com/Details/Math/kyplot.htm


On Wed, 29 Aug 2001 23:59:44 +0100, Aron Landy [EMAIL PROTECTED]
wrote:

Problem is, SAS costs about $20,000 whereas CVF & IMSL come bundled for
$800

Aron



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Factor analysis - which package is best for Windows?

2001-08-28 Thread Aron Landy

Any ideas, anyone? I am thinking of using IMSL (which comes free with Compaq
Visual Fortran). Can I do better?

Aron Landy





=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor analysis - which package is best for Windows?

2001-08-28 Thread Magenta


Aron Landy [EMAIL PROTECTED] wrote in message
news:3b8b6418$0$8507$[EMAIL PROTECTED]...
 Any ideas, anyone? I am thinking of using IMSL (which comes free with
Compaq
 Visual Fortran). Can I do better?

Any of the standard statistical packages should be fine (e.g. SPSS, SAS,
S-Plus, Statistica, Minitab).  All have Windows versions, and all have
different types of site licenses.  If you are a student, you may be able to
get a student discount on the statistical software through your educational
institute.  You may also be able to locate a demonstration version, although
you would then have problems once the evaluation period ended (e.g.
inability to open the package-specific files).  I just recommend going with
a package that statisticians use, then you know that the results produced
are accurate.  Your choice of package will possibly be constrained by the
ease of use of the package (and even when you can programme, a menu system
can still be much more rapid).

I've not heard of IMSL, but then I'm not a Fortran programmer (SPSS syntax,
SAS, and VBA are my limits - but about to learn Sax!!!).

Hope this helps, and good luck with your analysis!  :-)

cheers
Michelle




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Normality in Factor Analysis

2001-06-25 Thread Glen Barnett


Robert Ehrlich [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 Calculation of eigenvalues and eigenvalues requires no assumption.
 However evaluation of the results IMHO implicitly assumes at least a
 unimodal distribution and reasonably homogeneous variance for the same
 reasons as ANOVA or regression.  So think of th consequencesof calculating
 means and variances of a strongly bimodal distribution where no sample
 ocurrs near the mean and all samples are tens of standard devatiations
 from the mean.

The largest number of standard deviations that all the data can be from the
mean is 1: if every observation were more than one s.d. from the mean, the
average squared deviation would exceed the variance, a contradiction.

To get some data further away than that, some of it has to be less than 1 s.d.
from the mean.

Glen





=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Normality in Factor Analysis

2001-06-22 Thread Robert Ehrlich

Calculation of eigenvalues and eigenvectors requires no assumption.
However, evaluation of the results IMHO implicitly assumes at least a
unimodal distribution and reasonably homogeneous variance, for the same
reasons as ANOVA or regression.  So think of the consequences of calculating
means and variances of a strongly bimodal distribution where no sample
occurs near the mean and all samples are tens of standard deviations
from the mean.

 Hi,

 I have a question regarding factor analysis: Is normality an important
 precondition for using factor analysis?

 If no, are there any books that justify this.



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Maximum likelihood Was: Re: Factor Analysis

2001-06-18 Thread Herman Rubin

In article [EMAIL PROTECTED],
Ken Reed  [EMAIL PROTECTED] wrote:
It's not really possible to explain this in lay person's terms. The
difference between principal factor analysis and common factor analysis is
roughly that PCA uses raw scores, whereas factor analysis uses scores
predicted from the other variables and does not include the residuals.
That's as close to lay terms as I can get.

I have never heard a simple explanation of maximum likelihood estimation,
but --  MLE compares the observed covariance matrix with a  covariance
matrix predicted by probability theory and uses that information to estimate
factor loadings etc that would 'fit' a normal (multivariate) distribution.

MLE factor analysis is commonly used in structural equation modelling, hence
Tracey Continelli's conflation of it with SEM. This is not correct though.

I'd love to hear simple explanation of MLE!

MLE is triviality itself, if you do not make any attempt to
state HOW it is to be carried out.

For each possible value X of the observation, and each state
of nature \theta, there is a probability (or density with 
respect to some base measure) P(X | \theta).  There is no
assumption that X is a single real number; it can be anything;
the same holds about \theta.

What MLE does is to choose the \theta which makes P(X | \theta)
as large as possible.  That is all there is to it.
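
A toy numerical version of that definition in R (the data are invented): choose
the theta that makes P(X | theta) largest for a handful of 0/1 observations.

x <- c(1, 1, 0, 1, 0, 1, 1, 1)                       # hypothetical observations
loglik <- function(theta) sum(dbinom(x, size = 1, prob = theta, log = TRUE))
optimize(loglik, c(0.001, 0.999), maximum = TRUE)$maximum   # close to mean(x)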

-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Normality in Factor Analysis

2001-06-17 Thread Herman Rubin

In article 9gg7ht$qa3$[EMAIL PROTECTED],
haytham siala [EMAIL PROTECTED] wrote:
Hi,

I have a question regarding factor analysis: Is normality an important
precondition for using factor analysis?

If no, are there any books that justify this.

Factor analysis is quite robust against non-normality.
The essential factor structure is little affected by it
at all, although the representation may get somewhat
sensitive if data-dependent normalizations are used, such
as using correlations rather than covariances, or forcing
normalization on the covariance matrix of the factors.

Some of this is in my paper with Anderson in the
Proceedings of the Third Berkeley Symposium.  The result
on the asymptotic distribution, not at all difficult to
derive, is in one of my abstracts in _Annals of
Mathematical Statistics_, 1955.  It is basically this:

Suppose the factor model is 

x = \Lambda f + s,

f the common factors and s the specific factors.  Further
suppose that f and s, and also the elements of s, are
uncorrelated, and there is adequate normalization and
smooth identification of the model by the elements of
\Lambda alone.  Now estimate \Lambda, M, the covariance
matrix of f, and S, the diagonal covariance matrix of s.
Assuming the usual assumptions for asymptotic normality of
the sample covariances of the elements of f with s, and of
the pairs of different elements of s, the asymptotic
distribution of the deviations of the estimates of \Lambda and of the
SAMPLE values of M and S from their actual values will have the
expected asymptotic joint normal distribution.  This makes
no assumption about the distribution of M and S about 
their expected values, which is the main place where there
is an effect of normality. 



-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor Analysis

2001-06-17 Thread Ken Reed

It's not really possible to explain this in lay person's terms. The
difference between principal components analysis and common factor analysis is
roughly that PCA uses the raw scores, whereas factor analysis uses scores
predicted from the other variables and does not include the residuals.
That's as close to lay terms as I can get.

I have never heard a simple explanation of maximum likelihood estimation,
but -- MLE compares the observed covariance matrix with a covariance
matrix predicted by probability theory and uses that information to estimate
factor loadings etc. that would 'fit' a (multivariate) normal distribution.

MLE factor analysis is commonly used in structural equation modelling, hence
Tracey Continelli's conflation of it with SEM. This is not correct, though.

I'd love to hear a simple explanation of MLE!
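
A rough side-by-side in R (mtcars is only a stand-in data set):

pc <- princomp(mtcars, cor = TRUE)     # principal components: no specific variances
fa <- factanal(mtcars, factors = 2)    # common factors by ML, with uniquenesses
pc$loadings[, 1:2]                     # component loadings
fa$loadings                            # factor loadings
fa$uniquenesses                        # variance treated as specific, not common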



 From: [EMAIL PROTECTED] (Tracey Continelli)
 Organization: http://groups.google.com/
 Newsgroups: sci.stat.consult,sci.stat.edu,sci.stat.math
 Date: 15 Jun 2001 20:26:48 -0700
 Subject: Re: Factor Analysis
 
 Hi there,
 
 would someone please explain in lay person's terms the difference
 betwn.
 principal components, commom factors, and maximum likelihood
 estimation
 procedures for factor analyses?
 
 Should I expect my factors obtained through maximum likelihood
 estimation
 tobe highly correlated?  Why?  When should I use a Maximum likelihood
 estimation procedure, and when should I not use it?
 
 Thanks.
 
 Rita
 
 [EMAIL PROTECTED]
 
 
 Unlike the other methods, maximum likelihood allows you to estimate
 the entire structural model *simultaneously* [i.e., the effects of
 every independent variable upon every dependent variable in your
 model].  Most other methods only permit you to estimate the model in
 pieces, i.e., as a series of regressions whereby you regress every
 dependent variable upon every independent variable that has an arrow
 directly pointing to it.  Moreover, maximum likelihood actually
 provides a statistical test of significance, unlike many other methods
 which only provide generally accepted cut-off points but not an actual
 test of statistical significance.  There are very few cases in which I
 would use anything except a maximum likelihood approach, which you can
 use in either LISREL or if you use SPSS you can add on the module AMOS
 which will do this as well.
 
 
 Tracey



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor Analysis

2001-06-16 Thread Alexandre Moura

Dear Haytham,

Another issue concerning a measure of the latent construct is
unidimensionality.  Hair et al. (1998): "Unidimensionality is an assumption
underlying the calculation of reliability and is demonstrated when
indicators of a construct have acceptable fit on a
single-factor (one-dimensional) model. (...) The use of reliability measures,
such as Cronbach's alpha, does not ensure unidimensionality but instead assumes
it exists. The researcher is encouraged to perform unidimensionality tests
on all multiple-indicator constructs before assessing their reliability."

This reference is very important:

Gerbing, David W., & Anderson, James C. An updated paradigm for scale
development incorporating unidimensionality and its assessment.
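
For reference, Cronbach's alpha itself is a short computation; a minimal R
sketch (the function name and the stand-in item matrix are hypothetical, not
from Hair et al. or Gerbing & Anderson):

cronbach_alpha <- function(items) {          # items: n x k matrix of item scores
  k <- ncol(items)
  k / (k - 1) * (1 - sum(apply(items, 2, var)) / var(rowSums(items)))
}
cronbach_alpha(scale(mtcars)[, c("cyl", "disp", "hp", "wt")])   # toy call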

Best regards,

Alexandre Moura.
P.S. Please accept my apologies for my English mistakes.



- Original Message -
From: haytham siala [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, June 15, 2001 5:40 PM
Subject: Factor Analysis


 Hi,
 I will appreciate if someone can help me with this question: if factors
 extracted from a factor analysis were found to be reliable (using an
 internal consistency test like a Cronbach alpha), can they be used to
 represent a measure of the latent construct? If yes, are there any
 references or books that justify this technique?






 =
 Instructions for joining and leaving this list and remarks about
 the problem of INAPPROPRIATE MESSAGES are available at
   http://jse.stat.ncsu.edu/
 =




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor Analysis

2001-06-16 Thread Alexandre Moura

The complete reference:

Gerbing, David W., & Anderson, James C. An updated paradigm for scale
development incorporating unidimensionality and its assessment. Journal of
Marketing Research, Vol. XXV (May 1988).

Alexandre Moura.

- Original Message -
From: Alexandre Moura [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, June 16, 2001 9:26 AM
Subject: Re: Factor Analysis


 Dear Haytham,

 other issue concern with a measure of the latent construct is the
 unidimensionality.  Hair et alli(1998): unidimensionality is an
assumption
 underlying the calculation of reliability and is demonstraded when
 indicators of a construct have acceptable fit on a
 single-factor(one-dimensional) model.(...) The use of reliability
measures,
 such Cronbach´s alpha, does not ensure unidimensionality but instead
assumes
 it exists. The researcher is encouraged to perform unidimensionality tests
 on all multiple-indicator constructs before assessing their reliability.

 This reference is very important:

 Gerbing, David W., Anderson, James C. An updated paadigm for scale
 development incorporating unidimensionality and its assesment.

 Best regards,

 Alexandre Moura.
 P.S. Please accept my apologies for my English mistakes.



 - Original Message -
 From: haytham siala [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Friday, June 15, 2001 5:40 PM
 Subject: Factor Analysis


  Hi,
  I will appreciate if someone can help me with this question: if factors
  extracted from a factor analysis were found to be reliable (using an
  internal consistency test like a Cronbach alpha), can they be used to
  represent a measure of the latent construct? If yes, are there any
  references or books that justify this technique?
 
 
 
 
 
 
  =
  Instructions for joining and leaving this list and remarks about
  the problem of INAPPROPRIATE MESSAGES are available at
http://jse.stat.ncsu.edu/
  =




 =
 Instructions for joining and leaving this list and remarks about
 the problem of INAPPROPRIATE MESSAGES are available at
   http://jse.stat.ncsu.edu/
 =



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Normality in Factor Analysis

2001-06-16 Thread haytham siala

Hi,

I have a question regarding factor analysis: Is normality an important
precondition for using factor analysis?

If no, are there any books that justify this.




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Normality in Factor Analysis

2001-06-16 Thread Eric Bohlman

In sci.stat.consult haytham siala [EMAIL PROTECTED] wrote:
 I have a question regarding factor analysis: Is normality an important
 precondition for using factor analysis?

It's necessary for testing hypotheses about factors extracted by 
Joreskog's maximum-likelihood method.  Otherwise, no.

 If no, are there any books that justify this.

Any book on factor analysis or multivariate statistics in general.



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor Analysis

2001-06-15 Thread Tracey Continelli

Hi there,

Would someone please explain in lay person's terms the difference
between principal components, common factors, and maximum likelihood
estimation procedures for factor analyses?

Should I expect my factors obtained through maximum likelihood estimation
to be highly correlated?  Why?  When should I use a maximum likelihood
estimation procedure, and when should I not use it?

Thanks.

Rita

[EMAIL PROTECTED]


Unlike the other methods, maximum likelihood allows you to estimate
the entire structural model *simultaneously* [i.e., the effects of
every independent variable upon every dependent variable in your
model].  Most other methods only permit you to estimate the model in
pieces, i.e., as a series of regressions whereby you regress every
dependent variable upon every independent variable that has an arrow
directly pointing to it.  Moreover, maximum likelihood actually
provides a statistical test of significance, unlike many other methods
which only provide generally accepted cut-off points but not an actual
test of statistical significance.  There are very few cases in which I
would use anything except a maximum likelihood approach, which you can
use in either LISREL or if you use SPSS you can add on the module AMOS
which will do this as well.


Tracey


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor Analysis

2001-06-15 Thread Timothy W. Victor

_Psychometric Theory_, by Jum Nunnally to name one.

haytham siala wrote:
 
 Hi,
 I will appreciate if someone can help me with this question: if factors
 extracted from a factor analysis were found to be reliable (using an
 internal consistency test like a Cronbach alpha), can they be used to
 represent a measure of the latent construct? If yes, are there any
 references or books that justify this technique?

-- 
Timothy Victor
[EMAIL PROTECTED]
Policy Research, Evaluation, and Measurement
Graduate School of Education
University of Pennsylvania


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: factor analysis of dichotomous variables

2001-05-01 Thread John Uebersax

A list of such programs and discussion can be found at:

http://ourworld.compuserve.com/homepages/jsuebersax/binary.htm

The results of Knol & Berger (1991) and Parry & McArdle (1991) 
(see the above web page for citations) suggest that there is not much 
difference in results between the Muthen method and the simpler 
method of factoring tetrachoric correlations.  For additional 
information (including examples using PRELIS/LISREL and SAS) on 
factoring tetrachorics, see

http://ourworld.compuserve.com/homepages/jsuebersax/irt.htm 
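
A hedged sketch of the tetrachoric route in R, assuming the contributed 'psych'
package (not mentioned in the thread) and using invented 0/1 data:

library(psych)                                     # assumed: tetrachoric(), fa()
set.seed(7)
latent <- rnorm(300)
items  <- sapply(c(-0.5, 0, 0.5, 1),               # four thresholds -> four binary items
                 function(th) as.numeric(latent + rnorm(300) > th))
tet <- tetrachoric(items)                          # tetrachoric correlation matrix in tet$rho
fa(r = tet$rho, nfactors = 1, n.obs = 300, fm = "ml")   # factor the tetrachorics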

Hope this helps.

John Uebersax


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



PCA and factor analysis: when to use which

2001-04-18 Thread Ken Reed

What is the basis for deciding when to use principal components analysis and
when to use factor analysis?  Could anyone describe a problem that
illustrates the difference?



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: maximum likelihood factor analysis

2000-04-24 Thread Chuck Cleland

[EMAIL PROTECTED] wrote:
 A self-report scale was constructed to measure work ethic and included three
 conceptually derived components of work ethic.  Maximum likelihood factor
 analysis was then applied with the request of 3 factors to determine if the
 conceptually derived components actually represent empirical factors.  Is
 this an appropriate/acceptable manner of evaluating the factor structure of
 the scale?  Also, my version of SPSS (6.0) reports percent of variance
 accounted for by each factor, but doesn't indicate if this is common variance
 or total variance.  Does someone know which variance is reported.  Is maximum
 likelihood factor analysis used with either principle components or principal
 factors analysis?  I would appreciate any explanation someone might offer.  I
 have had difficulty finding any explanation on the web concerning these
 issues. Kary

Kary:
  You might want to check out chapter 13 in Tabachnick, B.G., &
Fidell, L.S. (1996). Using multivariate statistics (3rd Edition). New
York: Harper Collins.  This chapter has a nice discussion of the
differences and similarities between principal components and common
factor analysis, and it has some material on different estimation
procedures.  It sounds like you really want to do a confirmatory
factor analysis, in which you could specify which items load on which
of the three factors.  I don't think SPSS 6.0 will do CFA, but you may
want to look into it anyway.

HTH,

Chuck
 
--
Chuck Cleland
Institute for the Study of Child Development
UMDNJ-Robert Wood Johnson Medical School
97 Paterson Street
New Brunswick, NJ 08903
phone: (732) 235-7699
  fax: (732) 235-6189
http://www2.umdnj.edu/iscdweb/
--


===
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===



maximum likelihood factor analysis

2000-04-22 Thread kjessup

A self-report scale was constructed to measure work ethic and included three
conceptually derived components of work ethic.  Maximum likelihood factor
analysis was then applied, requesting 3 factors, to determine whether the
conceptually derived components actually represent empirical factors.  Is
this an appropriate/acceptable way of evaluating the factor structure of
the scale?  Also, my version of SPSS (6.0) reports the percent of variance
accounted for by each factor, but doesn't indicate whether this is common
variance or total variance.  Does someone know which variance is reported?
Is maximum likelihood factor analysis used with either principal components
or principal factors analysis?  I would appreciate any explanation someone
might offer.  I have had difficulty finding any explanation on the web
concerning these issues.  Kary


Sent via Deja.com http://www.deja.com/
Before you buy.


===
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===



Re: Texts: Factor Analysis

2000-04-05 Thread Gottfried Helms

 [EMAIL PROTECTED] wrote:
 
  What are your favorite book(s) on factor analysis?
 
  What do you think of R. Gorsuch's book?
 

My favorite is Stan Mulaik, "The Foundations of Factor Analysis".
It is comprehensive and still straightforward, from the introduction
through all the topics covered. I have tried several others, but none
was like it. Not being a trained mathematician, I felt I got
most of what I needed, with a good insight into the principles.

A similar one is by Dirk Revenstorf, but I doubt it is available
in English.

Gottfried Helms.


-- 
   -
Gottfried Helms Soz.Päd./Soz.Arb. 
FB04 // FG Prevention  Rehabilitation at University
D-34109 Kassel  Moenchebergstr. 19 B

email: mailto:[EMAIL PROTECTED]
www:   http://www.uni-kassel.de/~helms



===
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===



Re: Texts: Factor Analysis

2000-04-05 Thread dennis roberts

go to http://www.sagepub.com/

search on ... factor analysis ... some nice short books here

At 03:06 PM 4/5/00 +0200, Gottfried Helms wrote:
  [EMAIL PROTECTED] wrote:
 
   What are your favorite book(s) on factor analysis?
  
   What do you think of R. Gorsuch's book?
  

My favorite is Stan Mulaik "The foundations of factor analysis".
It is comprehensive and still straightforward from the introduction
to all covered themes. I have tried different others, but none
was like that. Not being educated mathematician I felt I got
most that I needed with a good insight of the principles.

One similar is from Dirk Revenstorf, but I doubt it is available
in english.

Gottfried Helms.


--
-
Gottfried Helms Soz.Päd./Soz.Arb.
FB04 // FG Prevention  Rehabilitation at University
D-34109 Kassel  Moenchebergstr. 19 B

email: mailto:[EMAIL PROTECTED]
www:   http://www.uni-kassel.de/~helms



===
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===




===
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===



Re: Texts: Factor Analysis

2000-04-05 Thread Gene Gallagher

For the natural sciences, try Reyment & Joreskog, Applied Factor Analysis
in the Natural Sciences, Cambridge Univ Press.

In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] wrote:
  [EMAIL PROTECTED] wrote:
 
   What are your favorite book(s) on factor analysis?
  
   What do you think of R. Gorsuch's book?
  

 My favorite is Stan Mulaik "The foundations of factor analysis".
 It is comprehensive and still straightforward from the introduction
 to all covered themes. I have tried different others, but none
 was like that. Not being educated mathematician I felt I got
 most that I needed with a good insight of the principles.

 One similar is from Dirk Revenstorf, but I doubt it is available
 in english.

 Gottfried Helms.

 --
-
 Gottfried Helms Soz.Päd./Soz.Arb.
 FB04 // FG Prevention  Rehabilitation at University
 D-34109 Kassel  Moenchebergstr. 19 B
 
 email: mailto:[EMAIL PROTECTED]
 www:   http://www.uni-kassel.de/~helms
 


--
Eugene D. Gallagher
ECOS, UMASS/Boston


Sent via Deja.com http://www.deja.com/
Before you buy.


===
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===



Re: Texts: Factor Analysis

2000-04-04 Thread James E. Strouse

Check out 'Multivariate Data Analysis' (4th Ed.)
by Hair, Anderson, Tatham & Black.
Great book.

[EMAIL PROTECTED]


[EMAIL PROTECTED] wrote:

 What are your favorite book(s) on factor analysis?

 What do you think of R. Gorsuch's book?

 Thanks,
 Scott Millis







Re: FACTOR ANALYSIS

2000-01-19 Thread lthayer

If these factors were length measured in feet and length measured in
yards, would it make sense to have both in the same model? No.

If these factors were measures of ability like IQ, say IQ test 1 and IQ
test 2, then the answer depends on how the two tests are related.  If they
are highly correlated, drop one.  If they measure different things, then
both should be included, if significant.  If they overlap, look at your
hypothesis and make a judgment based on the results.
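
As a rough sketch of how that judgment might be applied (the loading matrix
and the 0.20 margin below are made-up numbers, not from any real analysis),
one could assign each item to its highest-loading factor but flag items whose
top two loadings are too close to call:

import numpy as np

# Hypothetical rotated loading matrix: rows = items, columns = factors.
# All numbers are invented for illustration.
loadings = np.array([
    [0.78, 0.12],   # item 1: loads cleanly on factor 1
    [0.71, 0.08],   # item 2: loads cleanly on factor 1
    [0.55, 0.49],   # item 3: cross-loads on both factors
    [0.10, 0.81],   # item 4: loads cleanly on factor 2
])

margin = 0.20  # illustrative cutoff: how much the primary loading must beat the secondary

for i, row in enumerate(np.abs(loadings), start=1):
    order = np.argsort(row)[::-1]                     # factors sorted by |loading|
    primary, secondary = row[order[0]], row[order[1]]
    if primary - secondary >= margin:
        print(f"item {i}: assign to factor {order[0] + 1}")
    else:
        print(f"item {i}: cross-loads ({primary:.2f} vs {secondary:.2f}); "
              "decide on content, or consider dropping it")

The items flagged as cross-loaders are the ones where content judgment, or
dropping the item, comes into play.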


In article 864hr0$805$[EMAIL PROTECTED],
  "haytham siala" [EMAIL PROTECTED] wrote:
 Hi,

 I have a question related to factor analysis.

 If a questionnaire item was found to load significantly on more than
 one factor, and let us assume that each factor represents a potential
 measurement scale for a particular construct, should I retain the same
 item for both factors (scales), i.e. should that same item be included
 in the two measurement scales? Or should I take the highest loading of
 the item as the decisive solution to which factor it should belong?

 Cheers.







FACTOR ANALYSIS 2

2000-01-19 Thread haytham siala

When I perform a factor analysis on the items of a questionnaire, should I
include the items that make up the Dependent Variables (DVs) as well as the
Independent Variables (IVs) in the analysis, or should I perform two separate
factor analyses, one on the items making up the Dependent Variables and
another on the items making up the Independent Variables?












Factor Analysis 3

2000-01-19 Thread haytham siala

Hi,

I am sorry that I am sending a lot of questions related to this subject and
here is another question:

If some dissimilar items load on a common factor (the factor does not seem to
make sense since it consists of some related and some completely unrelated
items), should I ignore that factor or should I delete the unrelated items
from the factor analysis?

Thanks in advance.





Re: SEM and Confirmatory factor analysis

2000-01-01 Thread Stanley Mulaik



--
In article 83ftvs$qjq$[EMAIL PROTECTED], "Haider Al-Katem"
[EMAIL PROTECTED] wrote:


 What is the difference between SEM and Confirmatory factor analysis?
 Can I perform either of those statistical analyses on a sample size of 50?


SEM (Structural Equation Modelling) is more general than confirmatory factor
analysis.  Confirmatory factor analysis only models causal relations from
latents to manifest indicators, leaving the latent variables simply
correlated with one another.  SEM allows causal relations between latents,
where some latents are effects of others, which in turn may be effects of
still others.
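
A minimal data-generating sketch of the distinction, using numpy only (the
loadings, the path coefficient, and the sample size are made up, not anyone's
actual model): in the CFA-style version the two latents are merely correlated,
in the SEM-style version one latent is an effect of the other, and the
measurement part is identical in both.

import numpy as np

rng = np.random.default_rng(0)
n = 500

# CFA-style: two latents that are simply correlated with one another.
latent_corr = np.array([[1.0, 0.6],
                        [0.6, 1.0]])
F_cfa = rng.multivariate_normal(mean=[0.0, 0.0], cov=latent_corr, size=n)

# SEM-style: a structural path between the latents (F2 is an effect of F1).
F1 = rng.normal(size=n)
F2 = 0.6 * F1 + rng.normal(scale=np.sqrt(1 - 0.6 ** 2), size=n)

# The measurement part is the same in both cases: each latent drives its own
# manifest indicators through loadings plus unique error.
def indicators(latent, loadings=(0.8, 0.7, 0.6)):
    return np.column_stack([lam * latent + rng.normal(scale=0.5, size=latent.size)
                            for lam in loadings])

X_cfa = np.hstack([indicators(F_cfa[:, 0]), indicators(F_cfa[:, 1])])
X_sem = np.hstack([indicators(F1), indicators(F2)])
print(X_cfa.shape, X_sem.shape)   # (500, 6) and (500, 6)

Fitting software differs in the same way: a CFA specification only states
which indicators load on which latent, while an SEM specification adds
regression paths among the latents.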

 



Re: Factor analysis

2000-01-01 Thread Stanley Mulaik



--
In article 83ftip$qdf$[EMAIL PROTECTED], "Haider Al-Katem"
[EMAIL PROTECTED] wrote:


 Hi,

 I have conducted a factor analysis on some questionnaire items. The
 dependent variables that I am measuring for example ('Intention To Buy',
 'Attitude towards a product'  and 'Trust in buying the product from a
 merchant' ) seem to load significantly high on two factors which leaves me
 with a NOT SIMPLE FACTOR STRUCTURE.

 I am assuming that since 'Intention To Buy', 'Attitude towards a product'
 and 'Trust in buying the product from a merchant'  all seem to be some type
 of an ATTITUDE , the significantly high factor loadings on the two factors
 may be justifiable.

 My questions are:

 1. Are my above interpretations of the result correct?

 2. If not, is there a statistical method that can help me overcome this
 'non-simple factor structure'?



You haven't indicated exactly what the indicators are of these dependent
variables.  If you only have three indicators, then you can only get one
common factor for them.  Two factors are underidentified for three
indicators.

Also beware of a possible simplex among your variables or a subset of them.
In that case a common factor model is not appropriate, yet it may be
misleading by fitting fairly well.
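
A quick parameter count shows why two factors are underidentified by three
indicators (this is just the usual degrees-of-freedom bookkeeping for an
exploratory factor model, sketched here in Python):

def efa_df(p, k):
    """Degrees of freedom of an exploratory factor model with p observed
    variables and k factors: unique (co)variances minus free parameters
    (p*k loadings + p uniquenesses, less k*(k-1)/2 rotational constraints)."""
    return p * (p + 1) // 2 - (p * k + p - k * (k - 1) // 2)

print(efa_df(3, 1))   #  0 -> just identified: one common factor is the most you can fit
print(efa_df(3, 2))   # -2 -> negative df: two factors are underidentified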





Re: Factor analysis

1999-12-20 Thread Chuck Cleland

Haider Al-Katem wrote:
 I have conducted a factor analysis on some questionnaire items. The
 dependent variables that I am measuring for example ('Intention To Buy',
 'Attitude towards a product'  and 'Trust in buying the product from a
 merchant' ) seem to load significantly high on two factors which leaves me
 with a NOT SIMPLE FACTOR STRUCTURE.
 
 I am assuming that since 'Intention To Buy', 'Attitude towards a product'
 and 'Trust in buying the product from a merchant'  all seem to be some type
 of an ATTITUDE , the significantly high factor loadings on the two factors
 may be justifiable.

Simple structure is present when each item loads high on one factor and
low on all of the others.  You have not said whether the two factors you
extracted can be named (if the first factor is ATTITUDE TOWARD PRODUCT
X, then what is the second factor?).  Confirmatory factor analysis (CFA)
is a special case of SEM (specifically the measurement model part of
SEM).  I would say that 50 cases is probably too low to warrant much
confidence in the results of an exploratory factor analysis or CFA.
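
For what it's worth, here is a small simulation sketch of what pursuing simple
structure by rotation looks like (the loadings, latent correlation, and sample
size are made up; it assumes a scikit-learn version recent enough, roughly
0.24 or later, to offer the rotation argument of FactorAnalysis):

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300

# Simulate six questionnaire items driven by two correlated attitude-like
# latents; all numbers are invented for illustration.
F = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 1.0]], size=n)
L = np.array([[0.8, 0.1], [0.7, 0.2], [0.6, 0.1],    # items 1-3: mainly factor 1
              [0.1, 0.8], [0.2, 0.7], [0.1, 0.6]])   # items 4-6: mainly factor 2
X = F @ L.T + rng.normal(scale=0.5, size=(n, 6))

for rot in (None, "varimax"):
    fa = FactorAnalysis(n_components=2, rotation=rot).fit(X)
    print(f"rotation={rot}")
    print(np.round(fa.components_.T, 2))   # rows = items, columns = factors

The rotated loadings are usually much closer to the "high on one factor, low
on the others" pattern than the unrotated ones.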

Chuck 

--
Chuck Cleland
Institute for the Study of Child Development
UMDNJ-Robert Wood Johnson Medical School
97 Paterson Street
New Brunswick, NJ 08903
phone: (732) 235-7699
  fax: (732) 235-6189
http://www2.umdnj.edu/iscdweb/
--



Re: Factor analysis

1999-12-19 Thread Rich Ulrich

On Sat, 18 Dec 1999 12:00:52 -, "Haider Al-Katem"
[EMAIL PROTECTED] wrote:

 I have conducted a factor analysis on some questionnaire items. The
 dependent variables that I am measuring for example ('Intention To Buy',
 'Attitude towards a product'  and 'Trust in buying the product from a
 merchant' ) seem to load significantly high on two factors which leaves me
 with a NOT SIMPLE FACTOR STRUCTURE.
 
 - Hey, two factors is pretty simple, if you start with a few dozen
items ...

 I am assuming that since 'Intention To Buy', 'Attitude towards a product'
 and 'Trust in buying the product from a merchant'  all seem to be some type
 of an ATTITUDE , the significantly high factor loadings on the two factors
 may be justifiable.
 
 My questions are:
 
 1. Are my above interpretations of the result correct?

Well, if "not simple" is an interpretation, it seems premature or
impossible for us readers to comment, because there is no content
worth commenting on.  If "may be justifiable" is an interpretation, it
is wimpy enough that I wouldn't claim it is incorrect.

 2. If not, is there a statistical method that can help me overcome this
 'non-simple factor structure'?

 And what goal is "overcome" supposed to indicate?  If there are
two factors, you can provide the outcome of your survey as two
composite scores instead of just one.
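
A minimal sketch of such composite scores, assuming you have already decided
which items belong with which factor (the data and the item groupings below
are made up purely for illustration):

import numpy as np

def composite_scores(X, item_groups):
    """Unit-weighted composites: z-score each item, then average the items
    assigned to each factor.  X is (respondents x items); item_groups maps a
    score name to the column indices of its items."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    return {name: Z[:, cols].mean(axis=1) for name, cols in item_groups.items()}

X = np.random.default_rng(2).normal(size=(100, 7))
scores = composite_scores(X, {"factor_1": [0, 1, 2, 3], "factor_2": [4, 5, 6]})
print({name: s.shape for name, s in scores.items()})   # one score per respondent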
-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html



Factor analysis

1999-12-18 Thread Haider Al-Katem

Hi,

I have conducted a factor analysis on some questionnaire items. The
dependent variables that I am measuring for example ('Intention To Buy',
'Attitude towards a product'  and 'Trust in buying the product from a
merchant' ) seem to load significantly high on two factors which leaves me
with a NOT SIMPLE FACTOR STRUCTURE.

I am assuming that since 'Intention To Buy', 'Attitude towards a product'
and 'Trust in buying the product from a merchant'  all seem to be some type
of an ATTITUDE , the significantly high factor loadings on the two factors
may be justifiable.

My questions are:

1. Are my above interpretations of the result correct?

2. If not, is there a statistical method that can help me overcome this
'non-simple factor structure'?

Thanks.