I've created an attitudinal survey using a 5-point Likert scale plus a "Not Applicable" response option. I originally coded "Not Applicable" as a missing value, but I now believe that was a mistake: it substantially reduces the number of cases available for the inter-item correlations. 1. What is the normal practice for coding such a "Not Applicable" option?
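For context on why coding "Not Applicable" as missing need not discard whole cases, here is a minimal sketch using hypothetical data (the item names and the NA code of 9 are my own illustration). pandas' DataFrame.corr() uses pairwise deletion, so each correlation is computed from every case that answered both items, rather than dropping a case from all correlations because of one NA:

```python
import numpy as np
import pandas as pd

# Hypothetical responses to three Likert items; 9 is the "Not Applicable" code.
raw = pd.DataFrame({
    "item1": [1, 2, 9, 4, 5, 3],
    "item2": [2, 2, 3, 9, 5, 3],
    "item3": [1, 3, 3, 4, 4, 2],
})

# Recode "Not Applicable" (9) as missing rather than treating it as a scale point.
responses = raw.replace(9, np.nan)

# DataFrame.corr() deletes cases pairwise: each correlation uses all cases
# that answered BOTH items, so a single NA does not remove the whole case
# from every correlation (as listwise deletion would).
corr = responses.corr()
print(corr)
```

Whether pairwise deletion (versus listwise deletion or imputation) is defensible depends on why respondents chose "Not Applicable", so this only illustrates the mechanics, not a recommendation.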
I will then conduct reliability and factor analyses. The practice I've read in the literature is to first run a reliability analysis on the entire survey (say 50 to 60 items), then a factor analysis to determine the number of factors in the survey. After reading about Cronbach's alpha, I'm not sure that order (reliability, then factor analysis) is correct. It seems the order should be factor analysis first, then reliability within the designated factors: reliability seems to refer to the internal consistency of a single group of common items (a factor), not of several such groups at once (as when reliability is run on an entire survey of 50 to 60 items). 2. What is common practice with reliability and factor analysis?
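The "reliability within designated factors" step described above can be sketched as follows. This is a minimal implementation of the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale), applied to the items of one factor; the function name and the example data are my own, not from any cited source:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (cases x items) matrix holding ONE factor's items.

    The idea is to run this separately on each factor identified by the
    factor analysis, rather than once on the whole 50-60 item survey.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: two items that vary together, as items loading on
# the same factor typically would, giving a high alpha.
factor_items = np.array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 5]], dtype=float)
print(cronbach_alpha(factor_items))
```

Note that alpha assumes the items measure a single dimension, which is one argument for establishing the factor structure first and only then computing alpha within each factor.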
