hi, Tricia

some content analysis software has an inter-rater reliability feature
that will give you a coefficient of agreement between raters (i can't
recall the specific test offhand, but Cohen's kappa and Krippendorff's
alpha are the common ones in content analysis). i don't know what is
considered an acceptable threshold, but i would suspect the higher,
the better.
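
for what it's worth, here is a minimal sketch (in Python, with made-up
category labels and data) of how one such coefficient, Cohen's kappa,
is computed for two raters:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa: (p_o - p_e) / (1 - p_e)
    n = len(rater_a)
    # observed agreement: share of items on which the raters match
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement, from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

rater_a = ["conflict", "consensus", "conflict", "conflict", "consensus"]
rater_b = ["conflict", "consensus", "consensus", "conflict", "consensus"]
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.62

kappa corrects the raw percent agreement (0.80 here) for the agreement
you would expect by chance alone, which is why it runs lower than the
simple percent match.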

what is an example of the discrete variable that you are coding for?

the hypothesis shouldn't change due to the coding; if you established a
hypothesis prior to conducting the research, then the coding (and the
inter-rater reliability coefficient) subsequently provides evidence
that supports or fails to support the hypothesis, just as in other
research endeavors. i suspect it would be difficult to satisfactorily
accept or reject a hypothesis based solely on a content analysis,
however. the reliability coefficient only identifies the level of
agreement between raters; it doesn't indicate the extent to which the
agreement is related to anything else, i.e., the relationship between
the variables being coded for.
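
to make that last point concrete, here is a hypothetical continuation
of the sketch above: the two raters agree perfectly (kappa = 1.0), yet
the coded variable tells you nothing about the outcome a hypothesis
might predict:

# reuses cohens_kappa from the sketch above; all data are made up
rater_a = ["conflict", "consensus", "conflict", "consensus"]
rater_b = list(rater_a)  # second rater matches the first exactly
outcome = [0, 0, 1, 1]   # a hypothetical outcome variable

print(cohens_kappa(rater_a, rater_b))  # 1.0: perfect agreement

# each code splits evenly across outcomes, so the perfectly reliable
# coding carries no information about the outcome
for code in ("conflict", "consensus"):
    print(code, [o for a, o in zip(rater_a, outcome) if a == code])
# conflict [0, 1]
# consensus [0, 1]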

john

John E. Glass, Ph.D.
Professor of Sociology
Treasurer, CCCC Faculty Association
Division of Social & Behavioral Sciences
Collin County Community College
Preston Ridge Campus
9700 Wade Boulevard
Frisco, TX 75035
+1-972-377-1622
http://iws.ccccd.edu/jglass/
[EMAIL PROTECTED]

"We are more concerned about the discovery of knowledge than with its
dissemination"
B. F. Skinner
