[no subject]
I, too, prefer closed-book tests in statistical methods courses. I also like short-answer items, some of which may be multiple-choice items. [Please don't gripe that all multiple-choice items assess only memory recall; such items, if constructed well, may be very helpful in assessing learning!] I think that a very important aspect of evaluating student performance and knowledge pertains to variability, in the sense of variability in class performance. If assessment of student learning does not reflect some variability in student performance, there is a very serious problem with the assessment process being used! Of course, variability may be expected to decrease as we get into more advanced courses. For whatever it is worth. Carl Huberty
Fw: We need your help!!!
It would be greatly appreciated if I could get references for the six topics mentioned in the message below. I assume that Conover (1999) discusses the first topic, but beyond that I am at a loss. Thanks in advance. Carl Huberty

----- Original Message -----
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Sunday, November 11, 2001 9:10 AM
Subject: We need your help!!!

Dear Prof. Huberty,

I hope this finds you well. I have a sincere friend from Egypt. He is preparing his proposal and will make his defense in the next few days. He asked for our help in finding information about the equations used for nonparametric factorial ANOVA:

- Bradley's collapsed and reduce test
- Harwell-Serlin's L test
- Blair-Sawilowsky's adjusted rank transform test
- Puri-Sen test
- Fawcett and Salter's aligned rank test

Please, Dr. Huberty, if you have a book about this or have access to this information, please help us however you can. Looking forward to hearing from you as soon as possible.

Fine regards,
Fawzy Ebrahim

= Instructions for joining and leaving this list and remarks about the problem of INAPPROPRIATE MESSAGES are available at http://jse.stat.ncsu.edu/ =
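[A note for other readers of the list: the five procedures named above are all variants of rank-based factorial ANOVA. None of them is sketched exactly here, but the common starting point, the plain rank transform of Conover and Iman (rank the pooled data, then run an ordinary factorial ANOVA on the ranks), can be illustrated in a few lines. The data and function names below are made up for illustration only.]

```python
# Minimal sketch of the rank-transform (RT) idea: replace all observations
# with their pooled mid-ranks, then compute ordinary two-way ANOVA F ratios
# on the ranks. Balanced designs only; illustrative, not a named test above.

from itertools import chain

def rank_all(values):
    """Mid-ranks over the pooled data (ties share the average rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def two_way_anova_on_ranks(cells):
    """cells[a][b] holds the observations for level a of factor A and
    level b of factor B (equal cell sizes). Returns (F_A, F_B, F_AB)."""
    a_levels, b_levels = len(cells), len(cells[0])
    n = len(cells[0][0])
    flat = list(chain.from_iterable(chain.from_iterable(cells)))
    ranks = rank_all(flat)
    # reshape the pooled ranks back into the cell layout
    r = [[ranks[(a * b_levels + b) * n:(a * b_levels + b) * n + n]
          for b in range(b_levels)] for a in range(a_levels)]
    N = a_levels * b_levels * n
    grand = sum(ranks) / N
    cell_mean = [[sum(r[a][b]) / n for b in range(b_levels)]
                 for a in range(a_levels)]
    a_mean = [sum(cell_mean[a]) / b_levels for a in range(a_levels)]
    b_mean = [sum(cell_mean[a][b] for a in range(a_levels)) / a_levels
              for b in range(b_levels)]
    # sums of squares for main effects, interaction, and error
    ss_a = b_levels * n * sum((m - grand) ** 2 for m in a_mean)
    ss_b = a_levels * n * sum((m - grand) ** 2 for m in b_mean)
    ss_ab = n * sum((cell_mean[a][b] - a_mean[a] - b_mean[b] + grand) ** 2
                    for a in range(a_levels) for b in range(b_levels))
    ss_err = sum((x - cell_mean[a][b]) ** 2
                 for a in range(a_levels) for b in range(b_levels)
                 for x in r[a][b])
    df_a, df_b = a_levels - 1, b_levels - 1
    df_ab = df_a * df_b
    df_err = N - a_levels * b_levels
    ms_err = ss_err / df_err
    return ss_a / df_a / ms_err, ss_b / df_b / ms_err, ss_ab / df_ab / ms_err
```

As I understand them, the adjusted and aligned rank variants asked about differ mainly in that they remove (align out) the other effects from the data before ranking, so that each F test is not contaminated by the remaining effects; the sketch above omits that step.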
No Subject
Why do articles appear in print when study methods, analyses, results, and conclusions are somewhat faulty? [This may be considered a follow-up to an earlier edstat interchange.] My first, and perhaps overly critical, response is that editorial practices are faulty. I don't find Dennis Roberts' "reasons" in his 27 Apr message very satisfying. I regularly have students write critiques of articles in their respective areas of study, and I discover many, many errors in reporting. I often ask myself, WHY? I can think of two reasons: 1) journal editors cannot or do not send manuscripts to reviewers with statistical analysis expertise; and 2) manuscript originators do not regularly seek out methodologists as co-authors. Which is more prevalent? For whatever it is worth ... Carl Huberty