Rushing in where angels fear to tread... The reported/calculated p-values are valid (if at all) _only_ inasmuch as the core assumptions about the data are maintained. Critical evaluation of the validity of those assumptions spells the difference between simplistic and useful applications of p-values.
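To make that point concrete, here is a minimal sketch (with invented data and a hypothetical mean of 5.0) of the quantity under discussion: a two-sided p-value for an observed sample mean against a hypothesized population mean, via a large-sample z test. Its validity rests exactly on the assumptions being debated: independent observations and a sample large enough for the normal approximation (a t distribution would be more careful at this sample size).

```python
import math
import statistics

sample = [4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 5.2, 4.7, 5.5]  # invented data
mu0 = 5.0  # hypothesized population mean (assumption for illustration)

n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)           # sample standard deviation
z = (xbar - mu0) / (s / math.sqrt(n))  # standardized test statistic

# Two-sided p-value from the standard normal tails; with n = 10 a
# t-based p-value would be somewhat larger.
p_value = math.erfc(abs(z) / math.sqrt(2))
print(f"mean = {xbar:.3f}  z = {z:.3f}  p = {p_value:.3f}")
```

Nothing in the arithmetic checks the assumptions; if the observations are dependent or the sampling was biased, the printed p-value is a number with no inferential meaning.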
I don't think that is the issue at hand. One of the things that fascinates me about "statistics" is the way an apparently straightforward, concrete question slides over into heavy philosophical abstraction before we even notice. Is there an operational definition of "probability" when it is applied to an observed average from a population with a different hypothesized mean? And if not, are we guilty of logical presumptions on the order of Newton's fixed Cartesian reference frame? I sure hope not. Now, is there a considered analysis of this topic, one that includes the p-value issues and can put it to rest?

Jay

dennis roberts wrote:

> jerry ... how can you avoid them? generally, software won't let you and, if
> you are doing work for clients ... they won't think it is legit without them
>
> also, if you fail to instruct students about them ... others will claim
> that you are not providing adequate instruction
>
> so, i would say that we are in a real bind ...
>
> try getting an empirical sort of article published ... or even considered
> ... without them
>
> what if you wanted to use confidence intervals in your paper ... and
> decided that NO null hypotheses were necessary to make your points ... the
> editors would NOT let your article into their journal
>
> there is a dominant ... clear ... editorial and publishing bias ... that
> dictates that you MUST talk about statistical significance ... you really
> have no choice IF you want to publish in refereed sources
>
> that is NOT proof that p values are useful or valuable ...
>
> i can say that when i look at a paper ... it is not the statistics that my
> attention is drawn to ... it is/are the method or methods they have used
> that produce the data/results
>
> rarely can "bad" statistics kill you BUT, far too often, using bad methods
> for collecting data or ... losing control over your (experimental)
> conditions will do it faster than you can shout ...
> p value
>
> At 04:20 PM 3/24/2003, Jerry Dallal wrote:
> >In some recent threads, many people have been critical of P values.
> >While I don't base decisions solely on P values, I find them
> >useful. I use P values in my work.
> >
> >So, I ask those critical of P values, "Do you use them in your
> >work?" I'm not asking whether you are aware of them, but whether
> >you generate them and report them as part of your assessment of
> >data. I do. Do any of those who are critical of P values avoid
> >using them altogether? If so, what do you do instead? This is not a
> >question about what we might like to do or what would be preferable
> >from a theoretical viewpoint. I'm curious to hear what people
> >actually do when analyzing data.
> >
> >Thanks!

--
Jay Warner
Principal Scientist
Warner Consulting, Inc.
4444 North Green Bay Road
Racine, WI 53404-1216 USA

Ph: (262) 634-9100
FAX: (262) 681-1133
email: [EMAIL PROTECTED]
web: http://www.a2q.com

The A2Q Method (tm) -- What do you want to improve today?

=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
http://jse.stat.ncsu.edu/
=================================================================
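The confidence-interval alternative raised in the thread (reporting an interval for the mean rather than a null-hypothesis test) can be sketched with invented data; the 95% large-sample z interval below uses the familiar 1.96 multiplier, and the same modeling assumptions that govern a p-value still apply to it.

```python
import math
import statistics

sample = [4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 5.2, 4.7, 5.5]  # invented data
n = len(sample)
xbar = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# 95% z interval for the population mean; a t multiplier would widen
# it slightly at this sample size.
lo, hi = xbar - 1.96 * se, xbar + 1.96 * se
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The interval conveys both an estimate and its precision, which is the substance of the editorial argument quoted above; whether a hypothesized mean falls inside it answers the same question a two-sided test at the 5% level would.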
