Dear James:
Thank you for your comments. I believe you are right: there is little 
justification for applying ANOVA to groups with only 2 replicates each. As 
you pointed out, how can one judge whether they come from normal 
distributions, or whether they have equal variances? However, when we test 
for normality, it is very easy to find that 2 different values (say 3 and 4) 
are consistent with a normal distribution. How could a test say they are 
not? It is difficult, I guess. So the normality test will say that the 
population is normal, the homoscedasticity test will say that the variances 
are not unequal, and then, in theory, I can use ANOVA. The differences were 
significant, and the power analysis showed high power at a significance 
level of 5%. As you said, this should be due to some obvious pattern in the 
data. Indeed, it was obvious: a glance at the plot would show the trend, and 
no ANOVA was needed. Was the ANOVA useless? No. After doing it, I noticed a 
feature of the plot that was a little more subtle and that I had not seen 
before. For me, this was the biggest merit of the test. Of course, if 
somebody asks me whether I trust my results enough to make predictions for 
future observations, I will say no. For such a case, replication is 
essential.
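To make the arithmetic concrete, here is a minimal sketch (invented numbers, not the data from my paper) of a one-way ANOVA computed by hand on groups of two, showing how little information the error term carries in such a design:

```python
# Hand-computed one-way ANOVA on three groups of two replicates each.
# The values are hypothetical, chosen only to illustrate the df budget:
# with n = 2 per group, each within-group variance rests on a single
# degree of freedom, so the error term is very poorly estimated.
groups = [[3.0, 4.0], [5.1, 5.9], [8.0, 9.2]]  # invented data

def mean(g):
    return sum(g) / len(g)

k = len(groups)                           # number of groups
n_total = sum(len(g) for g in groups)     # total observations
grand_mean = sum(x for g in groups for x in g) / n_total

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

df_between = k - 1            # 2
df_within = n_total - k       # 6 - 3 = 3: one df per group
f_stat = (ss_between / df_between) / (ss_within / df_within)

print(df_within, round(f_stat, 2))
```

The F-ratio is formally computable, but each group's contribution to the denominator is a single squared difference, which is exactly why a large F here says more about an obvious trend than about a well-estimated error variance.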
I remember a paper stating that "inferential statistics are a courtesy to 
the reader". I would not say I agree with that completely, but in some cases 
I think it could really be like that, and my paper could be such a case. 
With the ANOVA, one can see the trends in the data more clearly. Would I 
make decisions based only on my experiment? No, I would suggest confirmatory 
observations. Here is the paper whose author says that statistics can be a 
courtesy:
Oksanen, L. (2001). Logic of experiments in ecology: is pseudoreplication a 
pseudoissue? Oikos 94(1): 27-38. (Costs and Gains of Recent Progress in 
Ecology - an Oikos Seminar)

Here is my paper, if anybody wants to check:
Journal of Applied Phycology 20(5): 245–253.
Thank you, and everyone else who commented on the issue. Regards,
Matheus C. Carvalho

Postdoctoral Fellow
Research Center for Environmental Changes

Academia Sinica

Taipei, Taiwan

--- On Fri, 10 Jul 2009, James J. Roper <[email protected]> wrote:

From: James J. Roper <[email protected]>
Subject: Re: [ECOLOG-L] ANOVA - too many treatments
To: [email protected]
Date: Friday, 10 July 2009, 11:36

Matheus,

Yes, your test was flawed.  Remember the assumptions of ANOVA: normal 
residuals and equality of variances. Two replicates are too few to 
adequately test the assumption of equal variances among treatments 
(and we know nothing about the residuals). If you are unable to test 
the assumptions of the ANOVA due to small sample size, the ANOVA should 
not be done.  A power of 1 or 0.99 usually means that the result of your 
ANOVA was trivial and self-evident, but it can also mean that your data 
were insufficient to estimate power.
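The degrees-of-freedom budget makes this concrete. A minimal sketch for a balanced two-way design with 18 cells and 2 replicates per cell (the 3 x 6 layout below is hypothetical, since the actual factor levels were not stated in the thread):

```python
# Degrees-of-freedom budget for a balanced two-way ANOVA with 18 cells
# and 2 replicates per cell.  The 3 x 6 split of the cells between the
# two factors is an assumption for illustration only.
a, b, reps = 3, 6, 2          # levels of factor 1, factor 2, replicates
cells = a * b                 # 18 treatment combinations
n_total = cells * reps        # 36 observations

df_a = a - 1                  # main effect of factor 1
df_b = b - 1                  # main effect of factor 2
df_ab = df_a * df_b           # interaction
df_error = n_total - cells    # error term

# Each cell contributes reps - 1 = 1 df to the error term, so every
# within-cell variance estimate rests on a single degree of freedom.
print(df_a, df_b, df_ab, df_error)  # the four df sum to n_total - 1
```

However the 18 cells are split between the two factors, the error term ends up with one degree of freedom per cell, which is why the equal-variance assumption cannot be checked in any meaningful way.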

I have been teaching biostatistics to graduate students for several years 
now.  In this class, for every topic the students must find a research 
paper published in a top journal on that topic and analyze its 
analysis.  We have found that a substantial portion (> 25%) of the 
papers analyzed have statistical flaws that range from minor to 
major.  ALL of these papers were peer reviewed.

Cheers,

Jim

Matheus Carvalho wrote on 09-Jul-09 20:01:
> Changing the topic a little, I have a question about Edwin's statement. 
> He wrote:
> "If the statistics are grossly inappropriate (for example running an
> ANOVA with 12 treatments, but only one or two replicates per treatment),
> adequate peer review was clearly not in place."
> Well, I published a paper in which I used a two-way ANOVA with a total of 18 
> groups and 2 replicates per group. It was peer reviewed, and one of the 
> reviewers complained about my statistics, asking for measurements of power, 
> perhaps expecting that that particular test would not have enough power to 
> draw any conclusions. I used software to measure the power of the test 
> (G*Power 3) and found that power was the maximum possible (1.00) for the 
> effects of factors 1 and 2, and 0.99 for the interaction effect. Was my 
> test flawed? It was peer reviewed!
> Best,
>
> Matheus C. Carvalho
>
> Postdoctoral Fellow
> Research Center for Environmental Changes
>
> Academia Sinica
>
> Taipei, Taiwan
>
> --- On Thu, 9 Jul 2009, Edwin Cruz-Rivera <[email protected]> wrote:
>
> From: Edwin Cruz-Rivera <[email protected]>
> Subject: Re: [ECOLOG-L] "real" versus "fake" peer-reviewed journals
> To: [email protected]
> Date: Thursday, 9 July 2009, 10:37
>
> I believe one of the original questions was how to discern reputable
> journals from those that publish dubious or biased results, or that do not
> carry out proper peer review.  I can point to a couple of red flags that
> can be noticed without much effort, and which I have observed myself:
>
> 1) If the articles in the journal come mostly from the same institution
> where the editor-in-chief is located, chances are the buddy system has
> overwhelmed objectivity, especially if the editor is a co-author on most of them.
>
> 2) If orthographic and syntactic errors are widespread, the review
> process was probably not thorough.
>
> 3) If the statistics are grossly inappropriate (for example, running an
> ANOVA with 12 treatments but only one or two replicates per treatment),
> adequate peer review was clearly not in place.
>
> These may look like extreme cases, but I have seen too many examples
> like them not to wonder how widespread they are.  I have even received
> requests to review papers for certain journals in which I was asked to
> be more lenient than if I were reviewing for a major journal.  This
> poses a particular dilemma: is not all science supposed to be measured
> by the same standards of quality control, regardless of whether the
> journal is institutional, regional, national, or international?
> I would like to think it should be...
>
> Edwin
> ------------------------------------------------------------------
> Dr. Edwin Cruz-Rivera
> Assist. Prof./Director, Marine Sciences Program
> Department of Biology
> Jackson State University
> JSU Box 18540
> Jackson, MS 39217
> Tel: (601) 979-3461
> Fax: (601) 979-5853
> Email: [email protected]
>
> "It is not the same to hear the devil as it is to see him coming your way"
> (Puerto Rican proverb)
>



      
