My comments should have been addressed to Matheus, not Edwin.  I apologize.

Jim

On Fri, Jul 10, 2009 at 11:50 AM, James Crants <[email protected]> wrote:

> Edwin,
>
> My issue with such a small sample size would not be low power, but low
> reliability.  If you show me the result for a single replicate, I have no
> confidence whatsoever that the result is typical of the treatment.  Just
> try calculating the 95% confidence intervals around your results.  If you
> can even find the 95% confidence interval for a treatment with a single
> replicate, you've done something wrong.  I'm surprised that only one
> reviewer complained about your sample sizes and that you didn't need to
> correct this to publish the paper, especially since it sounds like it
> would have been logistically simple to increase the replication.
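> Jim's point about single replicates can be made concrete.  This is an
> editorial sketch (the data values are hypothetical): the half-width of a
> 95% confidence interval, t*s/sqrt(n), is computable — though enormous —
> with two replicates, and undefined with one, because no within-treatment
> standard deviation exists.

```python
import statistics

def ci95_halfwidth(sample, t_crit):
    # Half-width of a 95% CI: t * s / sqrt(n).
    # statistics.stdev raises StatisticsError when len(sample) < 2,
    # because a single observation gives no variance estimate.
    s = statistics.stdev(sample)
    return t_crit * s / len(sample) ** 0.5

# Two replicates: computable, but huge (t = 12.706 at df = 1).
print(ci95_halfwidth([4.1, 5.3], t_crit=12.706))

# One replicate: the CI simply does not exist.
try:
    ci95_halfwidth([4.1], t_crit=12.706)
except statistics.StatisticsError:
    print("cannot estimate a CI from a single replicate")
```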
>
> I am also surprised that G*Power 3 would say an experiment with just one
> or two replicates per treatment had near-maximum power.  Were you testing
> for the power to detect differences of the magnitude you actually
> observed?  If so, of course the power would be near 1 when your p-values
> were low: if your ANOVA actually detected a significant difference, its
> power to detect that difference must be high.  For a power test, you want
> to know the power of your experimental design to detect the smallest
> difference you would consider biologically meaningful.
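> To illustrate that a-priori approach (an editorial sketch, not the
> analysis either correspondent actually ran): a simplified two-sided
> two-sample z-test with known sigma stands in for a full ANOVA power
> analysis such as G*Power's.  Here delta is the smallest difference
> considered biologically meaningful, and power climbs with replication.

```python
from statistics import NormalDist

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    # Power of a two-sided two-sample z-test (known sigma) to detect
    # a true mean difference `delta` -- a simplified stand-in for a
    # full ANOVA power analysis.
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = delta / (sigma * (2 / n_per_group) ** 0.5)
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)

# Power to detect a one-SD difference rises sharply with replication:
for n in (2, 5, 10, 20):
    print(n, round(two_sample_power(delta=1.0, sigma=1.0, n_per_group=n), 3))
```

> With two replicates per group, power to detect even a full-SD difference
> is well under 0.2 — which is why a reported power of 1.00 suggests the
> test was run against the observed effect rather than a pre-specified one.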
>
> Jim Crants
>
>   On Fri, Jul 10, 2009 at 3:49 AM, Edwin Cruz-Rivera <
> [email protected]> wrote:
>
>> Let me rephrase: out of 12 treatments, ten had one replicate and two had
>> two.  However, these were not natural lakes or transects in geographic
>> zones that constrained replication.  These were 100 ml bottles on a
>> table.  Sorry for the oversimplification.
>>
>> Edwin
>>
>>
>> > Changing the topic a little, I have a question about Edwin's
>> > statement. He wrote:
>> > "If the statistics are grossly inappropriate (for example running an
>> > ANOVA with 12 treatments, but only one or two replicates per
>> > treatment), adequate peer review was clearly not in place."
>> > Well, I published a paper in which I used a two-way ANOVA with a total
>> > of 18 groups and 2 replicates per group. It was peer reviewed, and one
>> > of the reviewers complained about my statistics, asking for
>> > measurements of power, perhaps expecting that that particular test
>> > would not have enough power to draw any conclusions. I used software
>> > (G*Power 3) to measure the power of the test, and found that power was
>> > the maximum possible (1.00) for the effects due to factors 1 and 2,
>> > and 0.99 for the interaction effect. Was my test flawed? It was peer
>> > reviewed!
>> > Best,
>> > Best,
>> >
>> > Matheus C. Carvalho
>> >
>> > Postdoctoral Fellow
>> > Research Center for Environmental Changes
>> >
>> > Academia Sinica
>> >
>> > Taipei, Taiwan
>> >
>> > --- On Thu, 7/9/09, Edwin Cruz-Rivera <[email protected]>
>> > wrote:
>> >
>> > From: Edwin Cruz-Rivera <[email protected]>
>> > Subject: Re: [ECOLOG-L] "real" versus "fake" peer-reviewed journals
>> > To: [email protected]
>> > Date: Thursday, July 9, 2009, 10:37
>> >
>> > I believe one of the original questions was how to discern reputable
>> > journals from those that publish dubious or biased results...or that
>> > do not accomplish proper peer review.  I can point to a few red flags
>> > that I have observed and that can be noticed without too much effort:
>> >
>> > 1) If the articles in the journal come mostly from the same
>> > institution in which the editor-in-chief is located, chances are the
>> > buddy system has overwhelmed objectivity...especially if the editor is
>> > a co-author on most of them.
>> >
>> > 2) If orthographic and syntax errors are widespread, the review
>> > process was probably not thorough.
>> >
>> > 3) If the statistics are grossly inappropriate (for example, running
>> > an ANOVA with 12 treatments but only one or two replicates per
>> > treatment), adequate peer review was clearly not in place.
>> >
>> > Now these may look like extreme cases, but I have seen enough examples
>> > like the above to wonder how widespread such cases are.  I have even
>> > received requests to review papers for certain journals in which I was
>> > asked to be more lenient than if I were reviewing for a major journal.
>> > This poses a particular dilemma: is all science not supposed to be
>> > measured by the same standards of quality control, regardless of
>> > whether the journal is institutional, regional, national, or
>> > international?  I would like to think it should be...
>> >
>> > Edwin
>> > ------------------------------------------------------------------
>> > Dr. Edwin Cruz-Rivera
>> > Assist. Prof./Director, Marine Sciences Program
>> > Department of Biology
>> > Jackson State University
>> > JSU Box 18540
>> > Jackson, MS 39217
>> > Tel: (601) 979-3461
>> > Fax: (601) 979-5853
>> > Email: [email protected]
>> >
>> > "It is not the same to hear the devil as it is to see him coming your
>> way"
>> > (Puerto Rican proverb)
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>>
>
>
>
> --
> James Crants, PhD
> Scientist, University of Minnesota
> Agronomy and Plant Genetics
> Cell:  (734) 474-7478
>



-- 
James Crants, PhD
Scientist, University of Minnesota
Agronomy and Plant Genetics
Cell:  (734) 474-7478
