Dear Matheus,

Thank you very much for putting a concrete example on the table: a controversial paper with two replicates that was accepted and published after peer review! According to my position, your work can be scientific if it is testable, religious if it is not, and artistic if it comes out of bountiful, non-religious fantasies. Someone could say that it is scientific but bad science because of having n = 2, and others can ask themselves whether the reviewers and editors made an error by letting the document be published. My point is the following: if readers are well trained, they can judge all of this on their own.
There are too many journals in the world right now, making it difficult to draw up a "black list" or to develop any rule of thumb saying when a journal or article is a fake. The most pragmatic thing, in my opinion, is to train each student with the basic criteria to decide and say, "OK, this one is science and this other is pseudo-science." Can you please tell me the name of your article? I am looking forward to seeing it because, personally, I think your sample size was too small, but I want to learn more about test power measurements. There are even institutions teaching Communication of Science to their students. These kinds of courses push beyond discussions like "Is it science or just fake?" by introducing a more relevant problem: they train young scientists to read papers while asking themselves, "Was the editor wrong when he/she accepted this article?"

Many greetings, and thank you again!!

Edgardo

Edgardo I. Garrido-Pérez
Goettingen University, Germany

> Date: Thu, 9 Jul 2009 18:01:29 -0700
> From: [email protected]
> Subject: [ECOLOG-L] ANOVA - too many treatments
> To: [email protected]
>
> Changing the topic a little, I have a question about Edwin's statement.
> He wrote:
> "If the statistics are grossly inappropriate (for example running an
> ANOVA with 12 treatments, but only 1 or two replicates per treatment),
> adequate peer review was clearly not in place."
> Well, I published a paper in which I used a two-way ANOVA with a total of 18
> groups and 2 replicates per group. It was peer reviewed, and one of the
> reviewers complained about my statistics, asking for measurements of power,
> perhaps expecting that that particular test would not have enough
> power to draw any conclusions. I used software to measure the power of the
> test (G*Power 3) and found that power was the maximum possible (1.00) for
> the effects due to factors 1 and 2, and 0.99 for the interaction effect. Was
> my test flawed? It was peer reviewed!
> Best,
>
> Matheus C. Carvalho
>
> Postdoctoral Fellow
> Research Center for Environmental Changes
> Academia Sinica
> Taipei, Taiwan
>
> --- On Thu, 9 Jul 2009, Edwin Cruz-Rivera <[email protected]> wrote:
>
> From: Edwin Cruz-Rivera <[email protected]>
> Subject: Re: [ECOLOG-L] "real" versus "fake" peer-reviewed journals
> To: [email protected]
> Date: Thursday, 9 July 2009, 10:37
>
> I believe one of the original questions was how to discern reputable
> journals from those that publish dubious or biased results, or that do not
> accomplish proper peer review. I can point to a couple of red flags that
> can be noticed without too much effort, and that I have observed myself:
>
> 1) If the articles in the journal come mostly from the same institution in
> which the editor-in-chief is located, chances are the buddy system has
> overwhelmed objectivity, especially if the editor is a co-author on most.
>
> 2) If orthographic and syntax errors are widespread, the review
> process was probably not thorough.
>
> 3) If the statistics are grossly inappropriate (for example, running an
> ANOVA with 12 treatments but only 1 or 2 replicates per treatment),
> adequate peer review was clearly not in place.
>
> Now, these may look like extreme cases, but I have seen enough examples
> similar to the above to wonder how widespread such cases are. I have
> even received requests to review papers for certain journals in which I
> was asked to be more lenient than if I were reviewing for a major
> journal. This poses a particular dilemma: is all science not supposed to
> be measured by the same standards of quality control, regardless of whether
> the journal is institutional, regional, national, or international?
> I would like to think it should be...
>
> Edwin
> ------------------------------------------------------------------
> Dr. Edwin Cruz-Rivera
> Assist. Prof./Director, Marine Sciences Program
> Department of Biology
> Jackson State University
> JSU Box 18540
> Jackson, MS 39217
> Tel: (601) 979-3461
> Fax: (601) 979-5853
> Email: [email protected]
>
> "It is not the same to hear the devil as it is to see him coming your way"
> (Puerto Rican proverb)
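[Editor's note] For readers following the power-analysis question in this thread: the kind of post-hoc calculation Matheus describes can be sketched in Python using the noncentral F distribution, with the noncentrality parameter computed from Cohen's effect size f as G*Power 3 does (lambda = f^2 * N). The 3 x 6 factor split of the 18 cells and the effect size used below are illustrative assumptions only; the thread does not report the actual design or effect sizes.

```python
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, df_effect, n_total, n_groups, alpha=0.05):
    """Approximate power of an ANOVA F test.

    f_effect  -- Cohen's effect size f (assumed, not from the paper)
    df_effect -- numerator degrees of freedom for the tested effect
    n_total   -- total number of observations
    n_groups  -- number of cells (error df = n_total - n_groups)
    """
    df_error = n_total - n_groups          # 36 - 18 = 18 in Matheus's case
    lam = f_effect ** 2 * n_total          # noncentrality, as in G*Power 3
    f_crit = f_dist.ppf(1 - alpha, df_effect, df_error)
    # Power = P(F' > f_crit) under the noncentral F distribution
    return 1 - ncf.cdf(f_crit, df_effect, df_error, lam)

# Hypothetical 3 x 6 two-way design: 18 cells, 2 replicates per cell, N = 36.
power_factor_a = anova_power(f_effect=0.8, df_effect=2, n_total=36, n_groups=18)
power_factor_b = anova_power(f_effect=0.8, df_effect=5, n_total=36, n_groups=18)
```

With only 2 replicates per cell the error degrees of freedom (18) are small, so power near 1.00 would require a very large effect size; this sketch lets a reader see how power falls as f shrinks.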
